Chapter 4. Configuring the Load Balancer Add-On with Piranha Configuration Tool
Chapter 4. Configuring the Load Balancer Add-On with Piranha Configuration Tool The Piranha Configuration Tool provides a structured approach to creating the necessary configuration file for the Load Balancer Add-On, /etc/sysconfig/ha/lvs.cf . This chapter describes the basic operation of the Piranha Configuration Tool and how to activate the Load Balancer Add-On once configuration is complete. Important The configuration file for the Load Balancer Add-On follows strict formatting rules. Using the Piranha Configuration Tool is the best way to prevent syntax errors in lvs.cf and therefore prevent software failures. 4.1. Necessary Software The piranha-gui service must be running on the primary LVS router to use the Piranha Configuration Tool . To configure the Load Balancer Add-On, you minimally need a text-only Web browser, such as links . If you are accessing the LVS router from another machine, you also need an ssh connection to the primary LVS router as the root user. While configuring the primary LVS router, it is a good idea to keep a concurrent ssh connection open in a terminal window. This connection provides a secure way to restart pulse and other services, configure network packet filters, and monitor /var/log/messages during troubleshooting. The following four sections walk through each of the configuration pages of the Piranha Configuration Tool and give instructions on using it to set up the Load Balancer Add-On.
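A minimal command sketch of the workflow above, assuming a Red Hat Enterprise Linux 6 primary LVS router; the host name is a placeholder. It checks that the piranha-gui service is running, opens a concurrent ssh session as root, and follows the system log during troubleshooting.
service piranha-gui status
ssh root@lvs-router1.example.com
tail -f /var/log/messages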
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/load_balancer_administration/ch-lvs-piranha-VSA
Data Grid downloads
Data Grid downloads Access the Data Grid Software Downloads on the Red Hat customer portal. Note You must have a Red Hat account to access and download Data Grid software.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/using_the_resp_protocol_endpoint_with_data_grid/rhdg-downloads_datagrid
Appendix D. Overriding Ceph Default Settings
Appendix D. Overriding Ceph Default Settings Unless otherwise specified in the Ansible configuration files, Ceph uses its default settings. Because Ansible manages the Ceph configuration file, edit the /usr/share/ceph-ansible/group_vars/all.yml file to change the Ceph configuration. Use the ceph_conf_overrides setting to override the default Ceph configuration. Ansible supports the same sections as the Ceph configuration file: [global] , [mon] , [osd] , [mds] , [rgw] , and so on. You can also override particular instances, such as a particular Ceph Object Gateway instance. For example: Note Do not use a variable as a key in the ceph_conf_overrides setting. You must pass the absolute label of the host for the section(s) in which you want to override a particular configuration value. Note Ansible does not include braces when referring to a particular section of the Ceph configuration file. Section and setting names are terminated with a colon. Important Do not set the cluster network with the cluster_network parameter in the CONFIG OVERRIDE section, because this can result in two conflicting cluster networks being set in the Ceph configuration file. To set the cluster network, use the cluster_network parameter in the CEPH CONFIGURATION section. For details, see Installing a Red Hat Ceph Storage cluster in the Red Hat Ceph Storage Installation Guide .
[ "################### CONFIG OVERRIDE # ################### ceph_conf_overrides: client.rgw.server601.rgw1: rgw_enable_ops_log: true log_file: /var/log/ceph/ceph-rgw-rgw1.log" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/installation_guide/overriding-ceph-default-settings
Chapter 2. Red Hat Decision Manager BPMN and DMN modelers
Chapter 2. Red Hat Decision Manager BPMN and DMN modelers Red Hat Decision Manager provides the following extensions or applications that you can use to design Business Process Model and Notation (BPMN) process models and Decision Model and Notation (DMN) decision models using graphical modelers. Business Central : Enables you to view and design BPMN models, DMN models, and test scenario files in related embedded designers. To use Business Central, you can set up a development environment containing Business Central to design business rules and processes, and a KIE Server to execute and test the created business rules and processes. Red Hat Decision Manager VS Code extension : Enables you to view and design BPMN models, DMN models, and test scenario files in Visual Studio Code (VS Code). The VS Code extension requires VS Code 1.46.0 or later. To install the Red Hat Decision Manager VS Code extension, select the Extensions menu option in VS Code and search for and install the Red Hat Business Automation Bundle extension. Standalone BPMN and DMN editors : Enable you to view and design BPMN and DMN models embedded in your web applications. To download the necessary files, you can either use the NPM artifacts from the NPM registry or download the JavaScript files directly for the DMN standalone editor library at https://<YOUR_PAGE>/dmn/index.js and for the BPMN standalone editor library at https://<YOUR_PAGE>/bpmn/index.js . 2.1. Installing the Red Hat Decision Manager VS Code extension bundle Red Hat Decision Manager provides a Red Hat Business Automation Bundle VS Code extension that enables you to design Decision Model and Notation (DMN) decision models, Business Process Model and Notation (BPMN) 2.0 business processes, and test scenarios directly in VS Code. VS Code is the preferred integrated development environment (IDE) for developing new business applications. Red Hat Decision Manager also provides individual DMN Editor and BPMN Editor VS Code extensions for DMN or BPMN support only, if needed. Important The editors in VS Code are partially compatible with the editors in Business Central, and several Business Central features are not supported in VS Code. Prerequisites The latest stable version of VS Code is installed. Procedure In your VS Code IDE, select the Extensions menu option and search for Red Hat Business Automation Bundle for DMN, BPMN, and test scenario file support. For DMN or BPMN file support only, you can also search for the individual DMN Editor or BPMN Editor extensions. When the Red Hat Business Automation Bundle extension appears in VS Code, select it and click Install . For optimal VS Code editor behavior, after the extension installation is complete, reload or close and re-launch your instance of VS Code. After you install the VS Code extension bundle, any .dmn , .bpmn , or .bpmn2 files that you open or create in VS Code are automatically displayed as graphical models. Additionally, any .scesim files that you open or create are automatically displayed as tabular test scenario models for testing the functionality of your business decisions. If the DMN, BPMN, or test scenario modelers open only the XML source of a DMN, BPMN, or test scenario file and display an error message, review the reported errors and the model file to ensure that all elements are correctly defined. Note For new DMN or BPMN models, you can also enter dmn.new or bpmn.new in a web browser to design your DMN or BPMN model in the online modeler. 
When you finish creating your model, you can click Download in the online modeler page to import your DMN or BPMN file into your Red Hat Decision Manager project in VS Code. 2.2. Configuring the Red Hat Decision Manager standalone editors Red Hat Decision Manager provides standalone editors that are distributed in a self-contained library providing an all-in-one JavaScript file for each editor. The JavaScript file uses a comprehensive API to set and control the editor. You can install the standalone editors using the following methods: Download each JavaScript file manually Use the NPM package Procedure Install the standalone editors using one of the following methods: Download each JavaScript file manually : For this method, follow these steps: Download the JavaScript files. Add the downloaded JavaScript files to your hosted application. Add the following <script> tag to your HTML page: Script tag for your HTML page for the DMN editor Script tag for your HTML page for the BPMN editor Use the NPM package : For this method, follow these steps: Add the NPM package to your package.json file: Adding the NPM package Import each editor library to your TypeScript file: Importing each editor After you install the standalone editors, open the required editor by using the provided editor API, as shown in the following example for opening a DMN editor. The API is the same for each editor. Opening the DMN standalone editor const editor = DmnEditor.open({ container: document.getElementById("dmn-editor-container"), initialContent: Promise.resolve(""), readOnly: false, origin: "", resources: new Map([ [ "MyIncludedModel.dmn", { contentType: "text", content: Promise.resolve("") } ] ]) }); Use the following parameters with the editor API: Table 2.1. Example parameters Parameter Description container HTML element in which the editor is appended. initialContent Promise to a DMN model content. This parameter can be empty, as shown in the following examples: Promise.resolve("") Promise.resolve("<DIAGRAM_CONTENT_DIRECTLY_HERE>") fetch("MyDmnModel.dmn").then(content => content.text()) readOnly (Optional) Enables you to allow changes in the editor. Set to false (default) to allow content editing and true for read-only mode in the editor. origin (Optional) Origin of the repository. The default value is window.location.origin . resources (Optional) Map of resources for the editor. For example, this parameter is used to provide included models for the DMN editor or work item definitions for the BPMN editor. Each entry in the map contains a resource name and an object that consists of content-type ( text or binary ) and content (similar to the initialContent parameter). The returned object contains the methods that are required to manipulate the editor. Table 2.2. Returned object methods Method Description getContent(): Promise<string> Returns a promise containing the editor content. setContent(path: string, content: string): void Sets the content of the editor. getPreview(): Promise<string> Returns a promise containing an SVG string of the current diagram. subscribeToContentChanges(callback: (isDirty: boolean) => void): (isDirty: boolean) => void Sets a callback to be called when the content changes in the editor and returns the same callback to be used for unsubscription. unsubscribeToContentChanges(callback: (isDirty: boolean) => void): void Unsubscribes the passed callback when the content changes in the editor. markAsSaved(): void Resets the editor state that indicates that the content in the editor is saved. 
Also, it activates the subscribed callbacks related to content change. undo(): void Undoes the last change in the editor. Also, it activates the subscribed callbacks related to content change. redo(): void Redoes the last undone change in the editor. Also, it activates the subscribed callbacks related to content change. close(): void Closes the editor. getElementPosition(selector: string): Promise<Rect> Provides an alternative to extend the standard query selector when an element lives inside a canvas or a video component. The selector parameter must follow the <PROVIDER>:::<SELECT> format, such as Canvas:::MySquare or Video:::PresenterHand . This method returns a Rect representing the element position. envelopeApi: MessageBusClientApi<KogitoEditorEnvelopeApi> This is an advanced editor API. For more information about advanced editor API, see MessageBusClientApi and KogitoEditorEnvelopeApi .
[ "<script src=\"https://<YOUR_PAGE>/dmn/index.js\"></script>", "<script src=\"https://<YOUR_PAGE>/bpmn/index.js\"></script>", "npm install @kie-tools/kie-editors-standalone", "import * as DmnEditor from \"@kie-tools/kie-editors-standalone/dist/dmn\" import * as BpmnEditor from \"@kie-tools/kie-editors-standalone/dist/bpmn\"", "const editor = DmnEditor.open({ container: document.getElementById(\"dmn-editor-container\"), initialContent: Promise.resolve(\"\"), readOnly: false, origin: \"\", resources: new Map([ [ \"MyIncludedModel.dmn\", { contentType: \"text\", content: Promise.resolve(\"\") } ] ]) });" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/con-bpmn-dmn-modelers_dmn-models
Chapter 15. Cephadm health checks
Chapter 15. Cephadm health checks As a storage administrator, you can monitor the Red Hat Ceph Storage cluster with the additional health checks provided by the Cephadm module. This is supplementary to the default healthchecks provided by the storage cluster. 15.1. Cephadm operations health checks Healthchecks are executed when the Cephadm module is active. You can get the following health warnings: CEPHADM_PAUSED Cephadm background work is paused with the ceph orch pause command. Cephadm continues to perform passive monitoring activities such as checking the host and daemon status, but it does not make any changes like deploying or removing daemons. You can resume Cephadm work with the ceph orch resume command. CEPHADM_STRAY_HOST One or more hosts have running Ceph daemons but are not registered as hosts managed by the Cephadm module. This means that those services are not currently managed by Cephadm; for example, they are not restarted or upgraded, and they are not included in the ceph orch ps output. You can manage the host(s) with the ceph orch host add HOST_NAME command, but ensure that SSH access to the remote hosts is configured. Alternatively, you can manually connect to the host and ensure that services on that host are removed or migrated to a host that is managed by Cephadm. You can also disable this warning with the setting ceph config set mgr mgr/cephadm/warn_on_stray_hosts false . CEPHADM_STRAY_DAEMON One or more Ceph daemons are running but are not managed by the Cephadm module. This might be because they were deployed using a different tool, or because they were started manually. Those services are not currently managed by Cephadm; for example, they are not restarted or upgraded, and they are not included in the ceph orch ps output. If the daemon is a stateful one, such as a monitor or OSD daemon, it should be adopted by Cephadm. For stateless daemons, you can provision a new daemon with the ceph orch apply command and then stop the unmanaged daemon. You can disable this health warning with the setting ceph config set mgr mgr/cephadm/warn_on_stray_daemons false . CEPHADM_HOST_CHECK_FAILED One or more hosts have failed the basic Cephadm host check, which verifies that: The host is reachable and you can execute Cephadm on it. The host meets the basic prerequisites, like a working container runtime, such as Podman , and working time synchronization. If this test fails, Cephadm won't be able to manage the services on that host. You can manually run this check with the ceph cephadm check-host HOST_NAME command. You can remove a broken host from management with the ceph orch host rm HOST_NAME command. You can disable this health warning with the setting ceph config set mgr mgr/cephadm/warn_on_failed_host_check false . 15.2. Cephadm configuration health checks Cephadm periodically scans each of the hosts in the storage cluster to understand the state of the OS, disks, and NICs . These facts are analyzed for consistency across the hosts in the storage cluster to identify any configuration anomalies. The configuration checks are an optional feature. You can enable this feature with the following command: Example The configuration checks are triggered after each host scan, which runs for a duration of one minute. The ceph -W cephadm command shows log entries of the current state and outcome of the configuration checks as follows: Disabled state Example Enabled state Example The configuration checks themselves are managed through several cephadm subcommands. 
To determine whether the configuration checks are enabled, run the following command: Example This command returns the status of the configuration checker as either Enabled or Disabled . To list all the configuration checks and their current state, run the following command: Example Each configuration check is described as follows: CEPHADM_CHECK_KERNEL_LSM Each host within the storage cluster is expected to operate within the same Linux Security Module (LSM) state. For example, if the majority of the hosts are running with SELINUX in enforcing mode, any host not running in this mode is flagged as an anomaly, and a healthcheck with a warning state is raised. CEPHADM_CHECK_SUBSCRIPTION This check relates to the status of the vendor subscription. This check is only performed for hosts using Red Hat Enterprise Linux, but helps to confirm that all the hosts are covered by an active subscription so that patches and updates are available. CEPHADM_CHECK_PUBLIC_MEMBERSHIP All members of the cluster should have NICs configured on at least one of the public network subnets. Hosts that are not on the public network rely on routing, which may affect performance. CEPHADM_CHECK_MTU The maximum transmission unit (MTU) of the NICs on OSDs can be a key factor in consistent performance. This check examines hosts that are running OSD services to ensure that the MTU is configured consistently within the cluster. This is determined by establishing the MTU setting that the majority of hosts are using, with any anomalies resulting in a Ceph healthcheck. CEPHADM_CHECK_LINKSPEED Similar to the MTU check, linkspeed consistency is also a factor in consistent cluster performance. This check determines the linkspeed shared by the majority of the OSD hosts, resulting in a healthcheck for any hosts that are set at a lower linkspeed rate. CEPHADM_CHECK_NETWORK_MISSING The public_network and cluster_network settings support subnet definitions for IPv4 and IPv6. If these settings are not found on any host in the storage cluster, a healthcheck is raised. CEPHADM_CHECK_CEPH_RELEASE Under normal operations, the Ceph cluster should be running daemons under the same Ceph release, for example, all Red Hat Ceph Storage 5 releases. This check looks at the active release for each daemon, and reports any anomalies as a healthcheck. This check is bypassed if an upgrade process is active within the cluster. CEPHADM_CHECK_KERNEL_VERSION The OS kernel version is checked for consistency across the hosts. Once again, the majority of the hosts are used as the basis for identifying anomalies.
[ "ceph config set mgr mgr/cephadm/config_checks_enabled true", "ALL cephadm checks are disabled, use 'ceph config set mgr mgr/cephadm/config_checks_enabled true' to enable", "CEPHADM 8/8 checks enabled and executed (0 bypassed, 0 disabled). No issues detected", "ceph cephadm config-check status", "ceph cephadm config-check ls NAME HEALTHCHECK STATUS DESCRIPTION kernel_security CEPHADM_CHECK_KERNEL_LSM enabled checks SELINUX/Apparmor profiles are consistent across cluster hosts os_subscription CEPHADM_CHECK_SUBSCRIPTION enabled checks subscription states are consistent for all cluster hosts public_network CEPHADM_CHECK_PUBLIC_MEMBERSHIP enabled check that all hosts have a NIC on the Ceph public_netork osd_mtu_size CEPHADM_CHECK_MTU enabled check that OSD hosts share a common MTU setting osd_linkspeed CEPHADM_CHECK_LINKSPEED enabled check that OSD hosts share a common linkspeed network_missing CEPHADM_CHECK_NETWORK_MISSING enabled checks that the cluster/public networks defined exist on the Ceph hosts ceph_release CEPHADM_CHECK_CEPH_RELEASE enabled check for Ceph version consistency - ceph daemons should be on the same release (unless upgrade is active) kernel_version CEPHADM_CHECK_KERNEL_VERSION enabled checks that the MAJ.MIN of the kernel on Ceph hosts is consistent" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/administration_guide/cephadm-health-checks
Chapter 1. RBAC APIs
Chapter 1. RBAC APIs 1.1. ClusterRoleBinding [rbac.authorization.k8s.io/v1] Description ClusterRoleBinding references a ClusterRole, but does not contain it. It can reference a ClusterRole in the global namespace, and adds who information via Subject. Type object 1.2. ClusterRole [rbac.authorization.k8s.io/v1] Description ClusterRole is a cluster level, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding or ClusterRoleBinding. Type object 1.3. RoleBinding [rbac.authorization.k8s.io/v1] Description RoleBinding references a role, but does not contain it. It can reference a Role in the same namespace or a ClusterRole in the global namespace. It adds who information via Subjects and namespace information by the namespace in which it exists. RoleBindings in a given namespace only have effect in that namespace. Type object 1.4. Role [rbac.authorization.k8s.io/v1] Description Role is a namespaced, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding. Type object
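For illustration only, the following is a minimal RoleBinding sketch that grants the default view ClusterRole to a service account within a single namespace, matching the behavior described above; the binding, namespace, and service account names are hypothetical.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-only-access
  namespace: my-project
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: my-app
  namespace: my-project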
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/rbac_apis/rbac-apis
20.16.14. Consoles, Serial, Parallel, and Channel Devices
20.16.14. Consoles, Serial, Parallel, and Channel Devices A character device provides a way to interact with the virtual machine. Paravirtualized consoles, serial ports, parallel ports, and channels are all classed as character devices and so are represented using the same syntax. To specify the console, channel, and other device configuration settings, use a management tool to make the following changes to the domain XML: ... <devices> <parallel type='pty'> <source path='/dev/pts/2'/> <target port='0'/> </parallel> <serial type='pty'> <source path='/dev/pts/3'/> <target port='0'/> </serial> <console type='pty'> <source path='/dev/pts/4'/> <target port='0'/> </console> <channel type='unix'> <source mode='bind' path='/tmp/guestfwd'/> <target type='guestfwd' address='10.0.2.1' port='4600'/> </channel> </devices> ... Figure 20.59. Consoles, serial, parallel, and channel devices In each of these directives, the top-level element name (parallel, serial, console, channel) describes how the device is presented to the guest virtual machine. The guest virtual machine interface is configured by the target element. The interface presented to the host physical machine is given in the type attribute of the top-level element. The host physical machine interface is configured by the source element. The source element may contain an optional seclabel to override the way that labelling is done on the socket path. If this element is not present, the security label is inherited from the per-domain setting. Each character device element has an optional sub-element address , which can tie the device to a particular controller or PCI slot.
[ "<devices> <parallel type='pty'> <source path='/dev/pts/2'/> <target port='0'/> </parallel> <serial type='pty'> <source path='/dev/pts/3'/> <target port='0'/> </serial> <console type='pty'> <source path='/dev/pts/4'/> <target port='0'/> </console> <channel type='unix'> <source mode='bind' path='/tmp/guestfwd'/> <target type='guestfwd' address='10.0.2.1' port='4600'/> </channel> </devices>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-section-libvirt-dom-xml-devices-consoles
3.13. Required Networks, Optional Networks, and Virtual Machine Networks
3.13. Required Networks, Optional Networks, and Virtual Machine Networks A required network is a logical network that must be available to all hosts in a cluster. When a host's required network becomes non-operational, virtual machines running on that host are migrated to another host; the extent of this migration is dependent upon the chosen scheduling policy. This is beneficial if you have virtual machines running mission-critical workloads. An optional network is a logical network that has not been explicitly declared as Required . Optional networks can be implemented on only the hosts that use them. The presence or absence of optional networks does not affect the Operational status of a host. When a non-required network becomes non-operational, the virtual machines running on the network are not migrated to another host. This prevents unnecessary I/O overload caused by mass migrations. Note that when a logical network is created and added to clusters, the Required box is checked by default. To change a network's Required designation, from the Administration Portal, select a network, click the Cluster tab, and click the Manage Networks button. Virtual machine networks (called a VM network in the user interface) are logical networks designated to carry only virtual machine network traffic. Virtual machine networks can be required or optional. Virtual machines that use an optional virtual machine network start only on hosts with that network.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/technical_reference/virtual_machine_networks_and_optional_networks
20.30. Storage Volume Commands
20.30. Storage Volume Commands This section covers commands for creating, deleting, and managing storage volumes. Creating a storage volume requires at least one storage pool. For an example of how to create a storage pool, see Example 20.78, "How to create a storage pool from an XML file" . For information on storage pools, see Section 13.2, "Using Storage Pools" . For information on storage volumes, see Section 13.3, "Using Storage Volumes" . 20.30.1. Creating Storage Volumes The virsh vol-create-from pool file vol command creates a volume, using another volume as input. This command requires either a storage pool name or a storage pool UUID, and takes the following parameters and options: --pool string - required - Contains the name of the storage pool or the storage pool's UUID which will be attached to the storage volume. This storage pool does not have to be the same storage pool that is associated with the storage volume you are using to base this new storage volume on. --file string - required - Contains the name of the XML file that contains the parameters for the storage volume. --vol string - required - Contains the name of the storage volume you are using to base this new storage volume on. --inputpool string - optional - Allows you to name the storage pool that is associated with the storage volume that you are using as input for the new storage volume. --prealloc-metadata - optional - Preallocates metadata (for qcow2 instead of full allocation) for the new storage volume. For examples, see Section 13.3.2, "Creating Storage Volumes" . 20.30.2. Creating a Storage Volume from Parameters The virsh vol-create-as pool name capacity command creates a volume from a set of arguments. The pool argument contains the name or UUID of the storage pool to create the volume in. This command takes the following parameters and options: [--pool] string - required - Contains the name of the associated storage pool. [--name] string - required - Contains the name of the new storage volume. [--capacity] string - required - Contains the size of the storage volume, expressed as an integer. The default unit is bytes, unless a suffix is specified. Use the suffixes b, k, M, G, T for byte, kilobyte, megabyte, gigabyte, and terabyte, respectively. --allocation string - optional - Contains the initial allocation size, expressed as an integer. The default unit is bytes, unless a suffix is specified. --format string - optional - Contains the file format type. Acceptable types include: raw, bochs, qcow, qcow2, qed, host_device, and vmdk. These are, however, only meant for file-based storage pools. By default the qcow version that is used is version 3. If you want to change the version, see Section 23.19.2, "Setting Target Elements" . --backing-vol string - optional - Contains the backing volume. This will be used if you are taking a snapshot. --backing-vol-format string - optional - Contains the format of the backing volume. This will be used if you are taking a snapshot. --prealloc-metadata - optional - Allows you to preallocate metadata (for qcow2 instead of full allocation). Example 20.88. How to create a storage volume from a set of parameters The following example creates a 100MB storage volume named vol-new . It is created in the vdisk storage pool that you created in Example 20.78, "How to create a storage pool from an XML file" : 20.30.3. Creating a Storage Volume from an XML File The virsh vol-create pool file command creates a new storage volume from an XML file which contains the storage volume parameters. Example 20.89. 
How to create a storage volume from an existing XML file The following example creates a storage volume based on the file vol-new.xml , as shown: The storage volume is associated with the storage pool vdisk . The path to the image is /var/lib/libvirt/images/vol-new : 20.30.4. Cloning a Storage Volume The virsh vol-clone vol-name new-vol-name command clones an existing storage volume. Although the virsh vol-create-from command may also be used, it is not the recommended way to clone a storage volume. The command accepts the --pool string option, which allows you to specify the storage pool that is associated with the new storage volume. The vol argument is the name, key, or path of the source storage volume, and the name argument refers to the name of the new storage volume. For additional information, see Section 13.3.2.1, "Creating Storage Volumes with virsh" . Example 20.90. How to clone a storage volume The following example clones a storage volume named vol-new to a new volume named vol-clone :
[ "virsh vol-create-as vdisk vol-new 100M vol vol-new created", "<volume> <name>vol-new</name> <allocation>0</allocation> <capacity unit=\"M\">100</capacity> <target> <path>/var/lib/libvirt/images/vol-new</path> <permissions> <owner>107</owner> <group>107</group> <mode>0744</mode> <label>virt_image_t</label> </permissions> </target> </volume>", "virsh vol-create vdisk vol-new.xml vol vol-new created", "virsh vol-clone vol-new vol-clone vol vol-clone cloned from vol-new" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-managing_guest_virtual_machines_with_virsh-storage_volume_commands
Nodes
Nodes OpenShift Container Platform 4.18 Configuring and managing nodes in OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "kind: Pod apiVersion: v1 metadata: name: example labels: environment: production app: abc 1 spec: restartPolicy: Always 2 securityContext: 3 runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: 4 - name: abc args: - sleep - \"1000000\" volumeMounts: 5 - name: cache-volume mountPath: /cache 6 image: registry.access.redhat.com/ubi7/ubi-init:latest 7 securityContext: allowPrivilegeEscalation: false runAsNonRoot: true capabilities: drop: [\"ALL\"] resources: limits: memory: \"100Mi\" cpu: \"1\" requests: memory: \"100Mi\" cpu: \"1\" volumes: 8 - name: cache-volume emptyDir: sizeLimit: 500Mi", "oc project <project-name>", "oc get pods", "oc get pods", "NAME READY STATUS RESTARTS AGE console-698d866b78-bnshf 1/1 Running 2 165m console-698d866b78-m87pm 1/1 Running 2 165m", "oc get pods -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE console-698d866b78-bnshf 1/1 Running 2 166m 10.128.0.24 ip-10-0-152-71.ec2.internal <none> console-698d866b78-m87pm 1/1 Running 2 166m 10.129.0.23 ip-10-0-173-237.ec2.internal <none>", "oc adm top pods", "oc adm top pods -n openshift-console", "NAME CPU(cores) MEMORY(bytes) console-7f58c69899-q8c8k 0m 22Mi console-7f58c69899-xhbgg 0m 25Mi downloads-594fcccf94-bcxk8 3m 18Mi downloads-594fcccf94-kv4p6 2m 15Mi", "oc adm top pod --selector=''", "oc adm top pod --selector='name=my-pod'", "oc logs -f <pod_name> -c <container_name>", "oc logs ruby-58cd97df55-mww7r", "oc logs -f ruby-57f7f4855b-znl92 -c ruby", "oc logs <object_type>/<resource_name> 1", "oc logs deployment/ruby", "{ \"kind\": \"Pod\", \"spec\": { \"containers\": [ { \"image\": \"openshift/hello-openshift\", \"name\": \"hello-openshift\" } ] }, \"apiVersion\": \"v1\", \"metadata\": { \"name\": \"iperf-slow\", \"annotations\": { \"kubernetes.io/ingress-bandwidth\": \"10M\", \"kubernetes.io/egress-bandwidth\": \"10M\" } } }", "oc create -f <file_or_dir_path>", "oc get poddisruptionbudget --all-namespaces", "NAMESPACE NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE openshift-apiserver openshift-apiserver-pdb N/A 1 1 121m openshift-cloud-controller-manager aws-cloud-controller-manager 1 N/A 1 125m openshift-cloud-credential-operator pod-identity-webhook 1 N/A 1 117m openshift-cluster-csi-drivers aws-ebs-csi-driver-controller-pdb N/A 1 1 121m openshift-cluster-storage-operator csi-snapshot-controller-pdb N/A 1 1 122m openshift-cluster-storage-operator csi-snapshot-webhook-pdb N/A 1 1 122m openshift-console console N/A 1 1 116m #", "apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 2 selector: 3 matchLabels: name: my-pod", "apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: maxUnavailable: 25% 2 selector: 3 matchLabels: name: my-pod", "oc create -f </path/to/file> -n <project_name>", "apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 selector: matchLabels: name: my-pod unhealthyPodEvictionPolicy: AlwaysAllow 1", "oc create -f pod-disruption-budget.yaml", "apiVersion: v1 kind: Pod metadata: name: my-pdb spec: template: metadata: name: critical-pod priorityClassName: system-cluster-critical 1", "oc create -f <file-name>.yaml", "oc autoscale deployment/hello-node --min=5 --max=7 --cpu-percent=75", "horizontalpodautoscaler.autoscaling/hello-node autoscaled", "apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: hello-node namespace: default spec: maxReplicas: 7 minReplicas: 3 scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: 
hello-node targetCPUUtilizationPercentage: 75 status: currentReplicas: 5 desiredReplicas: 0", "oc get deployment hello-node", "NAME REVISION DESIRED CURRENT TRIGGERED BY hello-node 1 5 5 config", "type: Resource resource: name: cpu target: type: Utilization averageUtilization: 60", "behavior: scaleDown: stabilizationWindowSeconds: 300", "apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory namespace: default spec: behavior: scaleDown: 1 policies: 2 - type: Pods 3 value: 4 4 periodSeconds: 60 5 - type: Percent value: 10 6 periodSeconds: 60 selectPolicy: Min 7 stabilizationWindowSeconds: 300 8 scaleUp: 9 policies: - type: Pods value: 5 10 periodSeconds: 70 - type: Percent value: 12 11 periodSeconds: 80 selectPolicy: Max stabilizationWindowSeconds: 0", "apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory namespace: default spec: minReplicas: 20 behavior: scaleDown: stabilizationWindowSeconds: 300 policies: - type: Pods value: 4 periodSeconds: 30 - type: Percent value: 10 periodSeconds: 60 selectPolicy: Max scaleUp: selectPolicy: Disabled", "oc edit hpa hpa-resource-metrics-memory", "apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: annotations: autoscaling.alpha.kubernetes.io/behavior: '{\"ScaleUp\":{\"StabilizationWindowSeconds\":0,\"SelectPolicy\":\"Max\",\"Policies\":[{\"Type\":\"Pods\",\"Value\":4,\"PeriodSeconds\":15},{\"Type\":\"Percent\",\"Value\":100,\"PeriodSeconds\":15}]}, \"ScaleDown\":{\"StabilizationWindowSeconds\":300,\"SelectPolicy\":\"Min\",\"Policies\":[{\"Type\":\"Pods\",\"Value\":4,\"PeriodSeconds\":60},{\"Type\":\"Percent\",\"Value\":10,\"PeriodSeconds\":60}]}}'", "oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal", "Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none>", "oc autoscale <object_type>/<name> \\ 1 --min <number> \\ 2 --max <number> \\ 3 --cpu-percent=<percent> 4", "oc autoscale deployment/hello-node --min=5 --max=7 --cpu-percent=75", "apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: cpu-autoscale 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: cpu 9 target: type: AverageValue 10 averageValue: 500m 11", "oc create -f <file-name>.yaml", "oc get hpa cpu-autoscale", "NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE cpu-autoscale Deployment/example 173m/500m 1 10 1 20m", "oc describe PodMetrics openshift-kube-scheduler-ip-10-0-129-223.compute.internal -n openshift-kube-scheduler", "Name: openshift-kube-scheduler-ip-10-0-129-223.compute.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Cpu: 0 Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2020-02-14T22:21:14Z Self Link: 
/apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-129-223.compute.internal Timestamp: 2020-02-14T22:21:14Z Window: 5m0s Events: <none>", "apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: memory 9 target: type: AverageValue 10 averageValue: 500Mi 11 behavior: 12 scaleDown: stabilizationWindowSeconds: 300 policies: - type: Pods value: 4 periodSeconds: 60 - type: Percent value: 10 periodSeconds: 60 selectPolicy: Max", "apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: memory-autoscale 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: memory 9 target: type: Utilization 10 averageUtilization: 50 11 behavior: 12 scaleUp: stabilizationWindowSeconds: 180 policies: - type: Pods value: 6 periodSeconds: 120 - type: Percent value: 10 periodSeconds: 120 selectPolicy: Max", "oc create -f <file-name>.yaml", "oc create -f hpa.yaml", "horizontalpodautoscaler.autoscaling/hpa-resource-metrics-memory created", "oc get hpa hpa-resource-metrics-memory", "NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE hpa-resource-metrics-memory Deployment/example 2441216/500Mi 1 10 1 20m", "oc describe hpa hpa-resource-metrics-memory", "Name: hpa-resource-metrics-memory Namespace: default Labels: <none> Annotations: <none> CreationTimestamp: Wed, 04 Mar 2020 16:31:37 +0530 Reference: Deployment/example Metrics: ( current / target ) resource memory on pods: 2441216 / 500Mi Min replicas: 1 Max replicas: 10 ReplicationController pods: 1 current / 1 desired Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale recommended size matches current size ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from memory resource ScalingLimited False DesiredWithinRange the desired count is within the acceptable range Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulRescale 6m34s horizontal-pod-autoscaler New size: 1; reason: All metrics below target", "oc describe hpa cm-test", "Name: cm-test Namespace: prom Labels: <none> Annotations: <none> CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000 Reference: ReplicationController/cm-test Metrics: ( current / target ) \"http_requests\" on pods: 66m / 500m Min replicas: 1 Max replicas: 4 ReplicationController pods: 1 current / 1 desired Conditions: 1 Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range Events:", "Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale False FailedGetScale the HPA controller was unable to get the target's current scale: no matches for kind \"ReplicationController\" in group \"apps\" Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedGetScale 6s (x3 over 36s) horizontal-pod-autoscaler no matches for kind \"ReplicationController\" in group 
\"apps\"", "Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API", "Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range", "oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal", "Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none>", "oc describe hpa <pod-name>", "oc describe hpa cm-test", "Name: cm-test Namespace: prom Labels: <none> Annotations: <none> CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000 Reference: ReplicationController/cm-test Metrics: ( current / target ) \"http_requests\" on pods: 66m / 500m Min replicas: 1 Max replicas: 4 ReplicationController pods: 1 current / 1 desired Conditions: 1 Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range", "oc get all -n openshift-vertical-pod-autoscaler", "NAME READY STATUS RESTARTS AGE pod/vertical-pod-autoscaler-operator-85b4569c47-2gmhc 1/1 Running 0 3m13s pod/vpa-admission-plugin-default-67644fc87f-xq7k9 1/1 Running 0 2m56s pod/vpa-recommender-default-7c54764b59-8gckt 1/1 Running 0 2m56s pod/vpa-updater-default-7f6cc87858-47vw9 1/1 Running 0 2m56s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/vpa-webhook ClusterIP 172.30.53.206 <none> 443/TCP 2m56s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/vertical-pod-autoscaler-operator 1/1 1 1 3m13s deployment.apps/vpa-admission-plugin-default 1/1 1 1 2m56s deployment.apps/vpa-recommender-default 1/1 1 1 2m56s deployment.apps/vpa-updater-default 1/1 1 1 2m56s NAME DESIRED CURRENT READY AGE replicaset.apps/vertical-pod-autoscaler-operator-85b4569c47 1 1 1 3m13s replicaset.apps/vpa-admission-plugin-default-67644fc87f 1 1 1 2m56s replicaset.apps/vpa-recommender-default-7c54764b59 1 1 1 2m56s replicaset.apps/vpa-updater-default-7f6cc87858 1 1 1 2m56s", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES vertical-pod-autoscaler-operator-6c75fcc9cd-5pb6z 1/1 Running 0 7m59s 10.128.2.24 c416-tfsbj-master-1 <none> <none> vpa-admission-plugin-default-6cb78d6f8b-rpcrj 1/1 Running 0 5m37s 10.129.2.22 c416-tfsbj-master-1 <none> <none> vpa-recommender-default-66846bd94c-dsmpp 1/1 Running 0 5m37s 10.129.2.20 
c416-tfsbj-master-0 <none> <none> vpa-updater-default-db8b58df-2nkvf 1/1 Running 0 5m37s 10.129.2.21 c416-tfsbj-master-1 <none> <none>", "oc edit Subscription vertical-pod-autoscaler -n openshift-vertical-pod-autoscaler", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler: \"\" name: vertical-pod-autoscaler spec: config: nodeSelector: node-role.kubernetes.io/<node_role>: \"\" 1", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler: \"\" name: vertical-pod-autoscaler spec: config: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: 1 - key: \"node-role.kubernetes.io/infra\" operator: \"Exists\" effect: \"NoSchedule\"", "oc edit VerticalPodAutoscalerController default -n openshift-vertical-pod-autoscaler", "apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler spec: deploymentOverrides: admission: container: resources: {} nodeSelector: node-role.kubernetes.io/<node_role>: \"\" 1 recommender: container: resources: {} nodeSelector: node-role.kubernetes.io/<node_role>: \"\" 2 updater: container: resources: {} nodeSelector: node-role.kubernetes.io/<node_role>: \"\" 3", "apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler spec: deploymentOverrides: admission: container: resources: {} nodeSelector: node-role.kubernetes.io/worker: \"\" tolerations: 1 - key: \"my-example-node-taint-key\" operator: \"Exists\" effect: \"NoSchedule\" recommender: container: resources: {} nodeSelector: node-role.kubernetes.io/worker: \"\" tolerations: 2 - key: \"my-example-node-taint-key\" operator: \"Exists\" effect: \"NoSchedule\" updater: container: resources: {} nodeSelector: node-role.kubernetes.io/worker: \"\" tolerations: 3 - key: \"my-example-node-taint-key\" operator: \"Exists\" effect: \"NoSchedule\"", "oc get pods -n openshift-vertical-pod-autoscaler -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES vertical-pod-autoscaler-operator-6c75fcc9cd-5pb6z 1/1 Running 0 7m59s 10.128.2.24 c416-tfsbj-infra-eastus3-2bndt <none> <none> vpa-admission-plugin-default-6cb78d6f8b-rpcrj 1/1 Running 0 5m37s 10.129.2.22 c416-tfsbj-infra-eastus1-lrgj8 <none> <none> vpa-recommender-default-66846bd94c-dsmpp 1/1 Running 0 5m37s 10.129.2.20 c416-tfsbj-infra-eastus1-lrgj8 <none> <none> vpa-updater-default-db8b58df-2nkvf 1/1 Running 0 5m37s 10.129.2.21 c416-tfsbj-infra-eastus1-lrgj8 <none> <none>", "resources: limits: cpu: 1 memory: 500Mi requests: cpu: 500m memory: 100Mi", "resources: limits: cpu: 50m memory: 1250Mi requests: cpu: 25m memory: 262144k", "oc get vpa <vpa-name> --output yaml", "status: recommendation: containerRecommendations: - containerName: frontend lowerBound: cpu: 25m memory: 262144k target: cpu: 25m memory: 262144k uncappedTarget: cpu: 25m memory: 262144k upperBound: cpu: 262m memory: \"274357142\" - containerName: backend lowerBound: cpu: 12m memory: 131072k target: cpu: 12m memory: 131072k uncappedTarget: cpu: 12m memory: 131072k upperBound: cpu: 476m memory: \"498558823\"", "apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: creationTimestamp: \"2021-04-21T19:29:49Z\" generation: 2 name: default namespace: openshift-vertical-pod-autoscaler 
resourceVersion: \"142172\" uid: 180e17e9-03cc-427f-9955-3b4d7aeb2d59 spec: minReplicas: 3 1 podMinCPUMillicores: 25 podMinMemoryMb: 250 recommendationOnly: false safetyMarginFraction: 0.15", "apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Auto\" 3", "apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Initial\" 3", "apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Off\" 3", "oc get vpa <vpa-name> --output yaml", "apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Auto\" 3 resourcePolicy: 4 containerPolicies: - containerName: my-opt-sidecar mode: \"Off\"", "spec: containers: - name: frontend resources: limits: cpu: 1 memory: 500Mi requests: cpu: 500m memory: 100Mi - name: backend resources: limits: cpu: \"1\" memory: 500Mi requests: cpu: 500m memory: 100Mi", "spec: containers: name: frontend resources: limits: cpu: 50m memory: 1250Mi requests: cpu: 25m memory: 262144k name: backend resources: limits: cpu: \"1\" memory: 500Mi requests: cpu: 500m memory: 100Mi", "apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler spec: deploymentOverrides: admission: 1 container: args: 2 - '--kube-api-qps=50.0' - '--kube-api-burst=100.0' resources: requests: 3 cpu: 40m memory: 150Mi limits: memory: 300Mi recommender: 4 container: args: - '--kube-api-qps=60.0' - '--kube-api-burst=120.0' - '--memory-saver=true' 5 resources: requests: cpu: 75m memory: 275Mi limits: memory: 550Mi updater: 6 container: args: - '--kube-api-qps=60.0' - '--kube-api-burst=120.0' resources: requests: cpu: 80m memory: 350M limits: memory: 700Mi minReplicas: 2 podMinCPUMillicores: 25 podMinMemoryMb: 250 recommendationOnly: false safetyMarginFraction: 0.15", "apiVersion: v1 kind: Pod metadata: name: vpa-updater-default-d65ffb9dc-hgw44 namespace: openshift-vertical-pod-autoscaler spec: containers: - args: - --logtostderr - --v=1 - --min-replicas=2 - --kube-api-qps=60.0 - --kube-api-burst=120.0 resources: requests: cpu: 80m memory: 350M", "apiVersion: v1 kind: Pod metadata: name: vpa-admission-plugin-default-756999448c-l7tsd namespace: openshift-vertical-pod-autoscaler spec: containers: - args: - --logtostderr - --v=1 - --tls-cert-file=/data/tls-certs/tls.crt - --tls-private-key=/data/tls-certs/tls.key - --client-ca-file=/data/tls-ca-certs/service-ca.crt - --webhook-timeout-seconds=10 - --kube-api-qps=50.0 - --kube-api-burst=100.0 resources: requests: cpu: 40m memory: 150Mi", "apiVersion: v1 kind: Pod metadata: name: vpa-recommender-default-74c979dbbc-znrd2 namespace: openshift-vertical-pod-autoscaler spec: containers: - args: - --logtostderr - --v=1 - --recommendation-margin-fraction=0.15 - --pod-recommendation-min-cpu-millicores=25 - --pod-recommendation-min-memory-mb=250 - --kube-api-qps=60.0 - --kube-api-burst=120.0 - --memory-saver=true resources: requests: cpu: 75m memory: 275Mi", "apiVersion: v1 1 kind: ServiceAccount metadata: name: 
alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 2 kind: ClusterRoleBinding metadata: name: system:example-metrics-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:metrics-reader subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 3 kind: ClusterRoleBinding metadata: name: system:example-vpa-actor roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:vpa-actor subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 4 kind: ClusterRoleBinding metadata: name: system:example-vpa-target-reader-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:vpa-target-reader subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name>", "apiVersion: apps/v1 kind: Deployment metadata: name: alt-vpa-recommender namespace: <namespace_name> spec: replicas: 1 selector: matchLabels: app: alt-vpa-recommender template: metadata: labels: app: alt-vpa-recommender spec: containers: 1 - name: recommender image: quay.io/example/alt-recommender:latest 2 imagePullPolicy: Always resources: limits: cpu: 200m memory: 1000Mi requests: cpu: 50m memory: 500Mi ports: - name: prometheus containerPort: 8942 securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL seccompProfile: type: RuntimeDefault serviceAccountName: alt-vpa-recommender-sa 3 securityContext: runAsNonRoot: true", "oc get pods", "NAME READY STATUS RESTARTS AGE frontend-845d5478d-558zf 1/1 Running 0 4m25s frontend-845d5478d-7z9gx 1/1 Running 0 4m25s frontend-845d5478d-b7l4j 1/1 Running 0 4m25s vpa-alt-recommender-55878867f9-6tp5v 1/1 Running 0 9s", "apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender namespace: <namespace_name> spec: recommenders: - name: alt-vpa-recommender 1 targetRef: apiVersion: \"apps/v1\" kind: Deployment 2 name: frontend", "apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Auto\" 3 resourcePolicy: 4 containerPolicies: - containerName: my-opt-sidecar mode: \"Off\" recommenders: 5 - name: my-recommender", "oc create -f <file-name>.yaml", "oc get vpa <vpa-name> --output yaml", "status: recommendation: containerRecommendations: - containerName: frontend lowerBound: 1 cpu: 25m memory: 262144k target: 2 cpu: 25m memory: 262144k uncappedTarget: 3 cpu: 25m memory: 262144k upperBound: 4 cpu: 262m memory: \"274357142\" - containerName: backend lowerBound: cpu: 12m memory: 131072k target: cpu: 12m memory: 131072k uncappedTarget: cpu: 12m memory: 131072k upperBound: cpu: 476m memory: \"498558823\"", "apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: scalablepods.testing.openshift.io spec: group: testing.openshift.io versions: - name: v1 served: true storage: true schema: openAPIV3Schema: type: object properties: spec: type: object properties: replicas: type: integer minimum: 0 selector: type: string status: type: object properties: replicas: type: integer subresources: status: {} scale: specReplicasPath: .spec.replicas statusReplicasPath: .status.replicas labelSelectorPath: .spec.selector 1 scope: Namespaced names: plural: scalablepods singular: scalablepod kind: ScalablePod shortNames: - spod", 
"apiVersion: testing.openshift.io/v1 kind: ScalablePod metadata: name: scalable-cr namespace: default spec: selector: \"app=scalable-cr\" 1 replicas: 1", "apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: scalable-cr namespace: default spec: targetRef: apiVersion: testing.openshift.io/v1 kind: ScalablePod name: scalable-cr updatePolicy: updateMode: \"Auto\"", "oc delete namespace openshift-vertical-pod-autoscaler", "oc delete crd verticalpodautoscalercheckpoints.autoscaling.k8s.io", "oc delete crd verticalpodautoscalercontrollers.autoscaling.openshift.io", "oc delete crd verticalpodautoscalers.autoscaling.k8s.io", "oc delete MutatingWebhookConfiguration vpa-webhook-config", "oc delete operator/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler", "apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5", "apiVersion: v1 kind: Secret metadata: name: test-secret type: Opaque 1 data: 2 username: <username> password: <password> stringData: 3 hostname: myapp.mydomain.com secret.properties: | property1=valueA property2=valueB", "apiVersion: v1 kind: ServiceAccount secrets: - name: test-secret", "apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"cat /etc/secret-volume/*\" ] volumeMounts: 1 - name: secret-volume mountPath: /etc/secret-volume 2 readOnly: true 3 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: secret-volume secret: secretName: test-secret 4 restartPolicy: Never", "apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"export\" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username from: kind: ImageStreamTag namespace: openshift name: 'cli:latest'", "apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password>", "oc create -f <filename>.yaml", "apiVersion: v1 kind: Secret metadata: name: secret-sa-sample annotations: kubernetes.io/service-account.name: \"sa-name\" 1 type: kubernetes.io/service-account-token 2", "oc create -f <filename>.yaml", "apiVersion: v1 kind: Secret metadata: name: secret-basic-auth type: kubernetes.io/basic-auth 1 data: stringData: 2 username: admin password: <password>", "oc create -f <filename>.yaml", "apiVersion: v1 kind: Secret metadata: name: secret-ssh-auth type: kubernetes.io/ssh-auth 1 data: ssh-privatekey: | 2 MIIEpQIBAAKCAQEAulqb/Y", "oc create -f <filename>.yaml", "apiVersion: v1 kind: Secret metadata: name: secret-docker-cfg namespace: my-project type: kubernetes.io/dockerconfig 1 data: .dockerconfig:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2", "apiVersion: v1 kind: Secret metadata: name: secret-docker-json namespace: my-project type: 
kubernetes.io/dockerconfig 1 data: .dockerconfigjson:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2", "oc create -f <filename>.yaml", "apiVersion: v1 kind: Secret metadata: name: example namespace: <namespace> type: Opaque 1 data: username: <base64 encoded username> password: <base64 encoded password> stringData: 2 hostname: myapp.mydomain.com", "oc create sa <service_account_name> -n <your_namespace>", "apiVersion: v1 kind: Secret metadata: name: <secret_name> 1 annotations: kubernetes.io/service-account.name: \"sa-name\" 2 type: kubernetes.io/service-account-token 3", "oc apply -f service-account-token-secret.yaml", "oc get secret <sa_token_secret> -o jsonpath='{.data.token}' | base64 --decode 1", "ayJhbGciOiJSUzI1NiIsImtpZCI6IklOb2dtck1qZ3hCSWpoNnh5YnZhSE9QMkk3YnRZMVZoclFfQTZfRFp1YlUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImJ1aWxkZXItdG9rZW4tdHZrbnIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiYnVpbGRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjNmZGU2MGZmLTA1NGYtNDkyZi04YzhjLTNlZjE0NDk3MmFmNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmJ1aWxkZXIifQ.OmqFTDuMHC_lYvvEUrjr1x453hlEEHYcxS9VKSzmRkP1SiVZWPNPkTWlfNRp6bIUZD3U6aN3N7dMSN0eI5hu36xPgpKTdvuckKLTCnelMx6cxOdAbrcw1mCmOClNscwjS1KO1kzMtYnnq8rXHiMJELsNlhnRyyIXRTtNBsy4t64T3283s3SLsancyx0gy0ujx-Ch3uKAKdZi5iT-I8jnnQ-ds5THDs2h65RJhgglQEmSxpHrLGZFmyHAQI-_SjvmHZPXEc482x3SkaQHNLqpmrpJorNqh1M8ZHKzlujhZgVooMvJmWPXTb2vnvi3DGn2XI-hZxl1yD2yGH1RBpYUHA", "curl -X GET <openshift_cluster_api> --header \"Authorization: Bearer <token>\" 1 2", "apiVersion: v1 kind: Service metadata: name: registry annotations: service.beta.openshift.io/serving-cert-secret-name: registry-cert 1", "kind: Service apiVersion: v1 metadata: name: my-service annotations: service.beta.openshift.io/serving-cert-secret-name: my-cert 1 spec: selector: app: MyApp ports: - protocol: TCP port: 80 targetPort: 9376", "oc create -f <file-name>.yaml", "oc get secrets", "NAME TYPE DATA AGE my-cert kubernetes.io/tls 2 9m", "oc describe secret my-cert", "Name: my-cert Namespace: openshift-console Labels: <none> Annotations: service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z service.beta.openshift.io/originating-service-name: my-service service.beta.openshift.io/originating-service-uid: 640f0ec3-afc2-4380-bf31-a8c784846a11 service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z Type: kubernetes.io/tls Data ==== tls.key: 1679 bytes tls.crt: 2595 bytes", "apiVersion: v1 kind: Pod metadata: name: my-service-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: mypod image: redis volumeMounts: - name: my-container mountPath: \"/etc/my-path\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: my-volume secret: secretName: my-cert items: - key: username path: my-group/my-username mode: 511", "secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60", "oc delete secret <secret_name>", "oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-", "oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num-", "apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: secrets-store.csi.k8s.io spec: managementState: Managed", 
"apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-aws-cluster-role rules: - apiGroups: [\"\"] resources: [\"serviceaccounts/token\"] verbs: [\"create\"] - apiGroups: [\"\"] resources: [\"serviceaccounts\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"pods\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"nodes\"] verbs: [\"get\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-aws-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-aws-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-aws labels: app: csi-secrets-store-provider-aws spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-aws template: metadata: labels: app: csi-secrets-store-provider-aws spec: serviceAccountName: csi-secrets-store-provider-aws hostNetwork: false containers: - name: provider-aws-installer image: public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-50-g5b4aca1-2023.06.09.21.19 imagePullPolicy: Always args: - --provider-volume=/etc/kubernetes/secrets-store-csi-providers resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: privileged: true volumeMounts: - mountPath: \"/etc/kubernetes/secrets-store-csi-providers\" name: providervol - name: mountpoint-dir mountPath: /var/lib/kubelet/pods mountPropagation: HostToContainer tolerations: - operator: Exists volumes: - name: providervol hostPath: path: \"/etc/kubernetes/secrets-store-csi-providers\" - name: mountpoint-dir hostPath: path: /var/lib/kubelet/pods type: DirectoryOrCreate nodeSelector: kubernetes.io/os: linux", "oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-aws -n openshift-cluster-csi-drivers", "oc apply -f aws-provider.yaml", "mkdir credentialsrequest-dir-aws", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: aws-provider-test namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - \"secretsmanager:GetSecretValue\" - \"secretsmanager:DescribeSecret\" effect: Allow resource: \"arn:*:secretsmanager:*:*:secret:testSecret-??????\" secretRef: name: aws-creds namespace: my-namespace serviceAccountNames: - aws-provider", "oc get --raw=/.well-known/openid-configuration | jq -r '.issuer'", "https://<oidc_provider_name>", "ccoctl aws create-iam-roles --name my-role --region=<aws_region> --credentials-requests-dir=credentialsrequest-dir-aws --identity-provider-arn arn:aws:iam::<aws_account>:oidc-provider/<oidc_provider_name> --output-dir=credrequests-ccoctl-output", "2023/05/15 18:10:34 Role arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: credrequests-ccoctl-output/manifests/my-namespace-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role my-role-my-namespace-aws-creds", "oc annotate -n my-namespace sa/aws-provider eks.amazonaws.com/role-arn=\"<aws_role_arn>\"", "apiVersion: 
secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-aws-provider 1 namespace: my-namespace 2 spec: provider: aws 3 parameters: 4 objects: | - objectName: \"testSecret\" objectType: \"secretsmanager\"", "oc create -f secret-provider-class-aws.yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: my-aws-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: serviceAccountName: aws-provider containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-aws-provider\" 3", "oc create -f deployment.yaml", "oc exec my-aws-deployment-<hash> -n my-namespace -- ls /mnt/secrets-store/", "testSecret", "oc exec my-aws-deployment-<hash> -n my-namespace -- cat /mnt/secrets-store/testSecret", "<secret_value>", "apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-aws-cluster-role rules: - apiGroups: [\"\"] resources: [\"serviceaccounts/token\"] verbs: [\"create\"] - apiGroups: [\"\"] resources: [\"serviceaccounts\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"pods\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"nodes\"] verbs: [\"get\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-aws-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-aws-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-aws labels: app: csi-secrets-store-provider-aws spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-aws template: metadata: labels: app: csi-secrets-store-provider-aws spec: serviceAccountName: csi-secrets-store-provider-aws hostNetwork: false containers: - name: provider-aws-installer image: public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-50-g5b4aca1-2023.06.09.21.19 imagePullPolicy: Always args: - --provider-volume=/etc/kubernetes/secrets-store-csi-providers resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: privileged: true volumeMounts: - mountPath: \"/etc/kubernetes/secrets-store-csi-providers\" name: providervol - name: mountpoint-dir mountPath: /var/lib/kubelet/pods mountPropagation: HostToContainer tolerations: - operator: Exists volumes: - name: providervol hostPath: path: \"/etc/kubernetes/secrets-store-csi-providers\" - name: mountpoint-dir hostPath: path: /var/lib/kubelet/pods type: DirectoryOrCreate nodeSelector: kubernetes.io/os: linux", "oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-aws -n openshift-cluster-csi-drivers", "oc apply -f aws-provider.yaml", "mkdir credentialsrequest-dir-aws", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: aws-provider-test namespace: openshift-cloud-credential-operator spec: providerSpec: 
apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - \"ssm:GetParameter\" - \"ssm:GetParameters\" effect: Allow resource: \"arn:*:ssm:*:*:parameter/testParameter*\" secretRef: name: aws-creds namespace: my-namespace serviceAccountNames: - aws-provider", "oc get --raw=/.well-known/openid-configuration | jq -r '.issuer'", "https://<oidc_provider_name>", "ccoctl aws create-iam-roles --name my-role --region=<aws_region> --credentials-requests-dir=credentialsrequest-dir-aws --identity-provider-arn arn:aws:iam::<aws_account>:oidc-provider/<oidc_provider_name> --output-dir=credrequests-ccoctl-output", "2023/05/15 18:10:34 Role arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: credrequests-ccoctl-output/manifests/my-namespace-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role my-role-my-namespace-aws-creds", "oc annotate -n my-namespace sa/aws-provider eks.amazonaws.com/role-arn=\"<aws_role_arn>\"", "apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-aws-provider 1 namespace: my-namespace 2 spec: provider: aws 3 parameters: 4 objects: | - objectName: \"testParameter\" objectType: \"ssmparameter\"", "oc create -f secret-provider-class-aws.yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: my-aws-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: serviceAccountName: aws-provider containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-aws-provider\" 3", "oc create -f deployment.yaml", "oc exec my-aws-deployment-<hash> -n my-namespace -- ls /mnt/secrets-store/", "testParameter", "oc exec my-aws-deployment-<hash> -n my-namespace -- cat /mnt/secrets-store/testSecret", "<secret_value>", "apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-azure namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-azure-cluster-role rules: - apiGroups: [\"\"] resources: [\"serviceaccounts/token\"] verbs: [\"create\"] - apiGroups: [\"\"] resources: [\"serviceaccounts\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"pods\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"nodes\"] verbs: [\"get\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-azure-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-azure-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-azure namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-azure labels: app: csi-secrets-store-provider-azure spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-azure template: metadata: labels: app: csi-secrets-store-provider-azure spec: serviceAccountName: csi-secrets-store-provider-azure hostNetwork: true containers: - name: provider-azure-installer image: 
mcr.microsoft.com/oss/azure/secrets-store/provider-azure:v1.4.1 imagePullPolicy: IfNotPresent args: - --endpoint=unix:///provider/azure.sock - --construct-pem-chain=true - --healthz-port=8989 - --healthz-path=/healthz - --healthz-timeout=5s livenessProbe: httpGet: path: /healthz port: 8989 failureThreshold: 3 initialDelaySeconds: 5 timeoutSeconds: 10 periodSeconds: 30 resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: allowPrivilegeEscalation: false readOnlyRootFilesystem: true runAsUser: 0 capabilities: drop: - ALL volumeMounts: - mountPath: \"/provider\" name: providervol affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: type operator: NotIn values: - virtual-kubelet volumes: - name: providervol hostPath: path: \"/var/run/secrets-store-csi-providers\" tolerations: - operator: Exists nodeSelector: kubernetes.io/os: linux", "oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-azure -n openshift-cluster-csi-drivers", "oc apply -f azure-provider.yaml", "SERVICE_PRINCIPAL_CLIENT_SECRET=\"USD(az ad sp create-for-rbac --name https://USDKEYVAULT_NAME --query 'password' -otsv)\"", "SERVICE_PRINCIPAL_CLIENT_ID=\"USD(az ad sp list --display-name https://USDKEYVAULT_NAME --query '[0].appId' -otsv)\"", "oc create secret generic secrets-store-creds -n my-namespace --from-literal clientid=USD{SERVICE_PRINCIPAL_CLIENT_ID} --from-literal clientsecret=USD{SERVICE_PRINCIPAL_CLIENT_SECRET}", "oc -n my-namespace label secret secrets-store-creds secrets-store.csi.k8s.io/used=true", "apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-azure-provider 1 namespace: my-namespace 2 spec: provider: azure 3 parameters: 4 usePodIdentity: \"false\" useVMManagedIdentity: \"false\" userAssignedIdentityID: \"\" keyvaultName: \"kvname\" objects: | array: - | objectName: secret1 objectType: secret tenantId: \"tid\"", "oc create -f secret-provider-class-azure.yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: my-azure-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-azure-provider\" 3 nodePublishSecretRef: name: secrets-store-creds 4", "oc create -f deployment.yaml", "oc exec my-azure-deployment-<hash> -n my-namespace -- ls /mnt/secrets-store/", "secret1", "oc exec my-azure-deployment-<hash> -n my-namespace -- cat /mnt/secrets-store/secret1", "my-secret-value", "apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-gcp namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-gcp-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-gcp-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-gcp namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-gcp-role rules: - apiGroups: - \"\" resources: - serviceaccounts/token verbs: - create - apiGroups: - 
\"\" resources: - serviceaccounts verbs: - get --- apiVersion: apps/v1 kind: DaemonSet metadata: name: csi-secrets-store-provider-gcp namespace: openshift-cluster-csi-drivers labels: app: csi-secrets-store-provider-gcp spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-gcp template: metadata: labels: app: csi-secrets-store-provider-gcp spec: serviceAccountName: csi-secrets-store-provider-gcp initContainers: - name: chown-provider-mount image: busybox command: - chown - \"1000:1000\" - /etc/kubernetes/secrets-store-csi-providers volumeMounts: - mountPath: \"/etc/kubernetes/secrets-store-csi-providers\" name: providervol securityContext: privileged: true hostNetwork: false hostPID: false hostIPC: false containers: - name: provider image: us-docker.pkg.dev/secretmanager-csi/secrets-store-csi-driver-provider-gcp/plugin@sha256:a493a78bbb4ebce5f5de15acdccc6f4d19486eae9aa4fa529bb60ac112dd6650 securityContext: privileged: true imagePullPolicy: IfNotPresent resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi env: - name: TARGET_DIR value: \"/etc/kubernetes/secrets-store-csi-providers\" volumeMounts: - mountPath: \"/etc/kubernetes/secrets-store-csi-providers\" name: providervol mountPropagation: None readOnly: false livenessProbe: failureThreshold: 3 httpGet: path: /live port: 8095 initialDelaySeconds: 5 timeoutSeconds: 10 periodSeconds: 30 volumes: - name: providervol hostPath: path: /etc/kubernetes/secrets-store-csi-providers tolerations: - key: kubernetes.io/arch operator: Equal value: amd64 effect: NoSchedule nodeSelector: kubernetes.io/os: linux", "oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-gcp -n openshift-cluster-csi-drivers", "oc apply -f gcp-provider.yaml", "oc new-project my-namespace", "oc label ns my-namespace security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged --overwrite", "oc create serviceaccount my-service-account --namespace=my-namespace", "oc create secret generic secrets-store-creds -n my-namespace --from-file=key.json 1", "oc -n my-namespace label secret secrets-store-creds secrets-store.csi.k8s.io/used=true", "apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-gcp-provider 1 namespace: my-namespace 2 spec: provider: gcp 3 parameters: 4 secrets: | - resourceName: \"projects/my-project/secrets/testsecret1/versions/1\" path: \"testsecret1.txt\"", "oc create -f secret-provider-class-gcp.yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: my-gcp-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: serviceAccountName: my-service-account 3 containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-gcp-provider\" 4 nodePublishSecretRef: name: secrets-store-creds 5", "oc create -f deployment.yaml", "oc exec my-gcp-deployment-<hash> -n my-namespace -- ls /mnt/secrets-store/", "testsecret1", "oc exec my-gcp-deployment-<hash> -n my-namespace -- cat /mnt/secrets-store/testsecret1", "<secret_value>", "helm repo add hashicorp 
https://helm.releases.hashicorp.com", "helm repo update", "oc new-project vault", "oc label ns vault security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged --overwrite", "oc adm policy add-scc-to-user privileged -z vault -n vault", "oc adm policy add-scc-to-user privileged -z vault-csi-provider -n vault", "helm install vault hashicorp/vault --namespace=vault --set \"server.dev.enabled=true\" --set \"injector.enabled=false\" --set \"csi.enabled=true\" --set \"global.openshift=true\" --set \"injector.agentImage.repository=docker.io/hashicorp/vault\" --set \"server.image.repository=docker.io/hashicorp/vault\" --set \"csi.image.repository=docker.io/hashicorp/vault-csi-provider\" --set \"csi.agent.image.repository=docker.io/hashicorp/vault\" --set \"csi.daemonSet.providersDir=/var/run/secrets-store-csi-providers\"", "oc patch daemonset -n vault vault-csi-provider --type='json' -p='[{\"op\": \"add\", \"path\": \"/spec/template/spec/containers/0/securityContext\", \"value\": {\"privileged\": true} }]'", "oc get pods -n vault", "NAME READY STATUS RESTARTS AGE vault-0 1/1 Running 0 24m vault-csi-provider-87rgw 1/2 Running 0 5s vault-csi-provider-bd6hp 1/2 Running 0 4s vault-csi-provider-smlv7 1/2 Running 0 5s", "oc exec vault-0 --namespace=vault -- vault kv put secret/example1 testSecret1=my-secret-value", "oc exec vault-0 --namespace=vault -- vault kv get secret/example1", "= Secret Path = secret/data/example1 ======= Metadata ======= Key Value --- ----- created_time 2024-04-05T07:05:16.713911211Z custom_metadata <nil> deletion_time n/a destroyed false version 1 === Data === Key Value --- ----- testSecret1 my-secret-value", "oc exec vault-0 --namespace=vault -- vault auth enable kubernetes", "Success! Enabled kubernetes auth method at: kubernetes/", "TOKEN_REVIEWER_JWT=\"USD(oc exec vault-0 --namespace=vault -- cat /var/run/secrets/kubernetes.io/serviceaccount/token)\"", "KUBERNETES_SERVICE_IP=\"USD(oc get svc kubernetes --namespace=default -o go-template=\"{{ .spec.clusterIP }}\")\"", "oc exec -i vault-0 --namespace=vault -- vault write auth/kubernetes/config issuer=\"https://kubernetes.default.svc.cluster.local\" token_reviewer_jwt=\"USD{TOKEN_REVIEWER_JWT}\" kubernetes_host=\"https://USD{KUBERNETES_SERVICE_IP}:443\" kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt", "Success! Data written to: auth/kubernetes/config", "oc exec -i vault-0 --namespace=vault -- vault policy write csi -<<EOF path \"secret/data/*\" { capabilities = [\"read\"] } EOF", "Success! Uploaded policy: csi", "oc exec -i vault-0 --namespace=vault -- vault write auth/kubernetes/role/csi bound_service_account_names=default bound_service_account_namespaces=default,test-ns,negative-test-ns,my-namespace policies=csi ttl=20m", "Success! 
Data written to: auth/kubernetes/role/csi", "oc get pods -n vault", "NAME READY STATUS RESTARTS AGE vault-0 1/1 Running 0 43m vault-csi-provider-87rgw 2/2 Running 0 19m vault-csi-provider-bd6hp 2/2 Running 0 19m vault-csi-provider-smlv7 2/2 Running 0 19m", "oc get pods -n openshift-cluster-csi-drivers | grep -E \"secrets\"", "secrets-store-csi-driver-node-46d2g 3/3 Running 0 45m secrets-store-csi-driver-node-d2jjn 3/3 Running 0 45m secrets-store-csi-driver-node-drmt4 3/3 Running 0 45m secrets-store-csi-driver-node-j2wlt 3/3 Running 0 45m secrets-store-csi-driver-node-v9xv4 3/3 Running 0 45m secrets-store-csi-driver-node-vlz28 3/3 Running 0 45m secrets-store-csi-driver-operator-84bd699478-fpxrw 1/1 Running 0 47m", "apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-vault-provider 1 namespace: my-namespace 2 spec: provider: vault 3 parameters: 4 roleName: \"csi\" vaultAddress: \"http://vault.vault:8200\" objects: | - secretPath: \"secret/data/example1\" objectName: \"testSecret1\" secretKey: \"testSecret1", "oc create -f secret-provider-class-vault.yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: busybox-deployment 1 namespace: my-namespace 2 labels: app: busybox spec: replicas: 1 selector: matchLabels: app: busybox template: metadata: labels: app: busybox spec: terminationGracePeriodSeconds: 0 containers: - image: registry.k8s.io/e2e-test-images/busybox:1.29-4 name: busybox imagePullPolicy: IfNotPresent command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-vault-provider\" 3", "oc create -f deployment.yaml", "oc exec busybox-deployment-<hash> -n my-namespace -- ls /mnt/secrets-store/", "testSecret1", "oc exec busybox-deployment-<hash> -n my-namespace -- cat /mnt/secrets-store/testSecret1", "my-secret-value", "oc edit secretproviderclass my-azure-provider 1", "apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-azure-provider namespace: my-namespace spec: provider: azure secretObjects: 1 - secretName: tlssecret 2 type: kubernetes.io/tls 3 labels: environment: \"test\" data: - objectName: tlskey 4 key: tls.key 5 - objectName: tlscrt key: tls.crt parameters: usePodIdentity: \"false\" keyvaultName: \"kvname\" objects: | array: - | objectName: tlskey objectType: secret - | objectName: tlscrt objectType: secret tenantId: \"tid\"", "oc get secretproviderclasspodstatus <secret_provider_class_pod_status_name> -o yaml 1", "status: mounted: true objects: - id: secret/tlscrt version: f352293b97da4fa18d96a9528534cb33 - id: secret/tlskey version: 02534bc3d5df481cb138f8b2a13951ef podName: busybox-<hash> secretProviderClassName: my-azure-provider targetPath: /var/lib/kubelet/pods/f0d49c1e-c87a-4beb-888f-37798456a3e7/volumes/kubernetes.io~csi/secrets-store-inline/mount", "oc create serviceaccount <service_account_name>", "oc patch serviceaccount <service_account_name> -p '{\"metadata\": {\"annotations\": {\"cloud.google.com/workload-identity-provider\": \"projects/<project_number>/locations/global/workloadIdentityPools/<identity_pool>/providers/<identity_provider>\"}}}'", "oc patch serviceaccount <service_account_name> -p '{\"metadata\": {\"annotations\": {\"cloud.google.com/service-account-email\": \"<service_account_email>\"}}}'", "oc patch serviceaccount <service_account_name> -p '{\"metadata\": {\"annotations\": 
{\"cloud.google.com/injection-mode\": \"direct\"}}}'", "gcloud projects add-iam-policy-binding <project_id> --member \"<service_account_email>\" --role \"projects/<project_id>/roles/<role_for_workload_permissions>\"", "oc get serviceaccount <service_account_name>", "apiVersion: v1 kind: ServiceAccount metadata: name: app-x namespace: service-a annotations: cloud.google.com/workload-identity-provider: \"projects/<project_number>/locations/global/workloadIdentityPools/<identity_pool>/providers/<identity_provider>\" 1 cloud.google.com/service-account-email: \"[email protected]\" cloud.google.com/audience: \"sts.googleapis.com\" 2 cloud.google.com/token-expiration: \"86400\" 3 cloud.google.com/gcloud-run-as-user: \"1000\" cloud.google.com/injection-mode: \"direct\" 4", "apiVersion: apps/v1 kind: Deployment metadata: name: ubi9 spec: replicas: 1 selector: matchLabels: app: ubi9 template: metadata: labels: app: ubi9 spec: serviceAccountName: \"<service_account_name>\" 1 containers: - name: ubi image: 'registry.access.redhat.com/ubi9/ubi-micro:latest' command: - /bin/sh - '-c' - | sleep infinity", "oc apply -f deployment.yaml", "oc get pods -o json | jq -r '.items[0].spec.containers[0].env[] | select(.name==\"GOOGLE_APPLICATION_CREDENTIALS\")'", "{ \"name\": \"GOOGLE_APPLICATION_CREDENTIALS\", \"value\": \"/var/run/secrets/workload-identity/federation.json\" }", "apiVersion: v1 kind: Pod metadata: name: app-x-pod namespace: service-a annotations: cloud.google.com/skip-containers: \"init-first,sidecar\" cloud.google.com/external-credentials-json: |- 1 { \"type\": \"external_account\", \"audience\": \"//iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/on-prem-kubernetes/providers/<identity_provider>\", \"subject_token_type\": \"urn:ietf:params:oauth:token-type:jwt\", \"token_url\": \"https://sts.googleapis.com/v1/token\", \"service_account_impersonation_url\": \"https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/[email protected]:generateAccessToken\", \"credential_source\": { \"file\": \"/var/run/secrets/sts.googleapis.com/serviceaccount/token\", \"format\": { \"type\": \"text\" } } } spec: serviceAccountName: app-x initContainers: - name: init-first image: container-image:version containers: - name: sidecar image: container-image:version - name: container-name image: container-image:version env: 2 - name: GOOGLE_APPLICATION_CREDENTIALS value: /var/run/secrets/gcloud/config/federation.json - name: CLOUDSDK_COMPUTE_REGION value: asia-northeast1 volumeMounts: - name: gcp-iam-token readOnly: true mountPath: /var/run/secrets/sts.googleapis.com/serviceaccount - mountPath: /var/run/secrets/gcloud/config name: external-credential-config readOnly: true volumes: - name: gcp-iam-token projected: sources: - serviceAccountToken: audience: sts.googleapis.com expirationSeconds: 86400 path: token - downwardAPI: defaultMode: 288 items: - fieldRef: apiVersion: v1 fieldPath: metadata.annotations['cloud.google.com/external-credentials-json'] path: federation.json name: external-credential-config", "kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: my-namespace data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2", "oc create configmap <configmap_name> [options]", "oc create configmap game-config --from-file=example-files/", "oc describe configmaps game-config", "Name: game-config 
Namespace: default Labels: <none> Annotations: <none> Data game.properties: 158 bytes ui.properties: 83 bytes", "cat example-files/game.properties", "enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30", "cat example-files/ui.properties", "color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice", "oc create configmap game-config --from-file=example-files/", "oc get configmaps game-config -o yaml", "apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:34:05Z name: game-config namespace: default resourceVersion: \"407\" selflink: /api/v1/namespaces/default/configmaps/game-config uid: 30944725-d66e-11e5-8cd0-68f728db1985", "oc create configmap game-config-3 --from-file=game-special-key=example-files/game.properties", "cat example-files/game.properties", "enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30", "cat example-files/ui.properties", "color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice", "oc create configmap game-config-2 --from-file=example-files/game.properties --from-file=example-files/ui.properties", "oc create configmap game-config-3 --from-file=game-special-key=example-files/game.properties", "oc get configmaps game-config-2 -o yaml", "apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:52:05Z name: game-config-2 namespace: default resourceVersion: \"516\" selflink: /api/v1/namespaces/default/configmaps/game-config-2 uid: b4952dc3-d670-11e5-8cd0-68f728db1985", "oc get configmaps game-config-3 -o yaml", "apiVersion: v1 data: game-special-key: |- 1 enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:54:22Z name: game-config-3 namespace: default resourceVersion: \"530\" selflink: /api/v1/namespaces/default/configmaps/game-config-3 uid: 05f8da22-d671-11e5-8cd0-68f728db1985", "oc create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm", "oc get configmaps special-config -o yaml", "apiVersion: v1 data: special.how: very special.type: charm kind: ConfigMap metadata: creationTimestamp: 2016-02-18T19:14:38Z name: special-config namespace: default resourceVersion: \"651\" selflink: /api/v1/namespaces/default/configmaps/special-config uid: dadce046-d673-11e5-8cd0-68f728db1985", "apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4", "apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2", "apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true 
seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never", "SPECIAL_LEVEL_KEY=very log_level=INFO", "apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm", "apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"echo USD(SPECIAL_LEVEL_KEY) USD(SPECIAL_TYPE_KEY)\" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never", "very charm", "apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm", "apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/special.how\" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never", "very", "apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/path/to/special-key\" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never", "very", "service DevicePlugin { // GetDevicePluginOptions returns options to be communicated with Device // Manager rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} // ListAndWatch returns a stream of List of Devices // Whenever a Device state change or a Device disappears, ListAndWatch // returns the new list rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} // Allocate is called during container creation so that the Device // Plug-in can run device specific operations and instruct Kubelet // of the steps to make the Device available in the container rpc Allocate(AllocateRequest) returns (AllocateResponse) {} // PreStartcontainer is called, if indicated by Device Plug-in during // registration phase, before each container start. 
Device plug-in // can run device specific operations such as resetting the device // before making devices available to the container rpc PreStartcontainer(PreStartcontainerRequest) returns (PreStartcontainerResponse) {} }", "oc describe machineconfig <name>", "oc describe machineconfig 00-worker", "Name: 00-worker Namespace: Labels: machineconfiguration.openshift.io/role=worker 1", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: devicemgr 1 spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io: devicemgr 2 kubeletConfig: feature-gates: - DevicePlugins=true 3", "oc create -f devicemgr.yaml", "kubeletconfig.machineconfiguration.openshift.io/devicemgr created", "oc get priorityclasses", "NAME VALUE GLOBAL-DEFAULT AGE system-node-critical 2000001000 false 72m system-cluster-critical 2000000000 false 72m openshift-user-critical 1000000000 false 3d13h cluster-logging 1000000 false 29s", "apiVersion: scheduling.k8s.io/v1 kind: PriorityClass metadata: name: high-priority 1 value: 1000000 2 preemptionPolicy: PreemptLowerPriority 3 globalDefault: false 4 description: \"This priority class should be used for XYZ service pods only.\" 5", "oc create -f <file-name>.yaml", "apiVersion: v1 kind: Pod metadata: name: nginx labels: env: test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: nginx image: nginx imagePullPolicy: IfNotPresent securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] priorityClassName: high-priority 1", "oc create -f <file-name>.yaml", "oc describe pod router-default-66d5cf9464-7pwkc", "kind: Pod apiVersion: v1 metadata: Name: router-default-66d5cf9464-7pwkc Namespace: openshift-ingress Controlled By: ReplicaSet/router-default-66d5cf9464", "apiVersion: v1 kind: Pod metadata: name: router-default-66d5cf9464-7pwkc ownerReferences: - apiVersion: apps/v1 kind: ReplicaSet name: router-default-66d5cf9464 uid: d81dd094-da26-11e9-a48a-128e7edf0312 controller: true blockOwnerDeletion: true", "oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api", "oc patch MachineSet abc612-msrtw-worker-us-east-1c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: xf2bd-infra-us-east-2a namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"", "oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node", "oc label nodes <name> <key>=<value>", "oc label nodes ip-10-0-142-25.ec2.internal type=user-node region=east", "kind: Node apiVersion: v1 metadata: name: hello-node-6fbccf8d9 labels: type: \"user-node\" region: \"east\"", "oc get nodes -l type=user-node,region=east", "NAME STATUS ROLES AGE VERSION ip-10-0-142-25.ec2.internal Ready worker 17m v1.31.3", "kind: ReplicaSet apiVersion: apps/v1 metadata: name: hello-node-6fbccf8d9 spec: template: metadata: creationTimestamp: null labels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default pod-template-hash: 66d5cf9464 spec: nodeSelector: kubernetes.io/os: linux 
node-role.kubernetes.io/worker: '' type: user-node 1", "apiVersion: v1 kind: Pod metadata: name: hello-node-6fbccf8d9 spec: nodeSelector: region: east type: user-node", "oc get pods -n openshift-run-once-duration-override-operator", "NAME READY STATUS RESTARTS AGE run-once-duration-override-operator-7b88c676f6-lcxgc 1/1 Running 0 7m46s runoncedurationoverride-62blp 1/1 Running 0 41s runoncedurationoverride-h8h8b 1/1 Running 0 41s runoncedurationoverride-tdsqk 1/1 Running 0 41s", "oc label namespace <namespace> \\ 1 runoncedurationoverrides.admission.runoncedurationoverride.openshift.io/enabled=true", "apiVersion: v1 kind: Pod metadata: name: example namespace: <namespace> 1 spec: restartPolicy: Never 2 securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: busybox securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] image: busybox:1.25 command: - /bin/sh - -ec - | while sleep 5; do date; done", "oc get pods -n <namespace> -o yaml | grep activeDeadlineSeconds", "activeDeadlineSeconds: 3600", "oc edit runoncedurationoverride cluster", "apiVersion: operator.openshift.io/v1 kind: RunOnceDurationOverride metadata: spec: runOnceDurationOverride: spec: activeDeadlineSeconds: 1800 1", "oc edit featuregate cluster", "apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster spec: featureSet: TechPreviewNoUpgrade 1", "apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-worker spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 containerRuntimeConfig: defaultRuntime: crun 2", "oc edit ns/<namespace_name>", "apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/description: \"\" openshift.io/display-name: \"\" openshift.io/requester: system:admin openshift.io/sa.scc.mcs: s0:c27,c24 openshift.io/sa.scc.supplemental-groups: 1000/10000 1 openshift.io/sa.scc.uid-range: 1000/10000 2 name: userns", "apiVersion: v1 kind: Pod metadata: name: userns-pod spec: containers: - name: userns-container image: registry.access.redhat.com/ubi9 command: [\"sleep\", \"1000\"] securityContext: capabilities: drop: [\"ALL\"] allowPrivilegeEscalation: false 1 runAsNonRoot: true 2 seccompProfile: type: RuntimeDefault runAsUser: 1000 3 runAsGroup: 1000 4 hostUsers: false 5", "oc create -f <file_name>.yaml", "oc rsh -c <container_name> pod/<pod_name>", "oc rsh -c userns-container_name pod/userns-pod", "sh-5.1USD id", "uid=1000(1000) gid=1000(1000) groups=1000(1000)", "sh-5.1USD lsns -t user", "NS TYPE NPROCS PID USER COMMAND 4026532447 user 3 1 1000 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000 1", "oc debug node/ci-ln-z5vppzb-72292-8zp2b-worker-c-q8sh9", "oc debug node/ci-ln-z5vppzb-72292-8zp2b-worker-c-q8sh9", "sh-5.1# chroot /host", "sh-5.1# lsns -t user", "NS TYPE NPROCS PID USER COMMAND 4026531837 user 233 1 root /usr/lib/systemd/systemd --switched-root --system --deserialize 28 4026532447 user 1 4767 2908816384 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000 1", "oc delete crd scaledobjects.keda.k8s.io", "oc delete crd triggerauthentications.keda.k8s.io", "oc create configmap -n openshift-keda thanos-cert --from-file=ca-cert.pem", "oc get all -n openshift-keda", "NAME READY STATUS RESTARTS AGE pod/custom-metrics-autoscaler-operator-5fd8d9ffd8-xt4xp 1/1 Running 0 18m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/custom-metrics-autoscaler-operator 1/1 1 1 18m NAME DESIRED 
CURRENT READY AGE replicaset.apps/custom-metrics-autoscaler-operator-5fd8d9ffd8 1 1 1 18m", "kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: openshift-keda spec: watchNamespace: '' 1 operator: logLevel: info 2 logEncoder: console 3 caConfigMaps: 4 - thanos-cert - kafka-cert metricsServer: logLevel: '0' 5 auditConfig: 6 logFormat: \"json\" logOutputVolumeClaim: \"persistentVolumeClaimName\" policy: rules: - level: Metadata omitStages: [\"RequestReceived\"] omitManagedFields: false lifetime: maxAge: \"2\" maxBackup: \"1\" maxSize: \"50\" serviceAccount: {}", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: prom-scaledobject namespace: my-namespace spec: triggers: - type: prometheus 1 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 2 namespace: kedatest 3 metricName: http_requests_total 4 threshold: '5' 5 query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) 6 authModes: basic 7 cortexOrgID: my-org 8 ignoreNullValues: \"false\" 9 unsafeSsl: \"false\" 10", "oc project <project_name> 1", "oc create serviceaccount thanos 1", "apiVersion: v1 kind: Secret metadata: name: thanos-token annotations: kubernetes.io/service-account.name: thanos 1 type: kubernetes.io/service-account-token", "oc create -f <file_name>.yaml", "oc describe serviceaccount thanos 1", "Name: thanos Namespace: <namespace_name> Labels: <none> Annotations: <none> Image pull secrets: thanos-dockercfg-nnwgj Mountable secrets: thanos-dockercfg-nnwgj Tokens: thanos-token 1 Events: <none>", "apiVersion: keda.sh/v1alpha1 kind: <authentication_method> 1 metadata: name: keda-trigger-auth-prometheus spec: secretTargetRef: 2 - parameter: bearerToken 3 name: thanos-token 4 key: token 5 - parameter: ca name: thanos-token key: ca.crt", "oc create -f <file-name>.yaml", "apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: thanos-metrics-reader rules: - apiGroups: - \"\" resources: - pods verbs: - get - apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch", "oc create -f <file-name>.yaml", "apiVersion: rbac.authorization.k8s.io/v1 kind: <binding_type> 1 metadata: name: thanos-metrics-reader 2 namespace: my-project 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: thanos-metrics-reader subjects: - kind: ServiceAccount name: thanos 4 namespace: <namespace_name> 5", "oc create -f <file-name>.yaml", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cpu-scaledobject namespace: my-namespace spec: triggers: - type: cpu 1 metricType: Utilization 2 metadata: value: '60' 3 minReplicaCount: 1 4", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: memory-scaledobject namespace: my-namespace spec: triggers: - type: memory 1 metricType: Utilization 2 metadata: value: '60' 3 containerName: api 4", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: kafka-scaledobject namespace: my-namespace spec: triggers: - type: kafka 1 metadata: topic: my-topic 2 bootstrapServers: my-cluster-kafka-bootstrap.openshift-operators.svc:9092 3 consumerGroup: my-group 4 lagThreshold: '10' 5 activationLagThreshold: '5' 6 offsetResetPolicy: latest 7 allowIdleConsumers: true 8 scaleToZeroOnInvalidOffset: false 9 excludePersistentLag: false 10 version: '1.0.0' 11 partitionLimitation: '1,2,10-20,31' 12 tls: enable 13", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cron-scaledobject namespace: default spec: scaleTargetRef: name: my-deployment minReplicaCount: 0 1 
maxReplicaCount: 100 2 cooldownPeriod: 300 triggers: - type: cron 3 metadata: timezone: Asia/Kolkata 4 start: \"0 6 * * *\" 5 end: \"30 18 * * *\" 6 desiredReplicas: \"100\" 7", "apiVersion: v1 kind: Secret metadata: name: my-basic-secret namespace: default data: username: \"dXNlcm5hbWU=\" 1 password: \"cGFzc3dvcmQ=\"", "kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password", "kind: ClusterTriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: 1 name: secret-cluster-triggerauthentication spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password", "apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: ca-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0... 1 client-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0... 2 client-key.pem: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0t", "kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: key 3 name: my-secret 4 key: client-key.pem 5 - parameter: ca 6 name: my-secret 7 key: ca-cert.pem 8", "apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: bearerToken: \"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXV\" 1", "kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: token-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: bearerToken 3 name: my-secret 4 key: bearerToken 5", "kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: env-var-triggerauthentication namespace: my-namespace 1 spec: env: 2 - parameter: access_key 3 name: ACCESS_KEY 4 containerName: my-container 5", "kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: pod-id-triggerauthentication namespace: my-namespace 1 spec: podIdentity: 2 provider: aws-eks 3", "apiVersion: v1 kind: Secret metadata: name: my-secret data: user-name: <base64_USER_NAME> password: <base64_USER_PASSWORD>", "kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: prom-triggerauthentication namespace: my-namespace spec: secretTargetRef: - parameter: user-name name: my-secret key: USER_NAME - parameter: password name: my-secret key: USER_PASSWORD", "oc create -f <filename>.yaml", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: \"basic\" authenticationRef: name: prom-triggerauthentication 1 kind: TriggerAuthentication 2", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: 
'5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: \"basic\" authenticationRef: name: prom-cluster-triggerauthentication 1 kind: ClusterTriggerAuthentication 2", "oc apply -f <filename>", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\"", "oc edit ScaledObject scaledobject", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\" 1 creationTimestamp: \"2023-02-08T14:41:01Z\" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\"", "oc edit ScaledObject scaledobject", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\" 1 creationTimestamp: \"2023-02-08T14:41:01Z\" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0", "kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: openshift-keda spec: metricsServer: auditConfig: logFormat: \"json\" 1 logOutputVolumeClaim: \"pvc-audit-log\" 2 policy: rules: 3 - level: Metadata omitStages: \"RequestReceived\" 4 omitManagedFields: false 5 lifetime: 6 maxAge: \"2\" maxBackup: \"1\" maxSize: \"50\"", "get pod -n openshift-keda", "NAME READY STATUS RESTARTS AGE custom-metrics-autoscaler-operator-5cb44cd75d-9v4lv 1/1 Running 0 8m20s keda-metrics-apiserver-65c7cc44fd-rrl4r 1/1 Running 0 2m55s keda-operator-776cbb6768-zpj5b 1/1 Running 0 2m55s", "oc logs keda-metrics-apiserver-<hash>|grep -i metadata 1", "oc logs keda-metrics-apiserver-65c7cc44fd-rrl4r|grep -i metadata", "{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"4c81d41b-3dab-4675-90ce-20b87ce24013\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/healthz\",\"verb\":\"get\",\"user\":{\"username\":\"system:anonymous\",\"groups\":[\"system:unauthenticated\"]},\"sourceIPs\":[\"10.131.0.1\"],\"userAgent\":\"kube-probe/1.28\",\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2023-02-16T13:00:03.554567Z\",\"stageTimestamp\":\"2023-02-16T13:00:03.555032Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"\"}}", "oc rsh pod/keda-metrics-apiserver-<hash> -n openshift-keda", "oc rsh pod/keda-metrics-apiserver-65c7cc44fd-rrl4r -n openshift-keda", "sh-4.4USD cd /var/audit-policy/", "sh-4.4USD ls", "log-2023.02.17-14:50 policy.yaml", "sh-4.4USD cat <log_name>/<pvc_name>|grep -i <log_level> 1", "sh-4.4USD cat log-2023.02.17-14:50/pvc-audit-log|grep -i Request", "{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Request\",\"auditID\":\"63e7f68c-04ec-4f4d-8749-bf1656572a41\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/openapi/v2\",\"verb\":\"get\",\"user\":{\"username\":\"system:aggregator\",\"groups\":[\"system:authenticated\"]},\"sourceIPs\":[\"10.128.0.1\"],\"responseStatus\":{\"metadata\":{},\"code\":304},\"requestReceivedTimestamp\":\"2023-02-17T13:12:55.035478Z\",\"stageTimestamp\":\"2023-02-17T13:12:55.038346Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"system:discovery\\\" of ClusterRole \\\"system:discovery\\\" to Group \\\"system:authenticated\\\"\"}}", "oc adm must-gather --image=\"USD(oc 
get packagemanifests openshift-custom-metrics-autoscaler-operator -n openshift-marketplace -o jsonpath='{.status.channels[?(@.name==\"stable\")].currentCSVDesc.annotations.containerImage}')\"", "oc import-image is/must-gather -n openshift", "oc adm must-gather --image=\"USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator -n openshift-marketplace -o jsonpath='{.status.channels[?(@.name==\"stable\")].currentCSVDesc.annotations.containerImage}')\"", "IMAGE=\"USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator -n openshift-marketplace -o jsonpath='{.status.channels[?(@.name==\"stable\")].currentCSVDesc.annotations.containerImage}')\"", "oc adm must-gather --image-stream=openshift/must-gather --image=USD{IMAGE}", "└── openshift-keda ├── apps │ ├── daemonsets.yaml │ ├── deployments.yaml │ ├── replicasets.yaml │ └── statefulsets.yaml ├── apps.openshift.io │ └── deploymentconfigs.yaml ├── autoscaling │ └── horizontalpodautoscalers.yaml ├── batch │ ├── cronjobs.yaml │ └── jobs.yaml ├── build.openshift.io │ ├── buildconfigs.yaml │ └── builds.yaml ├── core │ ├── configmaps.yaml │ ├── endpoints.yaml │ ├── events.yaml │ ├── persistentvolumeclaims.yaml │ ├── pods.yaml │ ├── replicationcontrollers.yaml │ ├── secrets.yaml │ └── services.yaml ├── discovery.k8s.io │ └── endpointslices.yaml ├── image.openshift.io │ └── imagestreams.yaml ├── k8s.ovn.org │ ├── egressfirewalls.yaml │ └── egressqoses.yaml ├── keda.sh │ ├── kedacontrollers │ │ └── keda.yaml │ ├── scaledobjects │ │ └── example-scaledobject.yaml │ └── triggerauthentications │ └── example-triggerauthentication.yaml ├── monitoring.coreos.com │ └── servicemonitors.yaml ├── networking.k8s.io │ └── networkpolicies.yaml ├── openshift-keda.yaml ├── pods │ ├── custom-metrics-autoscaler-operator-58bd9f458-ptgwx │ │ ├── custom-metrics-autoscaler-operator │ │ │ └── custom-metrics-autoscaler-operator │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ └── custom-metrics-autoscaler-operator-58bd9f458-ptgwx.yaml │ ├── custom-metrics-autoscaler-operator-58bd9f458-thbsh │ │ └── custom-metrics-autoscaler-operator │ │ └── custom-metrics-autoscaler-operator │ │ └── logs │ ├── keda-metrics-apiserver-65c7cc44fd-6wq4g │ │ ├── keda-metrics-apiserver │ │ │ └── keda-metrics-apiserver │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ └── keda-metrics-apiserver-65c7cc44fd-6wq4g.yaml │ └── keda-operator-776cbb6768-fb6m5 │ ├── keda-operator │ │ └── keda-operator │ │ └── logs │ │ ├── current.log │ │ ├── previous.insecure.log │ │ └── previous.log │ └── keda-operator-776cbb6768-fb6m5.yaml ├── policy │ └── poddisruptionbudgets.yaml └── route.openshift.io └── routes.yaml", "tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1", "oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal", "Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none>", "apiVersion: v1 kind: Pod spec: containers: - name: app image: images.my-company.example/app:v4 resources: 
limits: memory: \"128Mi\" cpu: \"500m\"", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"0\" 1 name: scaledobject 2 namespace: my-namespace spec: scaleTargetRef: apiVersion: apps/v1 3 name: example-deployment 4 kind: Deployment 5 envSourceContainerName: .spec.template.spec.containers[0] 6 cooldownPeriod: 200 7 maxReplicaCount: 100 8 minReplicaCount: 0 9 metricsServer: 10 auditConfig: logFormat: \"json\" logOutputVolumeClaim: \"persistentVolumeClaimName\" policy: rules: - level: Metadata omitStages: \"RequestReceived\" omitManagedFields: false lifetime: maxAge: \"2\" maxBackup: \"1\" maxSize: \"50\" fallback: 11 failureThreshold: 3 replicas: 6 pollingInterval: 30 12 advanced: restoreToOriginalReplicaCount: false 13 horizontalPodAutoscalerConfig: name: keda-hpa-scale-down 14 behavior: 15 scaleDown: stabilizationWindowSeconds: 300 policies: - type: Percent value: 100 periodSeconds: 15 triggers: - type: prometheus 16 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: basic authenticationRef: 17 name: prom-triggerauthentication kind: TriggerAuthentication", "oc create -f <filename>.yaml", "oc get scaledobject <scaled_object_name>", "NAME SCALETARGETKIND SCALETARGETNAME MIN MAX TRIGGERS AUTHENTICATION READY ACTIVE FALLBACK AGE scaledobject apps/v1.Deployment example-deployment 0 50 prometheus prom-triggerauthentication True True True 17s", "kind: ScaledJob apiVersion: keda.sh/v1alpha1 metadata: name: scaledjob namespace: my-namespace spec: failedJobsHistoryLimit: 5 jobTargetRef: activeDeadlineSeconds: 600 1 backoffLimit: 6 2 parallelism: 1 3 completions: 1 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] maxReplicaCount: 100 6 pollingInterval: 30 7 successfulJobsHistoryLimit: 5 8 failedJobsHistoryLimit: 5 9 envSourceContainerName: 10 rolloutStrategy: gradual 11 scalingStrategy: 12 strategy: \"custom\" customScalingQueueLengthDeduction: 1 customScalingRunningJobPercentage: \"0.5\" pendingPodConditions: - \"Ready\" - \"PodScheduled\" - \"AnyOtherCustomPodCondition\" multipleScalersCalculation : \"max\" triggers: - type: prometheus 13 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: \"bearer\" authenticationRef: 14 name: prom-cluster-triggerauthentication", "oc create -f <filename>.yaml", "oc get scaledjob <scaled_job_name>", "NAME MAX TRIGGERS AUTHENTICATION READY ACTIVE AGE scaledjob 100 prometheus prom-triggerauthentication True True 8s", "oc delete crd clustertriggerauthentications.keda.sh kedacontrollers.keda.sh scaledjobs.keda.sh scaledobjects.keda.sh triggerauthentications.keda.sh", "oc get clusterrole | grep keda.sh", "oc delete clusterrole.keda.sh-v1alpha1-admin", "oc get clusterrolebinding | grep keda.sh", "oc delete clusterrolebinding.keda.sh-v1alpha1-admin", "oc delete project openshift-keda", "oc delete operator/openshift-custom-metrics-autoscaler-operator.openshift-keda", "oc edit scheduler cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster # spec: mastersSchedulable: false profile: HighNodeUtilization 1 #", "apiVersion: v1 kind: Pod 
metadata: name: with-pod-affinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: 2 - labelSelector: matchExpressions: - key: security 3 operator: In 4 values: - S1 5 topologyKey: topology.kubernetes.io/zone containers: - name: with-pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: security 4 operator: In 5 values: - S2 topologyKey: kubernetes.io/hostname containers: - name: with-pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "apiVersion: v1 kind: Pod metadata: name: security-s1 labels: security: S1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: security-s1 image: docker.io/ocpqe/hello-pod securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault", "oc create -f <pod-spec>.yaml", "apiVersion: v1 kind: Pod metadata: name: security-s1-east spec: affinity: 1 podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 2 - labelSelector: matchExpressions: - key: security 3 values: - S1 operator: In 4 topologyKey: topology.kubernetes.io/zone 5", "oc create -f <pod-spec>.yaml", "apiVersion: v1 kind: Pod metadata: name: security-s1 labels: security: S1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: security-s1 image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc create -f <pod-spec>.yaml", "apiVersion: v1 kind: Pod metadata: name: security-s2-east spec: affinity: 1 podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: security 4 values: - S1 operator: In 5 topologyKey: kubernetes.io/hostname 6", "oc create -f <pod-spec>.yaml", "apiVersion: v1 kind: Pod metadata: name: team4 labels: team: \"4\" spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: ocp image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "apiVersion: v1 kind: Pod metadata: name: team4a spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: team operator: In values: - \"4\" topologyKey: kubernetes.io/hostname containers: - name: pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "apiVersion: v1 kind: Pod metadata: name: pod-s1 labels: security: s1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: ocp image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "apiVersion: v1 kind: Pod metadata: name: pod-s2 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: security operator: In values: - 
s1 topologyKey: kubernetes.io/hostname containers: - name: pod-antiaffinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "apiVersion: v1 kind: Pod metadata: name: pod-s1 labels: security: s1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: ocp image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "apiVersion: v1 kind: Pod metadata: name: pod-s2 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: security operator: In values: - s2 topologyKey: kubernetes.io/hostname containers: - name: pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "NAME READY STATUS RESTARTS AGE IP NODE pod-s2 0/1 Pending 0 32s <none>", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - test topologyKey: kubernetes.io/hostname #", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAntiAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: cpu operator: In values: - high topologyKey: kubernetes.io/hostname #", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAntiAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: podAffinityTerm: labelSelector: matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-185-229.ec2.internal topologyKey: topology.kubernetes.io/zone #", "oc get pods -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES custom-metrics-autoscaler-operator-5dcc45d656-bhshg 1/1 Running 0 50s 10.131.0.20 ip-10-0-185-229.ec2.internal <none> <none>", "apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: 2 nodeSelectorTerms: - matchExpressions: - key: e2e-az-NorthSouth 3 operator: In 4 values: - e2e-az-North 5 - e2e-az-South 6 containers: - name: with-node-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: nodeAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 3 preference: matchExpressions: - key: e2e-az-EastWest 4 operator: In 5 values: - e2e-az-East 6 - e2e-az-West 7 containers: - name: with-node-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc label 
node node1 e2e-az-name=e2e-az1", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: e2e-az-name: e2e-az1 #", "apiVersion: v1 kind: Pod metadata: name: s1 spec: affinity: 1 nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 2 nodeSelectorTerms: - matchExpressions: - key: e2e-az-name 3 values: - e2e-az1 - e2e-az2 operator: In 4 #", "oc create -f <file-name>.yaml", "oc label node node1 e2e-az-name=e2e-az3", "apiVersion: v1 kind: Pod metadata: name: s1 spec: affinity: 1 nodeAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 3 preference: matchExpressions: - key: e2e-az-name 4 values: - e2e-az3 operator: In 5 #", "oc create -f <file-name>.yaml", "oc label node node1 zone=us", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: zone: us #", "cat pod-s1.yaml", "apiVersion: v1 kind: Pod metadata: name: pod-s1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: \"zone\" operator: In values: - us #", "oc get pod -o wide", "NAME READY STATUS RESTARTS AGE IP NODE pod-s1 1/1 Running 0 4m IP1 node1", "oc label node node1 zone=emea", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: zone: emea #", "cat pod-s1.yaml", "apiVersion: v1 kind: Pod metadata: name: pod-s1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: \"zone\" operator: In values: - us #", "oc describe pod pod-s1", "Events: FirstSeen LastSeen Count From SubObjectPath Type Reason --------- -------- ----- ---- ------------- -------- ------ 1m 33s 8 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: MatchNodeSelector (1).", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-163-94.us-west-2.compute.internal #", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/arch operator: In values: - arm64 - key: kubernetes.io/os operator: In values: - linux #", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: 1 nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-185-229.ec2.internal #", "oc get 
pods -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES custom-metrics-autoscaler-operator-5dcc45d656-bhshg 1/1 Running 0 50s 10.131.0.20 ip-10-0-185-229.ec2.internal <none> <none>", "sysctl -a |grep commit", "# vm.overcommit_memory = 0 #", "sysctl -a |grep panic", "# vm.panic_on_oom = 0 #", "apiVersion: v1 kind: Node metadata: name: my-node # spec: taints: - effect: NoExecute key: key1 value: value1 #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 #", "apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 #", "oc adm taint nodes node1 key1=value1:NoSchedule", "oc adm taint nodes node1 key1=value1:NoExecute", "oc adm taint nodes node1 key2=value2:NoSchedule", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\" - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute tolerationSeconds: 300 1 - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 300 #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - operator: \"Exists\" #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2 #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Exists\" 1 effect: \"NoExecute\" tolerationSeconds: 3600 #", "oc adm taint nodes <node_name> <key>=<value>:<effect>", "oc adm taint nodes node1 key1=value1:NoExecute", "apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2 #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600 #", "oc edit machineset <machineset>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: my-machineset # spec: # template: # spec: taints: - effect: NoExecute key: key1 value: value1 #", "oc scale --replicas=0 machineset <machineset> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 0", "oc scale --replicas=2 machineset <machineset> -n openshift-machine-api", "oc edit machineset <machineset> -n openshift-machine-api", "oc adm taint nodes node1 
dedicated=groupName:NoSchedule", "kind: Node apiVersion: v1 metadata: name: my-node # spec: taints: - key: dedicated value: groupName effect: NoSchedule #", "kind: Project apiVersion: project.openshift.io/v1 metadata: name: <project_name> 1 annotations: openshift.io/node-selector: '<label>' 2 scheduler.alpha.kubernetes.io/defaultTolerations: >- [{\"operator\": \"Exists\", \"effect\": \"NoSchedule\", \"key\": \"<key_name>\"} 3 ]", "oc apply -f project.yaml", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"disktype\" value: \"ssd\" operator: \"Equal\" effect: \"NoSchedule\" tolerationSeconds: 3600 #", "oc adm taint nodes <node-name> disktype=ssd:NoSchedule", "oc adm taint nodes <node-name> disktype=ssd:PreferNoSchedule", "kind: Node apiVersion: v1 metadata: name: my_node # spec: taints: - key: disktype value: ssd effect: PreferNoSchedule #", "oc adm taint nodes <node-name> <key>-", "oc adm taint nodes ip-10-0-132-248.ec2.internal key1-", "node/ip-10-0-132-248.ec2.internal untainted", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key2\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600 #", "kind: Node apiVersion: v1 metadata: name: ip-10-0-131-14.ec2.internal selfLink: /api/v1/nodes/ip-10-0-131-14.ec2.internal uid: 7bc2580a-8b8e-11e9-8e01-021ab4174c74 resourceVersion: '478704' creationTimestamp: '2019-06-10T14:46:08Z' labels: kubernetes.io/os: linux topology.kubernetes.io/zone: us-east-1a node.openshift.io/os_version: '4.5' node-role.kubernetes.io/worker: '' topology.kubernetes.io/region: us-east-1 node.openshift.io/os_id: rhcos node.kubernetes.io/instance-type: m4.large kubernetes.io/hostname: ip-10-0-131-14 kubernetes.io/arch: amd64 region: east 1 type: user-node #", "apiVersion: v1 kind: Pod metadata: name: s1 # spec: nodeSelector: 1 region: east type: user-node #", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster # spec: defaultNodeSelector: type=user-node,region=east #", "apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 # labels: region: east type: user-node #", "apiVersion: v1 kind: Pod metadata: name: s1 # spec: nodeSelector: region: east #", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none>", "apiVersion: v1 kind: Namespace metadata: name: east-region annotations: openshift.io/node-selector: \"region=east\" #", "apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 # labels: region: east type: user-node #", "apiVersion: v1 kind: Pod metadata: namespace: east-region # spec: nodeSelector: region: east type: user-node #", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none>", "apiVersion: v1 kind: Pod metadata: name: west-region # spec: nodeSelector: region: west #", "oc describe pod router-default-66d5cf9464-7pwkc", "kind: Pod apiVersion: v1 metadata: Name: router-default-66d5cf9464-7pwkc Namespace: openshift-ingress Controlled By: ReplicaSet/router-default-66d5cf9464", "apiVersion: v1 kind: Pod metadata: name: router-default-66d5cf9464-7pwkc ownerReferences: - apiVersion: apps/v1 kind: ReplicaSet name: router-default-66d5cf9464 uid: d81dd094-da26-11e9-a48a-128e7edf0312 controller: true blockOwnerDeletion: true", "oc patch MachineSet <name> --type='json' 
-p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\":\"<value>\",\"<key>\":\"<value>\"}}]' -n openshift-machine-api", "oc patch MachineSet abc612-msrtw-worker-us-east-1c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: xf2bd-infra-us-east-2a namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"", "oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node", "oc label nodes <name> <key>=<value>", "oc label nodes ip-10-0-142-25.ec2.internal type=user-node region=east", "kind: Node apiVersion: v1 metadata: name: hello-node-6fbccf8d9 labels: type: \"user-node\" region: \"east\"", "oc get nodes -l type=user-node,region=east", "NAME STATUS ROLES AGE VERSION ip-10-0-142-25.ec2.internal Ready worker 17m v1.31.3", "kind: ReplicaSet apiVersion: apps/v1 metadata: name: hello-node-6fbccf8d9 spec: template: metadata: creationTimestamp: null labels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default pod-template-hash: 66d5cf9464 spec: nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' type: user-node 1", "apiVersion: v1 kind: Pod metadata: name: hello-node-6fbccf8d9 spec: nodeSelector: region: east type: user-node", "oc edit scheduler cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: type=user-node,region=east 1 mastersSchedulable: false", "oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\":\"<value>\",\"<key>\":\"<value>\"}}]' -n openshift-machine-api 1", "oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"", "oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node", "oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api", "oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api", "oc get nodes -l <key>=<value>", "oc get nodes -l type=user-node", "NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.31.3", "oc label nodes <name> <key>=<value>", "oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 type=user-node region=east", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: \"user-node\" region: \"east\"", "oc get nodes -l <key>=<value>,<key>=<value>", "oc get nodes -l type=user-node,region=east", "NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.31.3", "Error from server (Forbidden): error when creating \"pod.yaml\": pods \"pod-4\" is forbidden: pod node label selector 
conflicts with its project node label selector", "oc edit namespace <name>", "apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: \"type=user-node,region=east\" 1 openshift.io/description: \"\" openshift.io/display-name: \"\" openshift.io/requester: kube:admin openshift.io/sa.scc.mcs: s0:c30,c5 openshift.io/sa.scc.supplemental-groups: 1000880000/10000 openshift.io/sa.scc.uid-range: 1000880000/10000 creationTimestamp: \"2021-05-10T12:35:04Z\" labels: kubernetes.io/metadata.name: demo name: demo resourceVersion: \"145537\" uid: 3f8786e3-1fcb-42e3-a0e3-e2ac54d15001 spec: finalizers: - kubernetes", "oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\":\"<value>\",\"<key>\":\"<value>\"}}]' -n openshift-machine-api", "oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"", "oc edit MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: spec: template: metadata: spec: metadata: labels: region: east type: user-node", "oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api", "oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api", "oc get nodes -l <key>=<value>", "oc get nodes -l type=user-node,region=east", "NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.31.3", "oc label <resource> <name> <key>=<value>", "oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-c-tgq49 type=user-node region=east", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: \"user-node\" region: \"east\"", "oc get nodes -l <key>=<value>", "oc get nodes -l type=user-node,region=east", "NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.31.3", "apiVersion: v1 kind: Pod metadata: name: my-pod labels: region: us-east spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault topologySpreadConstraints: - maxSkew: 1 1 topologyKey: topology.kubernetes.io/zone 2 whenUnsatisfiable: DoNotSchedule 3 labelSelector: 4 matchLabels: region: us-east 5 matchLabelKeys: - my-pod-label 6 containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "kind: Pod apiVersion: v1 metadata: name: my-pod labels: region: us-east spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault topologySpreadConstraints: - maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "kind: Pod apiVersion: v1 metadata: name: my-pod-2 labels: region: us-east spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault topologySpreadConstraints: - maxSkew: 1 topologyKey: node whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east - maxSkew: 1 topologyKey: rack 
whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc edit kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator", "apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 logLevel: Normal managementState: Managed operatorLogLevel: Normal mode: Predictive 1 profileCustomizations: namespaces: 2 excluded: - my-namespace podLifetime: 48h 3 thresholdPriorityClassName: my-priority-class-name 4 evictionLimits: total: 20 5 profiles: 6 - AffinityAndTaints - TopologyAndDuplicates - LifecycleAndUtilization - EvictPodsWithLocalStorage - EvictPodsWithPVC", "oc edit kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator", "apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 1", "apiVersion: v1 kind: ConfigMap metadata: name: \"secondary-scheduler-config\" 1 namespace: \"openshift-secondary-scheduler-operator\" 2 data: \"config.yaml\": | apiVersion: kubescheduler.config.k8s.io/v1 kind: KubeSchedulerConfiguration 3 leaderElection: leaderElect: false profiles: - schedulerName: secondary-scheduler 4 plugins: 5 score: disabled: - name: NodeResourcesBalancedAllocation - name: NodeResourcesLeastAllocated", "apiVersion: v1 kind: Pod metadata: name: nginx namespace: default spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] schedulerName: secondary-scheduler 1", "oc describe pod nginx -n default", "Name: nginx Namespace: default Priority: 0 Node: ci-ln-t0w4r1k-72292-xkqs4-worker-b-xqkxp/10.0.128.3 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 12s secondary-scheduler Successfully assigned default/nginx to ci-ln-t0w4r1k-72292-xkqs4-worker-b-xqkxp", "kind: Pod apiVersion: v1 metadata: name: hello-node-6fbccf8d9-9tmzr # spec: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - target-host-name #", "oc patch namespace myproject -p '{\"metadata\": {\"annotations\": {\"openshift.io/node-selector\": \"\"}}}'", "apiVersion: v1 kind: Namespace metadata: name: <namespace> annotations: openshift.io/node-selector: '' #", "oc adm new-project <name> --node-selector=\"\"", "apiVersion: apps/v1 kind: DaemonSet metadata: name: hello-daemonset spec: selector: matchLabels: name: hello-daemonset 1 template: metadata: labels: name: hello-daemonset 2 spec: nodeSelector: 3 role: worker containers: - image: openshift/hello-openshift imagePullPolicy: Always name: registry ports: - containerPort: 80 protocol: TCP resources: {} terminationMessagePath: /dev/termination-log serviceAccount: default terminationGracePeriodSeconds: 10 #", "oc create -f daemonset.yaml", "oc get pods", "hello-daemonset-cx6md 1/1 Running 0 2m hello-daemonset-e3md9 1/1 Running 0 2m", "oc describe pod/hello-daemonset-cx6md|grep Node", "Node: openshift-node01.hostname.com/10.14.20.134", "oc describe pod/hello-daemonset-e3md9|grep Node", "Node: openshift-node02.hostname.com/10.14.20.137", "apiVersion: batch/v1 
kind: Job metadata: name: pi spec: parallelism: 1 1 completions: 1 2 activeDeadlineSeconds: 1800 3 backoffLimit: 6 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: OnFailure 6 #", "oc delete cronjob/<cron_job_name>", "apiVersion: batch/v1 kind: Job metadata: name: pi spec: parallelism: 1 1 completions: 1 2 activeDeadlineSeconds: 1800 3 backoffLimit: 6 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: OnFailure 6 #", "oc create -f <file-name>.yaml", "oc create job pi --image=perl -- perl -Mbignum=bpi -wle 'print bpi(2000)'", "apiVersion: batch/v1 kind: CronJob metadata: name: pi spec: schedule: \"*/1 * * * *\" 1 timeZone: Etc/UTC 2 concurrencyPolicy: \"Replace\" 3 startingDeadlineSeconds: 200 4 suspend: true 5 successfulJobsHistoryLimit: 3 6 failedJobsHistoryLimit: 1 7 jobTemplate: 8 spec: template: metadata: labels: 9 parent: \"cronjobpi\" spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: OnFailure 10 #", "oc create -f <file-name>.yaml", "oc create cronjob pi --image=perl --schedule='*/1 * * * *' -- perl -Mbignum=bpi -wle 'print bpi(2000)'", "oc get nodes", "oc get nodes", "NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.31.3 node1.example.com Ready worker 7h v1.31.3 node2.example.com Ready worker 7h v1.31.3", "oc get nodes", "NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.31.3 node1.example.com NotReady,SchedulingDisabled worker 7h v1.31.3 node2.example.com Ready worker 7h v1.31.3", "oc get nodes -o wide", "NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME master.example.com Ready master 171m v1.31.3 10.0.129.108 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.31.3-30.rhaos4.10.gitf2f339d.el8-dev node1.example.com Ready worker 72m v1.31.3 10.0.129.222 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.31.3-30.rhaos4.10.gitf2f339d.el8-dev node2.example.com Ready worker 164m v1.31.3 10.0.142.150 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.31.3-30.rhaos4.10.gitf2f339d.el8-dev", "oc get node <node>", "oc get node node1.example.com", "NAME STATUS ROLES AGE VERSION node1.example.com Ready worker 7h v1.31.3", "oc describe node <node>", "oc describe node node1.example.com", "Name: node1.example.com 1 Roles: worker 2 Labels: kubernetes.io/os=linux kubernetes.io/hostname=ip-10-0-131-14 kubernetes.io/arch=amd64 3 node-role.kubernetes.io/worker= node.kubernetes.io/instance-type=m4.large node.openshift.io/os_id=rhcos node.openshift.io/os_version=4.5 region=east topology.kubernetes.io/region=us-east-1 topology.kubernetes.io/zone=us-east-1a Annotations: cluster.k8s.io/machine: openshift-machine-api/ahardin-worker-us-east-2a-q5dzc 4 machineconfiguration.openshift.io/currentConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/desiredConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/state: Done volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Wed, 13 Feb 2019 11:05:57 -0500 Taints: <none> 5 Unschedulable: false Conditions: 6 Type Status LastHeartbeatTime 
LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- OutOfDisk False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientDisk kubelet has sufficient disk space available MemoryPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:07:09 -0500 KubeletReady kubelet is posting ready status Addresses: 7 InternalIP: 10.0.140.16 InternalDNS: ip-10-0-140-16.us-east-2.compute.internal Hostname: ip-10-0-140-16.us-east-2.compute.internal Capacity: 8 attachable-volumes-aws-ebs: 39 cpu: 2 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8172516Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7558116Ki pods: 250 System Info: 9 Machine ID: 63787c9534c24fde9a0cde35c13f1f66 System UUID: EC22BF97-A006-4A58-6AF8-0A38DEEA122A Boot ID: f24ad37d-2594-46b4-8830-7f7555918325 Kernel Version: 3.10.0-957.5.1.el7.x86_64 OS Image: Red Hat Enterprise Linux CoreOS 410.8.20190520.0 (Ootpa) Operating System: linux Architecture: amd64 Container Runtime Version: cri-o://1.31.3-0.6.dev.rhaos4.3.git9ad059b.el8-rc2 Kubelet Version: v1.31.3 Kube-Proxy Version: v1.31.3 PodCIDR: 10.128.4.0/24 ProviderID: aws:///us-east-2a/i-04e87b31dc6b3e171 Non-terminated Pods: (12 in total) 10 Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- openshift-cluster-node-tuning-operator tuned-hdl5q 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-dns dns-default-l69zr 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-image-registry node-ca-9hmcg 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-ingress router-default-76455c45c-c5ptv 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-machine-config-operator machine-config-daemon-cvqw9 20m (1%) 0 (0%) 50Mi (0%) 0 (0%) openshift-marketplace community-operators-f67fh 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-monitoring alertmanager-main-0 50m (3%) 50m (3%) 210Mi (2%) 10Mi (0%) openshift-monitoring node-exporter-l7q8d 10m (0%) 20m (1%) 20Mi (0%) 40Mi (0%) openshift-monitoring prometheus-adapter-75d769c874-hvb85 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-multus multus-kw8w5 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-ovn-kubernetes ovnkube-node-t4dsn 80m (0%) 0 (0%) 1630Mi (0%) 0 (0%) Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) 
Resource Requests Limits -------- -------- ------ cpu 380m (25%) 270m (18%) memory 880Mi (11%) 250Mi (3%) attachable-volumes-aws-ebs 0 0 Events: 11 Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeHasSufficientPID 6d (x5 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 6d kubelet, m01.example.com Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasNoDiskPressure Normal NodeHasSufficientDisk 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientDisk Normal NodeHasSufficientPID 6d kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal Starting 6d kubelet, m01.example.com Starting kubelet. #", "oc get pod --selector=<nodeSelector>", "oc get pod --selector=kubernetes.io/os", "oc get pod -l=<nodeSelector>", "oc get pod -l kubernetes.io/os=linux", "oc get pod --all-namespaces --field-selector=spec.nodeName=<nodename>", "oc adm top nodes", "NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% ip-10-0-12-143.ec2.compute.internal 1503m 100% 4533Mi 61% ip-10-0-132-16.ec2.compute.internal 76m 5% 1391Mi 18% ip-10-0-140-137.ec2.compute.internal 398m 26% 2473Mi 33% ip-10-0-142-44.ec2.compute.internal 656m 43% 6119Mi 82% ip-10-0-146-165.ec2.compute.internal 188m 12% 3367Mi 45% ip-10-0-19-62.ec2.compute.internal 896m 59% 5754Mi 77% ip-10-0-44-193.ec2.compute.internal 632m 42% 5349Mi 72%", "oc adm top node --selector=''", "oc adm cordon <node1>", "node/<node1> cordoned", "oc get node <node1>", "NAME STATUS ROLES AGE VERSION <node1> Ready,SchedulingDisabled worker 1d v1.31.3", "oc adm drain <node1> <node2> [--pod-selector=<pod_selector>]", "oc adm drain <node1> <node2> --force=true", "oc adm drain <node1> <node2> --grace-period=-1", "oc adm drain <node1> <node2> --ignore-daemonsets=true", "oc adm drain <node1> <node2> --timeout=5s", "oc adm drain <node1> <node2> --delete-emptydir-data=true", "oc adm drain <node1> <node2> --dry-run=true", "oc adm uncordon <node1>", "oc label node <node> <key_1>=<value_1> ... 
<key_n>=<value_n>", "oc label nodes webconsole-7f7f6 unhealthy=true", "kind: Node apiVersion: v1 metadata: name: webconsole-7f7f6 labels: unhealthy: 'true' #", "oc label pods --all <key_1>=<value_1>", "oc label pods --all status=unhealthy", "oc adm cordon <node>", "oc adm cordon node1.example.com", "node/node1.example.com cordoned NAME LABELS STATUS node1.example.com kubernetes.io/hostname=node1.example.com Ready,SchedulingDisabled", "oc adm uncordon <node1>", "oc delete pods --field-selector status.phase=Failed -n <POD_NAMESPACE>", "oc get machinesets -n openshift-machine-api", "oc scale --replicas=2 machineset <machine-set-name> -n openshift-machine-api", "oc edit machineset <machine-set-name> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: # name: <machine-set-name> namespace: openshift-machine-api # spec: replicas: 2 1 #", "oc adm cordon <node_name>", "oc adm drain <node_name> --force=true", "oc delete node <node_name>", "oc get machineconfigpool --show-labels", "NAME CONFIG UPDATED UPDATING DEGRADED LABELS master rendered-master-e05b81f5ca4db1d249a1bf32f9ec24fd True False False operator.machineconfiguration.openshift.io/required-for-upgrade= worker rendered-worker-f50e78e1bc06d8e82327763145bfcf62 True False False", "oc label machineconfigpool worker custom-kubelet=enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-config 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: enabled 2 kubeletConfig: 3 podsPerCore: 10 maxPods: 250 systemReserved: cpu: 2000m memory: 1Gi #", "oc create -f <file-name>", "oc create -f master-kube-config.yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: ci-ln-hmy310k-72292-5f87z-worker-a namespace: openshift-machine-api spec: template: spec: providerSpec: value: disks: - autoDelete: true boot: true image: projects/rhcos-cloud/global/images/rhcos-412-85-202203181601-0-gcp-x86-64 1", "oc edit MachineConfiguration cluster", "apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: All 2", "apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: Partial partial: machineResourceSelector: matchLabels: update-boot-image: \"true\" 2", "oc label machineset.machine ci-ln-hmy310k-72292-5f87z-worker-a update-boot-image=true -n openshift-machine-api", "oc get machinesets <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: ci-ln-77hmkpt-72292-d4pxp update-boot-image: \"true\" name: ci-ln-77hmkpt-72292-d4pxp-worker-a namespace: openshift-machine-api spec: template: spec: providerSpec: value: disks: - autoDelete: true boot: true image: projects/rhcos-cloud/global/images/rhcos-416-92-202402201450-0-gcp-x86-64 1", "oc edit MachineConfiguration cluster", "apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: All", "oc edit 
schedulers.config.openshift.io cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: \"2019-09-10T03:04:05Z\" generation: 1 name: cluster resourceVersion: \"433\" selfLink: /apis/config.openshift.io/v1/schedulers/cluster uid: a636d30a-d377-11e9-88d4-0a60097bee62 spec: mastersSchedulable: false 1 status: {} #", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-setsebool spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Set SELinux booleans Before=kubelet.service [Service] Type=oneshot ExecStart=/sbin/setsebool container_manage_cgroup=on RemainAfterExit=true [Install] WantedBy=multi-user.target graphical.target enabled: true name: setsebool.service #", "oc create -f 99-worker-setsebool.yaml", "oc get MachineConfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 05-worker-kernelarg-selinuxpermissive 2 spec: kernelArguments: - enforcing=0 3", "oc create -f 05-worker-kernelarg-selinuxpermissive.yaml", "oc get MachineConfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 05-worker-kernelarg-selinuxpermissive 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.31.3 ip-10-0-136-243.ec2.internal Ready master 34m v1.31.3 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.31.3 ip-10-0-142-249.ec2.internal Ready master 34m v1.31.3 ip-10-0-153-11.ec2.internal Ready worker 28m v1.31.3 ip-10-0-153-150.ec2.internal Ready master 34m v1.31.3", "oc debug node/ip-10-0-141-105.ec2.internal", "Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat 
/host/proc/cmdline BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw root=UUID=fd0... ostree=/ostree/boot.0/rhcos/16 coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0 sh-4.2# exit", "oc label machineconfigpool worker kubelet-swap=enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: swap-config spec: machineConfigPoolSelector: matchLabels: kubelet-swap: enabled kubeletConfig: failSwapOn: false 1 memorySwap: swapBehavior: LimitedSwap 2 #", "#!/usr/bin/env bash set -Eeuo pipefail if [ USD# -lt 1 ]; then echo \"Usage: 'USD0 node_name'\" exit 64 fi # Check for admin OpenStack credentials openstack server list --all-projects >/dev/null || { >&2 echo \"The script needs OpenStack admin credentials. Exiting\"; exit 77; } # Check for admin OpenShift credentials oc adm top node >/dev/null || { >&2 echo \"The script needs OpenShift admin credentials. Exiting\"; exit 77; } set -x declare -r node_name=\"USD1\" declare server_id server_id=\"USD(openstack server list --all-projects -f value -c ID -c Name | grep \"USDnode_name\" | cut -d' ' -f1)\" readonly server_id # Drain the node oc adm cordon \"USDnode_name\" oc adm drain \"USDnode_name\" --delete-emptydir-data --ignore-daemonsets --force # Power off the server oc debug \"node/USD{node_name}\" -- chroot /host shutdown -h 1 # Verify the server is shut off until openstack server show \"USDserver_id\" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done # Migrate the node openstack server migrate --wait \"USDserver_id\" # Resize the VM openstack server resize confirm \"USDserver_id\" # Wait for the resize confirm to finish until openstack server show \"USDserver_id\" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done # Restart the VM openstack server start \"USDserver_id\" # Wait for the node to show up as Ready: until oc get node \"USDnode_name\" | grep -q \"^USD{node_name}[[:space:]]\\+Ready\"; do sleep 5; done # Uncordon the node oc adm uncordon \"USDnode_name\" # Wait for cluster operators to stabilize until oc get co -o go-template='statuses: {{ range .items }}{{ range .status.conditions }}{{ if eq .type \"Degraded\" }}{{ if ne .status \"False\" }}DEGRADED{{ end }}{{ else if eq .type \"Progressing\"}}{{ if ne .status \"False\" }}PROGRESSING{{ end }}{{ else if eq .type \"Available\"}}{{ if ne .status \"True\" }}NOTAVAILABLE{{ end }}{{ end }}{{ end }}{{ end }}' | grep -qv '\\(DEGRADED\\|PROGRESSING\\|NOTAVAILABLE\\)'; do sleep 5; done", "hosts: - hostname: extra-worker-1 rootDeviceHints: deviceName: /dev/sda interfaces: - macAddress: 00:00:00:00:00:00 name: eth0 networkConfig: interfaces: - name: eth0 type: ethernet state: up mac-address: 00:00:00:00:00:00 ipv4: enabled: true address: - ip: 192.168.122.2 prefix-length: 23 dhcp: false - hostname: extra-worker-2 rootDeviceHints: deviceName: /dev/sda interfaces: - macAddress: 00:00:00:00:00:02 name: eth0 networkConfig: interfaces: - name: eth0 type: ethernet state: up mac-address: 00:00:00:00:00:02 ipv4: enabled: true address: - ip: 192.168.122.3 prefix-length: 23 dhcp: false", "oc adm node-image create nodes-config.yaml", "oc adm node-image monitor --ip-addresses <ip_addresses>", "oc adm certificate approve <csr_name>", "oc adm node-image create --mac-address=<mac_address>", "oc adm node-image monitor --ip-addresses <ip_address>", "oc adm certificate approve <csr_name>", "hosts:", "hosts: hostname:", "hosts: interfaces:", "hosts: interfaces: name:", "hosts: interfaces: macAddress:", "hosts: rootDeviceHints:", "hosts: 
rootDeviceHints: deviceName:", "hosts: networkConfig:", "cpuArchitecture:", "sshKey:", "bootArtifactsBaseURL:", "kubeletConfig: podsPerCore: 10", "kubeletConfig: maxPods: 250", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4 #", "oc create -f <file_name>.yaml", "oc get machineconfigpools", "NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False", "oc get machineconfigpools", "NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False", "get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator", "profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings", "recommend: <recommend-item-1> <recommend-item-n>", "- machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 tunedConfig: reapply_sysctl: <bool> 9", "- label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4", "- match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"worker-custom\" priority: 20 profile: openshift-node-custom", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: provider-gce namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=GCE Cloud provider-specific profile # Your tuning for GCE Cloud provider goes here. 
name: provider-gce", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Optimize systems running OpenShift (provider specific parent profile) include=-provider-USD{f:exec:cat:/var/lib/ocp-tuned/provider},openshift name: openshift recommend: - profile: openshift-control-plane priority: 30 match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra - profile: openshift-node priority: 40", "oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \\;", "apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: registry 4 operator: In 5 values: - default topologyKey: kubernetes.io/hostname #", "oc adm cordon <node1>", "oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force", "error when evicting pods/\"rails-postgresql-example-1-72v2w\" -n \"rails\" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.", "oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force --disable-eviction", "oc debug node/<node1>", "chroot /host", "systemctl reboot", "ssh core@<master-node>.<cluster_name>.<base_domain>", "sudo systemctl reboot", "oc adm uncordon <node1>", "ssh core@<target_node>", "sudo oc adm uncordon <node> --kubeconfig /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost.kubeconfig", "oc get node <node1>", "NAME STATUS ROLES AGE VERSION <node1> Ready worker 6d22h v1.18.3+b0068a8", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: evictionSoft: 3 memory.available: \"500Mi\" 4 nodefs.available: \"10%\" nodefs.inodesFree: \"5%\" imagefs.available: \"15%\" imagefs.inodesFree: \"10%\" evictionSoftGracePeriod: 5 memory.available: \"1m30s\" nodefs.available: \"1m30s\" nodefs.inodesFree: \"1m30s\" imagefs.available: \"1m30s\" imagefs.inodesFree: \"1m30s\" evictionHard: 6 memory.available: \"200Mi\" nodefs.available: \"5%\" nodefs.inodesFree: \"4%\" imagefs.available: \"10%\" imagefs.inodesFree: \"5%\" evictionPressureTransitionPeriod: 3m 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10 #", "oc create -f <file_name>.yaml", "oc create -f gc-container.yaml", "kubeletconfig.machineconfiguration.openshift.io/gc-container created", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True", "[Allocatable] = [Node Capacity] - [system-reserved] - [Hard-Eviction-Thresholds]", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: 
creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: dynamic-node 1 spec: autoSizingReserved: true 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 3 #", "oc create -f <file_name>.yaml", "oc debug node/<node_name>", "chroot /host", "SYSTEM_RESERVED_MEMORY=3Gi SYSTEM_RESERVED_CPU=0.08", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-allocatable 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: systemReserved: 3 cpu: 1000m memory: 1Gi #", "oc create -f <file_name>.yaml", "oc describe machineconfigpool <name>", "oc describe machineconfigpool worker", "Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= pools.operator.machineconfiguration.openshift.io/worker= 1 Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool #", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-reserved-cpus 1 spec: kubeletConfig: reservedSystemCPUs: \"0,1,2,3\" 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 3 #", "oc create -f <file_name>.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig spec: tlsSecurityProfile: old: {} type: Old machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\"", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-tls-security-profile spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 4 #", "oc create -f <filename>", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-4.4# cat /etc/kubernetes/kubelet.conf", "\"kind\": \"KubeletConfiguration\", \"apiVersion\": \"kubelet.config.k8s.io/v1beta1\", # \"tlsCipherSuites\": [ \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256\", \"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256\" ], \"tlsMinVersion\": \"VersionTLS12\", #", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "oc label node <node-name> node-role.kubernetes.io/app=\"\"", "oc label node <node-name> node-role.kubernetes.io/infra=\"\"", "oc get nodes", "oc edit scheduler cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: node-role.kubernetes.io/infra=\"\" 1", "USD(nproc) X 1/2 MiB", "for i in {1..100}; do sleep 1; if dig myservice; then 
exit 0; fi; done; exit 1", "curl -X POST http://USDMANAGEMENT_SERVICE_HOST:USDMANAGEMENT_SERVICE_PORT/register -d 'instance=USD()&ip=USD()'", "apiVersion: v1 kind: Pod metadata: name: myapp-pod labels: app: myapp spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: myapp-container image: registry.access.redhat.com/ubi9/ubi:latest command: ['sh', '-c', 'echo The app is running! && sleep 3600'] securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] initContainers: - name: init-myservice image: registry.access.redhat.com/ubi9/ubi:latest command: ['sh', '-c', 'until getent hosts myservice; do echo waiting for myservice; sleep 2; done;'] securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: init-mydb image: registry.access.redhat.com/ubi9/ubi:latest command: ['sh', '-c', 'until getent hosts mydb; do echo waiting for mydb; sleep 2; done;'] securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc create -f myapp.yaml", "oc get pods", "NAME READY STATUS RESTARTS AGE myapp-pod 0/1 Init:0/2 0 5s", "kind: Service apiVersion: v1 metadata: name: myservice spec: ports: - protocol: TCP port: 80 targetPort: 9376", "oc create -f myservice.yaml", "oc get pods", "NAME READY STATUS RESTARTS AGE myapp-pod 0/1 Init:1/2 0 5s", "kind: Service apiVersion: v1 metadata: name: mydb spec: ports: - protocol: TCP port: 80 targetPort: 9377", "oc create -f mydb.yaml", "oc get pods", "NAME READY STATUS RESTARTS AGE myapp-pod 1/1 Running 0 2m", "oc set volume <object_selection> <operation> <mandatory_parameters> <options>", "oc set volume <object_type>/<name> [options]", "oc set volume pod/p1", "oc set volume dc --all --name=v1", "oc set volume <object_type>/<name> --add [options]", "oc set volume dc/registry --add", "kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: registry namespace: registry spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: 1 - name: volume-pppsw emptyDir: {} containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP", "oc set volume rc/r1 --add --name=v1 --type=secret --secret-name='secret1' --mount-path=/data", "kind: ReplicationController apiVersion: v1 metadata: name: example-1 namespace: example spec: replicas: 0 selector: app: httpd deployment: example-1 deploymentconfig: example template: metadata: creationTimestamp: null labels: app: httpd deployment: example-1 deploymentconfig: example spec: volumes: 1 - name: v1 secret: secretName: secret1 defaultMode: 420 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest volumeMounts: 2 - name: v1 mountPath: /data", "oc set volume -f dc.json --add --name=v1 --type=persistentVolumeClaim --claim-name=pvc1 --mount-path=/data --containers=c1", "kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example namespace: example spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: - name: volume-pppsw emptyDir: {} - name: v1 1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: 2 - name: v1 mountPath: /data", "oc set volume rc --all --add --name=v1 --source='{\"gitRepo\": { \"repository\": \"https://github.com/namespace1/project1\", 
\"revision\": \"5125c45f9f563\" }}'", "oc set volume <object_type>/<name> --add --overwrite [options]", "oc set volume rc/r1 --add --overwrite --name=v1 --type=persistentVolumeClaim --claim-name=pvc1", "kind: ReplicationController apiVersion: v1 metadata: name: example-1 namespace: example spec: replicas: 0 selector: app: httpd deployment: example-1 deploymentconfig: example template: metadata: labels: app: httpd deployment: example-1 deploymentconfig: example spec: volumes: - name: v1 1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: - name: v1 mountPath: /data", "oc set volume dc/d1 --add --overwrite --name=v1 --mount-path=/opt", "kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example namespace: example spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: - name: volume-pppsw emptyDir: {} - name: v2 persistentVolumeClaim: claimName: pvc1 - name: v1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: 1 - name: v1 mountPath: /opt", "oc set volume <object_type>/<name> --remove [options]", "oc set volume dc/d1 --remove --name=v1", "oc set volume dc/d1 --remove --name=v1 --containers=c1", "oc set volume rc/r1 --remove --confirm", "oc rsh <pod>", "sh-4.2USD ls /path/to/volume/subpath/mount example_file1 example_file2 example_file3", "apiVersion: v1 kind: Pod metadata: name: my-site spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: mysql image: mysql volumeMounts: - mountPath: /var/lib/mysql name: site-data subPath: mysql 1 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: php image: php volumeMounts: - mountPath: /var/www/html name: site-data subPath: html 2 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: site-data persistentVolumeClaim: claimName: my-site-data", "apiVersion: v1 kind: Pod metadata: name: volume-test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: container-test image: busybox volumeMounts: 1 - name: all-in-one mountPath: \"/projected-volume\" 2 readOnly: true 3 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: 4 - name: all-in-one 5 projected: defaultMode: 0400 6 sources: - secret: name: mysecret 7 items: - key: username path: my-group/my-username 8 - downwardAPI: 9 items: - path: \"labels\" fieldRef: fieldPath: metadata.labels - path: \"cpu_limit\" resourceFieldRef: containerName: container-test resource: limits.cpu - configMap: 10 name: myconfigmap items: - key: config path: my-group/my-config mode: 0777 11", "apiVersion: v1 kind: Pod metadata: name: volume-test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: container-test image: busybox volumeMounts: - name: all-in-one mountPath: \"/projected-volume\" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: all-in-one projected: defaultMode: 0755 sources: - secret: name: mysecret items: - key: username path: my-group/my-username - secret: name: mysecret2 items: - key: password path: my-group/my-password mode: 511", "apiVersion: v1 kind: Pod metadata: 
name: volume-test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: container-test image: busybox volumeMounts: - name: all-in-one mountPath: \"/projected-volume\" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: all-in-one projected: sources: - secret: name: mysecret items: - key: username path: my-group/data - configMap: name: myconfigmap items: - key: config path: my-group/data", "echo -n \"admin\" | base64", "YWRtaW4=", "echo -n \"1f2d1e2e67df\" | base64", "MWYyZDFlMmU2N2Rm", "apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque data: pass: MWYyZDFlMmU2N2Rm user: YWRtaW4=", "oc create -f <secrets-filename>", "oc create -f secret.yaml", "secret \"mysecret\" created", "oc get secret <secret-name>", "oc get secret mysecret", "NAME TYPE DATA AGE mysecret Opaque 2 17h", "oc get secret <secret-name> -o yaml", "oc get secret mysecret -o yaml", "apiVersion: v1 data: pass: MWYyZDFlMmU2N2Rm user: YWRtaW4= kind: Secret metadata: creationTimestamp: 2017-05-30T20:21:38Z name: mysecret namespace: default resourceVersion: \"2107\" selfLink: /api/v1/namespaces/default/secrets/mysecret uid: 959e0424-4575-11e7-9f97-fa163e4bd54c type: Opaque", "kind: Pod metadata: name: test-projected-volume spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-projected-volume image: busybox args: - sleep - \"86400\" volumeMounts: - name: all-in-one mountPath: \"/projected-volume\" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: all-in-one projected: sources: - secret: name: mysecret 1", "oc create -f <your_yaml_file>.yaml", "oc create -f secret-pod.yaml", "pod \"test-projected-volume\" created", "oc get pod <name>", "oc get pod test-projected-volume", "NAME READY STATUS RESTARTS AGE test-projected-volume 1/1 Running 0 14s", "oc exec -it <pod> <command>", "oc exec -it test-projected-volume -- /bin/sh", "/ # ls", "bin home root tmp dev proc run usr etc projected-volume sys var", "apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never", "oc create -f pod.yaml", "oc logs -p dapi-env-test-pod", "kind: Pod apiVersion: v1 metadata: labels: zone: us-east-coast cluster: downward-api-test-cluster1 rack: rack-123 name: dapi-volume-test-pod annotations: annotation1: \"345\" annotation2: \"456\" spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: volume-test-container image: gcr.io/google_containers/busybox command: [\"sh\", \"-c\", \"cat /tmp/etc/pod_labels /tmp/etc/pod_annotations\"] volumeMounts: - name: podinfo mountPath: /tmp/etc readOnly: false securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: podinfo downwardAPI: defaultMode: 420 items: - fieldRef: fieldPath: metadata.name path: pod_name - fieldRef: fieldPath: metadata.namespace path: pod_namespace - fieldRef: fieldPath: metadata.labels path: pod_labels - fieldRef: fieldPath: metadata.annotations path: pod_annotations 
restartPolicy: Never", "oc create -f volume-pod.yaml", "oc logs -p dapi-volume-test-pod", "cluster=downward-api-test-cluster1 rack=rack-123 zone=us-east-coast annotation1=345 annotation2=456 kubernetes.io/config.source=api", "apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox:1.24 command: [ \"/bin/sh\", \"-c\", \"env\" ] resources: requests: memory: \"32Mi\" cpu: \"125m\" limits: memory: \"64Mi\" cpu: \"250m\" env: - name: MY_CPU_REQUEST valueFrom: resourceFieldRef: resource: requests.cpu - name: MY_CPU_LIMIT valueFrom: resourceFieldRef: resource: limits.cpu - name: MY_MEM_REQUEST valueFrom: resourceFieldRef: resource: requests.memory - name: MY_MEM_LIMIT valueFrom: resourceFieldRef: resource: limits.memory", "oc create -f pod.yaml", "apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: client-container image: gcr.io/google_containers/busybox:1.24 command: [\"sh\", \"-c\", \"while true; do echo; if [[ -e /etc/cpu_limit ]]; then cat /etc/cpu_limit; fi; if [[ -e /etc/cpu_request ]]; then cat /etc/cpu_request; fi; if [[ -e /etc/mem_limit ]]; then cat /etc/mem_limit; fi; if [[ -e /etc/mem_request ]]; then cat /etc/mem_request; fi; sleep 5; done\"] resources: requests: memory: \"32Mi\" cpu: \"125m\" limits: memory: \"64Mi\" cpu: \"250m\" volumeMounts: - name: podinfo mountPath: /etc readOnly: false volumes: - name: podinfo downwardAPI: items: - path: \"cpu_limit\" resourceFieldRef: containerName: client-container resource: limits.cpu - path: \"cpu_request\" resourceFieldRef: containerName: client-container resource: requests.cpu - path: \"mem_limit\" resourceFieldRef: containerName: client-container resource: limits.memory - path: \"mem_request\" resourceFieldRef: containerName: client-container resource: requests.memory", "oc create -f volume-pod.yaml", "apiVersion: v1 kind: Secret metadata: name: mysecret data: password: <password> username: <username> type: kubernetes.io/basic-auth", "oc create -f secret.yaml", "apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_SECRET_USERNAME valueFrom: secretKeyRef: name: mysecret key: username securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never", "oc create -f pod.yaml", "oc logs -p dapi-env-test-pod", "apiVersion: v1 kind: ConfigMap metadata: name: myconfigmap data: mykey: myvalue", "oc create -f configmap.yaml", "apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_CONFIGMAP_VALUE valueFrom: configMapKeyRef: name: myconfigmap key: mykey securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Always", "oc create -f pod.yaml", "oc logs -p dapi-env-test-pod", "apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_EXISTING_ENV value: my_value - name: MY_ENV_VAR_REF_ENV value: USD(MY_EXISTING_ENV) 
securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never", "oc create -f pod.yaml", "oc logs -p dapi-env-test-pod", "apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_NEW_ENV value: USDUSD(SOME_OTHER_ENV) securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never", "oc create -f pod.yaml", "oc logs -p dapi-env-test-pod", "oc rsync <source> <destination> [-c <container>]", "<pod name>:<dir>", "oc rsync <local-dir> <pod-name>:/<remote-dir> -c <container-name>", "oc rsync /home/user/source devpod1234:/src -c user-container", "oc rsync devpod1234:/src /home/user/source", "oc rsync devpod1234:/src/status.txt /home/user/", "rsync --rsh='oc rsh' --exclude-from=<file_name> <local-dir> <pod-name>:/<remote-dir>", "export RSYNC_RSH='oc rsh'", "rsync --exclude-from=<file_name> <local-dir> <pod-name>:/<remote-dir>", "oc exec <pod> [-c <container>] -- <command> [<arg_1> ... <arg_n>]", "oc exec mypod date", "Thu Apr 9 02:21:53 UTC 2015", "/proxy/nodes/<node_name>/exec/<namespace>/<pod>/<container>?command=<command>", "/proxy/nodes/node123.openshift.com/exec/myns/mypod/mycontainer?command=date", "oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>]", "oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>]", "oc port-forward <pod> 5000 6000", "Forwarding from 127.0.0.1:5000 -> 5000 Forwarding from [::1]:5000 -> 5000 Forwarding from 127.0.0.1:6000 -> 6000 Forwarding from [::1]:6000 -> 6000", "oc port-forward <pod> 8888:5000", "Forwarding from 127.0.0.1:8888 -> 5000 Forwarding from [::1]:8888 -> 5000", "oc port-forward <pod> :5000", "Forwarding from 127.0.0.1:42390 -> 5000 Forwarding from [::1]:42390 -> 5000", "oc port-forward <pod> 0:5000", "/proxy/nodes/<node_name>/portForward/<namespace>/<pod>", "/proxy/nodes/node123.openshift.com/portForward/myns/mypod", "sudo sysctl -a", "oc get cm -n openshift-multus cni-sysctl-allowlist -oyaml", "apiVersion: v1 data: allowlist.conf: |- ^net.ipv4.conf.IFNAME.accept_redirectsUSD ^net.ipv4.conf.IFNAME.accept_source_routeUSD ^net.ipv4.conf.IFNAME.arp_acceptUSD ^net.ipv4.conf.IFNAME.arp_notifyUSD ^net.ipv4.conf.IFNAME.disable_policyUSD ^net.ipv4.conf.IFNAME.secure_redirectsUSD ^net.ipv4.conf.IFNAME.send_redirectsUSD ^net.ipv6.conf.IFNAME.accept_raUSD ^net.ipv6.conf.IFNAME.accept_redirectsUSD ^net.ipv6.conf.IFNAME.accept_source_routeUSD ^net.ipv6.conf.IFNAME.arp_acceptUSD ^net.ipv6.conf.IFNAME.arp_notifyUSD ^net.ipv6.neigh.IFNAME.base_reachable_time_msUSD ^net.ipv6.neigh.IFNAME.retrans_time_msUSD kind: ConfigMap metadata: annotations: kubernetes.io/description: | Sysctl allowlist for nodes. release.openshift.io/version: 4.18.0-0.nightly-2022-11-16-003434 creationTimestamp: \"2022-11-17T14:09:27Z\" name: cni-sysctl-allowlist namespace: openshift-multus resourceVersion: \"2422\" uid: 96d138a3-160e-4943-90ff-6108fa7c50c3", "oc edit cm -n openshift-multus cni-sysctl-allowlist -oyaml", "Please edit the object below. Lines beginning with a '#' will be ignored, and an empty file will abort the edit. If an error occurs while saving this file will be reopened with the relevant failures. 
# apiVersion: v1 data: allowlist.conf: |- ^net.ipv4.conf.IFNAME.accept_redirectsUSD ^net.ipv4.conf.IFNAME.accept_source_routeUSD ^net.ipv4.conf.IFNAME.arp_acceptUSD ^net.ipv4.conf.IFNAME.arp_notifyUSD ^net.ipv4.conf.IFNAME.disable_policyUSD ^net.ipv4.conf.IFNAME.secure_redirectsUSD ^net.ipv4.conf.IFNAME.send_redirectsUSD ^net.ipv4.conf.IFNAME.rp_filterUSD ^net.ipv6.conf.IFNAME.accept_raUSD ^net.ipv6.conf.IFNAME.accept_redirectsUSD ^net.ipv6.conf.IFNAME.accept_source_routeUSD ^net.ipv6.conf.IFNAME.arp_acceptUSD ^net.ipv6.conf.IFNAME.arp_notifyUSD ^net.ipv6.neigh.IFNAME.base_reachable_time_msUSD ^net.ipv6.neigh.IFNAME.retrans_time_msUSD ^net.ipv6.conf.IFNAME.rp_filterUSD", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: tuningnad namespace: default spec: config: '{ \"cniVersion\": \"0.4.0\", \"name\": \"tuningnad\", \"plugins\": [{ \"type\": \"bridge\" }, { \"type\": \"tuning\", \"sysctl\": { \"net.ipv4.conf.IFNAME.rp_filter\": \"1\" } } ] }'", "oc apply -f reverse-path-fwd-example.yaml", "networkattachmentdefinition.k8.cni.cncf.io/tuningnad created", "apiVersion: v1 kind: Pod metadata: name: example labels: app: httpd namespace: default annotations: k8s.v1.cni.cncf.io/networks: tuningnad 1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: httpd image: 'image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest' ports: - containerPort: 8080 securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL", "oc apply -f examplepod.yaml", "oc get pod", "NAME READY STATUS RESTARTS AGE example 1/1 Running 0 47s", "oc rsh example", "sh-4.4# sysctl net.ipv4.conf.net1.rp_filter", "net.ipv4.conf.net1.rp_filter = 1", "apiVersion: v1 kind: Pod metadata: name: sysctl-example namespace: default spec: containers: - name: podexample image: centos command: [\"bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 1 runAsGroup: 3000 2 allowPrivilegeEscalation: false 3 capabilities: 4 drop: [\"ALL\"] securityContext: runAsNonRoot: true 5 seccompProfile: 6 type: RuntimeDefault sysctls: - name: kernel.shm_rmid_forced value: \"1\" - name: net.ipv4.ip_local_port_range value: \"32770 60666\" - name: net.ipv4.tcp_syncookies value: \"0\" - name: net.ipv4.ping_group_range value: \"0 200000000\"", "oc apply -f sysctl_pod.yaml", "oc get pod", "NAME READY STATUS RESTARTS AGE sysctl-example 1/1 Running 0 14s", "oc rsh sysctl-example", "sh-4.4# sysctl kernel.shm_rmid_forced", "kernel.shm_rmid_forced = 1", "apiVersion: v1 kind: Pod metadata: name: sysctl-example-unsafe spec: containers: - name: podexample image: centos command: [\"bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault sysctls: - name: kernel.shm_rmid_forced value: \"0\" - name: net.core.somaxconn value: \"1024\" - name: kernel.msgmax value: \"65536\"", "oc apply -f sysctl-example-unsafe.yaml", "oc get pod", "NAME READY STATUS RESTARTS AGE sysctl-example-unsafe 0/1 SysctlForbidden 0 14s", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-bfb92f0cd1684e54d8e234ab7423cc96 True False False 3 3 3 0 42m worker rendered-worker-21b6cb9a0f8919c88caf39db80ac1fce True False False 3 3 3 0 42m", "oc label machineconfigpool worker custom-kubelet=sysctl", "apiVersion: 
machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-kubelet spec: machineConfigPoolSelector: matchLabels: custom-kubelet: sysctl 1 kubeletConfig: allowedUnsafeSysctls: 2 - \"kernel.msg*\" - \"net.core.somaxconn\"", "oc apply -f set-sysctl-worker.yaml", "oc get machineconfigpool worker -w", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-f1704a00fc6f30d3a7de9a15fd68a800 False True False 3 2 2 0 71m worker rendered-worker-f1704a00fc6f30d3a7de9a15fd68a800 False True False 3 2 3 0 72m worker rendered-worker-0188658afe1f3a183ec8c4f14186f4d5 True False False 3 3 3 0 72m", "apiVersion: v1 kind: Pod metadata: name: sysctl-example-safe-unsafe spec: containers: - name: podexample image: centos command: [\"bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault sysctls: - name: kernel.shm_rmid_forced value: \"0\" - name: net.core.somaxconn value: \"1024\" - name: kernel.msgmax value: \"65536\"", "oc apply -f sysctl-example-safe-unsafe.yaml", "Warning: would violate PodSecurity \"restricted:latest\": forbidden sysctls (net.core.somaxconn, kernel.msgmax) pod/sysctl-example-safe-unsafe created", "oc get pod", "NAME READY STATUS RESTARTS AGE sysctl-example-safe-unsafe 1/1 Running 0 19s", "oc rsh sysctl-example-safe-unsafe", "sh-4.4# sysctl net.core.somaxconn", "net.core.somaxconn = 1024", "oc exec -ti no-priv -- /bin/bash", "cat >> Dockerfile <<EOF FROM registry.access.redhat.com/ubi9 EOF", "podman build .", "io.kubernetes.cri-o.Devices: \"/dev/fuse\"", "apiVersion: v1 kind: Pod metadata: name: podman-pod annotations: io.kubernetes.cri-o.Devices: \"/dev/fuse\"", "spec: containers: - name: podman-container image: quay.io/podman/stable args: - sleep - \"1000000\" securityContext: runAsUser: 1000", "oc get events [-n <project>] 1", "oc get events -n openshift-config", "LAST SEEN TYPE REASON OBJECT MESSAGE 97m Normal Scheduled pod/dapi-env-test-pod Successfully assigned openshift-config/dapi-env-test-pod to ip-10-0-171-202.ec2.internal 97m Normal Pulling pod/dapi-env-test-pod pulling image \"gcr.io/google_containers/busybox\" 97m Normal Pulled pod/dapi-env-test-pod Successfully pulled image \"gcr.io/google_containers/busybox\" 97m Normal Created pod/dapi-env-test-pod Created container 9m5s Warning FailedCreatePodSandBox pod/dapi-volume-test-pod Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dapi-volume-test-pod_openshift-config_6bc60c1f-452e-11e9-9140-0eec59c23068_0(748c7a40db3d08c07fb4f9eba774bd5effe5f0d5090a242432a73eee66ba9e22): Multus: Err adding pod to network \"ovn-kubernetes\": cannot set \"ovn-kubernetes\" ifname to \"eth0\": no netns: failed to Statfs \"/proc/33366/ns/net\": no such file or directory 8m31s Normal Scheduled pod/dapi-volume-test-pod Successfully assigned openshift-config/dapi-volume-test-pod to ip-10-0-171-202.ec2.internal #", "apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc create -f 
<file_name>.yaml", "oc create -f pod-spec.yaml", "podman login registry.redhat.io", "podman pull registry.redhat.io/openshift4/ose-cluster-capacity", "podman run -v USDHOME/.kube:/kube:Z -v USD(pwd):/cc:Z ose-cluster-capacity /bin/cluster-capacity --kubeconfig /kube/config --<pod_spec>.yaml /cc/<pod_spec>.yaml --verbose", "small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 88 instance(s) of the pod small-pod. Termination reason: Unschedulable: 0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate. Pod distribution among nodes: small-pod - 192.168.124.214: 45 instance(s) - 192.168.124.120: 43 instance(s)", "kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: cluster-capacity-role rules: - apiGroups: [\"\"] resources: [\"pods\", \"nodes\", \"persistentvolumeclaims\", \"persistentvolumes\", \"services\", \"replicationcontrollers\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"apps\"] resources: [\"replicasets\", \"statefulsets\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"policy\"] resources: [\"poddisruptionbudgets\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"storage.k8s.io\"] resources: [\"storageclasses\"] verbs: [\"get\", \"watch\", \"list\"]", "oc create -f <file_name>.yaml", "oc create sa cluster-capacity-sa", "oc create sa cluster-capacity-sa -n default", "oc adm policy add-cluster-role-to-user cluster-capacity-role system:serviceaccount:<namespace>:cluster-capacity-sa", "apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc create -f <file_name>.yaml", "oc create -f pod.yaml", "oc create configmap cluster-capacity-configmap --from-file=pod.yaml=pod.yaml", "apiVersion: batch/v1 kind: Job metadata: name: cluster-capacity-job spec: parallelism: 1 completions: 1 template: metadata: name: cluster-capacity-pod spec: containers: - name: cluster-capacity image: openshift/origin-cluster-capacity imagePullPolicy: \"Always\" volumeMounts: - mountPath: /test-pod name: test-volume env: - name: CC_INCLUSTER 1 value: \"true\" command: - \"/bin/sh\" - \"-ec\" - | /bin/cluster-capacity --podspec=/test-pod/pod.yaml --verbose restartPolicy: \"Never\" serviceAccountName: cluster-capacity-sa volumes: - name: test-volume configMap: name: cluster-capacity-configmap", "oc create -f cluster-capacity-job.yaml", "oc logs jobs/cluster-capacity-job", "small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 52 instance(s) of the pod small-pod. Termination reason: Unschedulable: No nodes are available that match all of the following predicates:: Insufficient cpu (2). 
Pod distribution among nodes: small-pod - 192.168.124.214: 26 instance(s) - 192.168.124.120: 26 instance(s)", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" spec: limits: - type: \"Container\" max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"100m\" memory: \"4Mi\" default: cpu: \"300m\" memory: \"200Mi\" defaultRequest: cpu: \"200m\" memory: \"100Mi\" maxLimitRequestRatio: cpu: \"10\"", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Container\" max: cpu: \"2\" 2 memory: \"1Gi\" 3 min: cpu: \"100m\" 4 memory: \"4Mi\" 5 default: cpu: \"300m\" 6 memory: \"200Mi\" 7 defaultRequest: cpu: \"200m\" 8 memory: \"100Mi\" 9 maxLimitRequestRatio: cpu: \"10\" 10", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Pod\" max: cpu: \"2\" 2 memory: \"1Gi\" 3 min: cpu: \"200m\" 4 memory: \"6Mi\" 5 maxLimitRequestRatio: cpu: \"10\" 6", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: openshift.io/Image max: storage: 1Gi 2", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: openshift.io/ImageStream max: openshift.io/image-tags: 20 2 openshift.io/images: 30 3", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"PersistentVolumeClaim\" min: storage: \"2Gi\" 2 max: storage: \"50Gi\" 3", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Pod\" 2 max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"200m\" memory: \"6Mi\" - type: \"Container\" 3 max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"100m\" memory: \"4Mi\" default: 4 cpu: \"300m\" memory: \"200Mi\" defaultRequest: 5 cpu: \"200m\" memory: \"100Mi\" maxLimitRequestRatio: 6 cpu: \"10\" - type: openshift.io/Image 7 max: storage: 1Gi - type: openshift.io/ImageStream 8 max: openshift.io/image-tags: 20 openshift.io/images: 30 - type: \"PersistentVolumeClaim\" 9 min: storage: \"2Gi\" max: storage: \"50Gi\"", "oc create -f <limit_range_file> -n <project> 1", "oc get limits -n demoproject", "NAME CREATED AT resource-limits 2020-07-15T17:14:23Z", "oc describe limits resource-limits -n demoproject", "Name: resource-limits Namespace: demoproject Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio ---- -------- --- --- --------------- ------------- ----------------------- Pod cpu 200m 2 - - - Pod memory 6Mi 1Gi - - - Container cpu 100m 2 200m 300m 10 Container memory 4Mi 1Gi 100Mi 200Mi - openshift.io/Image storage - 1Gi - - - openshift.io/ImageStream openshift.io/image - 12 - - - openshift.io/ImageStream openshift.io/image-tags - 10 - - - PersistentVolumeClaim storage - 50Gi - - -", "oc delete limits <limit_name>", "-XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90.", "JAVA_TOOL_OPTIONS=\"-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true\"", "apiVersion: v1 kind: Pod metadata: name: test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test image: fedora:latest command: - sleep - \"3600\" env: - name: MEMORY_REQUEST 1 valueFrom: resourceFieldRef: containerName: test resource: requests.memory - name: MEMORY_LIMIT 2 valueFrom: resourceFieldRef: containerName: test resource: limits.memory resources: requests: memory: 384Mi limits: memory: 512Mi securityContext: 
allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc create -f <file_name>.yaml", "oc rsh test", "env | grep MEMORY | sort", "MEMORY_LIMIT=536870912 MEMORY_REQUEST=402653184", "oc rsh test", "grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control", "oom_kill 0", "sed -e '' </dev/zero", "Killed", "echo USD?", "137", "grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control", "oom_kill 1", "oc get pod test", "NAME READY STATUS RESTARTS AGE test 0/1 OOMKilled 0 1m", "oc get pod test -o yaml", "status: containerStatuses: - name: test ready: false restartCount: 0 state: terminated: exitCode: 137 reason: OOMKilled phase: Failed", "oc get pod test -o yaml", "status: containerStatuses: - name: test ready: true restartCount: 1 lastState: terminated: exitCode: 137 reason: OOMKilled state: running: phase: Running", "oc get pod test", "NAME READY STATUS RESTARTS AGE test 0/1 Evicted 0 1m", "oc get pod test -o yaml", "status: message: 'Pod The node was low on resource: [MemoryPressure].' phase: Failed reason: Evicted", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4", "apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\"", "apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace spec: containers: - name: hello-openshift image: openshift/hello-openshift resources: limits: memory: \"512Mi\" cpu: \"2000m\"", "apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace spec: containers: - image: openshift/hello-openshift name: hello-openshift resources: limits: cpu: \"1\" 1 memory: 512Mi requests: cpu: 250m 2 memory: 256Mi", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3", "apiVersion: v1 kind: Namespace metadata: name: clusterresourceoverride-operator", "oc create -f <file-name>.yaml", "oc create -f cro-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: clusterresourceoverride-operator namespace: clusterresourceoverride-operator spec: targetNamespaces: - clusterresourceoverride-operator", "oc create -f <file-name>.yaml", "oc create -f 
cro-og.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: channel: \"stable\" name: clusterresourceoverride source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f <file-name>.yaml", "oc create -f cro-sub.yaml", "oc project clusterresourceoverride-operator", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4", "oc create -f <file-name>.yaml", "oc create -f cro-cr.yaml", "oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 1 cpuRequestToLimitPercent: 25 2 limitCPUToMemoryPercent: 200 3", "apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\" 1", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES clusterresourceoverride-786b8c898c-9wrdq 1/1 Running 0 23s 10.128.2.32 ip-10-0-14-183.us-west-2.compute.internal <none> <none> clusterresourceoverride-786b8c898c-vn2lf 1/1 Running 0 26s 10.130.2.10 ip-10-0-20-140.us-west-2.compute.internal <none> <none> clusterresourceoverride-operator-6b8b8b656b-lvr62 1/1 Running 0 56m 10.131.0.33 ip-10-0-2-39.us-west-2.compute.internal <none> <none>", "NAME STATUS ROLES AGE VERSION ip-10-0-14-183.us-west-2.compute.internal Ready control-plane,master 65m v1.31.3 ip-10-0-2-39.us-west-2.compute.internal Ready worker 58m v1.31.3 ip-10-0-20-140.us-west-2.compute.internal Ready control-plane,master 65m v1.31.3 ip-10-0-23-244.us-west-2.compute.internal Ready infra 55m v1.31.3 ip-10-0-77-153.us-west-2.compute.internal Ready control-plane,master 65m v1.31.3 ip-10-0-99-108.us-west-2.compute.internal Ready worker 24m v1.31.3 ip-10-0-24-233.us-west-2.compute.internal Ready infra 55m v1.31.3 ip-10-0-88-109.us-west-2.compute.internal Ready worker 24m v1.31.3 ip-10-0-67-453.us-west-2.compute.internal Ready infra 55m v1.31.3", "oc edit -n clusterresourceoverride-operator subscriptions.operators.coreos.com clusterresourceoverride", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride 
namespace: clusterresourceoverride-operator spec: config: nodeSelector: node-role.kubernetes.io/infra: \"\" 1", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: config: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: 1 - key: \"node-role.kubernetes.io/infra\" operator: \"Exists\" effect: \"NoSchedule\"", "oc edit ClusterResourceOverride cluster -n clusterresourceoverride-operator", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster resourceVersion: \"37952\" spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 deploymentOverrides: replicas: 1 1 nodeSelector: node-role.kubernetes.io/infra: \"\" 2", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 deploymentOverrides: replicas: 3 nodeSelector: node-role.kubernetes.io/worker: \"\" tolerations: 1 - key: \"key\" operator: \"Equal\" value: \"value\" effect: \"NoSchedule\"", "oc get pods -n clusterresourceoverride-operator -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES clusterresourceoverride-786b8c898c-9wrdq 1/1 Running 0 23s 10.127.2.25 ip-10-0-23-244.us-west-2.compute.internal <none> <none> clusterresourceoverride-786b8c898c-vn2lf 1/1 Running 0 26s 10.128.0.80 ip-10-0-24-233.us-west-2.compute.internal <none> <none> clusterresourceoverride-operator-6b8b8b656b-lvr62 1/1 Running 0 56m 10.129.0.71 ip-10-0-67-453.us-west-2.compute.internal <none> <none>", "sysctl -a |grep commit", "# vm.overcommit_memory = 0 #", "sysctl -a |grep panic", "# vm.panic_on_oom = 0 #", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: cpuCfsQuota: false 3", "oc create -f <file_name>.yaml", "sysctl -w vm.overcommit_memory=0", "apiVersion: v1 kind: Namespace metadata: annotations: quota.openshift.io/cluster-resource-override-enabled: \"false\" <.>", "oc edit nodes.config/cluster", "apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: cgroupMode: \"v1\" 1", "oc get mc", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 
52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 97-master-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23d4317815a5f854bd3553d689cfe2e9 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 10s 1 rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-dcc7f1b92892d34db74d6832bcc9ccd4 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 10s", "oc describe mc <name>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-selinuxpermissive spec: kernelArguments: systemd_unified_cgroup_hierarchy=1 1 cgroup_no_v1=\"all\" 2 psi=0", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-selinuxpermissive spec: kernelArguments: systemd.unified_cgroup_hierarchy=0 1 systemd.legacy_systemd_cgroup_controller=1 2 psi=1 3", "oc get nodes", "NAME STATUS ROLES AGE VERSION ci-ln-fm1qnwt-72292-99kt6-master-0 Ready,SchedulingDisabled master 58m v1.31.3 ci-ln-fm1qnwt-72292-99kt6-master-1 Ready master 58m v1.31.3 ci-ln-fm1qnwt-72292-99kt6-master-2 Ready master 58m v1.31.3 ci-ln-fm1qnwt-72292-99kt6-worker-a-h5gt4 Ready,SchedulingDisabled worker 48m v1.31.3 ci-ln-fm1qnwt-72292-99kt6-worker-b-7vtmd Ready worker 48m v1.31.3 ci-ln-fm1qnwt-72292-99kt6-worker-c-rhzkv Ready worker 48m v1.31.3", "oc debug node/<node_name>", "sh-4.4# chroot /host", "stat -c %T -f /sys/fs/cgroup", "cgroup2fs", "tmpfs", "compute: - hyperthreading: Enabled name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 metadataService: authentication: Optional type: c5.4xlarge zones: - us-west-2c replicas: 3 featureSet: TechPreviewNoUpgrade", "sh-4.2# chroot /host", "sh-4.2# cat /etc/kubernetes/kubelet.conf", "featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false", "apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: TechPreviewNoUpgrade 2", "sh-4.2# chroot /host", "sh-4.2# cat /etc/kubernetes/kubelet.conf", "featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false", "oc edit featuregate cluster", "apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: TechPreviewNoUpgrade 2", "sh-4.2# chroot /host", "sh-4.2# cat /etc/kubernetes/kubelet.conf", "featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false", "oc edit nodes.config/cluster", "apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 
name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: MediumUpdateAverageReaction 1", "oc edit nodes.config/cluster", "apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: LowUpdateSlowReaction 1", "oc get KubeControllerManager -o yaml | grep -i workerlatency -A 5 -B 5", "- lastTransitionTime: \"2022-07-11T19:47:10Z\" reason: ProfileUpdated status: \"False\" type: WorkerLatencyProfileProgressing - lastTransitionTime: \"2022-07-11T19:47:10Z\" 1 message: all static pod revision(s) have updated latency profile reason: ProfileUpdated status: \"True\" type: WorkerLatencyProfileComplete - lastTransitionTime: \"2022-07-11T19:20:11Z\" reason: AsExpected status: \"False\" type: WorkerLatencyProfileDegraded - lastTransitionTime: \"2022-07-11T19:20:36Z\" status: \"False\"", "tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute - key: node.kubernetes.io/disk-pressure operator: Exists effect: NoSchedule - key: node.kubernetes.io/memory-pressure operator: Exists effect: NoSchedule - key: node.kubernetes.io/pid-pressure operator: Exists effect: NoSchedule - key: node.kubernetes.io/unschedulable operator: Exists effect: NoSchedule", "kind: Node apiVersion: v1 metadata: labels: topology.kubernetes.io/region=east", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker 1 kubeletConfig: node-status-update-frequency: 2 - \"10s\" node-status-report-frequency: 3 - \"1m\"", "tolerations: - key: \"node.kubernetes.io/unreachable\" operator: \"Exists\" effect: \"NoExecute\" 1 - key: \"node.kubernetes.io/not-ready\" operator: \"Exists\" effect: \"NoExecute\" 2 tolerationSeconds: 600 3", "export OFFLINE_TOKEN=<copied_api_token>", "export JWT_TOKEN=USD( curl --silent --header \"Accept: application/json\" --header \"Content-Type: application/x-www-form-urlencoded\" --data-urlencode \"grant_type=refresh_token\" --data-urlencode \"client_id=cloud-services\" --data-urlencode \"refresh_token=USD{OFFLINE_TOKEN}\" \"https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token\" | jq --raw-output \".access_token\" )", "curl -s https://api.openshift.com/api/assisted-install/v2/component-versions -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq", "{ \"release_tag\": \"v2.5.1\", \"versions\": { \"assisted-installer\": \"registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:v1.0.0-175\", \"assisted-installer-controller\": \"registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:v1.0.0-223\", \"assisted-installer-service\": \"quay.io/app-sre/assisted-service:ac87f93\", \"discovery-agent\": 
\"registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:v1.0.0-156\" } }", "export API_URL=<api_url> 1", "export OPENSHIFT_CLUSTER_ID=USD(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}')", "export CLUSTER_REQUEST=USD(jq --null-input --arg openshift_cluster_id \"USDOPENSHIFT_CLUSTER_ID\" '{ \"api_vip_dnsname\": \"<api_vip>\", 1 \"openshift_cluster_id\": USDopenshift_cluster_id, \"name\": \"<openshift_cluster_name>\" 2 }')", "CLUSTER_ID=USD(curl \"USDAPI_URL/api/assisted-install/v2/clusters/import\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" -H 'accept: application/json' -H 'Content-Type: application/json' -d \"USDCLUSTER_REQUEST\" | tee /dev/stderr | jq -r '.id')", "export INFRA_ENV_REQUEST=USD(jq --null-input --slurpfile pull_secret <path_to_pull_secret_file> \\ 1 --arg ssh_pub_key \"USD(cat <path_to_ssh_pub_key>)\" \\ 2 --arg cluster_id \"USDCLUSTER_ID\" '{ \"name\": \"<infraenv_name>\", 3 \"pull_secret\": USDpull_secret[0] | tojson, \"cluster_id\": USDcluster_id, \"ssh_authorized_key\": USDssh_pub_key, \"image_type\": \"<iso_image_type>\" 4 }')", "INFRA_ENV_ID=USD(curl \"USDAPI_URL/api/assisted-install/v2/infra-envs\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" -H 'accept: application/json' -H 'Content-Type: application/json' -d \"USDINFRA_ENV_REQUEST\" | tee /dev/stderr | jq -r '.id')", "curl -s \"USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq -r '.download_url'", "https://api.openshift.com/api/assisted-images/images/41b91e72-c33e-42ee-b80f-b5c5bbf6431a?arch=x86_64&image_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NTYwMjYzNzEsInN1YiI6IjQxYjkxZTcyLWMzM2UtNDJlZS1iODBmLWI1YzViYmY2NDMxYSJ9.1EX_VGaMNejMhrAvVRBS7PDPIQtbOOc8LtG8OukE1a4&type=minimal-iso&version=USDVERSION", "curl -L -s '<iso_url>' --output rhcos-live-minimal.iso 1", "curl -s \"USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq -r '.hosts[] | select(.status != \"installed\").id'", "2294ba03-c264-4f11-ac08-2f1bb2f8c296", "HOST_ID=<host_id> 1", "curl -s USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq ' def host_name(USDhost): if (.suggested_hostname // \"\") == \"\" then if (.inventory // \"\") == \"\" then \"Unknown hostname, please wait\" else .inventory | fromjson | .hostname end else .suggested_hostname end; def is_notable(USDvalidation): [\"failure\", \"pending\", \"error\"] | any(. 
== USDvalidation.status); def notable_validations(USDvalidations_info): [ USDvalidations_info // \"{}\" | fromjson | to_entries[].value[] | select(is_notable(.)) ]; { \"Hosts validations\": { \"Hosts\": [ .hosts[] | select(.status != \"installed\") | { \"id\": .id, \"name\": host_name(.), \"status\": .status, \"notable_validations\": notable_validations(.validations_info) } ] }, \"Cluster validations info\": { \"notable_validations\": notable_validations(.validations_info) } } ' -r", "{ \"Hosts validations\": { \"Hosts\": [ { \"id\": \"97ec378c-3568-460c-bc22-df54534ff08f\", \"name\": \"localhost.localdomain\", \"status\": \"insufficient\", \"notable_validations\": [ { \"id\": \"ntp-synced\", \"status\": \"failure\", \"message\": \"Host couldn't synchronize with any NTP server\" }, { \"id\": \"api-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" }, { \"id\": \"api-int-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" }, { \"id\": \"apps-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" } ] } ] }, \"Cluster validations info\": { \"notable_validations\": [] } }", "curl -X POST -s \"USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID/hosts/USDHOST_ID/actions/install\" -H \"Authorization: Bearer USD{JWT_TOKEN}\"", "curl -s \"USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq '{ \"Cluster day-2 hosts\": [ .hosts[] | select(.status != \"installed\") | {id, requested_hostname, status, status_info, progress, status_updated_at, updated_at, infra_env_id, cluster_id, created_at} ] }'", "{ \"Cluster day-2 hosts\": [ { \"id\": \"a1c52dde-3432-4f59-b2ae-0a530c851480\", \"requested_hostname\": \"control-plane-1\", \"status\": \"added-to-existing-cluster\", \"status_info\": \"Host has rebooted and no further updates will be posted. 
Please check console for progress and to possibly approve pending CSRs\", \"progress\": { \"current_stage\": \"Done\", \"installation_percentage\": 100, \"stage_started_at\": \"2022-07-08T10:56:20.476Z\", \"stage_updated_at\": \"2022-07-08T10:56:20.476Z\" }, \"status_updated_at\": \"2022-07-08T10:56:20.476Z\", \"updated_at\": \"2022-07-08T10:57:15.306369Z\", \"infra_env_id\": \"b74ec0c3-d5b5-4717-a866-5b6854791bd3\", \"cluster_id\": \"8f721322-419d-4eed-aa5b-61b50ea586ae\", \"created_at\": \"2022-07-06T22:54:57.161614Z\" } ] }", "curl -s \"USDAPI_URL/api/assisted-install/v2/events?cluster_id=USDCLUSTER_ID\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq -c '.[] | {severity, message, event_time, host_id}'", "{\"severity\":\"info\",\"message\":\"Host compute-0: updated status from insufficient to known (Host is ready to be installed)\",\"event_time\":\"2022-07-08T11:21:46.346Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from known to installing (Installation is in progress)\",\"event_time\":\"2022-07-08T11:28:28.647Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from installing to installing-in-progress (Starting installation)\",\"event_time\":\"2022-07-08T11:28:52.068Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Uploaded logs for host compute-0 cluster 8f721322-419d-4eed-aa5b-61b50ea586ae\",\"event_time\":\"2022-07-08T11:29:47.802Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from installing-in-progress to added-to-existing-cluster (Host has rebooted and no further updates will be posted. 
Please check console for progress and to possibly approve pending CSRs)\",\"event_time\":\"2022-07-08T11:29:48.259Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host: compute-0, reached installation stage Rebooting\",\"event_time\":\"2022-07-08T11:29:48.261Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"}", "oc get nodes", "NAME STATUS ROLES AGE VERSION control-plane-1.example.com Ready master,worker 56m v1.31.3 compute-1.example.com Ready worker 11m v1.31.3", "OCP_VERSION=<ocp_version> 1", "ARCH=<architecture> 1", "oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign", "curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-install-linux.tar.gz > openshift-install-linux.tar.gz", "tar zxvf openshift-install-linux.tar.gz", "chmod +x openshift-install", "ISO_URL=USD(./openshift-install coreos print-stream-json | grep location | grep USDARCH | grep iso | cut -d\\\" -f4)", "curl -L USDISO_URL -o rhcos-live.iso", "nmcli con mod <network_interface> ipv4.method manual / ipv4.addresses <static_ip> ipv4.gateway <network_gateway> ipv4.dns <dns_server> / 802-3-ethernet.mtu 9000", "nmcli con up <network_interface>", "{ \"ignition\":{ \"version\":\"3.2.0\", \"config\":{ \"merge\":[ { \"source\":\"<hosted_worker_ign_file>\" 1 } ] } }, \"storage\":{ \"files\":[ { \"path\":\"/etc/hostname\", \"contents\":{ \"source\":\"data:,<new_fqdn>\" 2 }, \"mode\":420, \"overwrite\":true, \"path\":\"/etc/hostname\" } ] } }", "sudo coreos-installer install --copy-network / --ignition-url=<new_worker_ign_file> <hard_disk> --insecure-ignition", "coreos-installer install --ignition-url=<hosted_worker_ign_file> <hard_disk>", "apiVersion: agent-install.openshift.io/v1 kind: NMStateConfig metadata: name: nmstateconfig-dhcp namespace: example-sno labels: nmstate_config_cluster_name: <nmstate_config_cluster_label> spec: config: interfaces: - name: eth0 type: ethernet state: up ipv4: enabled: true dhcp: true ipv6: enabled: false interfaces: - name: \"eth0\" macAddress: \"AA:BB:CC:DD:EE:11\"", "oc get nodes", "NAME STATUS ROLES AGE VERSION control-plane-1.example.com Ready master,worker 56m v1.31.3 compute-1.example.com Ready worker 11m v1.31.3", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3", "topk(3, 
sum(increase(container_runtime_crio_containers_oom_count_total[1d])) by (name))", "rate(container_runtime_crio_image_pulls_failure_total[1h]) / (rate(container_runtime_crio_image_pulls_success_total[1h]) + rate(container_runtime_crio_image_pulls_failure_total[1h]))", "sum by (node) (container_memory_rss{id=\"/system.slice\"}) / sum by (node) (kube_node_status_capacity{resource=\"memory\"} - kube_node_status_allocatable{resource=\"memory\"}) * 100 >= 80", "sum by (node) (container_memory_rss{id=\"/system.slice/kubelet.service\"}) / sum by (node) (kube_node_status_capacity{resource=\"memory\"} - kube_node_status_allocatable{resource=\"memory\"}) * 100 >= 50", "sum by (node) (container_memory_rss{id=\"/system.slice/crio.service\"}) / sum by (node) (kube_node_status_capacity{resource=\"memory\"} - kube_node_status_allocatable{resource=\"memory\"}) * 100 >= 50", "sum by (node) (rate(container_cpu_usage_seconds_total{id=\"/system.slice\"}[5m]) * 100) / sum by (node) (kube_node_status_capacity{resource=\"cpu\"} - kube_node_status_allocatable{resource=\"cpu\"}) >= 80", "sum by (node) (rate(container_cpu_usage_seconds_total{id=\"/system.slice/kubelet.service\"}[5m]) * 100) / sum by (node) (kube_node_status_capacity{resource=\"cpu\"} - kube_node_status_allocatable{resource=\"cpu\"}) >= 50", "sum by (node) (rate(container_cpu_usage_seconds_total{id=\"/system.slice/crio.service\"}[5m]) * 100) / sum by (node) (kube_node_status_capacity{resource=\"cpu\"} - kube_node_status_allocatable{resource=\"cpu\"}) >= 50", "API Version: config.openshift.io/v1alpha1 Kind: ImagePolicy Name: p0 Namespace: mynamespace Status: Conditions: Message: has conflicting scope(s) [\"example.com/global/image\"] that equal to or nest inside existing clusterimagepolicy, only policy from clusterimagepolicy scope(s) will be applied Reason: ConflictScopes", "oc edit featuregate cluster", "apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster spec: featureSet: TechPreviewNoUpgrade 1", "apiVersion: config.openshift.io/v1alpha1 kind: ClusterImagePolicy 1 metadata: name: p1 spec: scopes: 2 - example.com policy: 3 rootOfTrust: 4 policyType: PublicKey 5 publicKey: keyData: a2V5RGF0YQ== 6 rekorKeyData: cmVrb3JLZXlEYXRh 7 signedIdentity: 8 matchPolicy: MatchRepoDigestOrExact", "apiVersion: config.openshift.io/v1alpha1 kind: ClusterImagePolicy 1 metadata: name: p1 spec: scopes: 2 - example.com policy: 3 rootOfTrust: 4 policyType: FulcioCAWithRekor 5 fulcioCAWithRekor: 6 fulcioCAData: a2V5RGF0YQ== fulcioSubject: oidcIssuer: \"https://expected.OIDC.issuer/\" signedEmail: \"[email protected]\" rekorKeyData: cmVrb3JLZXlEYXRh 7 signedIdentity: matchPolicy: RemapIdentity 8 remapIdentity: prefix: example.com 9 signedPrefix: mirror-example.com 10", "oc create -f <file_name>.yaml", "oc debug node/<node_name>", "sh-5.1# chroot /host/", "sh-5.1# cat /etc/containers/policy.json", "\"transports\": { \"docker\": { \"example.com\": [ { \"type\": \"sigstoreSigned\", \"keyData\": \"a2V5RGF0YQ==\", \"rekorPublicKeyData\": \"cmVrb3JLZXlEYXRh\", \"signedIdentity\": { \"type\": \"matchRepoDigestOrExact\" } } ],", "\"transports\": { \"docker\": { \"example.com\": [ { \"type\": \"sigstoreSigned\", \"fulcio\": { \"caData\": \"a2V5RGF0YQ==\", \"oidcIssuer\": \"https://expected.OIDC.issuer/\", \"subjectEmail\": \"[email protected]\" }, \"rekorPublicKeyData\": \"cmVrb3JLZXlEYXRh\", \"signedIdentity\": { \"type\": \"remapIdentity\", \"prefix\": \"example.com\", \"signedPrefix\": \"mirror-example.com\" } } ],", "sh-5.1# cat 
/etc/containers/registries.d/sigstore-registries.yaml", "docker: example.com: use-sigstore-attachments: true 1 quay.io/openshift-release-dev/ocp-release: use-sigstore-attachments: true", "oc image mirror quay.io/openshift-release-dev/ocp-release:sha256-1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef.sig mirror.com/image/repo:sha256-1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef.sig", "oc edit featuregate cluster", "apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster spec: featureSet: TechPreviewNoUpgrade 1", "apiVersion: config.openshift.io/v1alpha1 kind: ImagePolicy 1 metadata: name: p0 namespace: mynamespace 2 spec: scopes: 3 - example.io/crio/signed policy: 4 rootOfTrust: 5 policyType: PublicKey 6 publicKey: keyData: a2V5RGF0YQ== 7 rekorKeyData: cmVrb3JLZXlEYXRh 8 signedIdentity: matchPolicy: MatchRepository 9", "apiVersion: config.openshift.io/v1alpha1 kind: ImagePolicy 1 metadata: name: p1 namespace: mynamespace 2 spec: scopes: 3 - example.io/crio/signed policy: 4 rootOfTrust: 5 policyType: FulcioCAWithRekor 6 fulcioCAWithRekor: 7 fulcioCAData: a2V5RGF0YQ== fulcioSubject: oidcIssuer: \"https://expected.OIDC.issuer/\" signedEmail: \"[email protected]\" rekorKeyData: cmVrb3JLZXlEYXRh 8 signedIdentity: matchPolicy: ExactRepository 9 exactRepository: repository: quay.io/crio/signed 10", "oc create -f <file_name>.yaml", "oc debug node/<node_name>", "sh-5.1# chroot /host/", "sh-5.1# cat /etc/crio/policies/<namespace>.json", "\"transports\": { \"docker\": { \"example.io/crio/signed\": [ { \"type\": \"sigstoreSigned\", \"keyData\": \"a2V5RGF0YQ==\", \"rekorPublicKeyData\": \"cmVrb3JLZXlEYXRh\", \"signedIdentity\": { \"type\": \"matchRepository\", \"dockerRepository\": \"example.org/crio/signed\" }", "\"transports\": { \"docker\": { \"example.io/crio/signed\": [ { \"type\": \"sigstoreSigned\", \"fulcio\": { \"caData\": \"a2V5RGF0YQ==\", \"oidcIssuer\": \"https://expected.OIDC.issuer/\", \"subjectEmail\": \"[email protected]\" }, \"rekorPublicKeyData\": \"cmVrb3JLZXlEYXRh\", \"signedIdentity\": { \"type\": \"exactRepository\", \"dockerRepository\": \"quay.io/crio/signed\" } } ],", "sh-5.1# cat /etc/containers/registries.d/sigstore-registries.yaml", "docker: example.io/crio/signed: use-sigstore-attachments: true 1 quay.io/openshift-release-dev/ocp-release: use-sigstore-attachments: true", "sh-5.1# journalctl -u crio | grep -A 100 \"Pulling image: example.io/crio\"", "msg=\"IsRunningImageAllowed for image docker:example.io/crio/signed:latest\" file=\"signature/policy_eval.go:274\" 1 msg=\"Using transport \\\"docker\\\" specific policy section \\\"example.io/crio/signed\\\"\" file=\"signature/policy_eval.go:150\" 2 msg=\"Reading /var/lib/containers/sigstore/crio/signed@sha256=18b42e8ea347780f35d979a829affa178593a8e31d90644466396e1187a07f3a/signature-1\" file=\"docker/docker_image_src.go:545\" msg=\"Looking for Sigstore attachments in quay.io/crio/signed:sha256-18b42e8ea347780f35d979a829affa178593a8e31d90644466396e1187a07f3a.sig\" file=\"docker/docker_client.go:1138\" msg=\"GET https://quay.io/v2/crio/signed/manifests/sha256-18b42e8ea347780f35d979a829affa178593a8e31d90644466396e1187a07f3a.sig\" file=\"docker/docker_client.go:617\" msg=\"Content-Type from manifest GET is \\\"application/vnd.oci.image.manifest.v1+json\\\"\" file=\"docker/docker_client.go:989\" msg=\"Found a Sigstore attachment manifest with 1 layers\" file=\"docker/docker_image_src.go:639\" msg=\"Fetching Sigstore attachment 1/1: 
sha256:8276724a208087e73ae5d9d6e8f872f67808c08b0acdfdc73019278807197c45\" file=\"docker/docker_image_src.go:644\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/nodes/index
34.3. Split Brain Timing: Recovering From a Split
34.3. Split Brain Timing: Recovering From a Split After a split occurs, JBoss Data Grid will merge the partitions back, and the maximum time to detect a merge after the network partition is healed is: In some cases multiple merges will occur after a split so that the cluster may contain all available partitions. In this case, where multiple merges occur, time should be allowed for all of these to complete, and as there may be as many as three merges occurring sequentially, the total delay should be no more than the following: Important The amount of time taken in the formulas above is how long it takes JBoss Data Grid to install a cluster view without the leavers; however, as JBoss Data Grid runs inside a JVM, excessive Garbage Collection (GC) times can increase this time beyond the failure detection outlined above. JBoss Data Grid has no control over these GC times, and excessive GC on the coordinator can delay this detection by an amount equal to the GC time. In addition, when merging cluster views, JBoss Data Grid tries to confirm all members are present; however, there is no upper bound on waiting for these responses, and merging the cluster views may be delayed due to networking issues.
[ "3.1 * MERGE3.max_interval", "10 * MERGE3.max_interval" ]
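To put rough numbers on these formulas, assume for illustration only that MERGE3.max_interval is set to 30 seconds in the JGroups stack (check your own configuration for the real value). The detection times then work out as:

3.1 * 30 s ≈ 93 seconds to detect a single merge
10 * 30 s = 300 seconds as the upper bound when up to three sequential merges are needed

So after the network heals, allow on the order of five minutes before treating a missing merge as a problem, and only then start looking at GC pauses or networking issues as described above.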
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/split_brain_timing_recovering_from_a_split
Chapter 20. Analyzing a core dump
Chapter 20. Analyzing a core dump To identify the cause of the system crash, you can use the crash utility, which provides an interactive prompt similar to the GNU Debugger (GDB). By using crash , you can analyze a core dump created by kdump , netdump , diskdump , or xendump , as well as a running Linux system. Alternatively, you can use the Kernel Oops Analyzer or the Kdump Helper tool. 20.1. Installing the crash utility With the provided information, understand the required packages and the procedure to install the crash utility. The crash utility might not be installed by default on your RHEL 8 systems. crash is a tool to interactively analyze a system's state while it is running or after a kernel crash occurs and a core dump file is created. The core dump file is also known as the vmcore file. Procedure Enable the relevant repositories: Install the crash package: Install the kernel-debuginfo package: The package kernel-debuginfo will correspond to the running kernel and provides the data necessary for the dump analysis. 20.2. Running and exiting the crash utility The crash utility is a powerful tool for analyzing kdump . By running crash on a crash dump file, you can gain insights into the system's state at the time of the crash, identify the root cause of the issue, and troubleshoot kernel-related problems. Prerequisites Identify the currently running kernel (for example 4.18.0-5.el8.x86_64 ). Procedure To start the crash utility, pass the following two necessary parameters: The debug-info (a decompressed vmlinuz image), for example /usr/lib/debug/lib/modules/4.18.0-5.el8.x86_64/vmlinux provided through a specific kernel-debuginfo package. The actual vmcore file, for example /var/crash/127.0.0.1-2018-10-06-14:05:33/vmcore . The resulting crash command will be as follows: Use the same <kernel> version that was captured by kdump . Example 20.1. Running the crash utility The following example shows analyzing a core dump created on October 6 2018 at 14:05, using the 4.18.0-5.el8.x86_64 kernel. To exit the interactive prompt and stop crash , type exit or q . Note The crash command is also utilized as a powerful tool for debugging a live system. However, you must use it with caution to avoid system-level issues. Additional resources A Guide to Unexpected System Restarts 20.3. Displaying various indicators in the crash utility Use the crash utility to display various indicators, such as a kernel message buffer, a backtrace, a process status, virtual memory information and open files. Displaying the message buffer To display the kernel message buffer, type the log command at the interactive prompt: Type help log for more information about the command usage. Note The kernel message buffer includes the most essential information about the system crash. It is always dumped first into the vmcore-dmesg.txt file. If you fail to obtain the full vmcore file, for example, due to insufficient space on the target location, you can obtain the required information from the kernel message buffer. By default, vmcore-dmesg.txt is placed in the /var/crash/ directory. Displaying a backtrace To display the kernel stack trace, use the bt command. Type bt <pid> to display the backtrace of a specific process or type help bt for more information about bt usage. Displaying a process status To display the status of processes in the system, use the ps command. Use ps <pid> to display the status of a single specific process. Use help ps for more information about ps usage.
Displaying virtual memory information To display basic virtual memory information, type the vm command at the interactive prompt. Use vm <pid> to display information about a single specific process, or use help vm for more information about vm usage. Displaying open files To display information about open files, use the files command. Use files <pid> to display files opened by only one selected process, or use help files for more information about files usage. 20.4. Using Kernel Oops Analyzer The Kernel Oops Analyzer tool analyzes the crash dump by comparing the oops messages with known issues in the knowledge base. Prerequisites An oops message is secured to feed the Kernel Oops Analyzer. Procedure Access the Kernel Oops Analyzer tool. To diagnose a kernel crash issue, upload a kernel oops log generated in vmcore . Alternatively, you can diagnose a kernel crash issue by providing a text message or a vmcore-dmesg.txt as an input. Click DETECT to compare the oops message based on information from the makedumpfile against known solutions. Additional resources The Kernel Oops Analyzer article 20.5. The Kdump Helper tool The Kdump Helper tool helps to set up the kdump using the provided information. Kdump Helper generates a configuration script based on your preferences. Initiating and running the script on your server sets up the kdump service. Additional resources Kdump Helper
[ "subscription-manager repos --enable baseos repository", "subscription-manager repos --enable appstream repository", "subscription-manager repos --enable rhel-8-for-x86_64-baseos-debug-rpms", "yum install crash", "yum install kernel-debuginfo", "crash /usr/lib/debug/lib/modules/4.18.0-5.el8.x86_64/vmlinux /var/crash/127.0.0.1-2018-10-06-14:05:33/vmcore", "WARNING: kernel relocated [202MB]: patching 90160 gdb minimal_symbol values KERNEL: /usr/lib/debug/lib/modules/4.18.0-5.el8.x86_64/vmlinux DUMPFILE: /var/crash/127.0.0.1-2018-10-06-14:05:33/vmcore [PARTIAL DUMP] CPUS: 2 DATE: Sat Oct 6 14:05:16 2018 UPTIME: 01:03:57 LOAD AVERAGE: 0.00, 0.00, 0.00 TASKS: 586 NODENAME: localhost.localdomain RELEASE: 4.18.0-5.el8.x86_64 VERSION: #1 SMP Wed Aug 29 11:51:55 UTC 2018 MACHINE: x86_64 (2904 Mhz) MEMORY: 2.9 GB PANIC: \"sysrq: SysRq : Trigger a crash\" PID: 10635 COMMAND: \"bash\" TASK: ffff8d6c84271800 [THREAD_INFO: ffff8d6c84271800] CPU: 1 STATE: TASK_RUNNING (SYSRQ) crash>", "crash> exit ~]#", "crash> log ... several lines omitted EIP: 0060:[<c068124f>] EFLAGS: 00010096 CPU: 2 EIP is at sysrq_handle_crash+0xf/0x20 EAX: 00000063 EBX: 00000063 ECX: c09e1c8c EDX: 00000000 ESI: c0a09ca0 EDI: 00000286 EBP: 00000000 ESP: ef4dbf24 DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068 Process bash (pid: 5591, ti=ef4da000 task=f196d560 task.ti=ef4da000) Stack: c068146b c0960891 c0968653 00000003 00000000 00000002 efade5c0 c06814d0 <0> fffffffb c068150f b7776000 f2600c40 c0569ec4 ef4dbf9c 00000002 b7776000 <0> efade5c0 00000002 b7776000 c0569e60 c051de50 ef4dbf9c f196d560 ef4dbfb4 Call Trace: [<c068146b>] ? __handle_sysrq+0xfb/0x160 [<c06814d0>] ? write_sysrq_trigger+0x0/0x50 [<c068150f>] ? write_sysrq_trigger+0x3f/0x50 [<c0569ec4>] ? proc_reg_write+0x64/0xa0 [<c0569e60>] ? proc_reg_write+0x0/0xa0 [<c051de50>] ? vfs_write+0xa0/0x190 [<c051e8d1>] ? sys_write+0x41/0x70 [<c0409adc>] ? syscall_call+0x7/0xb Code: a0 c0 01 0f b6 41 03 19 d2 f7 d2 83 e2 03 83 e0 cf c1 e2 04 09 d0 88 41 03 f3 c3 90 c7 05 c8 1b 9e c0 01 00 00 00 0f ae f8 89 f6 <c6> 05 00 00 00 00 01 c3 89 f6 8d bc 27 00 00 00 00 8d 50 d0 83 EIP: [<c068124f>] sysrq_handle_crash+0xf/0x20 SS:ESP 0068:ef4dbf24 CR2: 0000000000000000", "crash> bt PID: 5591 TASK: f196d560 CPU: 2 COMMAND: \"bash\" #0 [ef4dbdcc] crash_kexec at c0494922 #1 [ef4dbe20] oops_end at c080e402 #2 [ef4dbe34] no_context at c043089d #3 [ef4dbe58] bad_area at c0430b26 #4 [ef4dbe6c] do_page_fault at c080fb9b #5 [ef4dbee4] error_code (via page_fault) at c080d809 EAX: 00000063 EBX: 00000063 ECX: c09e1c8c EDX: 00000000 EBP: 00000000 DS: 007b ESI: c0a09ca0 ES: 007b EDI: 00000286 GS: 00e0 CS: 0060 EIP: c068124f ERR: ffffffff EFLAGS: 00010096 #6 [ef4dbf18] sysrq_handle_crash at c068124f #7 [ef4dbf24] __handle_sysrq at c0681469 #8 [ef4dbf48] write_sysrq_trigger at c068150a #9 [ef4dbf54] proc_reg_write at c0569ec2 #10 [ef4dbf74] vfs_write at c051de4e #11 [ef4dbf94] sys_write at c051e8cc #12 [ef4dbfb0] system_call at c0409ad5 EAX: ffffffda EBX: 00000001 ECX: b7776000 EDX: 00000002 DS: 007b ESI: 00000002 ES: 007b EDI: b7776000 SS: 007b ESP: bfcb2088 EBP: bfcb20b4 GS: 0033 CS: 0073 EIP: 00edc416 ERR: 00000004 EFLAGS: 00000246", "crash> ps PID PPID CPU TASK ST %MEM VSZ RSS COMM > 0 0 0 c09dc560 RU 0.0 0 0 [swapper] > 0 0 1 f7072030 RU 0.0 0 0 [swapper] 0 0 2 f70a3a90 RU 0.0 0 0 [swapper] > 0 0 3 f70ac560 RU 0.0 0 0 [swapper] 1 0 1 f705ba90 IN 0.0 2828 1424 init ... 
several lines omitted 5566 1 1 f2592560 IN 0.0 12876 784 auditd 5567 1 2 ef427560 IN 0.0 12876 784 auditd 5587 5132 0 f196d030 IN 0.0 11064 3184 sshd > 5591 5587 2 f196d560 RU 0.0 5084 1648 bash", "crash> vm PID: 5591 TASK: f196d560 CPU: 2 COMMAND: \"bash\" MM PGD RSS TOTAL_VM f19b5900 ef9c6000 1648k 5084k VMA START END FLAGS FILE f1bb0310 242000 260000 8000875 /lib/ld-2.12.so f26af0b8 260000 261000 8100871 /lib/ld-2.12.so efbc275c 261000 262000 8100873 /lib/ld-2.12.so efbc2a18 268000 3ed000 8000075 /lib/libc-2.12.so efbc23d8 3ed000 3ee000 8000070 /lib/libc-2.12.so efbc2888 3ee000 3f0000 8100071 /lib/libc-2.12.so efbc2cd4 3f0000 3f1000 8100073 /lib/libc-2.12.so efbc243c 3f1000 3f4000 100073 efbc28ec 3f6000 3f9000 8000075 /lib/libdl-2.12.so efbc2568 3f9000 3fa000 8100071 /lib/libdl-2.12.so efbc2f2c 3fa000 3fb000 8100073 /lib/libdl-2.12.so f26af888 7e6000 7fc000 8000075 /lib/libtinfo.so.5.7 f26aff2c 7fc000 7ff000 8100073 /lib/libtinfo.so.5.7 efbc211c d83000 d8f000 8000075 /lib/libnss_files-2.12.so efbc2504 d8f000 d90000 8100071 /lib/libnss_files-2.12.so efbc2950 d90000 d91000 8100073 /lib/libnss_files-2.12.so f26afe00 edc000 edd000 4040075 f1bb0a18 8047000 8118000 8001875 /bin/bash f1bb01e4 8118000 811d000 8101873 /bin/bash f1bb0c70 811d000 8122000 100073 f26afae0 9fd9000 9ffa000 100073 ... several lines omitted", "crash> files PID: 5591 TASK: f196d560 CPU: 2 COMMAND: \"bash\" ROOT: / CWD: /root FD FILE DENTRY INODE TYPE PATH 0 f734f640 eedc2c6c eecd6048 CHR /pts/0 1 efade5c0 eee14090 f00431d4 REG /proc/sysrq-trigger 2 f734f640 eedc2c6c eecd6048 CHR /pts/0 10 f734f640 eedc2c6c eecd6048 CHR /pts/0 255 f734f640 eedc2c6c eecd6048 CHR /pts/0" ]
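As a small supplementary sketch, the oops text that the Kernel Oops Analyzer section asks for can usually be taken straight from the saved kernel message buffer. The crash directory below is the one used in this chapter's examples, and installing the version-matched debuginfo package this way is an assumption to adapt to your subscription setup:

yum install kernel-debuginfo-$(uname -r)
ls /var/crash/*/vmcore-dmesg.txt
grep -i -B 5 -A 40 -e 'Oops' -e 'Kernel panic' /var/crash/127.0.0.1-2018-10-06-14:05:33/vmcore-dmesg.txt

The grep output (the lines surrounding the oops or panic message) is what you paste into the analyzer, or you can upload the whole vmcore-dmesg.txt file as described above.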
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_monitoring_and_updating_the_kernel/analyzing-a-core-dump_managing-monitoring-and-updating-the-kernel
3.6. Configuring IP Networking with ip Commands
3.6. Configuring IP Networking with ip Commands As a system administrator, you can configure a network interface using the ip command, but changes made this way are not persistent across reboots. The commands for the ip utility, sometimes referred to as iproute2 after the upstream package name, are documented in the man ip(8) page. The package name in Red Hat Enterprise Linux 7 is iproute . If necessary, you can check that the ip utility is installed by checking its version number as follows: The ip commands can be used to add and remove addresses and routes to interfaces in parallel with NetworkManager , which will preserve them and recognize them in nmcli , nmtui , control-center , and the D-Bus API. To bring an interface down: Note The ip link set ifname command sets a network interface in IFF_UP state and enables it from the kernel's scope. This is different from the ifup ifname command for initscripts or NetworkManager 's activation state of a device. In fact, NetworkManager always sets an interface up even if it is currently disconnected. Disconnecting the device through the nmcli tool does not remove the IFF_UP flag. In this way, NetworkManager gets notifications about the carrier state. Note that the ip utility replaces the ifconfig utility because the net-tools package (which provides ifconfig ) does not support InfiniBand addresses. For information about available OBJECTs, use the ip help command. For example: ip link help and ip addr help . Note ip commands given on the command line will not persist after a system restart. Where persistence is required, make use of configuration files ( ifcfg files) or add the commands to a script. Examples of using the command line and configuration files for each task are included after nmtui and nmcli examples but before explaining the use of one of the graphical user interfaces to NetworkManager , namely, control-center and nm-connection-editor . The ip utility can be used to assign IP addresses to an interface with the following form: ip addr [ add | del ] address dev ifname Assigning a Static Address Using ip Commands To assign an IP address to an interface: Further examples and command options can be found in the ip-address(8) manual page. Configuring Multiple Addresses Using ip Commands As the ip utility supports assigning multiple addresses to the same interface, it is no longer necessary to use the alias interface method of binding multiple addresses to the same interface. The ip command to assign an address can be repeated multiple times in order to assign multiple addresses. For example: For more details on the commands for the ip utility, see the ip(8) manual page. Note ip commands given on the command line will not persist after a system restart.
[ "~]USD ip -V ip utility, iproute2-ss130716", "ip link set ifname down", "~]# ip address add 10.0.0.3/24 dev enp1s0 You can view the address assignment of a specific device: ~]# ip addr show dev enp1s0 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether f0:de:f1:7b:6e:5f brd ff:ff:ff:ff:ff:ff inet 10.0.0.3/24 brd 10.0.0.255 scope global global enp1s0 valid_lft 58682sec preferred_lft 58682sec inet6 fe80::f2de:f1ff:fe7b:6e5f/64 scope link valid_lft forever preferred_lft forever", "~]# ip address add 192.168.2.223/24 dev enp1s0 ~]# ip address add 192.168.4.223/24 dev enp1s0 ~]# ip addr 3: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 52:54:00:fb:77:9e brd ff:ff:ff:ff:ff:ff inet 192.168. 2 .223/24 scope global enp1s0 inet 192.168. 4 .223/24 scope global enp1s0" ]
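Where the addresses assigned above need to survive a reboot, the same settings belong in an ifcfg file rather than on the command line. A minimal sketch for the enp1s0 examples in this section (the addresses are the illustrative ones used above; substitute your own) in /etc/sysconfig/network-scripts/ifcfg-enp1s0 could look like:

DEVICE=enp1s0
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.0.0.3
PREFIX=24
IPADDR1=192.168.2.223
PREFIX1=24
IPADDR2=192.168.4.223
PREFIX2=24

Reloading and re-activating the connection, for example with nmcli con reload and nmcli con up enp1s0, then applies the persistent configuration.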
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-configuring_ip_networking_with_ip_commands
Chapter 58. Installation and Booting
Chapter 58. Installation and Booting Selecting the Lithuanian language causes the installer to crash If you select the Lithuanian (Lietuviu) language on the first screen of the graphical installer and press Continue (Testi), the installer crashes and displays a traceback message. To work around this problem, either use a different language, or avoid the graphical installer and use a different approach such as the text mode or a Kickstart installation. (BZ# 1527319 ) oscap-anaconda-addon fails to remediate when installing in TUI using Kickstart The OpenSCAP Anaconda add-on fails to fully remediate a machine to the specified security policy when the system is installed using a Kickstart file that sets installation display mode to the text-based user interface (TUI) using the text Kickstart command. The problem occurs because packages required for the remediation are not installed. To work around this problem, you can either use the graphical installer or add packages required by the security policy to the %packages section of the Kickstart file manually. (BZ# 1547609 ) The grub2-mkimage command fails on UEFI systems by default The grub2-mkimage command may fail on UEFI systems with the following error message: This error is caused by the grub2-efi-x64-modules package missing from the system. The package is missing due to a known issue where it is not part of the default installation, and it is not marked as a dependency for grub2-tools which provides the grub2-mkimage command. The error also causes some other tools which depend on it, such as ReaR , to fail. To work around this problem, install the grub2-efi-x64-modules package, either manually using Yum , or by adding it to the Kickstart file used for installing the system. (BZ# 1512493 ) Kernel panic during RHEL 7.5 installation on HPE BL920s Gen9 systems A known issue related to the fix for the Meltdown vulnerability causes a kernel panic with a NULL pointer dereference during the installation of Red Hat Enterprise Linux 7.5 on HPE BL920s Gen2 (Superdome 2) systems. When the problem appears, the following error message is displayed: Then the system reboots, or enters an otherwise faulty state. There are multiple possible workarounds for this problem: Add the nopti option to the kernel command line using the boot loader. Once the system finishes booting, upgrade to the latest RHEL 7.5 kernel. Install RHEL 7.4, and then upgrade to the latest RHEL 7.5 kernel. Install RHEL 7.5 on a single blade. Once the system is installed, upgrade to the latest RHEL 7.5 kernel, and then add additional blades as required. (BZ#1540061) The READONLY=yes option is not sufficient to configure a read-only system In Red Hat Enterprise Linux 6, the READONLY=yes option in the /etc/sysconfig/readonly-root file was used to configure a read-only system partition. In Red Hat Enterprise Linux 7, the option is no longer sufficient, because systemd uses a new approach to mounting the system partition. To configure a read-only system in Red Hat Enterprise Linux 7: Set the READONLY=yes option in /etc/sysconfig/readonly-root . Add the ro option to the root mount point in the /etc/fstab file. (BZ# 1444018 )
[ "error: cannot open `/usr/lib/grub/x86_64-efi/moddep.lst': No such file or directory.", "WARNING: CPU: 576 PID: 3924 at kernel/workqueue.c:1518__queue_delayed_work+0x184/0x1a0" ]
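For the two file-based workarounds described above, minimal illustrative snippets (not complete files, and the values are placeholders) might look as follows. In the Kickstart file, the missing package is added to the existing %packages section:

%packages
grub2-efi-x64-modules
%end

For the read-only system case, the ro option is appended to the options already present on the root entry in /etc/fstab, for example:

UUID=<root-fs-uuid> / xfs defaults,ro 0 0

Keep whatever device, file system type, and other options your current root entry uses and only add ro to the option list, alongside setting READONLY=yes in /etc/sysconfig/readonly-root.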
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/known_issues_installation_and_booting
Chapter 24. Clustering
Chapter 24. Clustering Pacemaker correctly interprets systemd responses and systemd services are stopped in proper order at cluster shutdown Previously, when a Pacemaker cluster was configured with systemd resources and the cluster was stopped, Pacemaker could mistakenly assume that a systemd service had stopped before it actually had stopped. As a consequence, services could be stopped out of order, potentially leading to stop failures. With this update, Pacemaker now correctly interprets systemd responses and systemd services are stopped in the proper order at cluster shutdown. (BZ#1286316) Pacemaker now distinguishes transient failures from fatal failures when loading systemd units Previously, Pacemaker treated all errors loading a systemd unit as fatal. As a consequence, Pacemaker would not start a systemd resource on a node where it could not load the systemd unit, even if the load failed due to transient conditions such as CPU load. With this update, Pacemaker now distinguishes transient failures from fatal failures when loading systemd units. Logs and cluster status now show more appropriate messages, and the resource can start on the node once the transient error clears. (BZ# 1346726 ) Pacemaker now removes node attributes from its memory when purging a node that has been removed from the cluster Previously, Pacemaker's node attribute manager removed attribute values from its memory but not the attributes themselves when purging a node that had been removed from the cluster. As a result, if a new node was later added to the cluster with the same node ID, attributes that existed on the original node could not be set for the new node. With this update, Pacemaker now purges the attributes themselves when removing a node and a new node with the same ID encounters no problems with setting attributes. (BZ# 1338623 ) Pacemaker now correctly determines expected results for resources that are in a group or depend on a clone Previously, when restarting a service, Pacemaker's crm_resource tool (and thus the pcs resource restart command) could fail to properly determine when affected resources successfully started. As a result, the command could fail to restart a resource that is a member of a group, or the command could hang indefinitely if the restarted resource depended on a cloned resource that moved to another node. With this update, the command now properly determines expected results for resources that are in a group or depend on a clone. The desired service is restarted, and the command returns. (BZ#1337688) Fencing now occurs when DLM requires it, even when the cluster itself does not Previously, DLM could require fencing due to quorum issues, even when the cluster itself did not require fencing, but would be unable to initiate it. As a consequence, DLM and DLM-based services could hang waiting for fencing that never happened. With this fix, the ocf:pacemaker:controld resource agent now checks whether DLM is in this state, and requests fencing if so. Fencing now occurs in this situation, allowing DLM to recover. (BZ# 1268313 ) The DLM now detects and reports connection problems Previously, the Distributed Lock Manager (DLM) used for cluster communications expected TCP/IP packet delivery and waited for responses indefinitely. As a consequence, if a DLM connection was lost, there was no notification of the problem. With this update, the DLM detects and reports when cluster communications are lost.
As a result, DLM communication problems can be identified, and cluster nodes that become unresponsive can be restarted once the problems are resolved. (BZ#1267339) High Availability instances created by non-admin users are now evacuated when a compute instance is turned off Previously, the fence_compute agent searched only for compute instances created by admin users. As a consequence, instances created by non-admin users were not evacuated when a compute instance was turned off. This update makes sure that fence_compute searches for instances run as any user, and compute instances are evacuated to new compute nodes as expected. (BZ# 1313561 ) Starting the nfsserver resource no longer fails The nfs-idmapd service fails to start when the var-lib-nfs-rpc_pipefs.mount process is active. The process is active by default. Consequently, starting the nfsserver resource failed. With this update, var-lib-nfs-rpc_pipefs.mount stops in this situation and does not prevent nfs-idmapd from starting. As a result, nfsserver starts as expected. (BZ#1325453) lrmd logs errors as expected and no longer crashes Previously, Pacemaker's Local Resource Management Daemon (lrmd) used an invalid format string when logging certain rare systemd errors. As a consequence, lrmd could terminate unexpectedly with a segmentation fault. A patch has been applied to fix the format string. As a result, lrmd no longer crashes and logs the aforementioned rare error messages as intended. (BZ# 1284069 ) stonithd now properly distinguishes attribute removals from device removals. Prior to this update, if a user deleted an attribute from a fence device, Pacemaker's stonithd service sometimes mistakenly removed the entire device. Consequently, the cluster would no longer use the fence device. The underlying source code has been modified to fix this bug, and stonithd now properly distinguishes attribute removals from device removals. As a result, deleting a fence device attribute no longer removes the device itself. (BZ# 1287315 ) HealthCPU now correctly measures CPU usage Previously, the ocf:pacemaker:HealthCPU resource parsed the output of the top command incorrectly on Red Hat Enterprise Linux 7. As a consequence, the HealthCPU resource did not work. With this update, the resource agent correctly parses the output of later versions of top . As a result, HealthCPU now correctly measures CPU usage. (BZ#1287868) Pacemaker now checks all collected files when stripping sensitive information Pacemaker has the ability to strip sensitive information that matches a given pattern when submitting system information with bug reports, whether directly by Pacemaker's crm_report tool or indirectly via sosreport . However, Pacemaker would only check certain collected files, not log file extracts. Because of this, sensitive information could remain in log file extracts. With this fix, Pacemaker now checks all collected files when stripping sensitive information and no sensitive information is collected. (BZ#1219188) The corosync memory footprint no longer increases on every node rejoin Previously, when a user rejoined a node some buffers in corosync were not freed so that memory consumption grew. With this fix, no memory is leaked and the memory footprint no longer increases on every node rejoin. 
(BZ# 1306349 ) Corosync starts correctly when configured to use IPv4 and DNS is set to return both IPv4 and IPv6 addresses Previously, when a pcs-generated corosync.conf file used hostnames instead of IP addresses and Internet Protocol version 4 (IPv4) and the DNS server was set to return both IPV4 and IPV6 addresses, the corosync utility failed to start. With this fix, if Corosync is configured to use IPv4, IPv4 is really used. As a result, corosync starts as expected in the described circumstances. (BZ# 1289169 ) The corosync-cmapctl utility correctly handles errors in the print_key() function Previously, the corosync-cmapctl utility did not handle corosync errors in the print_key() function correctly. Consequently, corosync-cmapctl could enter an infinite loop if the corosync utility was killed. The provided fix makes sure all errors returned when Corosync exits are handled correctly. As a result, corosync-cmapctl leaves the loop and displays a relevant error message in this scenario. (BZ# 1336462 )
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.3_release_notes/bug_fixes_clustering
Chapter 336. Options
Chapter 336. Options The OpenApi module can be configured using the following options. To configure using a servlet you use the init-param as shown above. When configuring directly in the rest-dsl, you use the appropriate method, such as enableCORS , host , or contextPath . The options with api.xxx are configured using the apiProperty dsl. Option Type Description cors Boolean Whether to enable CORS. Notice this only enables CORS for the api browser, and not the actual access to the REST services. Is default false. openapi.version String OpenApi spec version. Is default 3.0. host String To setup the hostname. If not configured camel-openapi-java will calculate the name as localhost based. schemes String The protocol schemes to use. Multiple values can be separated by comma such as "http,https". The default value is "http". base.path String Required : To setup the base path where the REST services are available. The path is relative (eg do not start with http/https) and camel-openapi-java will calculate the absolute base path at runtime, which will be protocol://host:port/context-path/base.path api.path String To setup the path where the API is available (eg /api-docs). The path is relative (eg do not start with http/https) and camel-openapi-java will calculate the absolute base path at runtime, which will be protocol://host:port/context-path/api.path So using relative paths is much easier. See above for an example. api.version String The version of the api. Is default 0.0.0. api.title String The title of the application. api.description String A short description of the application. api.termsOfService String A URL to the Terms of Service of the API. api.contact.name String Name of person or organization to contact. api.contact.email String An email to be used for API-related correspondence. api.contact.url String A URL to a website for more contact information. api.license.name String The license name used for the API. api.license.url String A URL to the license used for the API. apiContextIdListing boolean Whether to allow listing all the CamelContext names in the JVM that have REST services. When enabled, the root path of the api-doc will list all the contexts. When disabled, no context ids are listed and the root path of the api-doc lists the current CamelContext. Is default false. apiContextIdPattern String A pattern that allows to filter which CamelContext names are shown in the context listing. The pattern uses regular expressions and * as wildcard. It is the same pattern matching as used by Intercept.
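To make the path calculation above concrete, consider an invented deployment (none of these values are defaults): if the application is reachable at http://localhost:8080 under a servlet context path of /myapp, and the configuration sets base.path=rest and api.path=api-docs, then camel-openapi-java exposes the REST services under http://localhost:8080/myapp/rest and serves the generated OpenApi document at http://localhost:8080/myapp/api-docs.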
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/options_164
5.7. Creating Arbitrated Replicated Volumes
5.7. Creating Arbitrated Replicated Volumes An arbitrated replicated volume contains two full copies of the files in the volume. Arbitrated volumes have an extra arbiter brick for every two data bricks in the volume. Arbiter bricks do not store file data; they only store file names, structure, and metadata. Arbiter bricks use client quorum to compare metadata on the arbiter with the metadata of the other nodes to ensure consistency in the volume and prevent split-brain conditions. Advantages of arbitrated replicated volumes Better consistency When an arbiter is configured, arbitration logic uses client-side quorum in auto mode to prevent file operations that would lead to split-brain conditions. Less disk space required Because an arbiter brick only stores file names and metadata, an arbiter brick can be much smaller than the other bricks in the volume. Fewer nodes required The node that contains the arbiter brick of one volume can be configured with the data brick of another volume. This "chaining" configuration allows you to use fewer nodes to fulfill your overall storage requirements. Easy migration from deprecated two-way replicated volumes Red Hat Gluster Storage can convert a two-way replicated volume without arbiter bricks into an arbitrated replicated volume. See Section 5.7.5, "Converting to an arbitrated volume" for details. Limitations of arbitrated replicated volumes Arbitrated replicated volumes provide better data consistency than a two-way replicated volume that does not have arbiter bricks. However, because arbitrated replicated volumes store only metadata, they provide the same level of availability as a two-way replicated volume that does not have arbiter bricks. To achieve high-availability, you need to use a three-way replicated volume instead of an arbitrated replicated volume. Tiering is not compatible with arbitrated replicated volumes. Arbitrated volumes can only be configured in sets of three bricks at a time. Red Hat Gluster Storage can convert an existing two-way replicated volume without arbiter bricks into an arbitrated replicated volume by adding an arbiter brick to that volume. See Section 5.7.5, "Converting to an arbitrated volume" for details. 5.7.1. Arbitrated volume requirements This section outlines the requirements of a supported arbitrated volume deployment. 5.7.1.1. System requirements for nodes hosting arbiter bricks The minimum system requirements for a node that contains an arbiter brick differ depending on the configuration choices made by the administrator. See Section 5.7.4, "Creating multiple arbitrated replicated volumes across fewer total nodes" for details about the differences between the dedicated arbiter and chained arbiter configurations. Table 5.1. Requirements for arbitrated configurations on physical machines Configuration type Min CPU Min RAM NIC Arbiter Brick Size Max Latency Dedicated arbiter 64-bit quad-core processor with 2 sockets 8 GB [a] Match to other nodes in the storage pool 1 TB to 4 TB [b] 5 ms [c] Chained arbiter Match to other nodes in the storage pool 1 TB to 4 TB [d] 5 ms [e] [a] More RAM may be necessary depending on the combined capacity of the number of arbiter bricks on the node. [b] Arbiter and data bricks can be configured on the same device provided that the data and arbiter bricks belong to different replica sets. See Section 5.7.1.2, "Arbiter capacity requirements" for further details on sizing arbiter volumes. 
[c] This is the maximum round trip latency requirement between all nodes irrespective of Arbiter node. See KCS#413623 to know how to determine latency between nodes. [d] Multiple bricks can be created on a single RAIDed physical device. Please refer to the following product documentation: Section 19.2, "Brick Configuration" [e] This is the maximum round trip latency requirement between all nodes irrespective of Arbiter node. See KCS#413623 to know how to determine latency between nodes. The requirements for arbitrated configurations on virtual machines are: minimum 4 vCPUs minimum 16 GB RAM 1 TB to 4 TB of virtual disk space maximum 5 ms latency 5.7.1.2. Arbiter capacity requirements Because an arbiter brick only stores file names and metadata, an arbiter brick can be much smaller than the other bricks in the volume or replica set. The required size for an arbiter brick depends on the number of files being stored on the volume. The recommended minimum arbiter brick size can be calculated with the following formula: For example, if you have two 1 TB data bricks, and the average size of the files is 2 GB, then the recommended minimum size for your arbiter brick is 2 MB, as shown in the following example: If sharding is enabled, and your shard-block-size is smaller than the average file size in KB, then you need to use the following formula instead, because each shard also has a metadata file: Alternatively, if you know how many files you will store in a volume, the recommended minimum arbiter brick size is the maximum number of files multiplied by 4 KB. For example, if you expect to have 200,000 files on your volume, your arbiter brick should be at least 800,000 KB, or 0.8 GB, in size. Red Hat also recommends overprovisioning where possible so that there is no short-term need to increase the size of the arbiter brick. Also, refer to Brick Configuration for more information on usage of maxpct. 5.7.2. Arbitration logic In an arbitrated volume, whether a file operation is permitted depends on the current state of the bricks in the volume. The following table describes arbitration behavior in all possible volume states. Table 5.2. Allowed operations for current volume state Volume state Arbitration behavior All bricks available All file operations permitted. Arbiter and 1 data brick available If the arbiter does not agree with the available data node, write operations fail with ENOTCONN (since the brick that is correct is not available). Other file operations are permitted. If the arbiter's metadata agrees with the available data node, all file operations are permitted. Arbiter down, data bricks available All file operations are permitted. The arbiter's records are healed when it becomes available. Only one brick available All file operations fail with ENOTCONN. 5.7.3. Creating an arbitrated replicated volume The command for creating an arbitrated replicated volume has the following syntax: This creates a volume with one arbiter for every three replicate bricks. The arbiter is the last brick in every set of three bricks. Note The syntax of this command is misleading. There are a total of 3 bricks in this set. This command creates a volume with two bricks that replicate all data and one arbiter brick that replicates only metadata. In the following example, the bricks on server3 and server6 are the arbiter bricks. Note that because multiple sets of three bricks are provided, this creates a distributed replicated volume with arbiter bricks. 5.7.4.
Creating multiple arbitrated replicated volumes across fewer total nodes If you are configuring more than one arbitrated-replicated volume, or a single volume with multiple replica sets, you can use fewer nodes in total by using either of the following techniques: Chain multiple arbitrated replicated volumes together, by placing the arbiter brick for one volume on the same node as a data brick for another volume. Chaining is useful for write-heavy workloads when file size is closer to metadata file size (that is, from 32-128 KiB). This avoids all metadata I/O going through a single disk. In arbitrated distributed-replicated volumes, you can also place an arbiter brick on the same node as another replica sub-volume's data brick, since these do not share the same data. Place the arbiter bricks from multiple volumes on a single dedicated node. A dedicated arbiter node is suited to write-heavy workloads with larger files, and read-heavy workloads. Example 5.6. Example of a dedicated configuration The following commands create two arbitrated replicated volumes, firstvol and secondvol. Server3 contains the arbiter bricks of both volumes. Two gluster volumes configured across five servers to create two three-way arbitrated replicated volumes, with the arbiter bricks on a dedicated arbiter node. Example 5.7. Example of a chained configuration The following command configures an arbitrated replicated volume with six sub-volumes chained across six servers in a 6 x (2 + 1) configuration. Six replicated gluster sub-volumes chained across six servers to create a 6 * (2 + 1) arbitrated distributed-replicated configuration. 5.7.5. Converting to an arbitrated volume You can convert a replicated volume into an arbitrated volume by adding new arbiter bricks for each replicated sub-volume, or replacing replica bricks with arbiter bricks. Procedure 5.1. Converting a replica 2 volume to an arbitrated volume Warning Do not perform this process if geo-replication is configured. There is a race condition tracked by Bug 1683893 that means data can be lost when converting a volume if geo-replication is enabled. Verify that healing is not in progress Wait until pending heal entries is 0 before proceeding. Disable and stop self-healing Run the following commands to disable data, metadata, and entry self-heal, and the self-heal daemon. Add arbiter bricks to the volume Convert the volume by adding an arbiter brick for each replicated sub-volume. For example, if you have an existing two-way replicated volume called testvol, and a new brick for the arbiter to use, you can add a brick as an arbiter with the following command: If you have an existing two-way distributed-replicated volume, you need a new brick for each sub-volume in order to convert it to an arbitrated distributed-replicated volume, for example: Wait for client volfiles to update This takes about 5 minutes. Verify that bricks added successfully Re-enable self-healing Run the following commands to re-enable self-healing on the servers. Verify all entries are healed Wait until pending heal entries is 0 to ensure that all heals completed successfully. Procedure 5.2. Converting a replica 3 volume to an arbitrated volume Warning Do not perform this process if geo-replication is configured. There is a race condition tracked by Bug 1683893 that means data can be lost when converting a volume if geo-replication is enabled. Verify that healing is not in progress Wait until pending heal entries is 0 before proceeding. 
Reduce the replica count of the volume to 2 Remove one brick from every sub-volume in the volume so that the replica count is reduced to 2. For example, in a replica 3 volume that distributes data across 2 sub-volumes, run the following command: Note In a distributed replicated volume, data is distributed across sub-volumes, and replicated across bricks in a sub-volume. This means that to reduce the replica count of a volume, you need to remove a brick from every sub-volume. Bricks are grouped by sub-volume in the gluster volume info output. If the replica count is 3, the first 3 bricks form the first sub-volume, the next 3 bricks form the second sub-volume, and so on. In this volume, data is distributed across two sub-volumes, which each consist of three bricks. The first sub-volume consists of bricks 1, 2, and 3. The second sub-volume consists of bricks 4, 5, and 6. Removing any one brick from each subvolume using the following command reduces the replica count to 2 as required. Disable and stop self-healing Run the following commands to disable data, metadata, and entry self-heal, and the self-heal daemon. Add arbiter bricks to the volume Convert the volume by adding an arbiter brick for each replicated sub-volume. For example, if you have an existing replicated volume: If you have an existing distributed-replicated volume: Wait for client volfiles to update This takes about 5 minutes. Verify that this is complete by running the following command on each client. The number of times connected=1 appears in the output is the number of bricks connected to the client. Verify that bricks added successfully Re-enable self-healing Run the following commands to re-enable self-healing on the servers. Verify all entries are healed Wait until pending heal entries is 0 to ensure that all heals completed successfully. 5.7.6. Converting an arbitrated volume to a three-way replicated volume You can convert an arbitrated volume into a three-way replicated volume or a three-way distributed replicated volume by replacing the arbiter bricks with full bricks for each replicated sub-volume. Warning Do not perform this process if geo-replication is configured. There is a race condition tracked by Bug 1683893 that means data can be lost when converting a volume if geo-replication is enabled. Procedure 5.3. Converting an arbitrated volume to a replica 3 volume Verify that healing is not in progress Wait until pending heal entries is 0 before proceeding. Remove arbiter bricks from the volume Check which bricks are listed as (arbiter) , and then remove those bricks from the volume. Disable and stop self-healing Run the following commands to disable data, metadata, and entry self-heal, and the self-heal daemon. Add full bricks to the volume Convert the volume by adding a brick for each replicated sub-volume. For example, if you have an existing arbitrated replicated volume: If you have an existing arbitrated distributed-replicated volume: Wait for client volfiles to update This takes about 5 minutes. Verify that bricks added successfully Re-enable self-healing Run the following commands to re-enable self-healing on the servers. Verify all entries are healed Wait until pending heal entries is 0 to ensure that all heals completed successfully. 5.7.7. Tuning recommendations for arbitrated volumes Red Hat recommends the following when arbitrated volumes are in use: For dedicated arbiter nodes, use JBOD for arbiter bricks, and RAID6 for data bricks.
For chained arbiter volumes, use the same RAID6 drive for both data and arbiter bricks. See Chapter 19, Tuning for Performance for more information on enhancing performance that is not specific to the use of arbiter volumes.
[ "minimum arbiter brick size = 4 KB * ( size in KB of largest data brick in volume or replica set / average file size in KB)", "minimum arbiter brick size = 4 KB * ( 1 TB / 2 GB ) = 4 KB * ( 1000000000 KB / 2000000 KB ) = 4 KB * 500 KB = 2000 KB = 2 MB", "minimum arbiter brick size = 4 KB * ( size in KB of largest data brick in volume or replica set / shard block size in KB )", "gluster volume create VOLNAME replica 3 arbiter 1 HOST1 : DATA_BRICK1 HOST2 : DATA_BRICK2 HOST3 : ARBITER_BRICK3", "gluster volume create testvol replica 3 arbiter 1 server1:/bricks/brick server2:/bricks/brick server3:/bricks/arbiter_brick server4:/bricks/brick server5:/bricks/brick server6:/bricks/arbiter_brick", "gluster volume info testvol Volume Name: testvol Type: Distributed-Replicate Volume ID: ed9fa4d5-37f1-49bb-83c3-925e90fab1bc Status: Created Snapshot Count: 0 Number of Bricks: 2 x (2 + 1) = 6 Transport-type: tcp Bricks: Brick1: server1:/bricks/brick Brick2: server2:/bricks/brick Brick3: server3:/bricks/arbiter_brick (arbiter) Brick1: server4:/bricks/brick Brick2: server5:/bricks/brick Brick3: server6:/bricks/arbiter_brick (arbiter) Options Reconfigured: cluster.granular-entry-heal: on transport.address-family: inet performance.readdir-ahead: on nfs.disable: on", "gluster volume create firstvol replica 3 arbiter 1 server1:/bricks/brick server2:/bricks/brick server3:/bricks/arbiter_brick gluster volume create secondvol replica 3 arbiter 1 server4:/bricks/data_brick server5:/bricks/brick server3:/bricks/brick", "gluster volume create arbrepvol replica 3 arbiter 1 server1:/bricks/brick1 server2:/bricks/brick1 server3:/bricks/arbiter_brick1 server2:/bricks/brick2 server3:/bricks/brick2 server4:/bricks/arbiter_brick2 server3:/bricks/brick3 server4:/bricks/brick3 server5:/bricks/arbiter_brick3 server4:/bricks/brick4 server5:/bricks/brick4 server6:/bricks/arbiter_brick4 server5:/bricks/brick5 server6:/bricks/brick5 server1:/bricks/arbiter_brick5 server6:/bricks/brick6 server1:/bricks/brick6 server2:/bricks/arbiter_brick6", "gluster volume heal VOLNAME info", "gluster volume set VOLNAME cluster.data-self-heal off gluster volume set VOLNAME cluster.metadata-self-heal off gluster volume set VOLNAME cluster.entry-self-heal off gluster volume set VOLNAME self-heal-daemon off", "gluster volume add-brick VOLNAME replica 3 arbiter 1 HOST : arbiter-brick-path", "gluster volume add-brick testvol replica 3 arbiter 1 server:/bricks/arbiter_brick", "gluster volume add-brick testvol replica 3 arbiter 1 server1:/bricks/arbiter_brick1 server2:/bricks/arbiter_brick2", "gluster volume info VOLNAME gluster volume status VOLNAME", "gluster volume set VOLNAME cluster.data-self-heal on gluster volume set VOLNAME cluster.metadata-self-heal on gluster volume set VOLNAME cluster.entry-self-heal on gluster volume set VOLNAME self-heal-daemon on", "gluster volume heal VOLNAME info", "gluster volume heal VOLNAME info", "gluster volume remove-brick VOLNAME replica 2 HOST : subvol1-brick-path HOST : subvol2-brick-path force", "gluster volume info VOLNAME [...] 
Number of Bricks: 2 x 3 = 6 Transport-type: tcp Bricks: Brick1: node1:/test1/brick Brick2: node2:/test2/brick Brick3: node3:/test3/brick Brick4: node1:/test4/brick Brick5: node2:/test5/brick Brick6: node3:/test6/brick [...]", "gluster volume remove-brick VOLNAME replica 2 HOST : subvol1-brick-path HOST : subvol2-brick-path force", "gluster volume set VOLNAME cluster.data-self-heal off gluster volume set VOLNAME cluster.metadata-self-heal off gluster volume set VOLNAME cluster.entry-self-heal off gluster volume set VOLNAME self-heal-daemon off", "gluster volume add-brick VOLNAME replica 3 arbiter 1 HOST : arbiter-brick-path", "gluster volume add-brick testvol replica 3 arbiter 1 server:/bricks/brick", "gluster volume add-brick testvol replica 3 arbiter 1 server1:/bricks/arbiter_brick1 server2:/bricks/arbiter_brick2", "grep -ir connected mount-path /.meta/graphs/active/ volname -client-*/private", "gluster volume info VOLNAME gluster volume status VOLNAME", "gluster volume set VOLNAME cluster.data-self-heal on gluster volume set VOLNAME cluster.metadata-self-heal on gluster volume set VOLNAME cluster.entry-self-heal on gluster volume set VOLNAME self-heal-daemon on", "gluster volume heal VOLNAME info", "gluster volume heal VOLNAME info", "gluster volume info VOLNAME", "gluster volume remove-brick VOLNAME replica 2 HOST : arbiter-brick-path force", "gluster volume set VOLNAME cluster.data-self-heal off gluster volume set VOLNAME cluster.metadata-self-heal off gluster volume set VOLNAME cluster.entry-self-heal off gluster volume set VOLNAME self-heal-daemon off", "gluster volume add-brick VOLNAME replica 3 HOST : brick-path", "gluster volume add-brick testvol replica 3 server:/bricks/brick", "gluster volume add-brick testvol replica 3 server1:/bricks/brick1 server2:/bricks/brick2", "gluster volume info VOLNAME gluster volume status VOLNAME", "gluster volume set VOLNAME cluster.data-self-heal on gluster volume set VOLNAME cluster.metadata-self-heal on gluster volume set VOLNAME cluster.entry-self-heal on gluster volume set VOLNAME self-heal-daemon on", "gluster volume heal VOLNAME info" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/creating_arbitrated_replicated_volumes
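The per-step commands above can also be strung together into a single session. The following is a minimal sketch of the replica 3 to arbiter conversion, assuming a hypothetical volume named testvol with two sub-volumes whose removable bricks live on server3 and server6; the host names and brick paths are illustrative only, not taken from the guide.

```
# Sketch only: converting a hypothetical two-sub-volume replica 3 volume
# "testvol" into an arbitrated volume. Adjust hosts and brick paths to your layout.
VOLNAME=testvol

# 1. Reduce the replica count to 2 by removing one brick from each sub-volume.
gluster volume remove-brick $VOLNAME replica 2 \
    server3:/bricks/brick server6:/bricks/brick force

# 2. Disable self-heal while the volume layout changes.
for opt in cluster.data-self-heal cluster.metadata-self-heal cluster.entry-self-heal self-heal-daemon; do
    gluster volume set $VOLNAME $opt off
done

# 3. Add one arbiter brick per sub-volume.
gluster volume add-brick $VOLNAME replica 3 arbiter 1 \
    server3:/bricks/arbiter_brick1 server6:/bricks/arbiter_brick2

# 4. Wait about 5 minutes for client volfiles to update, then confirm the new bricks.
gluster volume info $VOLNAME
gluster volume status $VOLNAME

# 5. Re-enable self-heal.
for opt in cluster.data-self-heal cluster.metadata-self-heal cluster.entry-self-heal self-heal-daemon; do
    gluster volume set $VOLNAME $opt on
done

# 6. Check heal progress; repeat until every brick reports "Number of entries: 0".
gluster volume heal $VOLNAME info
```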
Chapter 1. Preparing to deploy OpenShift Data Foundation
Chapter 1. Preparing to deploy OpenShift Data Foundation When you deploy OpenShift Data Foundation on OpenShift Container Platform using the local storage devices on any platform, you can create internal cluster resources. This approach internally provisions base services so that all the applications can access additional storage classes. You can also deploy OpenShift Data Foundation to use an external Red Hat Ceph Storage cluster and IBM FlashSystem. For instructions, see Deploying OpenShift Data Foundation in external mode. External mode deployment works on clusters that are detected as non-cloud. If your cluster is not detected correctly, open a bug in Bugzilla. Before you begin the deployment of Red Hat OpenShift Data Foundation using local storage, ensure that you meet the resource requirements. See Requirements for installing OpenShift Data Foundation using local storage devices. Optional: If you want to enable cluster-wide encryption using an external Key Management System (KMS), follow these steps: Ensure that you have a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions. When the Token authentication method is selected for encryption, refer to Enabling cluster-wide encryption with the Token authentication using KMS. When the Kubernetes authentication method is selected for encryption, refer to Enabling cluster-wide encryption with KMS using the Kubernetes authentication method. Ensure that you are using signed certificates on your vault servers. After completing the preparatory steps, perform the following procedures: Install the Local Storage Operator. Install the Red Hat OpenShift Data Foundation Operator. Create the OpenShift Data Foundation cluster on any platform. 1.1. Requirements for installing OpenShift Data Foundation using local storage devices Node requirements The cluster must consist of at least three OpenShift Container Platform worker or infrastructure nodes with locally attached storage devices on each of them. Each of the three selected nodes must have at least one raw block device available. OpenShift Data Foundation uses one or more of the available raw block devices. The devices you use must be empty; that is, the disks must not contain any remaining Physical Volumes (PVs), Volume Groups (VGs), or Logical Volumes (LVs). For more information, see the Resource requirements section in the Planning guide. Disaster recovery requirements Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription. A valid Red Hat Advanced Cluster Management (RHACM) for Kubernetes subscription. To know in detail how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions. For detailed disaster recovery solution requirements, see the Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and the Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation. Minimum starting node requirements An OpenShift Data Foundation cluster is deployed with a minimum configuration when the resource requirement for a standard deployment is not met. 
For more information, see the Resource requirements section in the Planning guide.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_on_any_platform/preparing_to_deploy_openshift_data_foundation
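One way to sanity-check the node and disk prerequisites above before installing the operators is to inspect candidate nodes directly. This is only a hedged sketch: the node name worker-0 and the choice of disk are hypothetical, and it assumes the oc client is logged in with sufficient privileges and that the standard LVM tools are available on the host image.

```
# List worker nodes; at least three nodes with local disks are required.
oc get nodes -l node-role.kubernetes.io/worker=

# Inspect block devices on a candidate node ("worker-0" is a placeholder).
# The disk intended for OpenShift Data Foundation must be raw and empty:
# no filesystem, no partitions, no PVs, VGs, or LVs.
oc debug node/worker-0 -- chroot /host lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT

# Confirm that no LVM metadata remains on the candidate disk.
oc debug node/worker-0 -- chroot /host pvs
oc debug node/worker-0 -- chroot /host vgs
oc debug node/worker-0 -- chroot /host lvs
```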
Chapter 2. The AppStream repository
Chapter 2. The AppStream repository Content in the AppStream repository includes additional user space applications, runtime languages, and databases in support of the varied workloads and use cases. Certain user space components distributed in the AppStream repository are Application Streams. Application Streams are delivered on a cadence that is suitable for each package, which makes the distribution diversified. Application Streams offer multiple versions of a single package for installation within RHEL 8, which is an improvement over methods of making multiple versions of packages available. RHEL 8 also consolidates distribution channels to a single place. Application Streams are available in the familiar RPM format, as an extension to the RPM format called modules , or as Software Collections. For detailed information on the length of Application Streams support, see the Red Hat Enterprise Linux 8 Application Streams Life Cycle . For information about the other components or packages in the AppStream repository, see the Scope of Coverage Details document. The following table lists all the packages in the AppStream repository along with their license. For a list of available modules and streams, see Section 2.1, "AppStream modules" . Package License 389-ds-base GPLv3+ and (ASL 2.0 or MIT) 389-ds-base-devel GPLv3+ and (ASL 2.0 or MIT) 389-ds-base-legacy-tools GPLv3+ and (ASL 2.0 or MIT) 389-ds-base-libs GPLv3+ and (ASL 2.0 or MIT) 389-ds-base-snmp GPLv3+ and (ASL 2.0 or MIT) aardvark-dns ASL 2.0 and BSD and MIT abattis-cantarell-fonts OFL abrt GPLv2+ abrt-addon-ccpp GPLv2+ abrt-addon-coredump-helper GPLv2+ abrt-addon-kerneloops GPLv2+ abrt-addon-pstoreoops GPLv2+ abrt-addon-vmcore GPLv2+ abrt-addon-xorg GPLv2+ abrt-cli GPLv2+ abrt-cli-ng GPLv2+ abrt-console-notification GPLv2+ abrt-dbus GPLv2+ abrt-desktop GPLv2+ abrt-gui GPLv2+ abrt-gui-libs GPLv2+ abrt-java-connector GPLv2+ abrt-libs GPLv2+ abrt-plugin-machine-id GPLv2+ abrt-plugin-sosreport GPLv2+ abrt-tui GPLv2+ accountsservice GPLv3+ accountsservice-libs GPLv3+ acpid GPLv2+ adobe-mappings-cmap BSD adobe-mappings-cmap-deprecated BSD adobe-mappings-pdf BSD adwaita-cursor-theme LGPLv3+ or CC-BY-SA adwaita-gtk2-theme LGPLv2+ adwaita-icon-theme LGPLv3+ or CC-BY-SA adwaita-qt5 LGPLv2+ and GPLv2+ aide GPLv2+ alsa-firmware GPL+ and BSD and GPLv2+ and GPLv2 and LGPLv2+ alsa-lib LGPLv2+ alsa-lib-devel LGPLv2+ alsa-plugins-arcamav LGPLv2+ alsa-plugins-maemo LGPLv2+ alsa-plugins-oss LGPLv2+ alsa-plugins-pulseaudio LGPLv2+ alsa-plugins-samplerate GPLv2+ alsa-plugins-speex LGPLv2+ alsa-plugins-upmix LGPLv2+ alsa-plugins-usbstream LGPLv2+ alsa-plugins-vdownmix LGPLv2+ alsa-tools-firmware GPLv2+ alsa-ucm BSD alsa-utils GPLv2+ alsa-utils-alsabat GPLv2+ amanda BSD and GPLv3+ and GPLv2+ and GPLv2 amanda-client BSD and GPLv3+ and GPLv2+ and GPLv2 amanda-libs BSD and GPLv3+ and GPLv2+ and GPLv2 amanda-server BSD and GPLv3+ and GPLv2+ and GPLv2 anaconda GPLv2+ and MIT anaconda-core GPLv2+ and MIT anaconda-dracut GPLv2+ and MIT anaconda-gui GPLv2+ and MIT anaconda-install-env-deps GPLv2+ and MIT anaconda-tui GPLv2+ and MIT anaconda-user-help CC-BY-SA anaconda-widgets GPLv2+ and MIT annobin GPLv3+ annobin-annocheck GPLv3+ ansible-collection-microsoft-sql MIT ansible-collection-redhat-rhel_mgmt GPLv3+ ansible-core GPLv3+ ansible-freeipa GPL-3.0-or-later ansible-freeipa-tests GPL-3.0-or-later ansible-pcp MIT ansible-test GPLv3+ ant ASL 2.0 ant-lib ASL 2.0 aopalliance Public Domain apache-commons-cli ASL 2.0 apache-commons-codec ASL 2.0 
apache-commons-collections ASL 2.0 apache-commons-compress ASL 2.0 apache-commons-io ASL 2.0 apache-commons-jxpath ASL 2.0 apache-commons-lang ASL 2.0 apache-commons-lang3 ASL 2.0 apache-commons-logging ASL 2.0 apache-commons-net ASL 2.0 apcu-panel PHP apiguardian ASL 2.0 appstream-data CC0 and CC-BY and CC-BY-SA and GFDL apr ASL 2.0 and BSD with advertising and ISC and BSD apr-devel ASL 2.0 and BSD with advertising and ISC and BSD apr-util ASL 2.0 apr-util-bdb ASL 2.0 apr-util-devel ASL 2.0 apr-util-ldap ASL 2.0 apr-util-mysql ASL 2.0 apr-util-odbc ASL 2.0 apr-util-openssl ASL 2.0 apr-util-pgsql ASL 2.0 apr-util-sqlite ASL 2.0 asciidoc GPL+ and GPLv2+ aspell LGPLv2+ and LGPLv2 and GPLv2+ and BSD aspell-en MIT and BSD aspnetcore-runtime-3.0 MIT and ASL 2.0 and BSD aspnetcore-runtime-3.1 MIT and ASL 2.0 and BSD aspnetcore-runtime-5.0 MIT and ASL 2.0 and BSD and LGPLv2+ and CC-BY and CC0 and MS-PL and EPL-1.0 and GPL+ and GPLv2 and ISC and OFL and zlib aspnetcore-runtime-6.0 MIT and ASL 2.0 and BSD and LGPLv2+ and CC-BY and CC0 and MS-PL and EPL-1.0 and GPL+ and GPLv2 and ISC and OFL and zlib aspnetcore-runtime-7.0 MIT and ASL 2.0 and BSD and LGPLv2+ and CC-BY and CC0 and MS-PL and EPL-1.0 and GPL+ and GPLv2 and ISC and OFL and zlib aspnetcore-runtime-8.0 0BSD AND Apache-2.0 AND (Apache-2.0 WITH LLVM-exception) AND APSL-2.0 AND BSD-2-Clause AND BSD-3-Clause AND BSD-4-Clause AND BSL-1.0 AND bzip2-1.0.6 AND CC0-1.0 AND CC-BY-3.0 AND CC-BY-4.0 AND CC-PDDC AND CNRI-Python AND EPL-1.0 AND GPL-2.0-only AND (GPL-2.0-only WITH GCC-exception-2.0) AND GPL-2.0-or-later AND GPL-3.0-only AND ICU AND ISC AND LGPL-2.1-only AND LGPL-2.1-or-later AND LicenseRef-Fedora-Public-Domain AND LicenseRef-ISO-8879 AND MIT AND MIT-Wu AND MS-PL AND MS-RL AND NCSA AND OFL-1.1 AND OpenSSL AND Unicode-DFS-2015 AND Unicode-DFS-2016 AND W3C-19980720 AND X11 AND Zlib aspnetcore-targeting-pack-3.0 MIT and ASL 2.0 and BSD aspnetcore-targeting-pack-3.1 MIT and ASL 2.0 and BSD aspnetcore-targeting-pack-5.0 MIT and ASL 2.0 and BSD and LGPLv2+ and CC-BY and CC0 and MS-PL and EPL-1.0 and GPL+ and GPLv2 and ISC and OFL and zlib aspnetcore-targeting-pack-6.0 MIT and ASL 2.0 and BSD and LGPLv2+ and CC-BY and CC0 and MS-PL and EPL-1.0 and GPL+ and GPLv2 and ISC and OFL and zlib aspnetcore-targeting-pack-7.0 MIT and ASL 2.0 and BSD and LGPLv2+ and CC-BY and CC0 and MS-PL and EPL-1.0 and GPL+ and GPLv2 and ISC and OFL and zlib aspnetcore-targeting-pack-8.0 0BSD AND Apache-2.0 AND (Apache-2.0 WITH LLVM-exception) AND APSL-2.0 AND BSD-2-Clause AND BSD-3-Clause AND BSD-4-Clause AND BSL-1.0 AND bzip2-1.0.6 AND CC0-1.0 AND CC-BY-3.0 AND CC-BY-4.0 AND CC-PDDC AND CNRI-Python AND EPL-1.0 AND GPL-2.0-only AND (GPL-2.0-only WITH GCC-exception-2.0) AND GPL-2.0-or-later AND GPL-3.0-only AND ICU AND ISC AND LGPL-2.1-only AND LGPL-2.1-or-later AND LicenseRef-Fedora-Public-Domain AND LicenseRef-ISO-8879 AND MIT AND MIT-Wu AND MS-PL AND MS-RL AND NCSA AND OFL-1.1 AND OpenSSL AND Unicode-DFS-2015 AND Unicode-DFS-2016 AND W3C-19980720 AND X11 AND Zlib at-spi2-atk LGPLv2+ at-spi2-atk-devel LGPLv2+ at-spi2-core LGPLv2+ at-spi2-core-devel LGPLv2+ atinject ASL 2.0 atk LGPLv2+ atk-devel LGPLv2+ atkmm LGPLv2+ authd GPLv2+ authselect-compat GPLv3+ autoconf GPLv2+ and GFDL autocorr-af (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 autocorr-bg (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 
2.0 and MPLv2.0 and CC0 autocorr-ca (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 autocorr-cs (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 autocorr-da (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 autocorr-de (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 autocorr-en (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 autocorr-es (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 autocorr-fa (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 autocorr-fi (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 autocorr-fr (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 autocorr-ga (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 autocorr-hr (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 autocorr-hu (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 autocorr-is (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 autocorr-it (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 autocorr-ja (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 autocorr-ko (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 autocorr-lb (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 autocorr-lt (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 autocorr-mn (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 autocorr-nl (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 autocorr-pl (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 autocorr-pt (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 autocorr-ro (MPLv1.1 or LGPLv3+) 
and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 autocorr-ru (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 autocorr-sk (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 autocorr-sl (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 autocorr-sr (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 autocorr-sv (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 autocorr-tr (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 autocorr-vi (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 autocorr-zh (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 autogen-libopts LGPLv3+ automake GPLv2+ and GFDL and Public Domain and MIT avahi-tools LGPLv2+ avahi-ui-gtk3 LGPLv2+ babel BSD babl LGPLv3+ and GPLv3+ bacula-client AGPLv3 with exceptions bacula-common AGPLv3 with exceptions bacula-console AGPLv3 with exceptions bacula-director AGPLv3 with exceptions bacula-libs AGPLv3 with exceptions bacula-libs-sql AGPLv3 with exceptions bacula-logwatch AGPLv3 with exceptions bacula-storage AGPLv3 with exceptions baobab GPLv2+ and GFDL batik-css ASL 2.0 and W3C batik-util ASL 2.0 and W3C bcc ASL 2.0 bcc-tools ASL 2.0 bea-stax-api ASL 1.1 and ASL 2.0 bind MPLv2.0 bind-chroot MPLv2.0 bind-devel MPLv2.0 bind-dyndb-ldap GPLv2+ bind-libs MPLv2.0 bind-libs-lite MPLv2.0 bind-license MPLv2.0 bind-lite-devel MPLv2.0 bind-pkcs11 MPLv2.0 bind-pkcs11-devel MPLv2.0 bind-pkcs11-libs MPLv2.0 bind-pkcs11-utils MPLv2.0 bind-sdb MPLv2.0 bind-sdb-chroot MPLv2.0 bind-utils MPLv2.0 bind9.16 MPLv2.0 bind9.16-chroot MPLv2.0 bind9.16-dnssec-utils MPLv2.0 bind9.16-libs MPLv2.0 bind9.16-license MPLv2.0 bind9.16-utils MPLv2.0 binutils-devel GPLv3+ bison GPLv3+ bison-runtime GPLv3+ bitmap-console-fonts GPLv2 bitmap-fangsongti-fonts MIT bitmap-fixed-fonts GPLv2 bitmap-fonts-compat GPLv2 and MIT and Lucida bitmap-lucida-typewriter-fonts Lucida blas BSD blas64 BSD blivet-data LGPLv2+ bluez-cups GPLv2+ bogofilter GPLv2 boost Boost and MIT and Python boost-atomic Boost and MIT and Python boost-chrono Boost and MIT and Python boost-container Boost and MIT and Python boost-context Boost and MIT and Python boost-coroutine Boost and MIT and Python boost-date-time Boost and MIT and Python boost-devel Boost and MIT and Python boost-fiber Boost and MIT and Python boost-filesystem Boost and MIT and Python boost-graph Boost and MIT and Python boost-iostreams Boost and MIT and Python boost-locale Boost and MIT and Python boost-log Boost and MIT and Python boost-math Boost and MIT and Python boost-program-options Boost and MIT and Python boost-random Boost and MIT and Python boost-regex Boost and MIT and Python boost-serialization Boost and MIT and Python boost-signals Boost and MIT and Python boost-stacktrace Boost and MIT 
and Python boost-system Boost and MIT and Python boost-test Boost and MIT and Python boost-thread Boost and MIT and Python boost-timer Boost and MIT and Python boost-type_erasure Boost and MIT and Python boost-wave Boost and MIT and Python bpftrace ASL 2.0 bpg-algeti-fonts GPL+ with exceptions bpg-chveulebrivi-fonts GPL+ with exceptions bpg-classic-fonts GPL+ with exceptions bpg-courier-fonts GPL+ with exceptions bpg-courier-s-fonts GPL+ with exceptions bpg-dedaena-block-fonts GPL+ with exceptions bpg-dejavu-sans-fonts Bitstream Vera bpg-elite-fonts GPL+ with exceptions bpg-excelsior-caps-fonts Bitstream Vera bpg-excelsior-condenced-fonts Bitstream Vera bpg-excelsior-fonts Bitstream Vera bpg-fonts-common GPL+ with exceptions bpg-glaho-fonts GPL+ with exceptions bpg-gorda-fonts GPL+ with exceptions bpg-ingiri-fonts GPL+ with exceptions bpg-irubaqidze-fonts GPL+ with exceptions bpg-mikhail-stephan-fonts GPL+ with exceptions bpg-mrgvlovani-caps-fonts GPL+ with exceptions bpg-mrgvlovani-fonts GPL+ with exceptions bpg-nateli-caps-fonts GPL+ with exceptions bpg-nateli-condenced-fonts GPL+ with exceptions bpg-nateli-fonts GPL+ with exceptions bpg-nino-medium-cond-fonts GPL+ with exceptions bpg-nino-medium-fonts GPL+ with exceptions bpg-sans-fonts GPL+ with exceptions bpg-sans-medium-fonts GPL+ with exceptions bpg-sans-modern-fonts Bitstream Vera bpg-sans-regular-fonts GPL+ with exceptions bpg-serif-fonts GPL+ with exceptions bpg-serif-modern-fonts Bitstream Vera bpg-ucnobi-fonts GPL+ with exceptions brasero GPLv3+ brasero-libs GPLv3+ brasero-nautilus GPLv3+ brlapi LGPLv2+ brlapi-java LGPLv2+ brltty LGPLv2+ brltty-at-spi2 LGPLv2+ brltty-docs LGPLv2+ brltty-dracut LGPLv2+ brltty-espeak-ng LGPLv2+ brltty-xw LGPLv2+ brotli-devel MIT buildah ASL 2.0 buildah-tests ASL 2.0 byacc Public Domain byteman LGPLv2+ byteman-javadoc LGPLv2+ c2esp GPLv2+ cairo LGPLv2 or MPLv1.1 cairo-devel LGPLv2 or MPLv1.1 cairo-gobject LGPLv2 or MPLv1.1 cairo-gobject-devel LGPLv2 or MPLv1.1 cairomm LGPLv2+ cargo (Apache-2.0 OR MIT) AND (Artistic-2.0 AND BSD-3-Clause AND ISC AND MIT AND MPL-2.0 AND Unicode-DFS-2016) cdi-api ASL 2.0 cdparanoia GPLv2 and LGPLv2 cdparanoia-libs LGPLv2 cdrdao GPLv2+ celt051 BSD certmonger GPLv3+ cgdcbxd GPLv2 chan ASL 2.0 check LGPLv2+ check-devel LGPLv2+ cheese GPLv2+ cheese-libs GPLv2+ chrome-gnome-shell GPLv3+ cim-schema DMTF cjose MIT cjose-devel MIT clang NCSA clang-analyzer NCSA and MIT clang-devel NCSA clang-libs NCSA clang-resource-filesystem NCSA clang-tools-extra NCSA clang-tools-extra-devel NCSA cldr-emoji-annotation Unicode clevis GPLv3+ clevis-dracut GPLv3+ clevis-luks GPLv3+ clevis-systemd GPLv3+ clevis-udisks2 GPLv3+ clippy (Apache-2.0 OR MIT) AND (Artistic-2.0 AND BSD-3-Clause AND ISC AND MIT AND MPL-2.0 AND Unicode-DFS-2016) cloud-init GPLv3 cloud-utils-growpart GPLv3 clucene-contribs-lib LGPLv2+ or ASL 2.0 clucene-core LGPLv2+ or ASL 2.0 clutter LGPLv2+ clutter-gst2 LGPLv2+ clutter-gst3 LGPLv2+ clutter-gtk LGPLv2+ cmake BSD and MIT and zlib cmake-data BSD and MIT and zlib cmake-doc BSD and MIT and zlib cmake-filesystem BSD and MIT and zlib cmake-gui BSD and MIT and zlib cmake-rpm-macros BSD and MIT and zlib cockpit-composer MIT cockpit-leapp LGPLv2+ cockpit-machines LGPL-2.1-or-later cockpit-packagekit LGPL-2.1-or-later cockpit-pcp LGPL-2.1-or-later cockpit-podman LGPL-2.1-or-later cockpit-session-recording LGPL-2.1-or-later cockpit-storaged LGPL-2.1-or-later cogl LGPLv2+ color-filesystem Public Domain colord GPLv2+ and LGPLv2+ colord-gtk LGPLv2+ colord-libs GPLv2+ and LGPLv2+ 
compat-exiv2-026 GPLv2+ compat-libgfortran-48 GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD compat-libpthread-nonshared LGPLv2+ and LGPLv2+ with exceptions and GPLv2+ and GPLv2+ with exceptions and BSD and Inner-Net and ISC and Public Domain and GFDL compat-libtiff3 libtiff compat-openssl10 OpenSSL compiler-rt NCSA or MIT composer-cli GPLv2+ conmon ASL 2.0 container-exception-logger GPLv2+ container-selinux GPLv2 containernetworking-plugins ASL 2.0 containers-common ASL 2.0 convmv GPLv2 or GPLv3 copy-jdk-configs BSD coreos-installer ASL 2.0 coreos-installer-bootinfra ASL 2.0 coreos-installer-dracut ASL 2.0 corosynclib BSD cpp GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD crash GPLv3 crash-gcore-command GPLv2 crash-ptdump-command GPLv2 crash-trace-command GPLv2 createrepo_c GPLv2+ createrepo_c-devel GPLv2+ createrepo_c-libs GPLv2+ crit GPLv2 criu GPLv2 criu-devel GPLv2 criu-libs GPLv2 crun GPLv2+ cryptsetup-devel GPLv2+ and LGPLv2+ cscope BSD and GPLv2+ ctags GPLv2+ and LGPLv2+ and Public Domain culmus-aharoni-clm-fonts GPLv2 culmus-caladings-clm-fonts GPLv2 culmus-david-clm-fonts GPLv2 culmus-drugulin-clm-fonts GPLv2 culmus-ellinia-clm-fonts GPLv2 culmus-fonts-common GPLv2 culmus-frank-ruehl-clm-fonts GPLv2 culmus-hadasim-clm-fonts GPLv2 culmus-keteryg-fonts GPLv2 culmus-miriam-clm-fonts GPLv2 culmus-miriam-mono-clm-fonts GPLv2 culmus-nachlieli-clm-fonts GPLv2 culmus-shofar-fonts GPLv2 culmus-simple-clm-fonts GPLv2 culmus-stamashkenaz-clm-fonts GPLv2 culmus-stamsefarad-clm-fonts GPLv2 culmus-yehuda-clm-fonts GPLv2 CUnit LGPLv2+ cups GPLv2+ and LGPLv2 with exceptions and AML cups-client GPLv2 cups-devel LGPLv2 cups-filesystem GPLv2+ and LGPLv2 with exceptions and AML cups-filters GPLv2 and GPLv2+ and GPLv3 and GPLv3+ and LGPLv2+ and MIT and BSD with advertising cups-filters-libs LGPLv2 and MIT cups-ipptool GPLv2+ and LGPLv2 with exceptions and AML cups-lpd GPLv2+ and LGPLv2 with exceptions and AML cups-pk-helper GPLv2+ custodia GPLv3+ cyrus-imapd BSD cyrus-imapd-utils BSD cyrus-imapd-vzic GPLv2+ cyrus-sasl-sql BSD with advertising daxctl-devel LGPLv2 daxio BSD dbus-devel (GPLv2+ or AFL) and GPLv2+ dbus-glib-devel AFL and GPLv2+ dbus-x11 (GPLv2+ or AFL) and GPLv2+ dconf LGPLv2+ and GPLv2+ and GPLv3+ dconf-editor LGPLv2+ dcraw GPLv2+ dejavu-lgc-sans-fonts Bitstream Vera and Public Domain delve MIT desktop-file-utils GPLv2+ devhelp GPLv2+ and LGPL2+ devhelp-libs GPLv2+ and LGPL2+ dialog LGPLv2 diffstat MIT directory-maven-plugin ASL 2.0 directory-maven-plugin-javadoc ASL 2.0 dirsplit GPLv2 disruptor ASL 2.0 dleyna-connector-dbus LGPLv2 dleyna-core LGPLv2 dleyna-renderer LGPLv2 dleyna-server LGPLv2 dnf-plugin-spacewalk GPLv2 dnsmasq GPLv2 or GPLv3 dnsmasq-utils GPLv2 or GPLv3 dnssec-trigger BSD dnssec-trigger-panel BSD docbook-dtds Copyright only docbook-style-xsl DMIT dotconf LGPLv2 dotnet 0BSD AND Apache-2.0 AND (Apache-2.0 WITH LLVM-exception) AND APSL-2.0 AND BSD-2-Clause AND BSD-3-Clause AND BSD-4-Clause AND BSL-1.0 AND bzip2-1.0.6 AND CC0-1.0 AND CC-BY-3.0 AND CC-BY-4.0 AND CC-PDDC AND CNRI-Python AND EPL-1.0 AND GPL-2.0-only AND (GPL-2.0-only WITH GCC-exception-2.0) AND GPL-2.0-or-later AND GPL-3.0-only AND ICU AND ISC AND LGPL-2.1-only AND LGPL-2.1-or-later AND LicenseRef-Fedora-Public-Domain AND LicenseRef-ISO-8879 AND MIT AND MIT-Wu AND MS-PL AND MS-RL AND NCSA AND OFL-1.1 AND OpenSSL AND Unicode-DFS-2015 AND Unicode-DFS-2016 AND W3C-19980720 AND X11 AND Zlib dotnet-apphost-pack-3.0 MIT and ASL 2.0 and BSD 
dotnet-apphost-pack-3.1 MIT and ASL 2.0 and BSD dotnet-apphost-pack-5.0 MIT and ASL 2.0 and BSD and LGPLv2+ and CC-BY and CC0 and MS-PL and EPL-1.0 and GPL+ and GPLv2 and ISC and OFL and zlib dotnet-apphost-pack-6.0 MIT and ASL 2.0 and BSD and LGPLv2+ and CC-BY and CC0 and MS-PL and EPL-1.0 and GPL+ and GPLv2 and ISC and OFL and zlib dotnet-apphost-pack-7.0 MIT and ASL 2.0 and BSD and LGPLv2+ and CC-BY and CC0 and MS-PL and EPL-1.0 and GPL+ and GPLv2 and ISC and OFL and zlib dotnet-apphost-pack-8.0 0BSD AND Apache-2.0 AND (Apache-2.0 WITH LLVM-exception) AND APSL-2.0 AND BSD-2-Clause AND BSD-3-Clause AND BSD-4-Clause AND BSL-1.0 AND bzip2-1.0.6 AND CC0-1.0 AND CC-BY-3.0 AND CC-BY-4.0 AND CC-PDDC AND CNRI-Python AND EPL-1.0 AND GPL-2.0-only AND (GPL-2.0-only WITH GCC-exception-2.0) AND GPL-2.0-or-later AND GPL-3.0-only AND ICU AND ISC AND LGPL-2.1-only AND LGPL-2.1-or-later AND LicenseRef-Fedora-Public-Domain AND LicenseRef-ISO-8879 AND MIT AND MIT-Wu AND MS-PL AND MS-RL AND NCSA AND OFL-1.1 AND OpenSSL AND Unicode-DFS-2015 AND Unicode-DFS-2016 AND W3C-19980720 AND X11 AND Zlib dotnet-host 0BSD AND Apache-2.0 AND (Apache-2.0 WITH LLVM-exception) AND APSL-2.0 AND BSD-2-Clause AND BSD-3-Clause AND BSD-4-Clause AND BSL-1.0 AND bzip2-1.0.6 AND CC0-1.0 AND CC-BY-3.0 AND CC-BY-4.0 AND CC-PDDC AND CNRI-Python AND EPL-1.0 AND GPL-2.0-only AND (GPL-2.0-only WITH GCC-exception-2.0) AND GPL-2.0-or-later AND GPL-3.0-only AND ICU AND ISC AND LGPL-2.1-only AND LGPL-2.1-or-later AND LicenseRef-Fedora-Public-Domain AND LicenseRef-ISO-8879 AND MIT AND MIT-Wu AND MS-PL AND MS-RL AND NCSA AND OFL-1.1 AND OpenSSL AND Unicode-DFS-2015 AND Unicode-DFS-2016 AND W3C-19980720 AND X11 AND Zlib dotnet-host-fxr-2.1 MIT and ASL 2.0 and BSD dotnet-hostfxr-3.0 MIT and ASL 2.0 and BSD dotnet-hostfxr-3.1 MIT and ASL 2.0 and BSD dotnet-hostfxr-5.0 MIT and ASL 2.0 and BSD and LGPLv2+ and CC-BY and CC0 and MS-PL and EPL-1.0 and GPL+ and GPLv2 and ISC and OFL and zlib dotnet-hostfxr-6.0 MIT and ASL 2.0 and BSD and LGPLv2+ and CC-BY and CC0 and MS-PL and EPL-1.0 and GPL+ and GPLv2 and ISC and OFL and zlib dotnet-hostfxr-7.0 MIT and ASL 2.0 and BSD and LGPLv2+ and CC-BY and CC0 and MS-PL and EPL-1.0 and GPL+ and GPLv2 and ISC and OFL and zlib dotnet-hostfxr-8.0 0BSD AND Apache-2.0 AND (Apache-2.0 WITH LLVM-exception) AND APSL-2.0 AND BSD-2-Clause AND BSD-3-Clause AND BSD-4-Clause AND BSL-1.0 AND bzip2-1.0.6 AND CC0-1.0 AND CC-BY-3.0 AND CC-BY-4.0 AND CC-PDDC AND CNRI-Python AND EPL-1.0 AND GPL-2.0-only AND (GPL-2.0-only WITH GCC-exception-2.0) AND GPL-2.0-or-later AND GPL-3.0-only AND ICU AND ISC AND LGPL-2.1-only AND LGPL-2.1-or-later AND LicenseRef-Fedora-Public-Domain AND LicenseRef-ISO-8879 AND MIT AND MIT-Wu AND MS-PL AND MS-RL AND NCSA AND OFL-1.1 AND OpenSSL AND Unicode-DFS-2015 AND Unicode-DFS-2016 AND W3C-19980720 AND X11 AND Zlib dotnet-runtime-2.1 MIT and ASL 2.0 and BSD dotnet-runtime-3.0 MIT and ASL 2.0 and BSD dotnet-runtime-3.1 MIT and ASL 2.0 and BSD dotnet-runtime-5.0 MIT and ASL 2.0 and BSD and LGPLv2+ and CC-BY and CC0 and MS-PL and EPL-1.0 and GPL+ and GPLv2 and ISC and OFL and zlib dotnet-runtime-6.0 MIT and ASL 2.0 and BSD and LGPLv2+ and CC-BY and CC0 and MS-PL and EPL-1.0 and GPL+ and GPLv2 and ISC and OFL and zlib dotnet-runtime-7.0 MIT and ASL 2.0 and BSD and LGPLv2+ and CC-BY and CC0 and MS-PL and EPL-1.0 and GPL+ and GPLv2 and ISC and OFL and zlib dotnet-runtime-8.0 0BSD AND Apache-2.0 AND (Apache-2.0 WITH LLVM-exception) AND APSL-2.0 AND BSD-2-Clause AND BSD-3-Clause AND BSD-4-Clause AND BSL-1.0 AND 
bzip2-1.0.6 AND CC0-1.0 AND CC-BY-3.0 AND CC-BY-4.0 AND CC-PDDC AND CNRI-Python AND EPL-1.0 AND GPL-2.0-only AND (GPL-2.0-only WITH GCC-exception-2.0) AND GPL-2.0-or-later AND GPL-3.0-only AND ICU AND ISC AND LGPL-2.1-only AND LGPL-2.1-or-later AND LicenseRef-Fedora-Public-Domain AND LicenseRef-ISO-8879 AND MIT AND MIT-Wu AND MS-PL AND MS-RL AND NCSA AND OFL-1.1 AND OpenSSL AND Unicode-DFS-2015 AND Unicode-DFS-2016 AND W3C-19980720 AND X11 AND Zlib dotnet-sdk-2.1 MIT and ASL 2.0 and BSD dotnet-sdk-2.1.5xx MIT and ASL 2.0 and BSD dotnet-sdk-3.0 MIT and ASL 2.0 and BSD dotnet-sdk-3.1 MIT and ASL 2.0 and BSD dotnet-sdk-5.0 MIT and ASL 2.0 and BSD and LGPLv2+ and CC-BY and CC0 and MS-PL and EPL-1.0 and GPL+ and GPLv2 and ISC and OFL and zlib dotnet-sdk-6.0 MIT and ASL 2.0 and BSD and LGPLv2+ and CC-BY and CC0 and MS-PL and EPL-1.0 and GPL+ and GPLv2 and ISC and OFL and zlib dotnet-sdk-7.0 MIT and ASL 2.0 and BSD and LGPLv2+ and CC-BY and CC0 and MS-PL and EPL-1.0 and GPL+ and GPLv2 and ISC and OFL and zlib dotnet-sdk-8.0 0BSD AND Apache-2.0 AND (Apache-2.0 WITH LLVM-exception) AND APSL-2.0 AND BSD-2-Clause AND BSD-3-Clause AND BSD-4-Clause AND BSL-1.0 AND bzip2-1.0.6 AND CC0-1.0 AND CC-BY-3.0 AND CC-BY-4.0 AND CC-PDDC AND CNRI-Python AND EPL-1.0 AND GPL-2.0-only AND (GPL-2.0-only WITH GCC-exception-2.0) AND GPL-2.0-or-later AND GPL-3.0-only AND ICU AND ISC AND LGPL-2.1-only AND LGPL-2.1-or-later AND LicenseRef-Fedora-Public-Domain AND LicenseRef-ISO-8879 AND MIT AND MIT-Wu AND MS-PL AND MS-RL AND NCSA AND OFL-1.1 AND OpenSSL AND Unicode-DFS-2015 AND Unicode-DFS-2016 AND W3C-19980720 AND X11 AND Zlib dotnet-targeting-pack-3.0 MIT and ASL 2.0 and BSD dotnet-targeting-pack-3.1 MIT and ASL 2.0 and BSD dotnet-targeting-pack-5.0 MIT and ASL 2.0 and BSD and LGPLv2+ and CC-BY and CC0 and MS-PL and EPL-1.0 and GPL+ and GPLv2 and ISC and OFL and zlib dotnet-targeting-pack-6.0 MIT and ASL 2.0 and BSD and LGPLv2+ and CC-BY and CC0 and MS-PL and EPL-1.0 and GPL+ and GPLv2 and ISC and OFL and zlib dotnet-targeting-pack-7.0 MIT and ASL 2.0 and BSD and LGPLv2+ and CC-BY and CC0 and MS-PL and EPL-1.0 and GPL+ and GPLv2 and ISC and OFL and zlib dotnet-targeting-pack-8.0 0BSD AND Apache-2.0 AND (Apache-2.0 WITH LLVM-exception) AND APSL-2.0 AND BSD-2-Clause AND BSD-3-Clause AND BSD-4-Clause AND BSL-1.0 AND bzip2-1.0.6 AND CC0-1.0 AND CC-BY-3.0 AND CC-BY-4.0 AND CC-PDDC AND CNRI-Python AND EPL-1.0 AND GPL-2.0-only AND (GPL-2.0-only WITH GCC-exception-2.0) AND GPL-2.0-or-later AND GPL-3.0-only AND ICU AND ISC AND LGPL-2.1-only AND LGPL-2.1-or-later AND LicenseRef-Fedora-Public-Domain AND LicenseRef-ISO-8879 AND MIT AND MIT-Wu AND MS-PL AND MS-RL AND NCSA AND OFL-1.1 AND OpenSSL AND Unicode-DFS-2015 AND Unicode-DFS-2016 AND W3C-19980720 AND X11 AND Zlib dotnet-templates-3.0 MIT and ASL 2.0 and BSD dotnet-templates-3.1 MIT and ASL 2.0 and BSD dotnet-templates-5.0 MIT and ASL 2.0 and BSD and LGPLv2+ and CC-BY and CC0 and MS-PL and EPL-1.0 and GPL+ and GPLv2 and ISC and OFL and zlib dotnet-templates-6.0 MIT and ASL 2.0 and BSD and LGPLv2+ and CC-BY and CC0 and MS-PL and EPL-1.0 and GPL+ and GPLv2 and ISC and OFL and zlib dotnet-templates-7.0 MIT and ASL 2.0 and BSD and LGPLv2+ and CC-BY and CC0 and MS-PL and EPL-1.0 and GPL+ and GPLv2 and ISC and OFL and zlib dotnet-templates-8.0 0BSD AND Apache-2.0 AND (Apache-2.0 WITH LLVM-exception) AND APSL-2.0 AND BSD-2-Clause AND BSD-3-Clause AND BSD-4-Clause AND BSL-1.0 AND bzip2-1.0.6 AND CC0-1.0 AND CC-BY-3.0 AND CC-BY-4.0 AND CC-PDDC AND CNRI-Python AND EPL-1.0 AND 
GPL-2.0-only AND (GPL-2.0-only WITH GCC-exception-2.0) AND GPL-2.0-or-later AND GPL-3.0-only AND ICU AND ISC AND LGPL-2.1-only AND LGPL-2.1-or-later AND LicenseRef-Fedora-Public-Domain AND LicenseRef-ISO-8879 AND MIT AND MIT-Wu AND MS-PL AND MS-RL AND NCSA AND OFL-1.1 AND OpenSSL AND Unicode-DFS-2015 AND Unicode-DFS-2016 AND W3C-19980720 AND X11 AND Zlib dovecot MIT and LGPLv2 dovecot-mysql MIT and LGPLv2 dovecot-pgsql MIT and LGPLv2 dovecot-pigeonhole MIT and LGPLv2 dpdk BSD and LGPLv2 and GPLv2 dpdk-doc BSD and LGPLv2 and GPLv2 dpdk-tools BSD and LGPLv2 and GPLv2 driverctl LGPLv2 dropwatch GPLv2+ drpm LGPLv2+ and BSD dvd+rw-tools GPLv2 dwz GPLv2+ and GPLv3+ dyninst LGPLv2+ ecj EPL-2.0 eclipse-ecf-core EPL-2.0 and ASL 2.0 and BSD eclipse-ecf-runtime EPL-2.0 and ASL 2.0 and BSD eclipse-emf-core EPL-2.0 eclipse-emf-runtime EPL-2.0 eclipse-emf-xsd EPL-2.0 eclipse-equinox-osgi EPL-2.0 eclipse-jdt EPL-2.0 eclipse-p2-discovery EPL-2.0 eclipse-pde EPL-2.0 eclipse-platform EPL-2.0 eclipse-swt EPL-2.0 edk2-aarch64 BSD-2-Clause-Patent and OpenSSL edk2-ovmf BSD-2-Clause-Patent and OpenSSL ee4j-parent EPL-2.0 or GPLv2 with exceptions efi-srpm-macros GPLv3+ egl-utils MIT egl-wayland MIT emacs GPLv3+ and CC0-1.0 emacs-common GPLv3+ and GFDL and BSD emacs-lucid GPLv3+ and CC0-1.0 emacs-nox GPLv3+ and CC0-1.0 emacs-terminal GPLv3+ and CC0-1.0 emoji-picker GPLv3+ enchant LGPLv2+ enchant2 LGPLv2+ enscript GPLv3+ and LGPLv2+ and GPLv2+ eog GPLv2+ and GFDL esc GPL+ espeak-ng GPLv3+ eth-tools-basic BSD eth-tools-fastfabric BSD evemu GPLv3+ evemu-libs LGPLv3+ evince GPLv2+ and GPLv3+ and LGPLv2+ and MIT and Afmparse evince-browser-plugin GPLv2+ and GPLv3+ and LGPLv2+ and MIT and Afmparse evince-libs GPLv2+ and GPLv3+ and LGPLv2+ and MIT and Afmparse evince-nautilus GPLv2+ and GPLv3+ and LGPLv2+ and MIT and Afmparse evolution GPLv2+ and GFDL evolution-bogofilter GPLv2+ and GFDL evolution-data-server LGPLv2+ evolution-data-server-devel LGPLv2+ evolution-data-server-langpacks LGPLv2+ evolution-data-server-ui LGPLv2+ evolution-data-server-ui-devel LGPLv2+ evolution-ews LGPLv2 evolution-ews-langpacks LGPLv2 evolution-help GPLv2+ and GFDL evolution-langpacks GPLv2+ and GFDL evolution-mapi LGPLv2+ evolution-mapi-langpacks LGPLv2+ evolution-pst GPLv2+ and GFDL evolution-spamassassin GPLv2+ and GFDL exchange-bmc-os-info BSD exempi BSD exiv2 GPLv2+ exiv2-libs GPLv2+ fabtests BSD and (BSD or GPLv2) and MIT fapolicyd GPLv3+ fapolicyd-selinux GPLv3+ farstream02 LGPLv2+ and GPLv2+ fasterxml-oss-parent Apache-2.0 fdo-admin-cli BSD fdo-client BSD fdo-init BSD fdo-manufacturing-server BSD fdo-owner-cli BSD fdo-owner-onboarding-server BSD fdo-rendezvous-server BSD felix-gogo-command ASL 2.0 felix-gogo-runtime ASL 2.0 and MIT felix-gogo-shell ASL 2.0 felix-scr ASL 2.0 fence-agents-all GPLv2+ and LGPLv2+ and ASL 2.0 fence-agents-amt-ws ASL 2.0 fence-agents-apc GPLv2+ and LGPLv2+ fence-agents-apc-snmp GPLv2+ and LGPLv2+ fence-agents-bladecenter GPLv2+ and LGPLv2+ fence-agents-brocade GPLv2+ and LGPLv2+ fence-agents-cisco-mds GPLv2+ and LGPLv2+ fence-agents-cisco-ucs GPLv2+ and LGPLv2+ fence-agents-common GPLv2+ and LGPLv2+ fence-agents-compute GPLv2+ and LGPLv2+ fence-agents-drac5 GPLv2+ and LGPLv2+ fence-agents-eaton-snmp GPLv2+ and LGPLv2+ fence-agents-emerson GPLv2+ and LGPLv2+ fence-agents-eps GPLv2+ and LGPLv2+ fence-agents-heuristics-ping GPLv2+ and LGPLv2+ fence-agents-hpblade GPLv2+ and LGPLv2+ fence-agents-ibm-powervs GPLv2+ and LGPLv2+ fence-agents-ibm-vpc GPLv2+ and LGPLv2+ fence-agents-ibmblade GPLv2+ and LGPLv2+ 
fence-agents-ifmib GPLv2+ and LGPLv2+ fence-agents-ilo-moonshot GPLv2+ and LGPLv2+ fence-agents-ilo-mp GPLv2+ and LGPLv2+ fence-agents-ilo-ssh GPLv2+ and LGPLv2+ fence-agents-ilo2 GPLv2+ and LGPLv2+ fence-agents-intelmodular GPLv2+ and LGPLv2+ fence-agents-ipdu GPLv2+ and LGPLv2+ fence-agents-ipmilan GPLv2+ and LGPLv2+ fence-agents-kdump GPLv2+ and LGPLv2+ fence-agents-kubevirt GPLv2+ and LGPLv2+ and ASL 2.0 and BSD and BSD-2-Clause and BSD-3-Clause and ISC and MIT and MPL-2.0 fence-agents-lpar GPLv2+ and LGPLv2+ fence-agents-mpath GPLv2+ and LGPLv2+ fence-agents-redfish GPLv2+ and LGPLv2+ fence-agents-rhevm GPLv2+ and LGPLv2+ fence-agents-rsa GPLv2+ and LGPLv2+ fence-agents-rsb GPLv2+ and LGPLv2+ fence-agents-sbd GPLv2+ and LGPLv2+ fence-agents-scsi GPLv2+ and LGPLv2+ fence-agents-virsh GPLv2+ and LGPLv2+ fence-agents-vmware-rest GPLv2+ and LGPLv2+ fence-agents-vmware-soap GPLv2+ and LGPLv2+ fence-agents-wti GPLv2+ and LGPLv2+ fence-agents-zvm GPLv2+ and LGPLv2+ fence-virt GPLv2+ fence-virtd GPLv2+ fence-virtd-cpg GPLv2+ fence-virtd-libvirt GPLv2+ fence-virtd-multicast GPLv2+ fence-virtd-serial GPLv2+ fence-virtd-tcp GPLv2+ fetchmail GPL+ and Public Domain fftw GPLv2+ fftw-devel GPLv2+ fftw-libs GPLv2+ fftw-libs-double GPLv2+ fftw-libs-long GPLv2+ fftw-libs-quad GPLv2+ fftw-libs-single GPLv2+ fftw-static GPLv2+ file-roller GPLv2+ fio GPLv2 firefox MPLv1.1 or GPLv2+ or LGPLv2+ firewall-applet GPLv2+ firewall-config GPLv2+ flac-libs BSD and GPLv2+ and GFDL flatpak LGPLv2+ flatpak-builder LGPLv2+ and GPLv2+ flatpak-libs LGPLv2+ flatpak-selinux LGPLv2+ flatpak-session-helper LGPLv2+ flatpak-spawn LGPLv2+ flatpak-xdg-utils LGPLv2+ flex BSD and LGPLv2+ flex-doc BSD and LGPLv2+ fltk LGPLv2+ with exceptions flute W3C and LGPLv2+ fontawesome-fonts OFL fontawesome-fonts-web OFL and MIT fonts-tweak-tool LGPLv3+ foomatic GPLv2+ foomatic-db GPLv2+ foomatic-db-filesystem Public Domain foomatic-db-ppds GPLv2+ and MIT fprintd GPLv2+ fprintd-pam GPLv2+ freeglut MIT freeglut-devel MIT freeradius GPLv2+ and LGPLv2+ freeradius-devel GPLv2+ and LGPLv2+ freeradius-doc GPLv2+ and LGPLv2+ freeradius-krb5 GPLv2+ and LGPLv2+ freeradius-ldap GPLv2+ and LGPLv2+ freeradius-mysql GPLv2+ and LGPLv2+ freeradius-perl GPLv2+ and LGPLv2+ freeradius-postgresql GPLv2+ and LGPLv2+ freeradius-rest GPLv2+ and LGPLv2+ freeradius-sqlite GPLv2+ and LGPLv2+ freeradius-unixODBC GPLv2+ and LGPLv2+ freeradius-utils GPLv2+ and LGPLv2+ freerdp ASL 2.0 freerdp-libs ASL 2.0 frei0r-plugins GPLv2+ frei0r-plugins-opencv GPLv2+ fribidi LGPLv2+ and UCD fribidi-devel LGPLv2+ and UCD frr GPLv2+ frr-selinux GPLv2+ fstrm MIT fstrm-devel MIT ftp BSD with advertising fuse-overlayfs GPLv3+ galera GPLv2 gavl GPLv3+ gc BSD gcc GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-c++ GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-gdb-plugin GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-gfortran GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-offload-nvptx GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-plugin-annobin GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-10 GPLv2+ gcc-toolset-10-annobin GPLv3+ gcc-toolset-10-binutils GPLv3+ gcc-toolset-10-binutils-devel GPLv3+ gcc-toolset-10-build GPLv2+ gcc-toolset-10-dwz GPLv2+ and GPLv3+ gcc-toolset-10-dyninst LGPLv2+ gcc-toolset-10-dyninst-devel LGPLv2+ gcc-toolset-10-elfutils 
GPLv3+ and (GPLv2+ or LGPLv3+) and GFDL gcc-toolset-10-elfutils-debuginfod-client GPLv3+ and (GPLv2+ or LGPLv3+) gcc-toolset-10-elfutils-debuginfod-client-devel GPLv2+ or LGPLv3+ gcc-toolset-10-elfutils-devel GPLv2+ or LGPLv3+ gcc-toolset-10-elfutils-libelf GPLv2+ or LGPLv3+ gcc-toolset-10-elfutils-libelf-devel GPLv2+ or LGPLv3+ gcc-toolset-10-elfutils-libs GPLv2+ or LGPLv3+ gcc-toolset-10-gcc GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-10-gcc-c++ GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-10-gcc-gdb-plugin GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-10-gcc-gfortran GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-10-gdb GPLv3+ and GPLv3+ with exceptions and GPLv2+ and GPLv2+ with exceptions and GPL+ and LGPLv2+ and LGPLv3+ and BSD and Public Domain and GFDL gcc-toolset-10-gdb-doc GFDL gcc-toolset-10-gdb-gdbserver GPLv3+ and GPLv3+ with exceptions and GPLv2+ and GPLv2+ with exceptions and GPL+ and LGPLv2+ and LGPLv3+ and BSD and Public Domain and GFDL gcc-toolset-10-libasan-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-10-libatomic-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-10-libitm-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-10-liblsan-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-10-libquadmath-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-10-libstdc++-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-10-libstdc++-docs GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-10-libtsan-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-10-libubsan-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-10-ltrace GPLv2+ gcc-toolset-10-make GPLv3+ gcc-toolset-10-make-devel GPLv3+ gcc-toolset-10-perftools GPLv2+ gcc-toolset-10-runtime GPLv2+ gcc-toolset-10-strace LGPL-2.1+ and GPL-2.0+ gcc-toolset-10-systemtap GPLv2+ gcc-toolset-10-systemtap-client GPLv2+ gcc-toolset-10-systemtap-devel GPLv2+ gcc-toolset-10-systemtap-initscript GPLv2+ gcc-toolset-10-systemtap-runtime GPLv2+ gcc-toolset-10-systemtap-sdt-devel GPLv2+ and Public Domain gcc-toolset-10-systemtap-server GPLv2+ gcc-toolset-10-toolchain GPLv2+ gcc-toolset-10-valgrind GPLv2+ gcc-toolset-10-valgrind-devel GPLv2+ gcc-toolset-11 GPLv2+ gcc-toolset-11-annobin-annocheck GPLv3+ gcc-toolset-11-annobin-docs GPLv3+ gcc-toolset-11-annobin-plugin-gcc GPLv3+ gcc-toolset-11-binutils GPLv3+ gcc-toolset-11-binutils-devel GPLv3+ gcc-toolset-11-build GPLv2+ gcc-toolset-11-dwz GPLv2+ and GPLv3+ gcc-toolset-11-dyninst LGPLv2+ gcc-toolset-11-dyninst-devel LGPLv2+ gcc-toolset-11-elfutils GPLv3+ and (GPLv2+ or LGPLv3+) and GFDL gcc-toolset-11-elfutils-debuginfod-client GPLv3+ and (GPLv2+ or LGPLv3+) gcc-toolset-11-elfutils-debuginfod-client-devel GPLv2+ or LGPLv3+ gcc-toolset-11-elfutils-devel GPLv2+ or LGPLv3+ gcc-toolset-11-elfutils-libelf GPLv2+ or LGPLv3+ gcc-toolset-11-elfutils-libelf-devel GPLv2+ or LGPLv3+ gcc-toolset-11-elfutils-libs GPLv2+ or LGPLv3+ gcc-toolset-11-gcc GPLv3+ and GPLv3+ with exceptions and GPLv2+ with 
exceptions and LGPLv2+ and BSD gcc-toolset-11-gcc-c++ GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-11-gcc-gdb-plugin GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-11-gcc-gfortran GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-11-gcc-plugin-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-11-gdb GPLv3+ and GPLv3+ with exceptions and GPLv2+ and GPLv2+ with exceptions and GPL+ and LGPLv2+ and LGPLv3+ and BSD and Public Domain and GFDL gcc-toolset-11-gdb-doc GFDL gcc-toolset-11-gdb-gdbserver GPLv3+ and GPLv3+ with exceptions and GPLv2+ and GPLv2+ with exceptions and GPL+ and LGPLv2+ and LGPLv3+ and BSD and Public Domain and GFDL gcc-toolset-11-libasan-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-11-libatomic-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-11-libgccjit GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-11-libgccjit-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-11-libgccjit-docs GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-11-libitm-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-11-liblsan-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-11-libquadmath-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-11-libstdc++-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-11-libstdc++-docs GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-11-libtsan-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-11-libubsan-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-11-ltrace GPLv2+ gcc-toolset-11-make GPLv3+ gcc-toolset-11-make-devel GPLv3+ gcc-toolset-11-perftools GPLv2+ gcc-toolset-11-runtime GPLv2+ gcc-toolset-11-strace LGPL-2.1+ and GPL-2.0+ gcc-toolset-11-systemtap GPLv2+ gcc-toolset-11-systemtap-client GPLv2+ gcc-toolset-11-systemtap-devel GPLv2+ gcc-toolset-11-systemtap-initscript GPLv2+ gcc-toolset-11-systemtap-runtime GPLv2+ gcc-toolset-11-systemtap-sdt-devel GPLv2+ and Public Domain gcc-toolset-11-systemtap-server GPLv2+ gcc-toolset-11-toolchain GPLv2+ gcc-toolset-11-valgrind GPLv2+ gcc-toolset-11-valgrind-devel GPLv2+ gcc-toolset-12 GPLv2+ gcc-toolset-12-annobin-annocheck GPLv3+ gcc-toolset-12-annobin-docs GPLv3+ gcc-toolset-12-annobin-plugin-gcc GPLv3+ gcc-toolset-12-binutils GPLv3+ gcc-toolset-12-binutils-devel GPLv3+ gcc-toolset-12-binutils-gold GPLv3+ gcc-toolset-12-build GPLv2+ gcc-toolset-12-dwz GPLv2+ and GPLv3+ gcc-toolset-12-gcc GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-12-gcc-c++ GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-12-gcc-gfortran GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-12-gcc-plugin-annobin GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-12-gcc-plugin-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ 
with exceptions and LGPLv2+ and BSD gcc-toolset-12-gdb GPLv3+ and GPLv3+ with exceptions and GPLv2+ and GPLv2+ with exceptions and GPL+ and LGPLv2+ and LGPLv3+ and BSD and Public Domain and GFDL gcc-toolset-12-libasan-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-12-libatomic-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-12-libgccjit GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-12-libgccjit-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-12-libgccjit-docs GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-12-libitm-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-12-liblsan-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-12-libquadmath-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-12-libstdc++-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-12-libstdc++-docs GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-12-libtsan-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-12-libubsan-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-12-offload-nvptx GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-12-runtime GPLv2+ gcc-toolset-13 GPLv2+ gcc-toolset-13-annobin-annocheck GPL-3.0-or-later AND LGPL-2.0-or-later AND (GPL-2.0-or-later WITH GCC-exception-2.0) AND (LGPL-2.0-or-later WITH GCC-exception-2.0) AND GFDL-1.3-or-later gcc-toolset-13-annobin-docs GPL-3.0-or-later AND LGPL-2.0-or-later AND (GPL-2.0-or-later WITH GCC-exception-2.0) AND (LGPL-2.0-or-later WITH GCC-exception-2.0) AND GFDL-1.3-or-later gcc-toolset-13-annobin-plugin-gcc GPL-3.0-or-later AND LGPL-2.0-or-later AND (GPL-2.0-or-later WITH GCC-exception-2.0) AND (LGPL-2.0-or-later WITH GCC-exception-2.0) AND GFDL-1.3-or-later gcc-toolset-13-binutils GPLv3+ gcc-toolset-13-binutils-devel GPLv3+ gcc-toolset-13-binutils-gold GPLv3+ gcc-toolset-13-dwz GPLv2+ and GPLv3+ gcc-toolset-13-gcc GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-13-gcc-c++ GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-13-gcc-gfortran GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-13-gcc-plugin-annobin GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-13-gcc-plugin-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-13-gdb GPLv3+ and GPLv3+ with exceptions and GPLv2+ and GPLv2+ with exceptions and GPL+ and LGPLv2+ and LGPLv3+ and BSD and Public Domain and GFDL gcc-toolset-13-libasan-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-13-libatomic-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-13-libgccjit GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-13-libgccjit-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-13-libitm-devel 
GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-13-liblsan-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-13-libquadmath-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-13-libstdc++-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-13-libstdc++-docs GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-13-libtsan-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-13-libubsan-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-13-offload-nvptx GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-13-runtime GPLv2+ gcc-toolset-9 GPLv2+ gcc-toolset-9-annobin GPLv3+ gcc-toolset-9-binutils GPLv3+ gcc-toolset-9-binutils-devel GPLv3+ gcc-toolset-9-build GPLv2+ gcc-toolset-9-dwz GPLv2+ and GPLv3+ gcc-toolset-9-dyninst LGPLv2+ gcc-toolset-9-elfutils GPLv3+ and (GPLv2+ or LGPLv3+) gcc-toolset-9-elfutils-devel GPLv2+ or LGPLv3+ gcc-toolset-9-elfutils-libelf GPLv2+ or LGPLv3+ gcc-toolset-9-elfutils-libelf-devel GPLv2+ or LGPLv3+ gcc-toolset-9-elfutils-libs GPLv2+ or LGPLv3+ gcc-toolset-9-gcc GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-9-gcc-c++ GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-9-gcc-gdb-plugin GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-9-gcc-gfortran GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-9-gdb GPLv3+ and GPLv3+ with exceptions and GPLv2+ and GPLv2+ with exceptions and GPL+ and LGPLv2+ and LGPLv3+ and BSD and Public Domain and GFDL gcc-toolset-9-gdb-doc GFDL gcc-toolset-9-gdb-gdbserver GPLv3+ and GPLv3+ with exceptions and GPLv2+ and GPLv2+ with exceptions and GPL+ and LGPLv2+ and LGPLv3+ and BSD and Public Domain and GFDL gcc-toolset-9-libasan-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-9-libatomic-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-9-libitm-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-9-liblsan-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-9-libquadmath-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-9-libstdc++-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-9-libstdc++-docs GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-9-libtsan-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-9-libubsan-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD gcc-toolset-9-ltrace GPLv2+ gcc-toolset-9-make GPLv3+ gcc-toolset-9-make-devel GPLv3+ gcc-toolset-9-perftools GPLv2+ gcc-toolset-9-runtime GPLv2+ gcc-toolset-9-strace LGPL-2.1+ and GPL-2.0+ gcc-toolset-9-systemtap GPLv2+ gcc-toolset-9-systemtap-client GPLv2+ gcc-toolset-9-systemtap-devel GPLv2+ gcc-toolset-9-systemtap-initscript GPLv2+ gcc-toolset-9-systemtap-runtime GPLv2+ gcc-toolset-9-systemtap-sdt-devel GPLv2+ and Public Domain 
gcc-toolset-9-systemtap-server GPLv2+ gcc-toolset-9-toolchain GPLv2+ gcc-toolset-9-valgrind GPLv2+ gcc-toolset-9-valgrind-devel GPLv2+ GConf2 LGPLv2+ and GPLv2+ gcr LGPLv2+ gcr-devel LGPLv2+ gd MIT gd-devel MIT gdb GPLv3+ and GPLv3+ with exceptions and GPLv2+ and GPLv2+ with exceptions and GPL+ and LGPLv2+ and LGPLv3+ and BSD and Public Domain and GFDL gdb-doc GFDL gdb-gdbserver GPLv3+ and GPLv3+ with exceptions and GPLv2+ and GPLv2+ with exceptions and GPL+ and LGPLv2+ and LGPLv3+ and BSD and Public Domain and GFDL gdb-headless GPLv3+ and GPLv3+ with exceptions and GPLv2+ and GPLv2+ with exceptions and GPL+ and LGPLv2+ and LGPLv3+ and BSD and Public Domain and GFDL gdk-pixbuf2-devel LGPLv2+ gdk-pixbuf2-modules LGPLv2+ gdm GPLv2+ gedit GPLv2+ and GFDL gedit-plugin-bookmarks GPLv2+ gedit-plugin-bracketcompletion GPLv2+ gedit-plugin-codecomment GPLv2+ gedit-plugin-colorpicker GPLv2+ gedit-plugin-colorschemer GPLv2+ gedit-plugin-commander GPLv2+ gedit-plugin-drawspaces GPLv2+ gedit-plugin-findinfiles GPLv2+ gedit-plugin-joinlines GPLv2+ gedit-plugin-multiedit GPLv2+ gedit-plugin-smartspaces GPLv2+ gedit-plugin-terminal GPLv2+ gedit-plugin-textsize GPLv2+ gedit-plugin-translate GPLv2+ gedit-plugin-wordcompletion GPLv2+ gedit-plugins GPLv2+ gedit-plugins-data GPLv2+ gegl LGPLv3+ and GPLv3+ gegl04 LGPLv3+ genisoimage GPLv2 geoclue2 GPLv2+ geoclue2-demos GPLv2+ geoclue2-libs LGPLv2+ geocode-glib LGPLv2+ geocode-glib-devel LGPLv2+ geoipupdate GPLv2 geolite2-city CC-BY-SA geolite2-country CC-BY-SA geronimo-annotation ASL 2.0 gfbgraph LGPLv2+ ghc-srpm-macros GPLv2+ ghostscript AGPLv3+ ghostscript-x11 AGPLv3+ giflib MIT gimp GPLv3+ and GPLv3 gimp-devel LGPLv3+ gimp-devel-tools LGPLv3+ gimp-libs LGPLv3+ git GPLv2 git-all GPLv2 git-clang-format NCSA git-core GPLv2 git-core-doc GPLv2 git-credential-libsecret GPLv2 git-daemon GPLv2 git-email GPLv2 git-gui GPLv2 git-instaweb GPLv2 git-lfs MIT git-subtree GPLv2 git-svn GPLv2 gitk GPLv2 gitweb GPLv2 gjs MIT and (MPLv1.1 or GPLv2+ or LGPLv2+) gl-manpages MIT and Open Publication glade-libs GPLv2+ and LGPLv2+ glassfish-annotation-api CDDL-1.1 or GPLv2 with exceptions glassfish-el CDDL-1.1 or GPLv2 with exceptions glassfish-el-api (CDDL or GPLv2 with exceptions) and ASL 2.0 glassfish-fastinfoset ASL 2.0 glassfish-jaxb-api CDDL-1.1 and GPLv2 with exceptions glassfish-jaxb-core CDDL-1.1 and GPLv2 with exceptions glassfish-jaxb-runtime CDDL-1.1 and GPLv2 with exceptions glassfish-jaxb-txw2 CDDL-1.1 and GPLv2 with exceptions glassfish-jsp (CDDL-1.1 or GPLv2 with exceptions) and ASL 2.0 glassfish-jsp-api (CDDL-1.1 or GPLv2 with exceptions) and ASL 2.0 glassfish-servlet-api (CDDL or GPLv2 with exceptions) and ASL 2.0 glibc-utils LGPLv2+ and LGPLv2+ with exceptions and GPLv2+ and GPLv2+ with exceptions and BSD and Inner-Net and ISC and Public Domain and GFDL glibmm24 LGPLv2+ glusterfs-api GPLv2 or LGPLv3+ glusterfs-cli GPLv2 or LGPLv3+ glx-utils MIT gnome-abrt GPLv2+ gnome-autoar LGPLv2+ gnome-backgrounds GPLv2 gnome-backgrounds-extras GPLv2 gnome-bluetooth GPLv2+ gnome-bluetooth-libs LGPLv2+ gnome-boxes LGPLv2+ gnome-calculator GPLv3+ gnome-characters BSD and GPLv2+ gnome-classic-session GPLv2+ gnome-color-manager GPLv2+ gnome-control-center GPLv2+ and CC-BY-SA gnome-control-center-filesystem GPLv2+ and CC-BY-SA gnome-desktop3 GPLv2+ and LGPLv2+ gnome-desktop3-devel LGPLv2+ gnome-disk-utility GPLv2+ gnome-font-viewer GPLv2+ gnome-getting-started-docs CC-BY-SA gnome-getting-started-docs-cs CC-BY-SA gnome-getting-started-docs-de CC-BY-SA gnome-getting-started-docs-es 
CC-BY-SA gnome-getting-started-docs-fr CC-BY-SA gnome-getting-started-docs-gl CC-BY-SA gnome-getting-started-docs-hu CC-BY-SA gnome-getting-started-docs-it CC-BY-SA gnome-getting-started-docs-pl CC-BY-SA gnome-getting-started-docs-pt_BR CC-BY-SA gnome-getting-started-docs-ru CC-BY-SA gnome-initial-setup GPLv2+ gnome-keyring GPLv2+ and LGPLv2+ gnome-keyring-pam LGPLv2+ gnome-logs GPLv3+ gnome-menus LGPLv2+ gnome-online-accounts LGPLv2+ gnome-online-accounts-devel LGPLv2+ gnome-online-miners GPLv2+ and LGPLv2+ and MIT gnome-photos GPLv3+ and LGPLv2+ gnome-photos-tests GPLv3+ and LGPLv2+ gnome-remote-desktop GPLv2+ gnome-screenshot GPLv2+ gnome-session GPLv2+ gnome-session-kiosk-session GPLv2+ gnome-session-wayland-session GPLv2+ gnome-session-xsession GPLv2+ gnome-settings-daemon GPLv2+ gnome-shell GPLv2+ gnome-shell-extension-apps-menu GPLv2+ gnome-shell-extension-auto-move-windows GPLv2+ gnome-shell-extension-classification-banner GPLv2+ gnome-shell-extension-common GPLv2+ gnome-shell-extension-custom-menu GPLv2+ gnome-shell-extension-dash-to-dock GPLv2+ gnome-shell-extension-dash-to-panel GPLv2+ gnome-shell-extension-desktop-icons GPLv3+ gnome-shell-extension-disable-screenshield GPLv2+ gnome-shell-extension-drive-menu GPLv2+ gnome-shell-extension-gesture-inhibitor GPLv2+ gnome-shell-extension-heads-up-display GPLv3+ gnome-shell-extension-horizontal-workspaces GPLv3+ gnome-shell-extension-launch-new-instance GPLv2+ gnome-shell-extension-native-window-placement GPLv2+ gnome-shell-extension-no-hot-corner GPLv2+ gnome-shell-extension-panel-favorites GPLv2+ gnome-shell-extension-places-menu GPLv2+ gnome-shell-extension-screenshot-window-sizer GPLv2+ gnome-shell-extension-systemMonitor GPLv2+ gnome-shell-extension-top-icons GPLv2+ gnome-shell-extension-updates-dialog GPLv2+ gnome-shell-extension-user-theme GPLv2+ gnome-shell-extension-window-grouper GPLv2+ gnome-shell-extension-window-list GPLv2+ gnome-shell-extension-windowsNavigator GPLv2+ gnome-shell-extension-workspace-indicator GPLv2+ gnome-software GPLv2+ gnome-system-monitor GPLv2+ gnome-terminal GPLv3+ and GFDL and LGPLv2+ gnome-terminal-nautilus GPLv3+ and GFDL and LGPLv2+ gnome-themes-standard LGPLv2+ gnome-tweaks GPLv3 and CC0 gnome-user-docs CC-BY-SA gnome-video-effects GPLv2 gnu-free-fonts-common GPLv3+ with exceptions gnu-free-mono-fonts GPLv3+ with exceptions gnu-free-sans-fonts GPLv3+ with exceptions gnu-free-serif-fonts GPLv3+ with exceptions gnuplot gnuplot and MIT gnuplot-common gnuplot and MIT gnutls-c++ GPLv3+ and LGPLv2+ gnutls-dane GPLv3+ and LGPLv2+ gnutls-devel GPLv3+ and LGPLv2+ gnutls-utils GPLv3+ go-srpm-macros GPLv3+ go-toolset BSD and Public Domain gobject-introspection-devel GPLv2+, LGPLv2+, MIT golang BSD and Public Domain golang-bin BSD and Public Domain golang-docs BSD and Public Domain golang-misc BSD and Public Domain golang-src BSD and Public Domain golang-tests BSD and Public Domain gom LGPLv2+ google-crosextra-caladea-fonts ASL 2.0 google-crosextra-carlito-fonts OFL google-droid-kufi-fonts ASL 2.0 google-droid-sans-fonts ASL 2.0 google-droid-sans-mono-fonts ASL 2.0 google-droid-serif-fonts ASL 2.0 google-gson ASL 2.0 google-guice ASL 2.0 google-noto-cjk-fonts-common OFL google-noto-emoji-color-fonts OFL and ASL 2.0 google-noto-emoji-fonts OFL and ASL 2.0 google-noto-fonts-common OFL google-noto-kufi-arabic-fonts OFL google-noto-mono-fonts OFL google-noto-naskh-arabic-fonts OFL google-noto-naskh-arabic-ui-fonts OFL google-noto-nastaliq-urdu-fonts OFL google-noto-sans-armenian-fonts OFL 
google-noto-sans-avestan-fonts OFL google-noto-sans-balinese-fonts OFL google-noto-sans-bamum-fonts OFL google-noto-sans-batak-fonts OFL google-noto-sans-bengali-fonts OFL google-noto-sans-bengali-ui-fonts OFL google-noto-sans-brahmi-fonts OFL google-noto-sans-buginese-fonts OFL google-noto-sans-buhid-fonts OFL google-noto-sans-canadian-aboriginal-fonts OFL google-noto-sans-carian-fonts OFL google-noto-sans-cham-fonts OFL google-noto-sans-cherokee-fonts OFL google-noto-sans-cjk-ttc-fonts OFL google-noto-sans-coptic-fonts OFL google-noto-sans-cuneiform-fonts OFL google-noto-sans-cypriot-fonts OFL google-noto-sans-deseret-fonts OFL google-noto-sans-devanagari-fonts OFL google-noto-sans-devanagari-ui-fonts OFL google-noto-sans-egyptian-hieroglyphs-fonts OFL google-noto-sans-ethiopic-fonts OFL google-noto-sans-fonts OFL google-noto-sans-georgian-fonts OFL google-noto-sans-glagolitic-fonts OFL google-noto-sans-gothic-fonts OFL google-noto-sans-gujarati-fonts OFL google-noto-sans-gujarati-ui-fonts OFL google-noto-sans-gurmukhi-fonts OFL google-noto-sans-gurmukhi-ui-fonts OFL google-noto-sans-hanunoo-fonts OFL google-noto-sans-hebrew-fonts OFL google-noto-sans-imperial-aramaic-fonts OFL google-noto-sans-inscriptional-pahlavi-fonts OFL google-noto-sans-inscriptional-parthian-fonts OFL google-noto-sans-javanese-fonts OFL google-noto-sans-kaithi-fonts OFL google-noto-sans-kannada-fonts OFL google-noto-sans-kannada-ui-fonts OFL google-noto-sans-kayah-li-fonts OFL google-noto-sans-kharoshthi-fonts OFL google-noto-sans-khmer-fonts OFL google-noto-sans-khmer-ui-fonts OFL google-noto-sans-lao-fonts OFL google-noto-sans-lao-ui-fonts OFL google-noto-sans-lepcha-fonts OFL google-noto-sans-limbu-fonts OFL google-noto-sans-linear-b-fonts OFL google-noto-sans-lisu-fonts OFL google-noto-sans-lycian-fonts OFL google-noto-sans-lydian-fonts OFL google-noto-sans-malayalam-fonts OFL google-noto-sans-malayalam-ui-fonts OFL google-noto-sans-mandaic-fonts OFL google-noto-sans-meetei-mayek-fonts OFL google-noto-sans-mongolian-fonts OFL google-noto-sans-myanmar-fonts OFL google-noto-sans-myanmar-ui-fonts OFL google-noto-sans-new-tai-lue-fonts OFL google-noto-sans-nko-fonts OFL google-noto-sans-ogham-fonts OFL google-noto-sans-ol-chiki-fonts OFL google-noto-sans-old-italic-fonts OFL google-noto-sans-old-persian-fonts OFL google-noto-sans-old-south-arabian-fonts OFL google-noto-sans-old-turkic-fonts OFL google-noto-sans-oriya-fonts OFL google-noto-sans-oriya-ui-fonts OFL google-noto-sans-osmanya-fonts OFL google-noto-sans-phags-pa-fonts OFL google-noto-sans-phoenician-fonts OFL google-noto-sans-rejang-fonts OFL google-noto-sans-runic-fonts OFL google-noto-sans-samaritan-fonts OFL google-noto-sans-saurashtra-fonts OFL google-noto-sans-shavian-fonts OFL google-noto-sans-sinhala-fonts OFL google-noto-sans-sundanese-fonts OFL google-noto-sans-syloti-nagri-fonts OFL google-noto-sans-symbols-fonts OFL google-noto-sans-syriac-eastern-fonts OFL google-noto-sans-syriac-estrangela-fonts OFL google-noto-sans-syriac-western-fonts OFL google-noto-sans-tagalog-fonts OFL google-noto-sans-tagbanwa-fonts OFL google-noto-sans-tai-le-fonts OFL google-noto-sans-tai-tham-fonts OFL google-noto-sans-tai-viet-fonts OFL google-noto-sans-tamil-fonts OFL google-noto-sans-tamil-ui-fonts OFL google-noto-sans-telugu-fonts OFL google-noto-sans-telugu-ui-fonts OFL google-noto-sans-thaana-fonts OFL google-noto-sans-thai-fonts OFL google-noto-sans-thai-ui-fonts OFL google-noto-sans-tibetan-fonts OFL google-noto-sans-tifinagh-fonts OFL 
google-noto-sans-ugaritic-fonts OFL google-noto-sans-ui-fonts OFL google-noto-sans-vai-fonts OFL google-noto-sans-yi-fonts OFL google-noto-serif-armenian-fonts OFL google-noto-serif-bengali-fonts OFL google-noto-serif-cjk-ttc-fonts OFL google-noto-serif-devanagari-fonts OFL google-noto-serif-fonts OFL google-noto-serif-georgian-fonts OFL google-noto-serif-gujarati-fonts OFL google-noto-serif-kannada-fonts OFL google-noto-serif-khmer-fonts OFL google-noto-serif-lao-fonts OFL google-noto-serif-malayalam-fonts OFL google-noto-serif-tamil-fonts OFL google-noto-serif-telugu-fonts OFL google-noto-serif-thai-fonts OFL gpm GPLv2 and GPLv2+ with exceptions and GPLv3+ and Verbatim and Copyright only gpm-devel GPLv2 and GPLv2+ with exceptions and GPLv3+ and Verbatim and Copyright only gpm-libs GPLv2 and GPLv2+ with exceptions and GPLv3+ and Verbatim and Copyright only grafana AGPLv3 grafana-pcp ASL 2.0 grafana-selinux AGPLv3 graphite2 (LGPLv2+ or GPLv2+ or MPL) and (Netscape or GPLv2+ or LGPLv2+) graphite2-devel (LGPLv2+ or GPLv2+ or MPL) and (Netscape or GPLv2+ or LGPLv2+) graphviz EPL-1.0 graphviz-ruby EPL-1.0 greenboot LGPLv2+ greenboot-default-health-checks LGPLv2+ grilo LGPLv2+ grilo-plugins LGPLv2+ gsettings-desktop-schemas LGPLv2+ gsettings-desktop-schemas-devel LGPLv2+ gsl GPLv3 and GFDL and BSD gsl-devel GPLv3 and GFDL and BSD gsm MIT gsound LGPLv2 gspell LGPLv2+ gssdp LGPLv2+ gssntlmssp LGPLv3+ gstreamer1 LGPLv2+ gstreamer1-devel LGPLv2+ gstreamer1-plugins-bad-free LGPLv2+ and LGPLv2 gstreamer1-plugins-base LGPLv2+ gstreamer1-plugins-base-devel LGPLv2+ gstreamer1-plugins-good LGPLv2+ gstreamer1-plugins-good-gtk LGPLv2+ gstreamer1-plugins-ugly-free LGPLv2+ and LGPLv2 gtk-update-icon-cache LGPLv2+ gtk-vnc2 LGPLv2+ gtk2 LGPLv2+ gtk2-devel LGPLv2+ gtk2-devel-docs LGPLv2+ gtk2-immodule-xim LGPLv2+ gtk2-immodules LGPLv2+ gtk3 LGPLv2+ gtk3-devel LGPLv2+ gtk3-immodule-xim LGPLv2+ gtkmm24 LGPLv2+ gtkmm30 LGPLv2+ gtksourceview3 LGPLv2+ gtkspell GPLv2+ gtkspell3 GPLv2+ guava ASL 2.0 and CC0 guava20 ASL 2.0 and CC0 gubbi-fonts GPLv3+ with exceptions guile LGPLv3+ gupnp LGPLv2+ gupnp-av LGPLv2+ gupnp-dlna LGPLv2+ gupnp-igd LGPLv2+ gutenprint GPLv2+ gutenprint-cups GPLv2+ gutenprint-doc GPLv2+ gutenprint-libs GPLv2+ gutenprint-libs-ui GPLv2+ gutenprint-plugin GPLv2+ gvfs GPLv3 and LGPLv2+ and BSD and MPLv2.0 gvfs-afc GPLv3 and LGPLv2+ and BSD and MPLv2.0 gvfs-afp GPLv3 and LGPLv2+ and BSD and MPLv2.0 gvfs-archive GPLv3 and LGPLv2+ and BSD and MPLv2.0 gvfs-client GPLv3 and LGPLv2+ and BSD and MPLv2.0 gvfs-devel GPLv3 and LGPLv2+ and BSD and MPLv2.0 gvfs-fuse GPLv3 and LGPLv2+ and BSD and MPLv2.0 gvfs-goa GPLv3 and LGPLv2+ and BSD and MPLv2.0 gvfs-gphoto2 GPLv3 and LGPLv2+ and BSD and MPLv2.0 gvfs-mtp GPLv3 and LGPLv2+ and BSD and MPLv2.0 gvfs-smb GPLv3 and LGPLv2+ and BSD and MPLv2.0 gvnc LGPLv2+ hamcrest BSD hamcrest-core BSD haproxy GPLv2+ harfbuzz MIT harfbuzz-devel MIT harfbuzz-icu MIT hawtjni-runtime ASL 2.0 and EPL and BSD HdrHistogram BSD and CC0 HdrHistogram-javadoc BSD and CC0 HdrHistogram_c BSD and Public Domain hesiod MIT hexchat GPLv2+ hexchat-devel GPLv2+ hexedit GPLv2+ hicolor-icon-theme GPLv2+ highlight GPLv3 highlight-gui GPLv3 hivex LGPLv2 hivex-devel LGPLv2 hostapd BSD hplip GPLv2+ and MIT and BSD and IJG and Public Domain and GPLv2+ with exceptions and ISC hplip-common GPLv2+ hplip-gui BSD hplip-libs GPLv2+ and MIT hspell AGPLv3 http-parser MIT httpcomponents-client ASL 2.0 httpcomponents-core ASL 2.0 httpd ASL 2.0 httpd-devel ASL 2.0 httpd-filesystem ASL 2.0 httpd-manual ASL 2.0 
httpd-tools ASL 2.0 hunspell LGPLv2+ or GPLv2+ or MPLv1.1 hunspell-af LGPLv2+ hunspell-ak LGPLv3 hunspell-am GPL+ hunspell-ar GPLv2 or LGPLv2 or MPLv1.1 hunspell-as GPLv2+ or LGPLv2+ or MPLv1.1 hunspell-ast GPLv3+ hunspell-az GPLv2+ hunspell-be GPL+ and LGPLv2+ hunspell-ber GPL+ or LGPLv2+ or MPLv1.1 hunspell-bg GPLv2+ or LGPLv2+ or MPLv1.1 hunspell-bn GPLv2+ hunspell-br LGPLv2+ hunspell-ca GPLv2+ hunspell-cop GPLv3+ hunspell-cs GPL+ hunspell-csb GPLv2+ hunspell-cv GPLv3+ or LGPLv3+ or MPLv1.1 hunspell-cy GPL+ hunspell-da GPLv2+ hunspell-de GPLv2 or GPLv3 hunspell-devel LGPLv2+ or GPLv2+ or MPLv1.1 hunspell-dsb GPLv2+ hunspell-el GPLv2+ or LGPLv2+ or MPLv1.1 hunspell-en LGPLv2+ and LGPLv2 and BSD hunspell-en-GB LGPLv2+ and LGPLv2 and BSD hunspell-en-US LGPLv2+ and LGPLv2 and BSD hunspell-eo LGPLv3 hunspell-es LGPLv3+ or GPLv3+ or MPLv1.1 hunspell-es-AR LGPLv3+ or GPLv3+ or MPLv1.1 hunspell-es-BO LGPLv3+ or GPLv3+ or MPLv1.1 hunspell-es-CL LGPLv3+ or GPLv3+ or MPLv1.1 hunspell-es-CO LGPLv3+ or GPLv3+ or MPLv1.1 hunspell-es-CR LGPLv3+ or GPLv3+ or MPLv1.1 hunspell-es-CU LGPLv3+ or GPLv3+ or MPLv1.1 hunspell-es-DO LGPLv3+ or GPLv3+ or MPLv1.1 hunspell-es-EC LGPLv3+ or GPLv3+ or MPLv1.1 hunspell-es-ES LGPLv3+ or GPLv3+ or MPLv1.1 hunspell-es-GT LGPLv3+ or GPLv3+ or MPLv1.1 hunspell-es-HN LGPLv3+ or GPLv3+ or MPLv1.1 hunspell-es-MX LGPLv3+ or GPLv3+ or MPLv1.1 hunspell-es-NI LGPLv3+ or GPLv3+ or MPLv1.1 hunspell-es-PA LGPLv3+ or GPLv3+ or MPLv1.1 hunspell-es-PE LGPLv3+ or GPLv3+ or MPLv1.1 hunspell-es-PR LGPLv3+ or GPLv3+ or MPLv1.1 hunspell-es-PY LGPLv3+ or GPLv3+ or MPLv1.1 hunspell-es-SV LGPLv3+ or GPLv3+ or MPLv1.1 hunspell-es-US LGPLv3+ or GPLv3+ or MPLv1.1 hunspell-es-UY LGPLv3+ or GPLv3+ or MPLv1.1 hunspell-es-VE LGPLv3+ or GPLv3+ or MPLv1.1 hunspell-et LGPLv2+ and LPPL hunspell-eu GPLv2 hunspell-fa GPLv2+ hunspell-fj LGPLv2+ or GPLv2+ or MPLv1.1 hunspell-fo GPLv2+ hunspell-fr MPLv2.0 hunspell-fur GPLv2+ hunspell-fy LGPLv2+ hunspell-ga GPLv2+ hunspell-gd GPLv2+ and GPLv3+ hunspell-gl GPLv2 hunspell-grc GPL+ or LGPLv2+ hunspell-gu GPL+ hunspell-gv GPL+ hunspell-haw GPLv2+ hunspell-he AGPLv3 hunspell-hi GPLv2+ hunspell-hil GPLv2+ hunspell-hr LGPLv2+ or SISSL hunspell-hsb GPLv2+ hunspell-ht GPLv3+ hunspell-hu LGPLv2+ or GPLv2+ or MPLv1.1 hunspell-hy GPLv2+ hunspell-ia LGPLv2+ hunspell-id GPLv2 hunspell-is GPLv2+ hunspell-it GPLv3+ hunspell-kk GPLv2+ or LGPLv2+ or MPLv1.1 hunspell-km GPLv3 hunspell-kn GPLv2+ or LGPLv2+ or MPLv1.1 hunspell-ko MPLv1.1 or GPLv2 or LGPLv2 hunspell-ku GPLv3 or LGPLv3 or MPLv1.1 hunspell-ky GPLv2+ hunspell-la GPLv2+ hunspell-lb EUPL 1.1 hunspell-ln GPLv2+ hunspell-lt BSD hunspell-lv LGPLv2+ hunspell-mai GPLv2+ or LGPLv2+ or MPLv1.1 hunspell-mg GPLv2+ hunspell-mi GPLv3+ hunspell-mk GPL+ hunspell-ml GPLv3+ hunspell-mn GPLv2 hunspell-mos LGPLv3 hunspell-mr LGPLv2+ hunspell-ms GFDL and GPL+ hunspell-mt LGPLv2+ hunspell-nb GPL+ hunspell-nds GPLv2+ hunspell-ne LGPLv2 hunspell-nl BSD or CC-BY hunspell-nn GPL+ hunspell-nr LGPLv2+ hunspell-nso LGPLv2+ hunspell-ny GPLv3+ hunspell-oc GPLv3+ hunspell-om GPLv3+ hunspell-or GPLv2+ hunspell-pa GPLv2+ hunspell-pl LGPLv2+ or GPL+ or MPLv1.1 or ASL 2.0 or CC-BY-SA hunspell-pt ((LGPLv3 or MPL) and LGPLv2) and (GPLv2 or LGPLv2 or MPLv1.1) hunspell-qu AGPLv3 hunspell-quh GPLv2+ hunspell-ro GPLv2+ or LGPLv2+ or MPLv1.1 hunspell-ru BSD hunspell-rw GPLv2+ hunspell-sc AGPLv3+ and GPLv2 hunspell-se GPLv3 hunspell-shs GPLv2+ hunspell-si GPLv2+ hunspell-sk LGPLv2 or GPLv2 or MPLv1.1 hunspell-sl GPL+ or LGPLv2+ hunspell-smj GPLv3 hunspell-so 
GPLv2+ hunspell-sq GPLv2+ hunspell-sr LGPLv3 hunspell-ss LGPLv2+ hunspell-st LGPLv2+ hunspell-sv LGPLv3 hunspell-sw LGPLv2+ hunspell-ta GPLv2+ hunspell-te GPL+ hunspell-tet GPLv2+ hunspell-th LGPLv2+ hunspell-ti GPL+ hunspell-tk GPLv2+ hunspell-tl GPLv2+ hunspell-tn GPLv3+ hunspell-tpi GPLv3+ hunspell-ts LGPLv2+ hunspell-uk GPLv2+ or LGPLv2+ or MPLv1.1 hunspell-ur LGPLv2+ hunspell-uz GPLv2+ hunspell-ve LGPLv2+ hunspell-vi GPLv2 hunspell-wa LGPLv2+ hunspell-xh LGPLv2+ hunspell-yi LGPLv2+ or GPLv2+ or MPLv1.1 hunspell-zu GPLv3+ hwloc-gui BSD hwloc-plugins BSD hyperv-daemons GPLv2 hyperv-daemons-license GPLv2 hyperv-tools GPLv2 hypervfcopyd GPLv2 hypervkvpd GPLv2 hypervvssd GPLv2 hyphen GPLv2 or LGPLv2+ or MPLv1.1 hyphen-af LGPLv2+ hyphen-as LGPLv3+ hyphen-be GPL+ and LGPLv2+ hyphen-bg GPLv2+ or LGPLv2+ or MPLv1.1 hyphen-bn LGPLv3+ hyphen-ca GPLv3 hyphen-cs GPL+ hyphen-cy LPPL hyphen-da LGPLv2+ hyphen-de LGPLv2+ hyphen-el LGPLv2+ hyphen-en GPLv2 or LGPLv2+ or MPLv1.1 hyphen-es LGPLv3+ or GPLv3+ or MPLv1.1 hyphen-et LGPLv2+ and LPPL hyphen-eu LPPL hyphen-fa LPPL hyphen-fo GPL+ hyphen-fr LGPLv2+ hyphen-ga GPL+ hyphen-gl GPLv3 hyphen-grc LPPL hyphen-gu LGPLv3+ hyphen-hi LGPLv3+ hyphen-hr LGPLv2+ or SISSL hyphen-hsb LPPL hyphen-hu GPLv2 hyphen-ia LPPL hyphen-id GPL+ hyphen-is LGPLv2+ or SISSL hyphen-it LGPLv2+ hyphen-kn LGPLv3+ hyphen-ku GPLv2+ or LGPLv2+ hyphen-lt LPPL hyphen-lv LGPLv2+ hyphen-mi GPLv3+ hyphen-ml LGPLv3+ hyphen-mn LPPL hyphen-mr LGPLv3+ hyphen-nb GPL+ hyphen-nl GPLv2 hyphen-nn GPL+ hyphen-or LGPLv3+ hyphen-pa LGPLv3+ hyphen-pl LGPLv2+ hyphen-pt GPL+ hyphen-ro GPLv2+ hyphen-ru LGPLv2+ hyphen-sa LPPL hyphen-sk GPL+ hyphen-sl LGPLv2+ hyphen-sr LGPLv3 hyphen-sv LGPLv2+ or GPLv2+ hyphen-ta LGPLv3+ hyphen-te LGPLv3+ hyphen-tk Public Domain hyphen-uk GPLv2+ hyphen-zu LGPLv2+ i2c-tools GPLv2+ i2c-tools-perl GPLv2+ ibus LGPLv2+ ibus-gtk2 LGPLv2+ ibus-gtk3 LGPLv2+ ibus-hangul GPLv2+ ibus-kkc GPLv2+ ibus-libpinyin GPLv2+ ibus-libs LGPLv2+ ibus-libzhuyin GPLv2+ ibus-m17n GPLv2+ ibus-sayura GPLv2+ ibus-setup LGPLv2+ ibus-table LGPLv2+ ibus-table-chinese GPLv3+ ibus-table-chinese-array Freely redistributable without restriction ibus-table-chinese-cangjie Freely redistributable without restriction ibus-table-chinese-cantonese GPLv2 and GPLv3+ and Freely redistributable without restriction ibus-table-chinese-easy GPLv2 ibus-table-chinese-erbi GPLv2+ ibus-table-chinese-quick Freely redistributable without restriction ibus-table-chinese-scj GPLv3+ ibus-table-chinese-stroke5 GPLv3+ ibus-table-chinese-wu GPLv2+ ibus-table-chinese-wubi-haifeng BSD ibus-table-chinese-wubi-jidian Freely redistributable without restriction ibus-table-chinese-yong GPLv3 ibus-typing-booster GPLv3+ ibus-wayland LGPLv2+ icedax GPLv2 icedtea-web LGPLv2+ and GPLv2 with exceptions icedtea-web-javadoc LGPLv2+ and GPLv2 with exceptions icoutils GPLv3+ icu4j Unicode and MIT and BSD and Public Domain idm-jss MPLv1.1 or GPLv2+ or LGPLv2+ idm-jss-javadoc MPLv1.1 or GPLv2+ or LGPLv2+ idm-ldapjdk MPLv1.1 or GPLv2+ or LGPLv2+ idm-ldapjdk-javadoc MPLv1.1 or GPLv2+ or LGPLv2+ idm-pki-acme GPLv2 and LGPLv2 idm-pki-base GPLv2 and LGPLv2 idm-pki-base-java GPLv2 and LGPLv2 idm-pki-ca GPLv2 and LGPLv2 idm-pki-kra GPLv2 and LGPLv2 idm-pki-server GPLv2 and LGPLv2 idm-pki-symkey GPLv2 and LGPLv2 idm-pki-tools GPLv2 and LGPLv2 idm-tomcatjss LGPLv2+ idn2 GPLv3+ iio-sensor-proxy GPLv3+ ilmbase BSD initial-setup GPLv2+ initial-setup-gui GPLv2+ inkscape GPLv2+ and CC-BY inkscape-docs GPLv2+ and CC-BY inkscape-view GPLv2+ and CC-BY inkscape1 GPLv2+ 
and CC-BY inkscape1-docs GPLv2+ and CC-BY inkscape1-view GPLv2+ and CC-BY insights-client GPLv2+ intel-gpu-tools MIT intltool GPLv2 with exceptions iowatcher GPLv2+ ipa-client GPLv3+ ipa-client-common GPLv3+ ipa-client-epn GPLv3+ ipa-client-samba GPLv3+ ipa-common GPLv3+ ipa-healthcheck GPLv3 ipa-healthcheck-core GPLv3 ipa-python-compat GPLv3+ ipa-selinux GPLv3+ ipa-server GPLv3+ ipa-server-common GPLv3+ ipa-server-dns GPLv3+ ipa-server-trust-ad GPLv3+ iperf3 BSD ipmievd BSD ipmitool BSD ipvsadm GPLv2+ ipxe-bootimgs-aarch64 GPLv2 with additional permissions and BSD ipxe-bootimgs-x86 GPLv2 with additional permissions and BSD ipxe-roms GPLv2 with additional permissions and BSD ipxe-roms-qemu GPLv2 with additional permissions and BSD irssi GPLv2+ isl MIT iso-codes LGPLv2+ iso-codes-devel LGPLv2+ isomd5sum GPLv2+ istack-commons-runtime CDDL-1.1 and GPLv2 with exceptions istack-commons-tools CDDL-1.1 and GPLv2 with exceptions itstool GPLv3+ jackson-annotations Apache-2.0 jackson-bom Apache-2.0 jackson-core Apache-2.0 jackson-databind Apache-2.0 and LGPL-2.0-or-later jackson-jaxrs-json-provider Apache-2.0 jackson-jaxrs-providers Apache-2.0 jackson-module-jaxb-annotations Apache-2.0 jackson-modules-base Apache-2.0 jackson-parent Apache-2.0 jaf BSD jaf-javadoc BSD jakarta-activation2 EPL-2.0 or BSD or GPLv2 with exceptions jakarta-annotations EPL-2.0 or GPLv2 with exceptions jakarta-commons-httpclient ASL 2.0 and (ASL 2.0 or LGPLv2+) jansi ASL 2.0 jansi-native ASL 2.0 jansson-devel MIT jasper-libs JasPer java-1.8.0-openjdk ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib java-1.8.0-openjdk-accessibility ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib java-1.8.0-openjdk-demo ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib java-1.8.0-openjdk-devel ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib java-1.8.0-openjdk-headless ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib java-1.8.0-openjdk-javadoc ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib java-1.8.0-openjdk-javadoc-zip ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib java-1.8.0-openjdk-src ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib java-11-openjdk ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-11-openjdk-demo ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA 
java-11-openjdk-devel ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-11-openjdk-headless ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-11-openjdk-javadoc ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-11-openjdk-javadoc-zip ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-11-openjdk-jmods ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-11-openjdk-src ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-11-openjdk-static-libs ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-17-openjdk ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-17-openjdk-demo ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-17-openjdk-devel ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-17-openjdk-headless ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-17-openjdk-javadoc ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-17-openjdk-javadoc-zip ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-17-openjdk-jmods ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-17-openjdk-src ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-17-openjdk-static-libs ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-21-openjdk ASL 1.1 and 
ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-21-openjdk-demo ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-21-openjdk-devel ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-21-openjdk-headless ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-21-openjdk-javadoc ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-21-openjdk-javadoc-zip ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-21-openjdk-jmods ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-21-openjdk-src ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-21-openjdk-static-libs ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib and ISC and FTL and RSA java-atk-wrapper LGPLv2+ javapackages-filesystem BSD javapackages-tools BSD javassist MPLv1.1 or LGPLv2+ or ASL 2.0 javassist-javadoc MPLv1.1 or LGPLv2+ or ASL 2.0 jaxb-api4 BSD jaxb-codemodel BSD jaxb-core BSD jaxb-dtd-parser BSD jaxb-istack-commons-runtime BSD jaxb-istack-commons-tools BSD jaxb-relaxng-datatype BSD jaxb-rngom MIT and BSD jaxb-runtime BSD jaxb-txw2 BSD jaxb-xjc BSD and ASL 2.0 jaxb-xsom BSD jbig2dec-libs GPLv2 jbigkit-libs GPLv2+ jboss-annotations-1.2-api CDDL or GPLv2 with exceptions jboss-interceptors-1.2-api CDDL or GPLv2 with exceptions jboss-jaxrs-2.0-api (CDDL or GPLv2 with exceptions) and ASL 2.0 jboss-logging ASL 2.0 jboss-logging-tools ASL 2.0 and LGPLv2+ jcl-over-slf4j MIT and ASL 2.0 jctools ASL 2.0 jdeparser ASL 2.0 jetty-continuation ASL 2.0 or EPL-1.0 jetty-http ASL 2.0 or EPL-1.0 jetty-io ASL 2.0 or EPL-1.0 jetty-security ASL 2.0 or EPL-1.0 jetty-server ASL 2.0 or EPL-1.0 jetty-servlet ASL 2.0 or EPL-1.0 jetty-util (ASL 2.0 or EPL-1.0) and MIT jigawatts GPLv2 with exceptions jigawatts-javadoc GPLv2 with exceptions jline BSD jmc UPL jmc-core UPL jmc-core-javadoc UPL jna (LGPLv2 or ASL 2.0) and ASL 2.0 jolokia-jvm-agent ASL 2.0 jomolhari-fonts OFL jose ASL 2.0 jq MIT and ASL 2.0 and CC-BY and GPLv3 js-d3-flame-graph ASL 2.0 jsch BSD json-c-devel MIT json-glib-devel LGPLv2+ jsoup MIT jsr-305 BSD and CC-BY Judy LGPLv2+ julietaula-montserrat-fonts OFL junit EPL-1.0 junit5 EPL-2.0 jzlib BSD kacst-art-fonts GPLv2 kacst-book-fonts GPLv2 kacst-decorative-fonts GPLv2 kacst-digital-fonts GPLv2 kacst-farsi-fonts GPLv2 kacst-fonts-common GPLv2 
kacst-letter-fonts GPLv2 kacst-naskh-fonts GPLv2 kacst-office-fonts GPLv2 kacst-one-fonts GPLv2 kacst-pen-fonts GPLv2 kacst-poster-fonts GPLv2 kacst-qurn-fonts GPLv2 kacst-screen-fonts GPLv2 kacst-title-fonts GPLv2 kacst-titlel-fonts GPLv2 kdump-anaconda-addon GPLv2 keepalived GPLv2+ kernel-rpm-macros GPL+ kernelshark GPLv2 and LGPLv2 keybinder3 MIT keycloak-httpd-client-install GPLv3 khmeros-base-fonts LGPLv2+ khmeros-battambang-fonts LGPLv2+ khmeros-bokor-fonts LGPLv2+ khmeros-fonts-common LGPLv2+ khmeros-handwritten-fonts LGPLv2+ khmeros-metal-chrieng-fonts LGPLv2+ khmeros-muol-fonts LGPLv2+ khmeros-siemreap-fonts LGPLv2+ koan GPLv2+ ksh EPL-1.0 kurdit-unikurd-web-fonts GPLv3 kyotocabinet-libs GPLv3 lame-libs GPLv2+ langpacks-af GPLv2+ langpacks-am GPLv2+ langpacks-ar GPLv2+ langpacks-as GPLv2+ langpacks-ast GPLv2+ langpacks-be GPLv2+ langpacks-bg GPLv2+ langpacks-bn GPLv2+ langpacks-br GPLv2+ langpacks-bs GPLv2+ langpacks-ca GPLv2+ langpacks-cs GPLv2+ langpacks-cy GPLv2+ langpacks-da GPLv2+ langpacks-de GPLv2+ langpacks-el GPLv2+ langpacks-en GPLv2+ langpacks-en_GB GPLv2+ langpacks-es GPLv2+ langpacks-et GPLv2+ langpacks-eu GPLv2+ langpacks-fa GPLv2+ langpacks-fi GPLv2+ langpacks-fr GPLv2+ langpacks-ga GPLv2+ langpacks-gl GPLv2+ langpacks-gu GPLv2+ langpacks-he GPLv2+ langpacks-hi GPLv2+ langpacks-hr GPLv2+ langpacks-hu GPLv2+ langpacks-ia GPLv2+ langpacks-id GPLv2+ langpacks-is GPLv2+ langpacks-it GPLv2+ langpacks-ja GPLv2+ langpacks-kk GPLv2+ langpacks-kn GPLv2+ langpacks-ko GPLv2+ langpacks-lt GPLv2+ langpacks-lv GPLv2+ langpacks-mai GPLv2+ langpacks-mk GPLv2+ langpacks-ml GPLv2+ langpacks-mr GPLv2+ langpacks-ms GPLv2+ langpacks-nb GPLv2+ langpacks-ne GPLv2+ langpacks-nl GPLv2+ langpacks-nn GPLv2+ langpacks-nr GPLv2+ langpacks-nso GPLv2+ langpacks-or GPLv2+ langpacks-pa GPLv2+ langpacks-pl GPLv2+ langpacks-pt GPLv2+ langpacks-pt_BR GPLv2+ langpacks-ro GPLv2+ langpacks-ru GPLv2+ langpacks-si GPLv2+ langpacks-sk GPLv2+ langpacks-sl GPLv2+ langpacks-sq GPLv2+ langpacks-sr GPLv2+ langpacks-ss GPLv2+ langpacks-sv GPLv2+ langpacks-ta GPLv2+ langpacks-te GPLv2+ langpacks-th GPLv2+ langpacks-tn GPLv2+ langpacks-tr GPLv2+ langpacks-ts GPLv2+ langpacks-uk GPLv2+ langpacks-ur GPLv2+ langpacks-ve GPLv2+ langpacks-vi GPLv2+ langpacks-xh GPLv2+ langpacks-zh_CN GPLv2+ langpacks-zh_TW GPLv2+ langpacks-zu GPLv2+ langtable GPLv3+ lapack BSD lapack64 BSD lasso GPLv2+ lato-fonts OFL lcms2 MIT ldns BSD leapp ASL 2.0 leapp-deps ASL 2.0 leapp-upgrade-el8toel9 ASL 2.0 leapp-upgrade-el8toel9-deps ASL 2.0 lemon Public Domain leptonica BSD and Leptonica lftp GPLv3+ lftp-scripts GPLv3+ liba52 GPLv2 libabw MPLv2.0 libadwaita-qt5 LGPLv2+ and GPLv2+ libao GPLv2+ libappindicator-gtk3 LGPLv2 and LGPLv3 libasan6 GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD libasan8 GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD libasyncns LGPLv2+ libatasmart LGPLv2+ libatomic_ops GPLv2 and MIT libavc1394 GPLv2+ and LGPLv2+ libbase LGPLv2 libblockdev LGPLv2+ libblockdev-crypto LGPLv2+ libblockdev-dm LGPLv2+ libblockdev-fs LGPLv2+ libblockdev-kbd LGPLv2+ libblockdev-loop LGPLv2+ libblockdev-lvm LGPLv2+ libblockdev-lvm-dbus LGPLv2+ libblockdev-mdraid LGPLv2+ libblockdev-mpath LGPLv2+ libblockdev-nvdimm LGPLv2+ libblockdev-part LGPLv2+ libblockdev-plugins-all LGPLv2+ libblockdev-s390 LGPLv2+ libblockdev-swap LGPLv2+ libblockdev-utils LGPLv2+ libblockdev-vdo LGPLv2+ libbluray LGPLv2+ libburn GPLv2+ libbytesize LGPLv2+ libcacard LGPLv2+ libcacard-devel LGPLv2+ 
libcanberra LGPLv2+ libcanberra-devel LGPLv2+ libcanberra-gtk2 LGPLv2+ libcanberra-gtk3 LGPLv2+ libcdio GPLv3+ libcdio-paranoia GPLv3+ libcdr MPLv2.0 and Public Domain libcmis GPLv2+ or LGPLv2+ or MPLv1.1 libcmpiCppImpl0 EPL libdatrie LGPLv2+ libdazzle GPLv3+ libdb-devel BSD and LGPLv2 and Sleepycat libdbusmenu LGPLv3 or LGPLv2 and GPLv3 libdbusmenu-gtk3 LGPLv3 or LGPLv2 and GPLv3 libdc1394 LGPLv2+ libdmapsharing LGPLv2+ libdmx MIT libdnet BSD libdrm MIT libdrm-devel MIT libdv LGPLv2+ libdvdnav GPLv2+ libdvdread GPLv2+ libdwarf LGPLv2 libeasyfc LGPLv3+ libeasyfc-gobject LGPLv3+ libecap BSD libecap-devel BSD libecpg PostgreSQL libepoxy MIT libepoxy-devel MIT libepubgen MPLv2.0 libestr LGPLv2+ libetonyek MPLv2.0 libev BSD or GPLv2+ libev-devel BSD or GPLv2+ libev-libevent-devel BSD or GPLv2+ libev-source BSD or GPLv2+ libevdev MIT libevent-devel BSD libexif LGPLv2+ libexttextcat BSD libfastjson MIT libfdt GPLv2+ libfontenc MIT libfonts LGPLv2 and UCD libformula LGPLv2 libfprint LGPLv2+ libfreehand MPLv2.0 libgdata LGPLv2+ libgdata-devel LGPLv2+ libgdither GPLv2+ libgee LGPLv2+ libgexiv2 GPLv2+ libgit2 GPLv2 with exceptions libgit2-glib LGPLv2+ libglvnd MIT libglvnd-core-devel MIT libglvnd-devel MIT libglvnd-egl MIT libglvnd-gles MIT libglvnd-glx MIT libglvnd-opengl MIT libgnomekbd LGPLv2+ libgovirt LGPLv2+ libgphoto2 GPLv2+ and GPLv2 libgpod LGPLv2+ libgs AGPLv3+ libgsf LGPLv2 libgtop2 GPLv2+ libguestfs LGPLv2+ libguestfs-appliance LGPLv2+ libguestfs-bash-completion LGPLv2+ libguestfs-devel LGPLv2+ libguestfs-gfs2 LGPLv2+ libguestfs-gobject LGPLv2+ libguestfs-gobject-devel LGPLv2+ libguestfs-inspect-icons LGPLv2+ libguestfs-java LGPLv2+ libguestfs-java-devel LGPLv2+ libguestfs-javadoc LGPLv2+ libguestfs-man-pages-ja LGPLv2+ libguestfs-man-pages-uk LGPLv2+ libguestfs-rescue LGPLv2+ libguestfs-rsync LGPLv2+ libguestfs-tools GPLv2+ libguestfs-tools-c GPLv2+ libguestfs-winsupport GPLv2+ libguestfs-xfs LGPLv2+ libgweather GPLv2+ libgweather-devel GPLv2+ libgxps LGPLv2+ libhangul LGPLv2+ libi2c LGPLv2+ libical-devel LGPLv2 or MPLv2.0 libICE MIT libICE-devel MIT libidn LGPLv2+ and GPLv3+ and GFDL libidn2-devel (GPLv2+ or LGPLv3+) and GPLv3+ libiec61883 LGPLv2+ libieee1284 GPLv2+ libieee1284-devel GPLv2+ libijs AGPLv3+ libimobiledevice LGPLv2+ libindicator-gtk3 GPLv3 libinput MIT libinput-utils MIT libipt BSD libiptcdata LGPLv2+ libiscsi LGPLv2+ libiscsi-devel LGPLv2+ libiscsi-utils GPLv2+ libisoburn GPLv2+ libisofs GPLv2+ and LGPLv2+ libitm-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD libjose ASL 2.0 libjose-devel ASL 2.0 libjpeg-turbo IJG libjpeg-turbo-devel IJG libjpeg-turbo-utils IJG libkkc GPLv3+ libkkc-common GPLv3+ libkkc-data GPLv3+ liblangtag LGPLv3+ or MPLv2.0 liblangtag-data UCD liblayout LGPLv2+ and UCD libloader LGPLv2 liblognorm LGPLv2+ liblognorm-doc LGPLv2+ liblouis LGPLv3+ liblsan GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD libluksmeta LGPLv2+ libluksmeta-devel LGPLv2+ libmad GPLv2+ libmalaga GPLv2+ libmatchbox LGPLv2+ libmaxminddb ASL 2.0 and BSD libmaxminddb-devel ASL 2.0 and BSD libmcpp BSD libmediaart LGPLv2+ libmemcached BSD libmemcached-libs BSD libmng zlib libmng-devel zlib libmnl-devel LGPLv2+ libmpc LGPLv3+ libmpc-devel LGPLv3+ libmpcdec BSD libmspack LGPLv2 libmspub MPLv2.0 libmtp LGPLv2+ libmusicbrainz5 LGPLv2 libmwaw LGPLv2+ or MPLv2.0 libnbd LGPLv2+ libnbd-bash-completion LGPLv2+ libnbd-devel LGPLv2+ and BSD libnet BSD libnice LGPLv2 and MPLv1.1 libnice-gstreamer1 LGPLv2 and MPLv1.1 libnma 
GPLv2+ and LGPLv2+ libnotify LGPLv2+ libnotify-devel LGPLv2+ libnumbertext (LGPLv3+ or BSD) and (LGPLv3+ or BSD or CC-BY-SA) libnxz ASL 2.0 or GPLv2+ libnxz-devel ASL 2.0 or GPLv2+ liboauth MIT liboauth-devel MIT libodfgen LGPLv2+ or MPLv2.0 libogg BSD libogg-devel BSD libomp NCSA libomp-devel NCSA libopenraw LGPLv3+ liborcus MPLv2.0 libosinfo LGPLv2+ libotf LGPLv2+ libpagemaker MPLv2.0 libpaper GPLv2 libpciaccess-devel MIT libpeas-gtk LGPLv2+ libpeas-loader-python3 LGPLv2+ libpfm MIT libpfm-devel MIT libpgtypes PostgreSQL libpinyin GPLv3+ libpinyin-data GPLv3+ libplist LGPLv2+ libpmem BSD libpmem-debug BSD libpmem-devel BSD libpmemblk BSD libpmemblk-debug BSD libpmemblk-devel BSD libpmemlog BSD libpmemlog-debug BSD libpmemlog-devel BSD libpmemobj BSD libpmemobj++-devel BSD libpmemobj++-doc BSD libpmemobj-debug BSD libpmemobj-devel BSD libpmempool BSD libpmempool-debug BSD libpmempool-devel BSD libpng12 zlib libpng15 zlib libpq PostgreSQL libpq-devel PostgreSQL libproxy-bin LGPLv2+ libproxy-gnome LGPLv2+ libproxy-networkmanager LGPLv2+ libproxy-webkitgtk4 LGPLv2+ libpst-libs GPLv2+ libpurple BSD and GPLv2+ and GPLv2 and LGPLv2+ and MIT libquadmath-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD libquvi AGPLv3+ libquvi-scripts AGPLv3+ libqxp MPLv2.0 librabbitmq-tools MIT librados2 LGPL-2.1 and CC-BY-SA-1.0 and GPL-2.0 and BSL-1.0 and BSD-3-Clause and MIT LibRaw BSD and (CDDL or LGPLv2) libraw1394 LGPLv2+ librbd1 LGPL-2.1 and CC-BY-SA-1.0 and GPL-2.0 and BSL-1.0 and BSD-3-Clause and MIT librdkafka BSD librelp GPLv3+ libreoffice (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-base (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-calc (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-core (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-data (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-draw (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-emailmerge (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-filters (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-gdb-debug-support (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-graphicfilter (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-gtk3 (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-ar (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 
2.0 and MPLv2.0 and CC0 libreoffice-help-bg (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-bn (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-ca (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-cs (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-da (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-de (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-dz (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-el (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-en (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-es (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-et (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-eu (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-fi (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-fr (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-gl (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-gu (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-he (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-hi (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-hr (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-hu (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-id (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-it 
(MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-ja (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-ko (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-lt (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-lv (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-nb (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-nl (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-nn (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-pl (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-pt-BR (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-pt-PT (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-ro (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-ru (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-si (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-sk (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-sl (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-sv (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-ta (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-tr (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-uk (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-zh-Hans (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-help-zh-Hant (MPLv1.1 or LGPLv3+) and LGPLv3 
and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-impress (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-af (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-ar (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-as (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-bg (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-bn (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-br (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-ca (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-cs (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-cy (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-da (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-de (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-dz (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-el (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-en (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-es (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-et (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-eu (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-fa (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-fi (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-fr 
(MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-ga (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-gl (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-gu (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-he (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-hi (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-hr (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-hu (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-id (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-it (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-ja (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-kk (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-kn (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-ko (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-lt (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-lv (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-mai (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-ml (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-mr (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-nb (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-nl (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and 
MPLv2.0 and CC0 libreoffice-langpack-nn (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-nr (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-nso (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-or (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-pa (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-pl (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-pt-BR (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-pt-PT (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-ro (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-ru (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-si (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-sk (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-sl (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-sr (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-ss (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-st (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-sv (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-ta (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-te (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-th (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-tn (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or 
Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-tr (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-ts (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-uk (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-ve (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-xh (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-zh-Hans (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-zh-Hant (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-langpack-zu (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-math (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-ogltrans (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-opensymbol-fonts (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-pdfimport (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-pyuno (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-ure (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-ure-common (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-voikko GPLv3+ libreoffice-wiki-publisher (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-writer (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-x11 (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreoffice-xsltfilter (MPLv1.1 or LGPLv3+) and LGPLv3 and LGPLv2+ and BSD and (MPLv1.1 or GPLv2 or LGPLv2 or Netscape) and Public Domain and ASL 2.0 and MPLv2.0 and CC0 libreofficekit MPLv2.0 libreport GPLv2+ libreport-anaconda GPLv2+ libreport-cli GPLv2+ libreport-gtk GPLv2+ libreport-newt GPLv2+ libreport-plugin-bugzilla GPLv2+ libreport-plugin-kerneloops GPLv2+ libreport-plugin-logger GPLv2+ 
libreport-plugin-mailx GPLv2+ libreport-plugin-reportuploader GPLv2+ libreport-plugin-rhtsupport GPLv2+ libreport-plugin-ureport GPLv2+ libreport-rhel GPLv2+ libreport-rhel-anaconda-bugzilla GPLv2+ libreport-rhel-bugzilla GPLv2+ libreport-web GPLv2+ librepository LGPLv2 libreswan GPLv2 librevenge (LGPLv2+ or MPLv2.0) and BSD librevenge-gdb (LGPLv2+ or MPLv2.0) and BSD librpmem BSD librpmem-debug BSD librpmem-devel BSD librsvg2 LGPLv2+ librsvg2-devel LGPLv2+ librsvg2-tools LGPLv2+ libsamplerate BSD libsane-hpaio GPLv2+ libseccomp-devel LGPLv2 libselinux-python Public Domain libselinux-ruby Public Domain libserf ASL 2.0 libserializer LGPLv2+ libshout LGPLv2+ libsigc++20 LGPLv2+ libslirp BSD and MIT libslirp-devel BSD and MIT libSM MIT libSM-devel MIT libsmi GPLv2+ and BSD libsndfile LGPLv2+ and GPLv2+ and BSD libsndfile-utils LGPLv2+ and GPLv2+ and BSD libsoup-devel LGPLv2 libspectre GPLv2+ libspiro GPLv3+ libsrtp BSD libssh-devel LGPLv2+ libstaroffice MPLv2.0 or LGPLv2+ libstdc++-devel GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD libstdc++-docs GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD libstoragemgmt-nfs-plugin LGPLv2+ libtar MIT libtasn1-devel GPLv3+ and LGPLv2+ libtasn1-tools GPLv3+ libthai LGPLv2+ libtheora BSD libtiff libtiff libtiff-devel libtiff libtimezonemap GPLv3 libtool GPLv2+ and LGPLv2+ and GFDL libtool-ltdl-devel LGPLv2+ libtpms BSD libtpms-devel BSD libtsan2 GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD libucil GPLv2+ libudisks2 LGPLv2+ libunicap GPLv2+ libusal GPLv2 libusbmuxd LGPLv2+ libuv MIT and BSD and ISC libv4l LGPLv2+ and GPLv2 libva MIT libva-devel MIT libvdpau MIT libverto-libev MIT libvirt LGPLv2+ libvirt-client LGPLv2+ libvirt-daemon LGPLv2+ libvirt-daemon-config-network LGPLv2+ libvirt-daemon-config-nwfilter LGPLv2+ libvirt-daemon-driver-interface LGPLv2+ libvirt-daemon-driver-network LGPLv2+ libvirt-daemon-driver-nodedev LGPLv2+ libvirt-daemon-driver-nwfilter LGPLv2+ libvirt-daemon-driver-qemu LGPLv2+ libvirt-daemon-driver-secret LGPLv2+ libvirt-daemon-driver-storage LGPLv2+ libvirt-daemon-driver-storage-core LGPLv2+ libvirt-daemon-driver-storage-disk LGPLv2+ libvirt-daemon-driver-storage-gluster LGPLv2+ libvirt-daemon-driver-storage-iscsi LGPLv2+ libvirt-daemon-driver-storage-iscsi-direct LGPLv2+ libvirt-daemon-driver-storage-logical LGPLv2+ libvirt-daemon-driver-storage-mpath LGPLv2+ libvirt-daemon-driver-storage-rbd LGPLv2+ libvirt-daemon-driver-storage-scsi LGPLv2+ libvirt-daemon-kvm LGPLv2+ libvirt-dbus LGPLv2+ libvirt-devel LGPLv2+ libvirt-docs LGPLv2+ libvirt-gconfig LGPLv2+ libvirt-glib LGPLv2+ libvirt-gobject LGPLv2+ libvirt-libs LGPLv2+ libvirt-lock-sanlock LGPLv2+ libvirt-nss LGPLv2+ libvirt-wireshark LGPLv2+ libvisio MPLv2.0 libvisual LGPLv2+ libvma GPLv2 or BSD libvma-utils GPLv2 or BSD libvmem BSD libvmem-devel BSD libvmmalloc BSD libvmmalloc-devel BSD libvncserver GPLv2+ libvoikko GPLv2+ libvorbis BSD libvpx BSD libwacom MIT libwacom-data MIT libwayland-client MIT libwayland-cursor MIT libwayland-egl MIT libwayland-server MIT libwebp BSD libwebp-devel BSD libwinpr ASL 2.0 libwinpr-devel ASL 2.0 libwmf LGPLv2+ and GPLv2+ and GPL+ libwmf-lite LGPLv2+ and GPLv2+ and GPL+ libwnck3 LGPLv2+ libwpd LGPLv2+ or MPLv2.0 libwpe BSD libwpg LGPLv2+ or MPLv2.0 libwps LGPLv2+ or MPLv2.0 libwsman1 BSD libX11 MIT libX11-common MIT libX11-devel MIT libX11-xcb MIT libXau MIT libXau-devel MIT libXaw MIT libXaw-devel MIT libxcb MIT libxcb-devel MIT 
libXcomposite MIT libXcomposite-devel MIT libXcursor MIT libXcursor-devel MIT libXdamage MIT libXdamage-devel MIT libXdmcp MIT libxdp GPLv2 libXext MIT libXext-devel MIT libXfixes MIT libXfixes-devel MIT libXfont2 MIT libXft MIT libXft-devel MIT libXi MIT libXi-devel MIT libXinerama MIT libXinerama-devel MIT libxkbcommon MIT libxkbcommon-devel MIT libxkbcommon-x11 MIT libxkbfile MIT libxklavier LGPLv2+ libxml2-devel MIT libXmu MIT libXmu-devel MIT libXNVCtrl GPLv2+ libXp MIT libXp-devel MIT libXpm MIT libXpm-devel MIT libXrandr MIT libXrandr-devel MIT libXrender MIT libXrender-devel MIT libXres MIT libXScrnSaver MIT libXScrnSaver-devel MIT libxshmfence MIT libxshmfence-devel MIT libxslt-devel MIT libXt MIT libXt-devel MIT libXtst MIT libXtst-devel MIT libXv MIT libXv-devel MIT libXvMC MIT libXxf86dga MIT libXxf86dga-devel MIT libXxf86misc MIT libXxf86misc-devel MIT libXxf86vm MIT libXxf86vm-devel MIT libyami ASL 2.0 libyang BSD libzdnn ASL 2.0 libzhuyin GPLv3+ libzip BSD libzip-devel BSD libzip-tools BSD libzmf MPLv2.0 libzpc MIT linuxconsoletools GPLv2+ linuxptp GPLv2+ lklug-fonts GPLv2 lld NCSA lld-devel NCSA lld-libs NCSA lldb NCSA lldb-devel NCSA lldpd ISC lldpd-devel ISC llvm Apache-2.0 WITH LLVM-exception OR NCSA llvm-cmake-utils Apache-2.0 WITH LLVM-exception OR NCSA llvm-devel Apache-2.0 WITH LLVM-exception OR NCSA llvm-doc Apache-2.0 WITH LLVM-exception OR NCSA llvm-googletest Apache-2.0 WITH LLVM-exception OR NCSA llvm-libs Apache-2.0 WITH LLVM-exception OR NCSA llvm-static Apache-2.0 WITH LLVM-exception OR NCSA llvm-test Apache-2.0 WITH LLVM-exception OR NCSA llvm-toolset Apache-2.0 WITH LLVM-exception OR NCSA lm_sensors-sensord GPLv2+ and Verbatim and MIT log4j ASL 2.0 log4j-jcl ASL 2.0 log4j-slf4j ASL 2.0 log4j-web ASL 2.0 lohit-assamese-fonts OFL lohit-bengali-fonts OFL lohit-devanagari-fonts OFL lohit-gujarati-fonts OFL lohit-gurmukhi-fonts OFL lohit-kannada-fonts OFL lohit-malayalam-fonts OFL lohit-marathi-fonts OFL lohit-nepali-fonts OFL lohit-odia-fonts OFL lohit-tamil-fonts OFL lohit-telugu-fonts OFL lorax GPLv2+ lorax-composer GPLv2+ lorax-lmc-novirt GPLv2+ lorax-lmc-virt GPLv2+ lorax-templates-generic GPLv2+ lorax-templates-rhel GPLv2+ lpsolve LGPLv2+ lshw-gui GPLv2 ltrace GPLv2+ lttng-ust LGPLv2 and GPLv2 and MIT lua MIT lua-expat MIT lua-guestfs LGPLv2+ lua-json MIT lua-lpeg MIT lua-socket MIT lucene ASL 2.0 lucene-analysis ASL 2.0 lucene-analyzers-smartcn ASL 2.0 lucene-queries ASL 2.0 lucene-queryparser ASL 2.0 lucene-sandbox ASL 2.0 luksmeta LGPLv2+ lz4-java ASL 2.0 and (BSD and GPLv2+) lz4-java-javadoc ASL 2.0 and (BSD and GPLv2+) m17n-db LGPLv2+ m17n-lib LGPLv2+ madan-fonts GPL+ mailman GPLv2+ make43 GPLv3+ make43-devel GPLv3+ malaga GPLv2+ malaga-suomi-voikko GPLv2+ mallard-rng MIT man-pages-overrides GPL+ and GPLv2+ and BSD and MIT and Copyright only and IEEE mariadb GPLv2 with exceptions and LGPLv2 and BSD mariadb-backup GPLv2 with exceptions and LGPLv2 and BSD mariadb-common GPLv2 with exceptions and LGPLv2 and BSD mariadb-connector-c LGPLv2+ mariadb-connector-c-config LGPLv2+ mariadb-connector-c-devel LGPLv2+ mariadb-connector-odbc LGPLv2+ mariadb-devel GPLv2 with exceptions and LGPLv2 and BSD mariadb-embedded GPLv2 with exceptions and LGPLv2 and BSD mariadb-embedded-devel GPLv2 with exceptions and LGPLv2 and BSD mariadb-errmsg GPLv2 with exceptions and LGPLv2 and BSD mariadb-gssapi-server GPLv2 with exceptions and LGPLv2 and BSD mariadb-java-client BSD and LGPLv2+ mariadb-oqgraph-engine GPLv2 with exceptions and LGPLv2 and BSD mariadb-pam GPLv2 with 
exceptions and LGPLv2 and BSD mariadb-server GPLv2 with exceptions and LGPLv2 and BSD mariadb-server-galera GPLv2 with exceptions and LGPLv2 and BSD mariadb-server-utils GPLv2 with exceptions and LGPLv2 and BSD mariadb-test GPLv2 with exceptions and LGPLv2 and BSD marisa BSD or LGPLv2+ matchbox-window-manager GPLv2+ maven ASL 2.0 and MIT maven-lib ASL 2.0 and MIT maven-openjdk11 ASL 2.0 and MIT maven-openjdk17 ASL 2.0 and MIT maven-openjdk21 ASL 2.0 and MIT maven-openjdk8 ASL 2.0 and MIT maven-resolver ASL 2.0 maven-resolver-api ASL 2.0 maven-resolver-connector-basic ASL 2.0 maven-resolver-impl ASL 2.0 maven-resolver-spi ASL 2.0 maven-resolver-transport-wagon ASL 2.0 maven-resolver-util ASL 2.0 maven-shared-utils ASL 2.0 maven-wagon ASL 2.0 maven-wagon-file ASL 2.0 maven-wagon-http ASL 2.0 maven-wagon-http-shared ASL 2.0 maven-wagon-provider-api ASL 2.0 mc GPLv3+ mcpp BSD mdevctl LGPLv2 meanwhile LGPLv2+ mecab BSD or LGPLv2+ or GPL+ mecab-devel BSD or LGPLv2+ or GPL+ mecab-ipadic mecab-ipadic mecab-ipadic-EUCJP mecab-ipadic media-player-info BSD memcached BSD memkind BSD mercurial GPLv2+ mercurial-chg GPLv2+ mercurial-hgk GPLv2+ mesa-dri-drivers MIT mesa-filesystem MIT mesa-libEGL MIT mesa-libEGL-devel MIT mesa-libgbm MIT mesa-libGL MIT mesa-libGL-devel MIT mesa-libglapi MIT mesa-libGLU MIT mesa-libGLU-devel MIT mesa-libGLw MIT mesa-libGLw-devel MIT mesa-libOSMesa MIT mesa-libxatracker MIT mesa-vdpau-drivers MIT mesa-vulkan-devel MIT mesa-vulkan-drivers MIT metacity GPLv2+ micropipenv LGPLv3+ mod_auth_gssapi MIT mod_auth_mellon GPLv2+ mod_auth_mellon-diagnostics GPLv2+ mod_auth_openidc ASL 2.0 mod_authnz_pam ASL 2.0 mod_dav_svn ASL 2.0 mod_fcgid ASL 2.0 mod_http2 ASL 2.0 mod_intercept_form_submit ASL 2.0 mod_ldap ASL 2.0 mod_lookup_identity ASL 2.0 mod_md ASL 2.0 mod_proxy_html ASL 2.0 mod_security ASL 2.0 mod_security-mlogc ASL 2.0 mod_security_crs ASL 2.0 mod_session ASL 2.0 mod_ssl ASL 2.0 modulemd-tools MIT motif LGPLv2+ motif-devel LGPLv2+ motif-static LGPLv2+ mousetweaks GPLv3 and GFDL mozilla-filesystem MPLv1.1 mozvoikko GPLv2+ mpdecimal BSD mpfr-devel LGPLv3+ and GPLv3+ and GFDL mpg123 LGPLv2+ mpg123-libs LGPLv2+ mpg123-plugins-pulseaudio LGPLv2+ mpich MIT mpich-devel MIT mpich-doc MIT mpitests-mpich CPL and BSD mpitests-mvapich2 CPL and BSD mpitests-mvapich2-psm2 CPL and BSD mpitests-openmpi CPL and BSD mrtg GPLv2+ mstflint GPLv2+ or BSD mt-st GPL+ mtdev MIT mtr-gtk GPLv2 mtx GPLv2 multilib-rpm-config GPLv2+ munge GPLv3+ and LGPLv3+ munge-libs GPLv3+ and LGPLv3+ mutt GPLv2+ and Public Domain mutter GPLv2+ mvapich2 BSD and MIT mvapich2-devel BSD and MIT mvapich2-doc BSD and MIT mvapich2-psm2 BSD and MIT mvapich2-psm2-devel BSD and MIT mysql GPLv2 with exceptions and LGPLv2 and BSD mysql-common GPLv2 with exceptions and LGPLv2 and BSD mysql-devel GPLv2 with exceptions and LGPLv2 and BSD mysql-errmsg GPLv2 with exceptions and LGPLv2 and BSD mysql-libs GPLv2 with exceptions and LGPLv2 and BSD mysql-selinux GPL-3.0-only mysql-server GPLv2 with exceptions and LGPLv2 and BSD mysql-test GPLv2 with exceptions and LGPLv2 and BSD mythes BSD and MIT mythes-bg GPLv2+ or LGPLv2+ or MPLv1.1 mythes-ca GPL+ mythes-cs MIT mythes-da GPLv2 or LGPLv2 or MPLv1.1 mythes-de LGPLv2+ mythes-el GPLv2+ mythes-en BSD and Artistic clarified mythes-es LGPLv2+ mythes-fr LGPLv2+ mythes-ga GFDL mythes-hu GPLv2+ and (GPLv2+ or LGPLv2+ or MPLv1.1) and GPLv2 and (GPL+ or LGPLv2+ or MPLv1.1) mythes-it AGPLv3+ mythes-lb EUPL 1.1 mythes-lv LGPLv2+ mythes-mi Public Domain mythes-nb GPL+ mythes-ne LGPLv2 mythes-nl BSD or 
CC-BY mythes-nn GPL+ mythes-pl LGPLv2 mythes-pt GPLv2+ mythes-ro GPLv2+ mythes-ru LGPLv2+ mythes-sk MIT mythes-sl LGPLv2+ mythes-sv MIT mythes-uk (GPLv2+ or LGPLv2+) and (GPLv2+ or LGPLv2+ or MPLv1.1) and GPLv2+ nafees-web-naskh-fonts Bitstream Vera nautilus GPLv3+ nautilus-extensions LGPLv2+ nautilus-sendto GPLv2+ navilu-fonts OFL nbdfuse LGPLv2+ and BSD nbdkit BSD nbdkit-bash-completion BSD nbdkit-basic-filters BSD nbdkit-basic-plugins BSD nbdkit-curl-plugin BSD nbdkit-devel BSD nbdkit-example-plugins BSD nbdkit-gzip-filter BSD nbdkit-gzip-plugin BSD nbdkit-linuxdisk-plugin BSD nbdkit-nbd-plugin BSD nbdkit-python-plugin BSD nbdkit-server BSD nbdkit-ssh-plugin BSD nbdkit-tar-filter BSD nbdkit-tar-plugin BSD nbdkit-tmpdisk-plugin BSD nbdkit-vddk-plugin BSD nbdkit-xz-filter BSD ncompress Public Domain ndctl-devel LGPLv2 neon LGPLv2+ net-snmp BSD net-snmp-agent-libs BSD net-snmp-devel BSD net-snmp-perl BSD net-snmp-utils BSD netavark ASL 2.0 and BSD and MIT netcf LGPLv2+ netcf-devel LGPLv2+ netcf-libs LGPLv2+ netpbm BSD and GPLv2 and IJG and MIT and Public Domain netpbm-progs BSD and GPLv2 and IJG and MIT and Public Domain netstandard-targeting-pack-2.1 0BSD AND Apache-2.0 AND (Apache-2.0 WITH LLVM-exception) AND APSL-2.0 AND BSD-2-Clause AND BSD-3-Clause AND BSD-4-Clause AND BSL-1.0 AND bzip2-1.0.6 AND CC0-1.0 AND CC-BY-3.0 AND CC-BY-4.0 AND CC-PDDC AND CNRI-Python AND EPL-1.0 AND GPL-2.0-only AND (GPL-2.0-only WITH GCC-exception-2.0) AND GPL-2.0-or-later AND GPL-3.0-only AND ICU AND ISC AND LGPL-2.1-only AND LGPL-2.1-or-later AND LicenseRef-Fedora-Public-Domain AND LicenseRef-ISO-8879 AND MIT AND MIT-Wu AND MS-PL AND MS-RL AND NCSA AND OFL-1.1 AND OpenSSL AND Unicode-DFS-2015 AND Unicode-DFS-2016 AND W3C-19980720 AND X11 AND Zlib nettle-devel LGPLv3+ or GPLv2+ network-manager-applet GPLv2+ network-scripts-ppp BSD and LGPLv2+ and GPLv2+ and Public Domain NetworkManager-cloud-setup GPLv2+ and LGPLv2+ NetworkManager-libreswan GPLv2+ NetworkManager-libreswan-gnome GPLv2+ newt-devel LGPLv2 nginx BSD nginx-all-modules BSD nginx-filesystem BSD nginx-mod-devel BSD nginx-mod-http-image-filter BSD nginx-mod-http-perl BSD nginx-mod-http-xslt-filter BSD nginx-mod-mail BSD nginx-mod-stream BSD nispor ASL 2.0 nispor-devel ASL 2.0 nm-connection-editor GPLv2+ nmap Nmap nmap-ncat Nmap nmstate LGPLv2+ nmstate-libs ASL 2.0 nmstate-plugin-ovsdb LGPLv2+ nodejs MIT and ASL 2.0 and ISC and BSD nodejs-devel MIT and ASL 2.0 and ISC and BSD nodejs-docs MIT and ASL 2.0 and ISC and BSD nodejs-full-i18n MIT and ASL 2.0 and ISC and BSD nodejs-nodemon MIT nodejs-packaging MIT nodejs-packaging-bundler MIT npm MIT and ASL 2.0 and ISC and BSD nspr MPLv2.0 nspr-devel MPLv2.0 nss MPLv2.0 nss-altfiles LGPLv2+ nss-devel MPLv2.0 nss-pam-ldapd LGPLv2+ nss-softokn MPLv2.0 nss-softokn-devel MPLv2.0 nss-softokn-freebl MPLv2.0 nss-softokn-freebl-devel MPLv2.0 nss-sysinit MPLv2.0 nss-tools MPLv2.0 nss-util MPLv2.0 nss-util-devel MPLv2.0 nss_wrapper BSD nss_wrapper-libs BSD ntpstat MIT objectweb-asm BSD ocaml-srpm-macros GPLv2+ oci-seccomp-bpf-hook ASL 2.0 oci-systemd-hook GPLv3+ oci-umount GPLv3+ ocl-icd BSD oddjob BSD oddjob-mkhomedir BSD omping ISC ongres-scram BSD ongres-scram-client BSD oniguruma BSD open-sans-fonts ASL 2.0 open-vm-tools GPLv2 open-vm-tools-desktop GPLv2 open-vm-tools-salt-minion GPLv2 open-vm-tools-sdmp GPLv2 openal-soft LGPLv2+ openblas BSD openblas-srpm-macros MIT openblas-threads BSD openchange GPLv3+ and Public Domain opencl-filesystem Public Domain opencv-contrib BSD opencv-core BSD opendnssec BSD 
OpenEXR-libs BSD openjpeg2 BSD and MIT openjpeg2-devel-docs BSD and MIT openjpeg2-tools BSD and MIT openmpi BSD and MIT and Romio openmpi-devel BSD and MIT and Romio openscap LGPLv2+ openscap-devel LGPLv2+ openscap-engine-sce LGPLv2+ openscap-python3 LGPLv2+ openscap-scanner LGPLv2+ openscap-utils LGPLv2+ openslp BSD openssh-askpass BSD opentest4j ASL 2.0 openwsman-client BSD openwsman-python3 BSD openwsman-server BSD opus BSD opus-devel BSD orc BSD orc-compiler BSD orc-devel BSD orca LGPLv2+ osad GPLv2 osbuild Apache-2.0 osbuild-composer Apache-2.0 osbuild-composer-core Apache-2.0 osbuild-composer-worker Apache-2.0 osbuild-depsolve-dnf Apache-2.0 osbuild-luks2 Apache-2.0 osbuild-lvm2 Apache-2.0 osbuild-ostree Apache-2.0 osbuild-selinux Apache-2.0 oscap-anaconda-addon GPLv2+ osinfo-db LGPLv2+ osinfo-db-tools GPLv2+ ostree LGPLv2+ ostree-devel LGPLv2+ ostree-grub2 LGPLv2+ ostree-libs LGPLv2+ overpass-fonts OFL or LGPLv2+ overpass-mono-fonts OFL or LGPLv2+ owasp-java-encoder BSD owasp-java-encoder-javadoc BSD pacemaker-cluster-libs GPL-2.0-or-later AND LGPL-2.1-or-later pacemaker-libs GPL-2.0-or-later AND LGPL-2.1-or-later pacemaker-schemas GPL-2.0-or-later PackageKit GPLv2+ and LGPLv2+ PackageKit-command-not-found GPLv2+ and LGPLv2+ PackageKit-cron GPLv2+ and LGPLv2+ PackageKit-glib GPLv2+ and LGPLv2+ PackageKit-gstreamer-plugin GPLv2+ and LGPLv2+ PackageKit-gtk3-module GPLv2+ and LGPLv2+ pakchois LGPLv2+ paktype-naqsh-fonts GPLv2 with exceptions paktype-naskh-basic-fonts GPLv2 with exceptions paktype-tehreer-fonts GPLv2 with exceptions pango LGPLv2+ pango-devel LGPLv2+ pangomm LGPLv2+ papi BSD papi-devel BSD papi-libs BSD paps LGPLv2+ paps-libs LGPLv2+ paratype-pt-sans-caption-fonts OFL paratype-pt-sans-fonts OFL parfait ASL 2.0 parfait-examples ASL 2.0 parfait-javadoc ASL 2.0 patchutils GPLv2+ pavucontrol GPLv2+ pcaudiolib GPLv3+ pcm BSD pcp GPLv2+ and LGPLv2+ and CC-BY pcp-conf LGPLv2+ pcp-devel GPLv2+ and LGPLv2+ pcp-doc GPLv2+ and CC-BY pcp-export-pcp2elasticsearch GPLv2+ pcp-export-pcp2graphite GPLv2+ pcp-export-pcp2influxdb GPLv2+ pcp-export-pcp2json GPLv2+ pcp-export-pcp2spark GPLv2+ pcp-export-pcp2xml GPLv2+ pcp-export-pcp2zabbix GPLv2+ pcp-export-zabbix-agent GPLv2+ pcp-gui GPLv2+ and LGPLv2+ and LGPLv2+ with exceptions pcp-import-collectl2pcp LGPLv2+ pcp-import-ganglia2pcp LGPLv2+ pcp-import-iostat2pcp LGPLv2+ pcp-import-mrtg2pcp LGPLv2+ pcp-import-sar2pcp LGPLv2+ pcp-libs LGPLv2+ pcp-libs-devel GPLv2+ and LGPLv2+ pcp-parfait-agent ASL 2.0 pcp-pmda-activemq GPLv2+ pcp-pmda-apache GPLv2+ pcp-pmda-bash GPLv2+ pcp-pmda-bcc ASL 2.0 and GPLv2+ pcp-pmda-bind2 GPLv2+ pcp-pmda-bonding GPLv2+ pcp-pmda-bpftrace ASL 2.0 and GPLv2+ pcp-pmda-cifs GPLv2+ pcp-pmda-cisco GPLv2+ pcp-pmda-dbping GPLv2+ pcp-pmda-denki GPLv2+ pcp-pmda-dm GPLv2+ pcp-pmda-docker GPLv2+ pcp-pmda-ds389 GPLv2+ pcp-pmda-ds389log GPLv2+ pcp-pmda-elasticsearch GPLv2+ pcp-pmda-gfs2 GPLv2+ pcp-pmda-gluster GPLv2+ pcp-pmda-gpfs GPLv2+ pcp-pmda-gpsd GPLv2+ pcp-pmda-hacluster GPLv2+ pcp-pmda-haproxy GPLv2+ pcp-pmda-infiniband GPLv2+ pcp-pmda-json GPLv2+ pcp-pmda-libvirt GPLv2+ pcp-pmda-lio GPLv2+ pcp-pmda-lmsensors GPLv2+ pcp-pmda-logger GPLv2+ pcp-pmda-lustre GPLv2+ pcp-pmda-lustrecomm GPLv2+ pcp-pmda-mailq GPLv2+ pcp-pmda-memcache GPLv2+ pcp-pmda-mic GPLv2+ pcp-pmda-mongodb GPLv2+ pcp-pmda-mounts GPLv2+ pcp-pmda-mssql GPLv2+ pcp-pmda-mysql GPLv2+ pcp-pmda-named GPLv2+ pcp-pmda-netcheck GPLv2+ pcp-pmda-netfilter GPLv2+ pcp-pmda-news GPLv2+ pcp-pmda-nfsclient GPLv2+ pcp-pmda-nginx GPLv2+ pcp-pmda-nvidia-gpu GPLv2+ 
pcp-pmda-openmetrics GPLv2+ pcp-pmda-openvswitch GPLv2+ pcp-pmda-oracle GPLv2+ pcp-pmda-pdns GPLv2+ pcp-pmda-perfevent GPLv2+ pcp-pmda-podman GPLv2+ pcp-pmda-postfix GPLv2+ pcp-pmda-postgresql GPLv2+ pcp-pmda-rabbitmq GPLv2+ pcp-pmda-redis GPLv2+ pcp-pmda-roomtemp GPLv2+ pcp-pmda-rsyslog GPLv2+ pcp-pmda-samba GPLv2+ pcp-pmda-sendmail GPLv2+ pcp-pmda-shping GPLv2+ pcp-pmda-slurm GPLv2+ pcp-pmda-smart GPLv2+ pcp-pmda-snmp GPLv2+ pcp-pmda-sockets GPLv2+ pcp-pmda-statsd GPLv2+ pcp-pmda-summary GPLv2+ pcp-pmda-systemd GPLv2+ pcp-pmda-trace GPLv2+ pcp-pmda-unbound GPLv2+ pcp-pmda-weblog GPLv2+ pcp-pmda-zimbra GPLv2+ pcp-pmda-zswap GPLv2+ pcp-selinux GPLv2+ and CC-BY pcp-system-tools GPLv2+ pcp-testsuite GPLv2+ pcp-zeroconf GPLv2+ pentaho-libxml LGPLv2 pentaho-reporting-flow-engine LGPLv2+ peripety MIT perl GPL+ or Artistic perl-Algorithm-Diff GPL+ or Artistic perl-App-cpanminus GPL+ or Artistic perl-Archive-Tar GPL+ or Artistic perl-Archive-Zip (GPL+ or Artistic) and BSD perl-Attribute-Handlers GPL+ or Artistic perl-Authen-SASL GPL+ or Artistic perl-autodie GPL+ or Artistic perl-AutoLoader GPL+ or Artistic perl-AutoSplit GPL+ or Artistic perl-autouse GPL+ or Artistic perl-B GPL+ or Artistic perl-B-Debug GPL+ or Artistic perl-B-Lint GPL+ or Artistic perl-base GPL+ or Artistic perl-Benchmark GPL+ or Artistic perl-bignum GPL+ or Artistic perl-Bit-Vector (GPLv2+ or Artistic) and LGPLv2+ perl-blib GPL+ or Artistic perl-Carp GPL+ or Artistic perl-Carp-Clan GPL+ or Artistic perl-CGI (GPL+ or Artistic) and Artistic 2.0 perl-Class-Inspector GPL+ or Artistic perl-Class-ISA GPL+ or Artistic perl-Class-Struct GPL+ or Artistic perl-Compress-Bzip2 GPL+ or Artistic perl-Compress-Raw-Bzip2 GPL+ or Artistic perl-Compress-Raw-Lzma GPL+ or Artistic perl-Compress-Raw-Zlib (GPL+ or Artistic) and zlib perl-Config-Extensions GPL+ or Artistic perl-Config-Perl-V GPL+ or Artistic perl-constant GPL+ or Artistic perl-Convert-ASN1 GPL+ or Artistic perl-core GPL+ or Artistic perl-CPAN GPL+ or Artistic perl-CPAN-DistnameInfo GPL+ or Artistic perl-CPAN-Meta GPL+ or Artistic perl-CPAN-Meta-Check GPL+ or Artistic perl-CPAN-Meta-Requirements GPL+ or Artistic perl-CPAN-Meta-YAML GPL+ or Artistic perl-Crypt-OpenSSL-Bignum GPL+ or Artistic perl-Crypt-OpenSSL-Random GPL+ or Artistic perl-Crypt-OpenSSL-RSA GPL+ or Artistic perl-Data-Dump GPL+ or Artistic perl-Data-Dumper GPL+ or Artistic perl-Data-OptList GPL+ or Artistic perl-Data-Section GPL+ or Artistic perl-Date-Calc GPL+ or Artistic perl-DB_File GPL+ or Artistic perl-DBD-MySQL GPL+ or Artistic perl-DBD-Pg GPLv2+ or Artistic perl-DBD-SQLite (GPL+ or Artistic) and Public Domain perl-DBI GPL+ or Artistic perl-DBM_Filter GPL+ or Artistic perl-debugger GPL+ or Artistic perl-deprecate GPL+ or Artistic perl-devel (GPL+ or Artistic) and UCD perl-Devel-Peek GPL+ or Artistic perl-Devel-PPPort GPL+ or Artistic perl-Devel-SelfStubber GPL+ or Artistic perl-Devel-Size GPL+ or Artistic perl-diagnostics GPL+ or Artistic perl-Digest GPL+ or Artistic perl-Digest-HMAC GPL+ or Artistic perl-Digest-MD5 (GPL+ or Artistic) and BSD perl-Digest-SHA GPL+ or Artistic perl-DirHandle GPL+ or Artistic perl-doc (GPL+ or Artistic) and UCD and Public Domain perl-Dumpvalue GPL+ or Artistic perl-DynaLoader GPL+ or Artistic perl-Encode (GPL+ or Artistic) and Artistic 2.0 and UCD perl-Encode-Detect MPLv1.1 or GPLv2+ or LGPLv2+ perl-Encode-devel (GPL+ or Artistic) and UCD perl-Encode-Locale GPL+ or Artistic perl-encoding GPL+ or Artistic perl-encoding-warnings GPL+ or Artistic perl-English GPL+ or Artistic perl-Env 
GPL+ or Artistic perl-Errno GPL+ or Artistic perl-Error (GPL+ or Artistic) and MIT perl-experimental GPL+ or Artistic perl-Exporter GPL+ or Artistic perl-ExtUtils-CBuilder GPL+ or Artistic perl-ExtUtils-Command GPL+ or Artistic perl-ExtUtils-Constant GPL+ or Artistic perl-ExtUtils-Embed GPL+ or Artistic perl-ExtUtils-Install GPL+ or Artistic perl-ExtUtils-MakeMaker GPL+ or Artistic perl-ExtUtils-Manifest GPL+ or Artistic perl-ExtUtils-Miniperl GPL+ or Artistic perl-ExtUtils-MM-Utils GPL+ or Artistic perl-ExtUtils-ParseXS GPL+ or Artistic perl-FCGI OML perl-Fcntl GPL+ or Artistic perl-Fedora-VSP GPLv3+ perl-fields GPL+ or Artistic perl-File-Basename GPL+ or Artistic perl-File-CheckTree GPL+ or Artistic perl-File-Compare GPL+ or Artistic perl-File-Copy GPL+ or Artistic perl-File-DosGlob GPL+ or Artistic perl-File-Fetch GPL+ or Artistic perl-File-Find GPL+ or Artistic perl-File-HomeDir GPL+ or Artistic perl-File-Listing GPL+ or Artistic perl-File-Path GPL+ or Artistic perl-File-pushd ASL 2.0 perl-File-ShareDir GPL+ or Artistic perl-File-Slurp GPL+ or Artistic perl-File-stat GPL+ or Artistic perl-File-Temp GPL+ or Artistic perl-File-Which GPL+ or Artistic perl-FileCache GPL+ or Artistic perl-FileHandle GPL+ or Artistic perl-filetest GPL+ or Artistic perl-Filter GPL+ or Artistic perl-Filter-Simple GPL+ or Artistic perl-FindBin GPL+ or Artistic perl-GDBM_File GPL+ or Artistic perl-generators GPL+ perl-Getopt-Long GPLv2+ or Artistic perl-Getopt-Std GPL+ or Artistic perl-Git GPLv2 perl-Git-SVN GPLv2 perl-GSSAPI GPL+ or Artistic perl-Hash-Util GPL+ or Artistic perl-Hash-Util-FieldHash GPL+ or Artistic perl-hivex LGPLv2 perl-homedir GPL+ or Artistic perl-HTML-Parser GPL+ or Artistic perl-HTML-Tagset GPL+ or Artistic perl-HTTP-Cookies GPL+ or Artistic perl-HTTP-Date GPL+ or Artistic perl-HTTP-Message GPL+ or Artistic perl-HTTP-Negotiate GPL+ or Artistic perl-HTTP-Tiny GPL+ or Artistic perl-I18N-Collate GPL+ or Artistic perl-I18N-Langinfo GPL+ or Artistic perl-I18N-LangTags GPL+ or Artistic perl-if GPL+ or Artistic perl-Importer GPL+ or Artistic perl-inc-latest ASL 2.0 perl-interpreter GPL+ or Artistic perl-IO GPL+ or Artistic perl-IO-Compress GPL+ or Artistic perl-IO-Compress-Lzma GPL+ or Artistic perl-IO-HTML GPL+ or Artistic perl-IO-Multiplex GPL+ or Artistic perl-IO-Socket-INET6 GPL+ or Artistic perl-IO-Socket-IP GPL+ or Artistic perl-IO-Socket-SSL (GPL+ or Artistic) and MPLv2.0 perl-IO-String GPL+ or Artistic perl-IO-Zlib GPL+ or Artistic perl-IPC-Cmd GPL+ or Artistic perl-IPC-Open3 GPL+ or Artistic perl-IPC-System-Simple GPL+ or Artistic perl-IPC-SysV GPL+ or Artistic perl-JSON GPL+ or Artistic perl-JSON-PP GPL+ or Artistic perl-LDAP GPL+ or Artistic perl-less GPL+ or Artistic perl-lib GPL+ or Artistic perl-libintl-perl GPLv3+ and LGPLv2+ perl-libnet GPL+ or Artistic perl-libnetcfg GPL+ or Artistic perl-libs (GPL+ or Artistic) and BSD and HSRL and MIT and UCD and Public domain perl-libwww-perl GPL+ or Artistic perl-libxml-perl (GPL+ or Artistic) and Public Domain perl-local-lib GPL+ or Artistic perl-locale GPL+ or Artistic perl-Locale-Codes GPL+ or Artistic perl-Locale-Maketext GPL+ or Artistic perl-Locale-Maketext-Simple MIT perl-LWP-MediaTypes (GPL+ or Artistic) and Public Domain perl-LWP-Protocol-https GPL+ or Artistic perl-macros GPL+ or Artistic perl-Mail-AuthenticationResults GPL+ or Artistic perl-Mail-DKIM GPL+ or Artistic perl-Mail-Sender GPL+ or Artistic perl-Mail-SPF BSD perl-MailTools GPL+ or Artistic perl-Math-BigInt GPL+ or Artistic perl-Math-BigInt-FastCalc GPL+ or Artistic 
perl-Math-BigRat GPL+ or Artistic perl-Math-Complex GPL+ or Artistic perl-Memoize GPL+ or Artistic perl-meta-notation GPL+ or Artistic perl-MIME-Base64 (GPL+ or Artistic) and MIT perl-Module-Build GPL+ or Artistic perl-Module-CoreList GPL+ or Artistic perl-Module-CoreList-tools GPL+ or Artistic perl-Module-CPANfile GPL+ or Artistic perl-Module-Load GPL+ or Artistic perl-Module-Load-Conditional GPL+ or Artistic perl-Module-Loaded GPL+ or Artistic perl-Module-Metadata GPL+ or Artistic perl-Module-Pluggable GPL+ or Artistic perl-Module-Runtime GPL+ or Artistic perl-Mozilla-CA MPLv2.0 perl-Mozilla-LDAP GPLv2+ and LGPLv2+ and MPLv1.1 perl-mro GPL+ or Artistic perl-MRO-Compat GPL+ or Artistic perl-NDBM_File GPL+ or Artistic perl-Net GPL+ or Artistic perl-Net-DNS (GPL+ or Artistic) and MIT perl-Net-HTTP GPL+ or Artistic perl-Net-Ping GPL+ or Artistic perl-Net-Server GPL+ or Artistic perl-Net-SMTP-SSL GPL+ or Artistic perl-Net-SSLeay Artistic 2.0 perl-NetAddr-IP GPLv2+ and (GPLv2+ or Artistic clarified) perl- GPL+ or Artistic perl-NTLM GPL+ or Artistic perl-Object-HashBase GPL+ or Artistic perl-Object-HashBase-tools GPL+ or Artistic perl-ODBM_File GPL+ or Artistic perl-Opcode GPL+ or Artistic perl-open GPL+ or Artistic perl-overload GPL+ or Artistic perl-overloading GPL+ or Artistic perl-Package-Generator GPL+ or Artistic perl-Params-Check GPL+ or Artistic perl-Params-Util GPL+ or Artistic perl-parent GPL+ or Artistic perl-Parse-PMFile GPL+ or Artistic perl-PathTools (GPL+ or Artistic) and BSD perl-PCP-LogImport GPLv2+ perl-PCP-LogSummary GPLv2+ perl-PCP-MMV GPLv2+ perl-PCP-PMDA GPLv2+ perl-Perl-OSType GPL+ or Artistic perl-perlfaq (GPL+ or Artistic) and Public Domain perl-PerlIO-via-QuotedPrint GPL+ or Artistic perl-ph GPL+ or Artistic perl-Pod-Checker GPL+ or Artistic perl-Pod-Escapes GPL+ or Artistic perl-Pod-Functions GPL+ or Artistic perl-Pod-Html GPL+ or Artistic perl-Pod-LaTeX GPL+ or Artistic perl-Pod-Parser GPL+ or Artistic perl-Pod-Perldoc GPL+ or Artistic perl-Pod-Plainer GPL+ or Artistic perl-Pod-Simple GPL+ or Artistic perl-Pod-Usage GPL+ or Artistic perl-podlators (GPL+ or Artistic) and FSFAP perl-POSIX GPL+ or Artistic perl-Safe GPL+ or Artistic perl-Scalar-List-Utils GPL+ or Artistic perl-Search-Dict GPL+ or Artistic perl-SelectSaver GPL+ or Artistic perl-SelfLoader GPL+ or Artistic perl-sigtrap GPL+ or Artistic perl-SNMP_Session Artistic 2.0 perl-Socket GPL+ or Artistic perl-Socket6 BSD perl-Software-License GPL+ or Artistic perl-sort GPL+ or Artistic perl-srpm-macros GPLv3+ perl-Storable GPL+ or Artistic perl-String-CRC32 Public Domain perl-String-ShellQuote (GPL+ or Artistic) and GPLv2+ perl-Sub-Exporter GPL+ or Artistic perl-Sub-Install GPL+ or Artistic perl-subs GPL+ or Artistic perl-Symbol GPL+ or Artistic perl-Sys-Guestfs LGPLv2+ perl-Sys-Hostname GPL+ or Artistic perl-Sys-Syslog GPL+ or Artistic perl-Sys-Virt GPLv2+ or Artistic perl-Term-ANSIColor GPL+ or Artistic perl-Term-Cap GPL+ or Artistic perl-Term-Complete GPL+ or Artistic perl-Term-ReadLine GPL+ or Artistic perl-Term-Table GPL+ or Artistic perl-TermReadKey (Copyright only) and (Artistic or GPL+) perl-Test GPL+ or Artistic perl-Test-Harness GPL+ or Artistic perl-Test-Simple (GPL+ or Artistic) and CC0 and Public Domain perl-tests GPL+ or Artistic perl-Text-Abbrev GPL+ or Artistic perl-Text-Balanced GPL+ or Artistic perl-Text-Diff (GPL+ or Artistic) and (GPLv2+ or Artistic) and MIT perl-Text-Glob GPL+ or Artistic perl-Text-ParseWords GPL+ or Artistic perl-Text-Soundex (Copyright only) and (GPL+ or Artistic) 
perl-Text-Tabs+Wrap TTWL perl-Text-Template GPL+ or Artistic perl-Text-Unidecode GPL+ or Artistic perl-Thread GPL+ or Artistic perl-Thread-Queue GPL+ or Artistic perl-Thread-Semaphore GPL+ or Artistic perl-threads GPL+ or Artistic perl-threads-shared GPL+ or Artistic perl-Tie GPL+ or Artistic perl-Tie-File GPLv2+ or Artistic perl-Tie-Memoize GPLv2+ or Artistic perl-Tie-RefHash GPL+ or Artistic perl-Time GPL+ or Artistic perl-Time-HiRes GPL+ or Artistic perl-Time-Local GPL+ or Artistic perl-Time-Piece (GPL+ or Artistic) and BSD perl-TimeDate GPL+ or Artistic perl-Tk (GPL+ or Artistic) and SWL perl-Try-Tiny MIT perl-Unicode-Collate (GPL+ or Artistic) and Unicode perl-Unicode-Normalize GPL+ or Artistic perl-Unicode-UCD GPL+ or Artistic perl-Unix-Syslog Artistic 2.0 perl-URI GPL+ or Artistic perl-User-pwent GPL+ or Artistic perl-utils GPL+ or Artistic perl-vars GPL+ or Artistic perl-version GPL+ or Artistic perl-vmsish GPL+ or Artistic perl-WWW-RobotRules GPL+ or Artistic perl-XML-Catalog GPL+ or Artistic perl-XML-LibXML (GPL+ or Artistic) and MIT perl-XML-NamespaceSupport GPL+ or Artistic perl-XML-Parser GPL+ or Artistic perl-XML-SAX GPL+ or Artistic perl-XML-SAX-Base GPL+ or Artistic perl-XML-Simple GPL+ or Artistic perl-XML-TokeParser GPL+ or Artistic perl-XML-XPath Artistic 2.0 and (GPL+ or Artistic) perl-YAML GPL+ or Artistic pesign GPLv2 pg_repack BSD pgaudit PostgreSQL php PHP and Zend and BSD and MIT and ASL 1.0 and NCSA php-bcmath PHP and LGPLv2+ php-cli PHP-3.01 AND Zend-2.0 AND BSD-2-Clause AND MIT AND Apache-1.0 AND NCSA AND PostgreSQL php-common PHP and BSD php-dba PHP php-dbg PHP and Zend and BSD and MIT and ASL 1.0 and NCSA php-devel PHP-3.01 AND Zend-2.0 AND BSD-2-Clause AND MIT AND Apache-1.0 AND NCSA AND BSL-1.0 php-embedded PHP and Zend and BSD and MIT and ASL 1.0 and NCSA php-enchant PHP-3.01 php-ffi PHP php-fpm PHP-3.01 AND Zend-2.0 AND BSD-2-Clause AND MIT AND Apache-1.0 AND NCSA AND BSL-1.0 php-gd PHP-3.01 php-gmp PHP-3.01 php-intl PHP-3.01 php-json PHP php-ldap PHP php-mbstring PHP and LGPLv2 and OpenLDAP php-mysqlnd PHP php-odbc PHP php-opcache PHP-3.01 php-pdo PHP php-pear BSD and LGPLv3+ php-pecl-apcu PHP php-pecl-apcu-devel PHP php-pecl-rrd BSD php-pecl-xdebug PHP php-pecl-xdebug3 Xdebug-1.03 php-pecl-zip PHP php-pgsql PHP php-process PHP php-recode PHP php-snmp PHP-3.01 php-soap PHP php-xml PHP php-xmlrpc PHP and BSD pidgin BSD and GPLv2+ and GPLv2 and LGPLv2+ and MIT pidgin-sipe GPLv2+ pinentry GPLv2+ pinentry-emacs GPLv2+ pinentry-gnome3 GPLv2+ pinentry-gtk GPLv2+ pinfo GPLv2 pipewire MIT pipewire-devel MIT pipewire-doc MIT pipewire-libs MIT pipewire-utils MIT pipewire0.2-devel LGPLv2+ pipewire0.2-libs LGPLv2+ pixman MIT pixman-devel MIT pki-servlet-engine ASL 2.0 platform-python Python platform-python-coverage ASL 2.0 and MIT and (MIT or GPL) platform-python-debug Python platform-python-devel Python plexus-cipher ASL 2.0 plexus-classworlds ASL 2.0 and Plexus plexus-containers-component-annotations ASL 2.0 and MIT and xpp plexus-interpolation ASL 2.0 and ASL 1.1 and MIT plexus-sec-dispatcher ASL 2.0 plexus-utils ASL 1.1 and ASL 2.0 and xpp and BSD and Public Domain plymouth GPLv2+ plymouth-core-libs GPLv2+ plymouth-graphics-libs GPLv2+ plymouth-plugin-fade-throbber GPLv2+ plymouth-plugin-label GPLv2+ plymouth-plugin-script GPLv2+ plymouth-plugin-space-flares GPLv2+ plymouth-plugin-throbgress GPLv2+ plymouth-plugin-two-step GPLv2+ plymouth-scripts GPLv2+ plymouth-system-theme GPLv2+ plymouth-theme-charge GPLv2+ plymouth-theme-fade-in GPLv2+ plymouth-theme-script 
GPLv2+ plymouth-theme-solar GPLv2+ plymouth-theme-spinfinity GPLv2+ plymouth-theme-spinner GPLv2+ pmdk-convert BSD pmempool BSD pmix BSD pmreorder BSD pnm2ppa GPLv2+ podman ASL 2.0 podman-catatonit ASL 2.0 and GPLv3+ podman-docker Apache-2.0 AND BSD-2-Clause AND BSD-3-Clause AND ISC AND MIT AND MPL-2.0 podman-gvproxy ASL 2.0 and GPLv3+ podman-plugins ASL 2.0 and GPLv3+ podman-remote Apache-2.0 AND BSD-2-Clause AND BSD-3-Clause AND ISC AND MIT AND MPL-2.0 podman-tests Apache-2.0 AND BSD-2-Clause AND BSD-3-Clause AND ISC AND MIT AND MPL-2.0 policycoreutils-gui GPLv2 policycoreutils-sandbox GPLv2 poppler (GPLv2 or GPLv3) and GPLv2+ and LGPLv2+ and MIT poppler-data BSD and GPLv2 poppler-glib (GPLv2 or GPLv3) and GPLv2+ and LGPLv2+ and MIT poppler-qt5 (GPLv2 or GPLv3) and GPLv2+ and LGPLv2+ and MIT poppler-utils (GPLv2 or GPLv3) and GPLv2+ and LGPLv2+ and MIT postfix-cdb (IBM and GPLv2+) or (EPL-2.0 and GPLv2+) postfix-ldap (IBM and GPLv2+) or (EPL-2.0 and GPLv2+) postfix-mysql (IBM and GPLv2+) or (EPL-2.0 and GPLv2+) postfix-pcre (IBM and GPLv2+) or (EPL-2.0 and GPLv2+) postfix-perl-scripts (IBM and GPLv2+) or (EPL-2.0 and GPLv2+) postfix-pgsql (IBM and GPLv2+) or (EPL-2.0 and GPLv2+) postfix-sqlite (IBM and GPLv2+) or (EPL-2.0 and GPLv2+) postgres-decoderbufs MIT postgresql PostgreSQL postgresql-contrib PostgreSQL postgresql-docs PostgreSQL postgresql-jdbc BSD postgresql-jdbc-javadoc BSD postgresql-odbc LGPLv2+ postgresql-odbc-tests LGPLv2+ postgresql-plperl PostgreSQL postgresql-plpython3 PostgreSQL postgresql-pltcl PostgreSQL postgresql-private-devel PostgreSQL postgresql-private-libs PostgreSQL postgresql-server PostgreSQL postgresql-server-devel PostgreSQL postgresql-static PostgreSQL postgresql-test PostgreSQL postgresql-test-rpm-macros PostgreSQL postgresql-upgrade PostgreSQL postgresql-upgrade-devel PostgreSQL potrace GPLv2+ powertop GPLv2 pptp GPLv2+ procmail GPLv2+ or Artistic prometheus-jmx-exporter ASL 2.0 prometheus-jmx-exporter-openjdk11 ASL 2.0 prometheus-jmx-exporter-openjdk17 ASL 2.0 prometheus-jmx-exporter-openjdk8 ASL 2.0 protobuf BSD protobuf-c BSD protobuf-c-compiler BSD protobuf-c-devel BSD protobuf-compiler BSD protobuf-lite BSD pulseaudio LGPLv2+ pulseaudio-libs LGPLv2+ pulseaudio-libs-devel LGPLv2+ pulseaudio-libs-glib2 LGPLv2+ pulseaudio-module-bluetooth LGPLv2+ pulseaudio-module-x11 LGPLv2+ pulseaudio-utils LGPLv2+ purple-sipe GPLv2+ pygobject2 LGPLv2+, MIT pygobject2-codegen LGPLv2+, MIT pygobject2-devel LGPLv2+, MIT pygobject2-doc LGPLv2+, MIT pygtk2 LGPLv2+ pygtk2-codegen LGPLv2+ pygtk2-devel LGPLv2+ pygtk2-doc LGPLv2+ pykickstart GPLv2 and MIT python-nose-docs LGPLv2+ and Public Domain python-podman-api LGPLv2 python-psycopg2-doc LGPLv3+ with exceptions python-pymongo-doc ASL 2.0 and MIT python-qt5-rpm-macros GPLv3 python-rpm-macros MIT python-sqlalchemy-doc MIT python-srpm-macros MIT python-virtualenv-doc MIT python2 Python python2-attrs MIT python2-babel BSD python2-backports Public Domain python2-backports-ssl_match_hostname Python python2-bson ASL 2.0 and MIT python2-cairo MPLv1.1 or LGPLv2 python2-cairo-devel MPLv1.1 or LGPLv2 python2-chardet LGPLv2 python2-coverage ASL 2.0 and MIT and (MIT or GPLv2) python2-Cython ASL 2.0 python2-debug Python python2-devel Python python2-dns MIT python2-docs Python python2-docs-info Python python2-docutils Public Domain and BSD and Python and GPLv3+ python2-funcsigs ASL 2.0 python2-idna BSD and Python and Unicode python2-ipaddress Python python2-jinja2 BSD python2-libs Python python2-lxml BSD python2-markupsafe BSD 
python2-mock BSD python2-nose LGPLv2+ and Public Domain python2-numpy BSD and Python python2-numpy-doc BSD and Python python2-numpy-f2py BSD and Python python2-pip MIT and Python and ASL 2.0 and BSD and ISC and LGPLv2 and MPLv2.0 and (ASL 2.0 or BSD) python2-pip-wheel MIT and Python and ASL 2.0 and BSD and ISC and LGPLv2 and MPLv2.0 and (ASL 2.0 or BSD) python2-pluggy MIT python2-psycopg2 LGPLv3+ with exceptions python2-psycopg2-debug LGPLv3+ with exceptions python2-psycopg2-tests LGPLv3+ with exceptions python2-py MIT and Public Domain python2-pygments BSD python2-pymongo ASL 2.0 and MIT python2-pymongo-gridfs ASL 2.0 and MIT python2-PyMySQL MIT python2-pysocks BSD python2-pytest MIT python2-pytest-mock MIT python2-pytz MIT python2-pyyaml MIT python2-requests ASL 2.0 python2-rpm-macros MIT python2-scipy BSD and Boost and Public Domain python2-scour ASL 2.0 python2-setuptools MIT python2-setuptools-wheel MIT python2-setuptools_scm MIT python2-six MIT python2-sqlalchemy MIT python2-test Python python2-tkinter Python python2-tools Python python2-urllib3 MIT python2-virtualenv MIT python2-wheel MIT python2-wheel-wheel MIT python3-abrt GPLv2+ python3-abrt-addon GPLv2+ python3-abrt-container-addon GPLv2+ python3-abrt-doc GPLv2+ python3-argcomplete ASL 2.0 python3-argh LGPLv3+ python3-attrs MIT python3-augeas LGPLv2+ python3-babel BSD python3-bcc ASL 2.0 python3-bind MPLv2.0 python3-bind9.16 MPLv2.0 python3-blivet LGPLv2+ python3-blockdev LGPLv2+ python3-brlapi LGPLv2+ python3-brotli MIT python3-bson ASL 2.0 and MIT python3-bytesize LGPLv2+ python3-cairo MPLv1.1 or LGPLv2 python3-clang NCSA python3-click BSD python3-coverage ASL 2.0 and MIT and (MIT or GPL) python3-cpio LGPLv2+ python3-createrepo_c GPLv2+ python3-criu GPLv2 python3-cups GPLv2+ python3-custodia GPLv3+ python3-dasbus LGPLv2+ python3-dbus-client-gen MPLv2.0 python3-dbus-python-client-gen MPLv2.0 python3-dbus-signature-pyparsing ASL 2.0 python3-distro ASL 2.0 python3-dnf-plugin-modulesync GPLv2+ python3-dnf-plugin-spacewalk GPLv2 python3-docs Python python3-docutils Public Domain and BSD and Python and GPLv3+ python3-enchant LGPLv2+ python3-evdev BSD python3-flask BSD python3-freeradius GPLv2+ and LGPLv2+ python3-gevent MIT python3-gobject LGPLv2+ and MIT python3-gobject-base LGPLv2+ and MIT python3-greenlet MIT python3-gssapi ISC python3-hivex LGPLv2 python3-html5lib MIT python3-humanize MIT python3-hwdata GPLv2 python3-idle Python python3-idm-pki GPLv2 and LGPLv2 python3-into-dbus-python ASL 2.0 python3-ipaclient GPLv3+ python3-ipalib GPLv3+ python3-ipaserver GPLv3+ python3-ipatests GPLv3+ python3-itsdangerous BSD python3-jabberpy LGPLv2+ python3-jinja2 BSD python3-jmespath MIT python3-jsonpatch BSD python3-jsonpointer BSD python3-jsonschema MIT python3-justbases LGPLv2+ python3-justbytes LGPLv2+ python3-jwcrypto LGPLv3+ python3-kdcproxy MIT python3-keycloak-httpd-client-install GPLv3 python3-kickstart GPLv2 and MIT python3-koan GPLv2+ python3-langtable GPLv3+ python3-lasso GPLv2+ python3-ldap Python python3-leapp ASL 2.0 python3-lib389 GPLv3+ and (ASL 2.0 or MIT) python3-libguestfs LGPLv2+ python3-libmodulemd MIT python3-libmount LGPLv2+ python3-libnbd LGPLv2+ python3-libnmstate LGPLv2+ python3-libreport GPLv2+ python3-libvirt LGPLv2+ python3-libvoikko GPLv2+ python3-lit NCSA python3-lldb NCSA python3-louis LGPLv3+ python3-lxml BSD python3-mako (MIT and Python) and (BSD or GPLv2) python3-markupsafe BSD python3-meh GPLv2+ python3-meh-gui GPLv2+ python3-mod_wsgi ASL 2.0 python3-netaddr BSD python3-netifaces MIT python3-networkx BSD 
python3-networkx-core BSD python3-newt LGPLv2 python3-nispor ASL 2.0 python3-nose LGPLv2+ and Public Domain python3-ntplib MIT python3-numpy BSD and Python python3-numpy-f2py BSD and Python python3-ordered-set MIT python3-osa-common GPLv2 python3-osad GPLv2 python3-osbuild Apache-2.0 python3-pcp GPLv2+ python3-pexpect MIT python3-pid ASL 2.0 python3-pillow MIT python3-pip MIT and Python and ASL 2.0 and BSD and ISC and LGPLv2 and MPLv2.0 and (ASL 2.0 or BSD) python3-pluggy MIT python3-podman ASL 2.0 python3-prettytable BSD python3-productmd LGPLv2+ python3-protobuf BSD python3-psutil BSD python3-psycopg2 LGPLv3+ with exceptions python3-ptyprocess ISC python3-py MIT and Public Domain python3-pyasn1 BSD python3-pyasn1-modules BSD python3-pyatspi LGPLv2 and GPLv2 python3-pycurl LGPLv2+ or MIT python3-pydbus LGPLv2+ python3-pyghmi ASL 2.0 python3-pygments BSD python3-pymongo ASL 2.0 and MIT python3-pymongo-gridfs ASL 2.0 and MIT python3-PyMySQL MIT python3-pyodbc MIT python3-pyOpenSSL ASL 2.0 python3-pyparted GPLv2+ python3-pyqt5-sip GPLv2 or GPLv3 and (GPLv3+ with exceptions) python3-pyserial Python python3-pytest MIT python3-pytoml MIT python3-pytz MIT python3-pyusb BSD python3-pyxdg LGPLv2 python3-qrcode BSD python3-qrcode-core BSD python3-qt5 GPLv3 python3-qt5-base GPLv3 python3-reportlab BSD python3-requests-file ASL 2.0 python3-requests-ftp ASL 2.0 python3-rhn-check GPLv2 python3-rhn-client-tools GPLv2 python3-rhn-setup GPLv2 python3-rhn-setup-gnome GPLv2 python3-rhn-virtualization-common GPLv2 python3-rhn-virtualization-host GPLv2 python3-rhncfg GPLv2 python3-rhncfg-actions GPLv2 python3-rhncfg-client GPLv2 python3-rhncfg-management GPLv2 python3-rhnlib GPLv2 python3-rhnpush GPLv2 python3-rpm-generators GPLv2+ python3-rpm-macros MIT python3-rpmfluff GPLv2+ python3-sanlock GPLv2 and GPLv2+ and LGPLv2+ python3-scipy BSD and LGPLv2+ python3-scour ASL 2.0 python3-semantic_version BSD python3-simpleline GPLv2+ python3-spacewalk-abrt GPLv2 python3-spacewalk-backend-libs GPLv2 python3-spacewalk-koan GPLv2 python3-spacewalk-oscap GPLv2 python3-spacewalk-usix GPLv2 python3-speechd GPLv2+ python3-sqlalchemy MIT python3-subversion ASL 2.0 python3-suds LGPLv3+ python3-sushy ASL 2.0 python3-tbb ASL 2.0 python3-test Python python3-tkinter Python python3-tomli MIT python3-tracer GPL-2.0-or-later python3-unbound BSD python3-virtualenv MIT python3-webencodings BSD python3-werkzeug BSD python3-wheel MIT python3-wheel-wheel MIT python3-wx-siplib GPLv2 or GPLv3 and (GPLv3+ with exceptions) python3-yubico BSD python3.11 Python python3.11-cffi MIT python3.11-charset-normalizer MIT python3.11-cryptography (ASL 2.0 or BSD) and Python and BSD and MIT and (MIT or ASL 2.0) and ASL 2.0 python3.11-devel Python python3.11-idna BSD and Python and Unicode python3.11-libs Python python3.11-lxml BSD and MIT python3.11-mod_wsgi ASL 2.0 python3.11-numpy BSD and Python and ASL 2.0 python3.11-numpy-f2py BSD and Python and ASL 2.0 python3.11-pip MIT and Python and ASL 2.0 and BSD and ISC and LGPLv2 and MPLv2.0 and (ASL 2.0 or BSD) python3.11-pip-wheel MIT and Python and ASL 2.0 and BSD and ISC and LGPLv2 and MPLv2.0 and (ASL 2.0 or BSD) python3.11-ply BSD python3.11-psycopg2 LGPLv3+ with exceptions python3.11-pycparser BSD python3.11-PyMySQL MIT python3.11-pysocks BSD python3.11-pyyaml MIT python3.11-requests ASL 2.0 python3.11-rpm-macros MIT python3.11-scipy BSD and Boost and Public Domain python3.11-setuptools MIT and ASL 2.0 and (BSD or ASL 2.0) and Python python3.11-setuptools-wheel MIT and ASL 2.0 and (BSD or ASL 2.0) 
and Python python3.11-six MIT python3.11-tkinter Python python3.11-urllib3 MIT python3.11-wheel MIT and (ASL 2.0 or BSD) python3.12 Python python3.12-cffi MIT and Python python3.12-charset-normalizer MIT python3.12-cryptography (ASL 2.0 or BSD) and Python and BSD and MIT and ASL 2.0 and (MIT or ASL 2.0) and Unicode python3.12-devel Python python3.12-idna BSD python3.12-libs Python and CC0 python3.12-lxml BSD and MIT python3.12-mod_wsgi ASL 2.0 and CC-BY python3.12-numpy BSD and MIT and ASL 2.0 and zlib python3.12-numpy-f2py BSD and MIT and ASL 2.0 and zlib python3.12-pip MIT and Python and ASL 2.0 and BSD and ISC and LGPLv2 and MPLv2.0 and (ASL 2.0 or BSD) python3.12-pip-wheel MIT and Python and ASL 2.0 and BSD and ISC and LGPLv2 and MPLv2.0 and (ASL 2.0 or BSD) python3.12-ply BSD python3.12-psycopg2 LGPLv3+ with exceptions python3.12-pycparser BSD python3.12-PyMySQL MIT python3.12-pyyaml MIT python3.12-requests ASL 2.0 python3.12-rpm-macros MIT python3.12-scipy BSD and MIT and Boost and Qhull and Public Domain python3.12-setuptools MIT and ASL 2.0 and (BSD or ASL 2.0) and Python python3.12-tkinter Python python3.12-urllib3 MIT python3.12-wheel MIT and (ASL 2.0 or BSD) python36 Python python36-debug Python python36-devel Python python36-rpm-macros Python python38 Python python38-asn1crypto MIT python38-babel BSD python38-cffi MIT python38-chardet LGPLv2 python38-cryptography ASL 2.0 or BSD python38-Cython ASL 2.0 python38-debug Python python38-devel Python python38-idle Python python38-idna BSD and Python and Unicode python38-jinja2 BSD python38-libs Python python38-lxml BSD python38-markupsafe BSD python38-mod_wsgi ASL 2.0 python38-numpy BSD python38-numpy-doc BSD and Python and ASL 2.0 python38-numpy-f2py BSD and Python and ASL 2.0 python38-pip MIT and Python and ASL 2.0 and BSD and ISC and LGPLv2 and MPLv2.0 and (ASL 2.0 or BSD) python38-pip-wheel MIT and Python and ASL 2.0 and BSD and ISC and LGPLv2 and MPLv2.0 and (ASL 2.0 or BSD) python38-ply BSD python38-psutil BSD python38-psycopg2 LGPLv3+ with exceptions python38-psycopg2-doc LGPLv3+ with exceptions python38-psycopg2-tests LGPLv3+ with exceptions python38-pycparser BSD python38-PyMySQL MIT python38-pysocks BSD python38-pytz MIT python38-pyyaml MIT python38-requests ASL 2.0 python38-resolvelib ISC python38-rpm-macros Python python38-scipy BSD and Boost and Public Domain python38-setuptools MIT and (BSD or ASL 2.0) python38-setuptools-wheel MIT and (BSD or ASL 2.0) python38-six MIT python38-test Python python38-tkinter Python python38-urllib3 MIT python38-wheel MIT python38-wheel-wheel MIT python39 Python python39-cffi MIT python39-chardet LGPLv2 python39-cryptography ASL 2.0 or BSD python39-devel Python python39-idle Python python39-idna BSD and Python and Unicode python39-libs Python python39-lxml BSD python39-mod_wsgi ASL 2.0 python39-numpy BSD python39-numpy-doc BSD and Python and ASL 2.0 python39-numpy-f2py BSD and Python and ASL 2.0 python39-pip MIT and Python and ASL 2.0 and BSD and ISC and LGPLv2 and MPLv2.0 and (ASL 2.0 or BSD) python39-pip-wheel MIT and Python and ASL 2.0 and BSD and ISC and LGPLv2 and MPLv2.0 and (ASL 2.0 or BSD) python39-ply BSD python39-psutil BSD python39-psycopg2 LGPL-3.0-or-later WITH openvpn-openssl-exception python39-psycopg2-doc LGPL-3.0-or-later WITH openvpn-openssl-exception python39-psycopg2-tests LGPL-3.0-or-later WITH openvpn-openssl-exception python39-pycparser BSD python39-PyMySQL MIT python39-pysocks BSD python39-pyyaml MIT python39-requests ASL 2.0 python39-rpm-macros MIT python39-scipy 
BSD and Boost and Public Domain python39-setuptools MIT and (BSD or ASL 2.0) python39-setuptools-wheel MIT and (BSD or ASL 2.0) python39-six MIT python39-test Python python39-tkinter Python python39-toml MIT python39-urllib3 MIT python39-wheel MIT and (ASL 2.0 or BSD) python39-wheel-wheel MIT and (ASL 2.0 or BSD) qatengine BSD-3-Clause AND OpenSSL qatlib BSD and (BSD or GPLv2) qatlib-service BSD and (BSD or GPLv2) qatzip BSD-3-Clause qatzip-libs BSD-3-Clause qemu-guest-agent GPLv2 and GPLv2+ and CC-BY qemu-img GPLv2 and GPLv2+ and CC-BY qemu-kvm GPLv2 and GPLv2+ and CC-BY qemu-kvm-block-curl GPLv2 and GPLv2+ and CC-BY qemu-kvm-block-gluster GPLv2 and GPLv2+ and CC-BY qemu-kvm-block-iscsi GPLv2 and GPLv2+ and CC-BY qemu-kvm-block-rbd GPLv2 and GPLv2+ and CC-BY qemu-kvm-block-ssh GPLv2 and GPLv2+ and CC-BY qemu-kvm-common GPLv2 and GPLv2+ and CC-BY qemu-kvm-core GPLv2 and GPLv2+ and CC-BY qemu-kvm-docs GPLv2 and GPLv2+ and CC-BY qemu-kvm-hw-usbredir GPLv2 and GPLv2+ and CC-BY qemu-kvm-ui-opengl GPLv2 and GPLv2+ and CC-BY qemu-kvm-ui-spice GPLv2 and GPLv2+ and CC-BY qgnomeplatform LGPLv2+ qgpgme LGPLv2+ and GPLv3+ qpdf (Artistic 2.0 or ASL 2.0) and MIT qpdf-doc (Artistic 2.0 or ASL 2.0) and MIT qpdf-libs (Artistic 2.0 or ASL 2.0) and MIT qperf GPLv2 or BSD qrencode LGPLv2+ qrencode-libs LGPLv2+ qt5-assistant LGPLv3 or LGPLv2 qt5-designer LGPLv3 or LGPLv2 qt5-doctools LGPLv3 or LGPLv2 qt5-linguist LGPLv3 or LGPLv2 qt5-qdbusviewer LGPLv3 or LGPLv2 qt5-qt3d LGPLv2 with exceptions or GPLv3 with exceptions qt5-qt3d-devel LGPLv2 with exceptions or GPLv3 with exceptions qt5-qt3d-doc GFDL qt5-qt3d-examples LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtbase LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtbase-common LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtbase-devel LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtbase-doc GFDL qt5-qtbase-examples LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtbase-gui LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtbase-mysql LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtbase-odbc LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtbase-postgresql LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtbase-private-devel LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtcanvas3d LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtcanvas3d-examples LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtconnectivity LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtconnectivity-devel LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtconnectivity-doc GFDL qt5-qtconnectivity-examples LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtdeclarative LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtdeclarative-devel LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtdeclarative-doc GFDL qt5-qtdeclarative-examples LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtdoc GFDL qt5-qtgraphicaleffects LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtgraphicaleffects-doc GFDL qt5-qtimageformats LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtimageformats-doc GFDL qt5-qtlocation LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtlocation-devel LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtlocation-doc GFDL qt5-qtlocation-examples LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtmultimedia LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtmultimedia-devel LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtmultimedia-doc GFDL qt5-qtmultimedia-examples LGPLv2 with exceptions 
or GPLv3 with exceptions qt5-qtquickcontrols LGPLv2 or LGPLv3 and GFDL qt5-qtquickcontrols-doc GFDL qt5-qtquickcontrols-examples LGPLv2 or LGPLv3 and GFDL qt5-qtquickcontrols2 GPLv2+ or LGPLv3 and GFDL qt5-qtquickcontrols2-doc GFDL qt5-qtquickcontrols2-examples GPLv2+ or LGPLv3 and GFDL qt5-qtscript LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtscript-devel LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtscript-doc GFDL qt5-qtscript-examples LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtsensors LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtsensors-devel LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtsensors-doc GFDL qt5-qtsensors-examples LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtserialbus LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtserialbus-doc GFDL qt5-qtserialbus-examples LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtserialport LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtserialport-devel LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtserialport-doc GFDL qt5-qtserialport-examples LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtsvg LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtsvg-devel LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtsvg-doc GFDL qt5-qtsvg-examples LGPLv2 with exceptions or GPLv3 with exceptions qt5-qttools LGPLv3 or LGPLv2 qt5-qttools-common LGPLv3 or LGPLv2 qt5-qttools-devel LGPLv3 or LGPLv2 qt5-qttools-doc GFDL qt5-qttools-examples LGPLv3 or LGPLv2 qt5-qttools-libs-designer LGPLv3 or LGPLv2 qt5-qttools-libs-designercomponents LGPLv3 or LGPLv2 qt5-qttools-libs-help LGPLv3 or LGPLv2 qt5-qttranslations LGPLv2 with exceptions or GPLv3 with exceptions and GFDL qt5-qtwayland LGPLv3 qt5-qtwayland-doc GFDL qt5-qtwayland-examples LGPLv3 qt5-qtwebchannel LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtwebchannel-devel LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtwebchannel-doc GFDL qt5-qtwebchannel-examples LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtwebsockets LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtwebsockets-devel LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtwebsockets-doc GFDL qt5-qtwebsockets-examples LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtx11extras LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtx11extras-devel LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtx11extras-doc GFDL qt5-qtxmlpatterns LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtxmlpatterns-devel LGPLv2 with exceptions or GPLv3 with exceptions qt5-qtxmlpatterns-doc GFDL qt5-qtxmlpatterns-examples LGPLv2 with exceptions or GPLv3 with exceptions qt5-rpm-macros GPLv3 qt5-srpm-macros GPLv3 radvd BSD with advertising raptor2 GPLv2+ or LGPLv2+ or ASL 2.0 rarian LGPLv2+ rarian-compat GPLv2+ rasqal LGPLv2+ or ASL 2.0 rear GPLv3 recode GPLv2+ redfish-finder GPLv2 redhat-backgrounds Licensed only for approved usage, see COPYING for details. redhat-cloud-client-configuration GPLv2+ redhat-logos-ipa Licensed only for approved usage, see COPYING for details. 
redhat-lsb GPLv2 redhat-lsb-core GPLv2 redhat-lsb-cxx GPLv2 redhat-lsb-desktop GPLv2 redhat-lsb-languages GPLv2 redhat-lsb-printing GPLv2 redhat-lsb-submod-multimedia GPLv2 redhat-lsb-submod-security GPLv2 redhat-menus GPL+ redhat-rpm-config GPL+ redhat-support-lib-python Apache-2.0 redhat-support-tool Apache-2.0 redis BSD and MIT redis-devel BSD and MIT redis-doc CC-BY-SA redland LGPLv2+ or ASL 2.0 relaxngDatatype BSD rest LGPLv2 resteasy ASL 2.0 resteasy-javadoc ASL 2.0 rhc GPLv3 rhc-worker-playbook GPLv2+ rhel-system-roles GPLv3+ and MIT and BSD and Python rhn-check GPLv2 rhn-client-tools GPLv2 rhn-custom-info GPLv2 rhn-setup GPLv2 rhn-setup-gnome GPLv2 rhn-virtualization-host GPLv2 rhncfg GPLv2 rhncfg-actions GPLv2 rhncfg-client GPLv2 rhncfg-management GPLv2 rhnlib GPLv2 rhnpush GPLv2 rhnsd GPLv2 rhsm-gtk GPLv2 rhythmbox GPLv2+ with exceptions and GFDL rig GPLv2 rpm-build GPLv2+ rpm-mpi-hooks MIT rpm-ostree LGPLv2+ rpm-ostree-libs LGPLv2+ rpm-plugin-fapolicyd GPLv2+ rpmdevtools GPLv2+ and GPLv2 rpmemd BSD rpmlint GPLv2 rrdtool GPLv2+ with exceptions rrdtool-perl GPLv2+ with exceptions rshim GPLv2 rsyslog (GPLv3+ and ASL 2.0) rsyslog-crypto (GPLv3+ and ASL 2.0) rsyslog-doc (GPLv3+ and ASL 2.0) rsyslog-elasticsearch (GPLv3+ and ASL 2.0) rsyslog-gnutls (GPLv3+ and ASL 2.0) rsyslog-gssapi (GPLv3+ and ASL 2.0) rsyslog-kafka (GPLv3+ and ASL 2.0) rsyslog-mmaudit (GPLv3+ and ASL 2.0) rsyslog-mmfields (GPLv3+ and ASL 2.0) rsyslog-mmjsonparse (GPLv3+ and ASL 2.0) rsyslog-mmkubernetes (GPLv3+ and ASL 2.0) rsyslog-mmnormalize (GPLv3+ and ASL 2.0) rsyslog-mmsnmptrapd (GPLv3+ and ASL 2.0) rsyslog-mysql (GPLv3+ and ASL 2.0) rsyslog-omamqp1 (GPLv3+ and ASL 2.0) rsyslog-openssl (GPLv3+ and ASL 2.0) rsyslog-pgsql (GPLv3+ and ASL 2.0) rsyslog-relp (GPLv3+ and ASL 2.0) rsyslog-snmp (GPLv3+ and ASL 2.0) rsyslog-udpspoof (GPLv3+ and ASL 2.0) rt-tests GPLv2 rtkit GPLv3+ and BSD rtla GPLv2 ruby (Ruby or BSD) and Public Domain and MIT and CC0 and zlib and UCD ruby-bundled-gems (Ruby OR BSD-2-Clause) AND BSD-3-Clause AND ISC AND Public Domain AND MIT and CC0 AND zlib AND Unicode-DFS-2015 ruby-default-gems (Ruby or BSD) and Public Domain and MIT and CC0 and zlib and UCD ruby-devel (Ruby or BSD) and Public Domain and MIT and CC0 and zlib and UCD ruby-doc (Ruby OR BSD-2-Clause) AND BSD-3-Clause AND ISC AND Public Domain AND MIT and CC0 AND zlib AND Unicode-DFS-2015 ruby-hivex LGPLv2 ruby-irb (Ruby or BSD) and Public Domain and MIT and CC0 and zlib and UCD ruby-libguestfs LGPLv2+ ruby-libs Ruby or BSD rubygem-abrt MIT rubygem-abrt-doc MIT rubygem-bigdecimal Ruby OR BSD-2-Clause rubygem-bson ASL 2.0 rubygem-bson-doc ASL 2.0 rubygem-bundler MIT rubygem-bundler-doc MIT rubygem-did_you_mean MIT rubygem-io-console (Ruby or BSD) and Public Domain and MIT and CC0 and zlib and UCD rubygem-irb (Ruby or BSD) and Public Domain and MIT and CC0 and zlib and UCD rubygem-json (Ruby or GPLv2) and UCD rubygem-minitest MIT rubygem-mongo ASL 2.0 rubygem-mongo-doc ASL 2.0 rubygem-mysql2 MIT rubygem-mysql2-doc MIT rubygem-net-telnet (Ruby or BSD) and Public Domain and MIT and CC0 and zlib and UCD rubygem-openssl Ruby or BSD rubygem-pg (BSD or Ruby) and PostgreSQL rubygem-pg-doc (BSD-2-Clause OR Ruby) AND PostgreSQL rubygem-power_assert Ruby or BSD rubygem-psych MIT rubygem-racc Ruby OR BSD-2-Clause rubygem-rake MIT rubygem-rbs Ruby or BSD rubygem-rdoc GPLv2 and Ruby and MIT and OFL rubygem-rexml BSD rubygem-rss BSD rubygem-test-unit (Ruby or BSD) and (Ruby or BSD or Python) and (Ruby or BSD or LGPLv2+) rubygem-typeprof MIT 
rubygem-xmlrpc Ruby or BSD rubygems Ruby or MIT rubygems-devel Ruby or MIT runc ASL 2.0 rust (Apache-2.0 OR MIT) AND (Artistic-2.0 AND BSD-3-Clause AND ISC AND MIT AND MPL-2.0 AND Unicode-DFS-2016) rust-analyzer (Apache-2.0 OR MIT) AND (Artistic-2.0 AND BSD-3-Clause AND ISC AND MIT AND MPL-2.0 AND Unicode-DFS-2016) rust-debugger-common (Apache-2.0 OR MIT) AND (Artistic-2.0 AND BSD-3-Clause AND ISC AND MIT AND MPL-2.0 AND Unicode-DFS-2016) rust-doc (Apache-2.0 OR MIT) AND (Artistic-2.0 AND BSD-3-Clause AND ISC AND MIT AND MPL-2.0 AND Unicode-DFS-2016) rust-gdb (Apache-2.0 OR MIT) AND (Artistic-2.0 AND BSD-3-Clause AND ISC AND MIT AND MPL-2.0 AND Unicode-DFS-2016) rust-lldb (Apache-2.0 OR MIT) AND (Artistic-2.0 AND BSD-3-Clause AND ISC AND MIT AND MPL-2.0 AND Unicode-DFS-2016) rust-src (Apache-2.0 OR MIT) AND (Artistic-2.0 AND BSD-3-Clause AND ISC AND MIT AND MPL-2.0 AND Unicode-DFS-2016) rust-srpm-macros MIT rust-std-static (Apache-2.0 OR MIT) AND (Artistic-2.0 AND BSD-3-Clause AND ISC AND MIT AND MPL-2.0 AND Unicode-DFS-2016) rust-std-static-wasm32-unknown-unknown (Apache-2.0 OR MIT) AND (Artistic-2.0 AND BSD-3-Clause AND ISC AND MIT AND MPL-2.0 AND Unicode-DFS-2016) rust-std-static-wasm32-wasi (Apache-2.0 OR MIT) AND (Artistic-2.0 AND BSD-3-Clause AND ISC AND MIT AND MPL-2.0 AND Unicode-DFS-2016) rust-toolset (Apache-2.0 OR MIT) AND (Artistic-2.0 AND BSD-3-Clause AND ISC AND MIT AND MPL-2.0 AND Unicode-DFS-2016) rustfmt (Apache-2.0 OR MIT) AND (Artistic-2.0 AND BSD-3-Clause AND ISC AND MIT AND MPL-2.0 AND Unicode-DFS-2016) s390utils MIT s390utils-chreipl-fcp-mpath MIT s390utils-cmsfs GPLv2 s390utils-cmsfs-fuse MIT s390utils-cpacfstatsd MIT s390utils-cpuplugd MIT s390utils-hmcdrvfs MIT s390utils-iucvterm MIT s390utils-mon_statd MIT s390utils-osasnmpd MIT s390utils-se-data MIT s390utils-zdsfs MIT s390utils-ziomon MIT saab-fonts GPLv2+ with exceptions sac W3C samba-vfs-iouring GPL-3.0-or-later AND LGPL-3.0-or-later samyak-devanagari-fonts GPLv3+ with exceptions samyak-fonts-common GPLv3+ with exceptions samyak-gujarati-fonts GPLv3+ with exceptions samyak-malayalam-fonts GPLv3+ with exceptions samyak-odia-fonts GPLv3+ with exceptions samyak-tamil-fonts GPLv3+ with exceptions sane-backends GPLv2+ and GPLv2+ with exceptions and Public Domain and IJG and LGPLv2+ and MIT sane-backends-daemon GPLv2+ and GPLv2+ with exceptions and Public Domain and IJG and LGPLv2+ and MIT sane-backends-devel GPLv2+ and GPLv2+ with exceptions and Public Domain and IJG and LGPLv2+ and MIT sane-backends-doc GPLv2+ and GPLv2+ with exceptions and Public Domain and IJG and LGPLv2+ and MIT sane-backends-drivers-cameras GPLv2+ and GPLv2+ with exceptions and Public Domain and IJG and LGPLv2+ and MIT sane-backends-drivers-scanners GPLv2+ and GPLv2+ with exceptions and Public Domain and IJG and LGPLv2+ and MIT sane-backends-libs GPLv2+ and GPLv2+ with exceptions and Public Domain and IJG and LGPLv2+ and MIT sane-frontends GPLv2+ and LGPLv2+ and GPLv2+ with exceptions sanlk-reset GPLv2 and GPLv2+ and LGPLv2+ sanlock GPLv2 and GPLv2+ and LGPLv2+ sanlock-lib GPLv2 and GPLv2+ and LGPLv2+ sassist MIT sat4j EPL-1.0 or LGPLv2 satyr GPLv2+ sbc GPLv2 and LGPLv2+ sbd GPL-2.0-or-later sblim-cmpi-base EPL-1.0 sblim-gather EPL sblim-indication_helper EPL-1.0 sblim-sfcb EPL-1.0 sblim-sfcc EPL-1.0 sblim-sfcCommon EPL sblim-wbemcli EPL-1.0 scala BSD and CC0 and Public Domain scala-apidoc BSD and CC0 and Public Domain scala-swing BSD and CC0 and Public Domain scap-security-guide BSD-3-Clause scap-security-guide-doc BSD-3-Clause scap-workbench 
GPLv3+ scl-utils GPLv2+ scl-utils-build GPLv2+ scrub GPLv2+ SDL LGPLv2+ SDL-devel LGPLv2+ seabios LGPLv3 seabios-bin LGPLv3 seahorse GPLv2+ and LGPLv2+ seavgabios-bin LGPLv3 sendmail Sendmail sendmail-cf Sendmail sendmail-doc Sendmail sendmail-milter Sendmail setools GPLv2 setools-console-analyses GPLv2 setools-gui GPLv2 setroubleshoot GPLv2+ setroubleshoot-plugins GPLv2+ setroubleshoot-server GPLv2+ sevctl ASL 2.0 sgabios ASL 2.0 sgabios-bin ASL 2.0 si-units BSD si-units-javadoc BSD sil-abyssinica-fonts OFL sil-nuosu-fonts OFL sil-padauk-book-fonts OFL sil-padauk-fonts OFL sil-scheherazade-fonts OFL sip GPLv2 or GPLv3 and (GPLv3+ with exceptions) sisu EPL-1.0 and BSD sisu-inject EPL-1.0 and BSD sisu-plexus EPL-1.0 and BSD skkdic GPLv2+ skopeo ASL 2.0 skopeo-tests ASL 2.0 slang-devel GPLv2+ slapi-nis GPLv3 slf4j MIT and ASL 2.0 slf4j-jdk14 MIT and ASL 2.0 slirp4netns GPLv2 SLOF BSD smc-anjalioldlipi-fonts OFL smc-dyuthi-fonts GPLv3+ with exceptions smc-fonts-common GPLv3+ with exceptions and GPLv2+ with exceptions and GPLv2+ and GPLv2 and GPL+ smc-kalyani-fonts GPLv3+ with exceptions smc-meera-fonts GPLv2+ with exceptions smc-rachana-fonts GPLv2+ smc-raghumalayalam-fonts GPLv2 smc-suruma-fonts GPLv3 with exceptions snactor ASL 2.0 socat GPLv2 softhsm BSD softhsm-devel BSD sos-collector GPLv2 sound-theme-freedesktop GPLv2+ and LGPLv2+ and CC-BY-SA and CC-BY soundtouch LGPLv2+ source-highlight GPLv3+ spacewalk-abrt GPLv2 spacewalk-client-cert GPLv2 spacewalk-koan GPLv2 spacewalk-oscap GPLv2 spacewalk-remote-utils GPLv2 spacewalk-usix GPLv2 spamassassin ASL 2.0 speech-dispatcher GPLv2+ and GPLv2 speech-dispatcher-espeak-ng GPLv2+ and GPLv2 speex BSD speexdsp BSD spice-client-win-x64 GPLv2+ spice-client-win-x86 GPLv2+ spice-glib LGPLv2+ spice-glib-devel LGPLv2+ spice-gtk LGPLv2+ spice-gtk-tools LGPLv2+ spice-gtk3 LGPLv2+ spice-gtk3-devel LGPLv2+ spice-gtk3-vala LGPLv2+ spice-protocol BSD and LGPLv2+ spice-qxl-wddm-dod ASL 2.0 spice-server LGPLv2+ spice-streaming-agent ASL 2.0 spice-vdagent GPLv3+ spice-vdagent-win-x64 GPLv2+ spice-vdagent-win-x86 GPLv2+ spirv-tools ASL 2.0 spirv-tools-libs ASL 2.0 splix GPLv2 squid GPLv2+ and (LGPLv2+ and MIT and BSD and Public Domain) sscg GPLv3+ with exceptions sshpass GPLv2 sssd-idp GPLv3+ stalld GPLv2 startup-notification LGPLv2 startup-notification-devel LGPLv2 stax-ex CDDL-1.1 or GPLv2 stix-fonts OFL stix-math-fonts OFL stratis-cli ASL 2.0 stratisd MPLv2.0 stratisd-dracut MPLv2.0 stress-ng GPLv2+ subscription-manager-initial-setup-addon GPLv2 subscription-manager-migration-data CC0 subversion ASL 2.0 subversion-devel ASL 2.0 subversion-gnome ASL 2.0 subversion-javahl ASL 2.0 subversion-libs ASL 2.0 subversion-perl ASL 2.0 subversion-tools ASL 2.0 suitesparse (LGPLv2+ or BSD) and LGPLv2+ and GPLv2+ supermin GPLv2+ supermin-devel GPLv2+ sushi GPLv2+ with exceptions swig GPLv3+ and BSD swig-doc BSD swig-gdb BSD switcheroo-control GPLv3 swtpm BSD swtpm-devel BSD swtpm-libs BSD swtpm-tools BSD swtpm-tools-pkcs11 BSD synce4l GPL-2.0-or-later sysfsutils GPLv2 sysstat GPLv2+ system-config-printer-libs GPLv2+ system-config-printer-udev GPLv2+ systemtap GPLv2+ systemtap-client GPLv2+ systemtap-devel GPLv2+ systemtap-exporter GPLv2+ systemtap-initscript GPLv2+ systemtap-runtime GPLv2+ systemtap-runtime-java GPLv2+ systemtap-runtime-python3 GPLv2+ systemtap-runtime-virtguest GPLv2+ systemtap-runtime-virthost GPLv2+ systemtap-sdt-devel GPLv2+ and Public Domain systemtap-server GPLv2+ taglib LGPLv2 or MPLv1.1 tagsoup ASL 2.0 and (GPLv2+ or AFL) tang GPLv3+ targetcli 
ASL 2.0 tbb ASL 2.0 tbb-devel ASL 2.0 tbb-doc ASL 2.0 tcl TCL tcl-brlapi LGPLv2+ tcpdump BSD with advertising tcsh BSD teckit LGPLv2+ or CPL telnet BSD telnet-server BSD tesseract ASL 2.0 tex-fonts-hebrew GPL+ and LPPL texlive Artistic 2.0 and GPLv2 and GPLv2+ and LGPLv2+ and LPPL and MIT and Public Domain and UCD and Utopia texlive-adjustbox LPPL texlive-ae LPPL texlive-algorithms LGPLv2+ texlive-amscls LPPL texlive-amsfonts OFL texlive-amsmath LPPL texlive-anyfontsize LPPL texlive-anysize Public Domain texlive-appendix LPPL texlive-arabxetex LPPL texlive-arphic Freely redistributable without restriction texlive-attachfile LPPL texlive-avantgar GPL+ texlive-awesomebox WTFPL texlive-babel LPPL texlive-babel-english LPPL texlive-babelbib LPPL texlive-base Artistic 2.0 and GPLv2 and GPLv2+ and LGPLv2+ and LPPL and MIT and Public Domain and UCD and Utopia texlive-beamer GPL+ texlive-bera Bitstream Vera texlive-beton LPPL texlive-bibtex Knuth texlive-bibtopic GPL+ texlive-bidi LPPL texlive-bigfoot GPLv2+ texlive-bookman GPL+ texlive-booktabs GPL+ texlive-breakurl LPPL texlive-breqn LPPL texlive-capt-of LPPL texlive-caption LPPL texlive-carlisle LPPL texlive-changebar LPPL texlive-changepage LPPL texlive-charter Copyright only texlive-chngcntr LPPL texlive-cite Copyright only texlive-cjk GPL+ texlive-classpack LPPL texlive-cm Knuth texlive-cm-lgc GPL+ texlive-cm-super GPL+ texlive-cmap LPPL texlive-cmextra LPPL texlive-cns LPPL texlive-collectbox LPPL texlive-collection-basic Artistic 2.0 and GPLv2 and GPLv2+ and LGPLv2+ and LPPL and MIT and Public Domain and UCD and Utopia texlive-collection-fontsrecommended Artistic 2.0 and GPLv2 and GPLv2+ and LGPLv2+ and LPPL and MIT and Public Domain and UCD and Utopia texlive-collection-htmlxml Artistic 2.0 and GPLv2 and GPLv2+ and LGPLv2+ and LPPL and MIT and Public Domain and UCD and Utopia texlive-collection-latex Artistic 2.0 and GPLv2 and GPLv2+ and LGPLv2+ and LPPL and MIT and Public Domain and UCD and Utopia texlive-collection-latexrecommended Artistic 2.0 and GPLv2 and GPLv2+ and LGPLv2+ and LPPL and MIT and Public Domain and UCD and Utopia texlive-collection-xetex Artistic 2.0 and GPLv2 and GPLv2+ and LGPLv2+ and LPPL and MIT and Public Domain and UCD and Utopia texlive-colortbl LPPL texlive-context GPL+ or LPPL texlive-courier GPL+ texlive-crop LPPL texlive-csquotes LPPL texlive-ctable LPPL texlive-ctablestack LPPL texlive-currfile LPPL texlive-datetime LPPL texlive-dvipdfmx GPL+ texlive-dvipng LGPLv2+ texlive-dvips GPL+ texlive-dvisvgm GPL+ texlive-ec ec texlive-eepic Public Domain texlive-enctex GPL+ texlive-enumitem LPPL texlive-environ LPPL texlive-epsf Public Domain texlive-epstopdf BSD texlive-eqparbox LPPL texlive-eso-pic LPPL 1.2 texlive-etex Knuth texlive-etex-pkg LPPL texlive-etoolbox LPPL texlive-euenc LPPL texlive-euler LPPL texlive-euro LPPL texlive-eurosym Eurosym texlive-extsizes LPPL texlive-fancybox LPPL 1.2 texlive-fancyhdr LPPL texlive-fancyref GPL+ texlive-fancyvrb LPPL texlive-filecontents LPPL texlive-filehook LPPL texlive-finstrut LPPL texlive-fix2col LPPL texlive-fixlatvian LPPL texlive-float LPPL texlive-fmtcount LPPL texlive-fncychap LPPL texlive-fontawesome LPPL texlive-fontbook LPPL texlive-fonts-tlwg GPL+ texlive-fontspec LPPL texlive-fontware LPPL texlive-fontwrap GPL+ texlive-footmisc LPPL texlive-fp LPPL texlive-fpl GPL+ texlive-framed Copyright only texlive-garuda-c90 LPPL texlive-geometry LPPL texlive-glyphlist LPPL texlive-graphics LPPL texlive-graphics-cfg Public Domain texlive-graphics-def LPPL 
texlive-gsftopk GPL+ texlive-helvetic GPL+ texlive-hyperref LPPL texlive-hyph-utf8 Copyright only texlive-hyphen-base LPPL texlive-hyphenat LPPL texlive-ifetex LPPL texlive-ifluatex LPPL texlive-ifmtarg LPPL texlive-ifoddpage LPPL texlive-iftex LPPL texlive-ifxetex LPPL texlive-import Public Domain texlive-index LPPL texlive-jadetex MIT texlive-jknapltx GPL+ texlive-kastrup LPPL texlive-kerkis LPPL texlive-knuth-lib Knuth texlive-knuth-local Knuth texlive-koma-script LPPL texlive-kpathsea LGPLv2+ texlive-l3experimental LPPL texlive-l3kernel LPPL texlive-l3packages LPPL texlive-lastpage GPLv2+ texlive-latex LPPL texlive-latex-fonts LPPL texlive-latex2man LPPL texlive-latexconfig LPPL texlive-lettrine LPPL texlive-lib Artistic 2.0 and GPLv2 and GPLv2+ and LGPLv2+ and LPPL and MIT and Public Domain and UCD and Utopia texlive-linegoal LPPL texlive-lineno LPPL texlive-listings LPPL texlive-lm LPPL texlive-lm-math LPPL texlive-ltabptch LPPL texlive-ltxmisc Public Domain texlive-lua-alt-getopt MIT texlive-lualatex-math LPPL texlive-lualibs GPLv2+ texlive-luaotfload GPLv2+ texlive-luatex GPLv2+ texlive-luatex85 LPPL texlive-luatexbase Public Domain texlive-makecmds LPPL texlive-makeindex MakeIndex texlive-manfnt-font LPPL texlive-marginnote LPPL texlive-marvosym OFL texlive-mathpazo GPL+ texlive-mathspec LPPL texlive-mathtools LPPL texlive-mdwtools GPL+ texlive-memoir LPPL texlive-metafont Knuth texlive-metalogo LPPL texlive-metapost LGPLv2+ texlive-mflogo LPPL texlive-mflogo-font Knuth texlive-mfnfss LPPL texlive-mfware Knuth texlive-microtype LPPL texlive-mnsymbol Public Domain texlive-mparhack GPL+ texlive-mptopdf LPPL texlive-ms LPPL texlive-multido LPPL texlive-multirow LPPL texlive-natbib LPPL texlive-ncctools LPPL texlive-ncntrsbk GPL+ texlive-needspace LPPL texlive-norasi-c90 LPPL texlive-ntgclass LPPL texlive-oberdiek LPPL texlive-overpic LPPL texlive-palatino GPL+ texlive-paralist LPPL texlive-parallel LPPL texlive-parskip LPPL texlive-passivetex MIT texlive-pdfpages LPPL texlive-pdftex GPL+ texlive-pgf LPPL texlive-philokalia OFL texlive-placeins Public Domain texlive-plain LPPL texlive-polyglossia LPPL texlive-powerdot LPPL texlive-preprint LPPL texlive-psfrag psfrag texlive-pslatex LPPL texlive-psnfss LPPL texlive-pspicture LPPL texlive-pst-3d LPPL texlive-pst-arrow LPPL texlive-pst-blur LPPL texlive-pst-coil LPPL texlive-pst-eps LPPL texlive-pst-fill LPPL texlive-pst-grad LPPL texlive-pst-math LPPL texlive-pst-node LPPL texlive-pst-plot LPPL texlive-pst-slpe LPPL texlive-pst-text LPPL texlive-pst-tools LPPL texlive-pst-tree LPPL texlive-pstricks LPPL texlive-pstricks-add LPPL texlive-ptext LPPL 1.2 texlive-pxfonts GPL+ texlive-qstest LPPL texlive-rcs GPL+ texlive-realscripts LPPL texlive-rsfs Rsfs texlive-sansmath Public Domain texlive-sauerj LPPL texlive-scheme-basic Artistic 2.0 and GPLv2 and GPLv2+ and LGPLv2+ and LPPL and MIT and Public Domain and UCD and Utopia texlive-section LPPL texlive-sectsty LPPL texlive-seminar LPPL 1.2 texlive-sepnum LPPL texlive-setspace Copyright only texlive-showexpl LPPL texlive-soul LPPL texlive-stmaryrd LPPL texlive-subfig LPPL texlive-subfigure LPPL texlive-svn-prov LPPL texlive-symbol GPL+ texlive-t2 LPPL texlive-tabu LPPL texlive-tabulary LPPL texlive-tetex GPL+ and GPLv2+ and LPPL texlive-tex Knuth texlive-tex-gyre LPPL texlive-tex-gyre-math LPPL texlive-tex-ini-files Public Domain texlive-tex4ht LPPL texlive-texconfig LPPL texlive-texlive-common-doc Artistic 2.0 and GPLv2 and GPLv2+ and LGPLv2+ and LPPL and MIT and Public Domain and UCD and 
Utopia texlive-texlive-docindex LPPL texlive-texlive-en LPPL texlive-texlive-msg-translations LPPL texlive-texlive-scripts LPPL texlive-texlive.infra LPPL texlive-textcase LPPL texlive-textpos GPL+ texlive-threeparttable Threeparttable texlive-thumbpdf LPPL texlive-times GPL+ texlive-tipa LPPL texlive-titlesec LPPL texlive-titling LPPL texlive-tocloft LPPL texlive-tools LPPL texlive-translator LPPL or GPL+ texlive-trimspaces LPPL texlive-txfonts GPL+ texlive-type1cm LPPL texlive-typehtml LPPL texlive-ucharclasses Public Domain texlive-ucs LPPL texlive-uhc LPPL texlive-ulem Copyright only texlive-underscore LPPL texlive-unicode-data LPPL and Unicode texlive-unicode-math LPPL texlive-unisugar LPPL texlive-updmap-map Public Domain texlive-upquote LPPL 1.2 texlive-url LPPL texlive-utopia Utopia texlive-varwidth LPPL texlive-wadalab Wadalab texlive-was LPPL texlive-wasy Public Domain texlive-wasy2-ps Public Domain texlive-wasysym LPPL texlive-wrapfig LPPL texlive-xcolor LPPL texlive-xdvi MIT texlive-xecjk LPPL texlive-xecolor LPPL texlive-xecyr LPPL texlive-xeindex LPPL texlive-xepersian LPPL texlive-xesearch LPPL texlive-xetex MIT texlive-xetex-itrans LPPL texlive-xetex-pstricks Public Domain texlive-xetex-tibetan LPPL texlive-xetexconfig LPPL texlive-xetexfontinfo ASL 2.0 texlive-xifthen LPPL texlive-xkeyval LPPL texlive-xltxtra LPPL texlive-xmltex LPPL texlive-xmltexconfig LPPL texlive-xstring LPPL texlive-xtab LPPL texlive-xunicode LPPL texlive-zapfchan GPL+ texlive-zapfding GPL+ tftp BSD tftp-server BSD thai-scalable-fonts-common GPLv2+ and Bitstream Vera thai-scalable-garuda-fonts GPLv2+ and Bitstream Vera thai-scalable-kinnari-fonts GPLv2+ and Bitstream Vera thai-scalable-laksaman-fonts GPLv2+ and Bitstream Vera thai-scalable-loma-fonts GPLv2+ and Bitstream Vera thai-scalable-norasi-fonts GPLv2+ and Bitstream Vera thai-scalable-purisa-fonts GPLv2+ and Bitstream Vera thai-scalable-sawasdee-fonts GPLv2+ and Bitstream Vera thai-scalable-tlwgmono-fonts GPLv2+ and Bitstream Vera thai-scalable-tlwgtypewriter-fonts GPLv2+ and Bitstream Vera thai-scalable-tlwgtypist-fonts GPLv2+ and Bitstream Vera thai-scalable-tlwgtypo-fonts GPLv2+ and Bitstream Vera thai-scalable-umpush-fonts GPLv2+ and Bitstream Vera thai-scalable-waree-fonts GPLv2+ and Bitstream Vera theora-tools BSD thermald GPLv2+ thunderbird MPLv1.1 or GPLv2+ or LGPLv2+ tibetan-machine-uni-fonts GPLv3+ with exceptions tigervnc GPLv2+ tigervnc-icons GPLv2+ tigervnc-license GPLv2+ tigervnc-selinux GPLv2+ tigervnc-server GPLv2+ tigervnc-server-minimal GPLv2+ tigervnc-server-module GPLv2+ tinycdb Public Domain tix TCL tk TCL tk-devel TCL tlog GPLv2+ tog-pegasus MIT tog-pegasus-libs MIT tokyocabinet LGPLv2+ tomcat ASL 2.0 tomcat-admin-webapps ASL 2.0 tomcat-docs-webapp ASL 2.0 tomcat-el-3.0-api ASL 2.0 tomcat-jsp-2.3-api ASL 2.0 tomcat-lib ASL 2.0 tomcat-servlet-4.0-api ASL 2.0 tomcat-webapps ASL 2.0 toolbox ASL 2.0 toolbox-tests ASL 2.0 torque-libs OpenPBS and TORQUEv1.1 totem GPLv2+ with exceptions totem-nautilus GPLv2+ with exceptions totem-pl-parser LGPLv2+ tpm2-pkcs11 BSD tpm2-pkcs11-tools BSD tracer-common GPL-2.0-or-later tracker GPLv2+ tracker-miners GPLv2+ and LGPLv2+ ttmkfdir LGPLv2+ tuned-gtk GPLv2+ tuned-profiles-postgresql GPLv2+ tuned-utils GPLv2+ tuned-utils-systemtap GPLv2+ turbojpeg IJG twolame-libs LGPLv2+ tzdata-java Public Domain ucs-miscfixed-fonts Public Domain ucx BSD ucx-cma BSD ucx-devel BSD ucx-ib BSD ucx-rdmacm BSD udftools GPLv2+ udica GPLv3+ udisks2 GPLv2+ udisks2-iscsi LGPLv2+ udisks2-lsm LGPLv2+ udisks2-lvm2 
LGPLv2+ unbound BSD unbound-devel BSD unbound-libs BSD unicode-ucd MIT unit-api BSD unit-api-javadoc BSD univocity-parsers ASL 2.0 unixODBC GPLv2+ and LGPLv2+ unixODBC-devel GPLv2+ and LGPLv2+ uom-lib BSD uom-lib-javadoc BSD uom-parent BSD uom-se BSD uom-se-javadoc BSD uom-systems BSD uom-systems-javadoc BSD upower GPLv2+ urlview GPLv2+ urw-base35-bookman-fonts AGPLv3 urw-base35-c059-fonts AGPLv3 urw-base35-d050000l-fonts AGPLv3 urw-base35-fonts AGPLv3 urw-base35-fonts-common AGPLv3 urw-base35-gothic-fonts AGPLv3 urw-base35-nimbus-mono-ps-fonts AGPLv3 urw-base35-nimbus-roman-fonts AGPLv3 urw-base35-nimbus-sans-fonts AGPLv3 urw-base35-p052-fonts AGPLv3 urw-base35-standard-symbols-ps-fonts AGPLv3 urw-base35-z003-fonts AGPLv3 usbguard GPLv2+ usbguard-dbus GPLv2+ usbguard-notifier GPLv2+ usbguard-selinux GPLv2+ usbguard-tools GPLv2+ usbmuxd GPLv3+ or GPLv2+ usbredir LGPLv2+ usbredir-devel LGPLv2+ usermode-gtk GPLv2+ utf8proc Unicode and MIT uuid MIT valgrind GPLv2+ valgrind-devel GPLv2+ varnish BSD varnish-devel BSD varnish-docs BSD varnish-modules BSD velocity ASL 2.0 vhostmd GPLv2+ vim-common Vim and MIT vim-enhanced Vim and MIT vim-filesystem Vim and MIT vim-X11 Vim and MIT vinagre GPLv2+ vino GPLv2+ virt-dib GPLv2+ virt-install GPLv2+ virt-manager GPLv2+ virt-manager-common GPLv2+ virt-p2v-maker GPLv2+ virt-top GPLv2+ virt-v2v GPLv2+ virt-v2v-bash-completion GPLv2+ virt-v2v-man-pages-ja GPLv2+ virt-v2v-man-pages-uk GPLv2+ virt-viewer GPLv2+ virt-who GPLv2+ virtio-win Apache-2.0 AND BSD-3-Clause AND GPL-2.0-only AND GPL-2.0-or-later voikko-tools GPLv2+ volume_key GPLv2 and (MPLv1.1 or GPLv2 or LGPLv2) volume_key-devel GPLv2 and (MPLv1.1 or GPLv2 or LGPLv2) volume_key-libs GPLv2 and (MPLv1.1 or GPLv2 or LGPLv2) vorbis-tools GPLv2 vsftpd GPLv2 with exceptions vte-profile GPLv3+ vte291 LGPLv2+ vulkan-headers ASL 2.0 vulkan-loader ASL 2.0 vulkan-loader-devel ASL 2.0 vulkan-tools ASL 2.0 vulkan-validation-layers ASL 2.0 WALinuxAgent ASL 2.0 WALinuxAgent-udev ASL 2.0 wavpack BSD wayland-devel MIT wayland-protocols-devel MIT webkit2gtk3 LGPLv2 webkit2gtk3-devel LGPLv2 webkit2gtk3-jsc LGPLv2 webkit2gtk3-jsc-devel LGPLv2 webrtc-audio-processing BSD and MIT weldr-client ASL 2.0 wget GPLv3+ whois GPLv2+ whois-nls GPLv2+ wireshark GPL+ wireshark-cli GPL+ wodim GPLv2 woff2 MIT wpebackend-fdo BSD wqy-microhei-fonts ASL 2.0 or GPLv3 with exceptions wqy-unibit-fonts GPLv2 with exceptions wsmancli BSD x3270-x11 BSD xalan-j2 ASL 2.0 and W3C xapian-core GPLv2+ xapian-core-libs GPLv2+ Xaw3d MIT and GPLv3+ xcb-util MIT xcb-util-image MIT xcb-util-keysyms MIT xcb-util-renderutil MIT xcb-util-wm MIT xdg-desktop-portal LGPLv2+ xdg-desktop-portal-gtk LGPLv2+ xdg-user-dirs GPLv2+ and MIT xdg-user-dirs-gtk GPL+ xdg-utils MIT xdp-tools GPLv2 xerces-j2 ASL 2.0 and W3C xinetd xinetd xkeyboard-config MIT xkeyboard-config-devel MIT xml-commons-apis ASL 2.0 and W3C and Public Domain xml-commons-resolver ASL 2.0 xmlgraphics-commons ASL 2.0 xmlsec1 MIT xmlsec1-nss MIT xmlsec1-openssl MIT xmlstarlet MIT xmlstreambuffer CDDL-1.0 or GPLv2 with exceptions xmlto GPLv2+ xorg-sgml-doctools MIT xorg-x11-docs MIT xorg-x11-drivers MIT xorg-x11-drv-ati MIT xorg-x11-drv-dummy MIT xorg-x11-drv-evdev MIT xorg-x11-drv-evdev-devel MIT xorg-x11-drv-fbdev MIT xorg-x11-drv-intel MIT xorg-x11-drv-libinput MIT xorg-x11-drv-nouveau MIT xorg-x11-drv-qxl MIT xorg-x11-drv-v4l MIT xorg-x11-drv-vesa MIT xorg-x11-drv-vmware MIT xorg-x11-drv-wacom GPLv2+ xorg-x11-drv-wacom-serial-support GPLv2+ xorg-x11-font-utils MIT xorg-x11-fonts-100dpi MIT and 
Lucida and Public Domain xorg-x11-fonts-75dpi MIT and Lucida and Public Domain xorg-x11-fonts-cyrillic MIT and Lucida and Public Domain xorg-x11-fonts-ethiopic MIT and Lucida and Public Domain xorg-x11-fonts-ISO8859-1-100dpi MIT and Lucida and Public Domain xorg-x11-fonts-ISO8859-1-75dpi MIT and Lucida and Public Domain xorg-x11-fonts-ISO8859-14-100dpi MIT and Lucida and Public Domain xorg-x11-fonts-ISO8859-14-75dpi MIT and Lucida and Public Domain xorg-x11-fonts-ISO8859-15-100dpi MIT and Lucida and Public Domain xorg-x11-fonts-ISO8859-15-75dpi MIT and Lucida and Public Domain xorg-x11-fonts-ISO8859-2-100dpi MIT and Lucida and Public Domain xorg-x11-fonts-ISO8859-2-75dpi MIT and Lucida and Public Domain xorg-x11-fonts-ISO8859-9-100dpi MIT and Lucida and Public Domain xorg-x11-fonts-ISO8859-9-75dpi MIT and Lucida and Public Domain xorg-x11-fonts-misc MIT and Lucida and Public Domain xorg-x11-fonts-Type1 MIT and Lucida and Public Domain xorg-x11-proto-devel MIT xorg-x11-server-common MIT xorg-x11-server-utils MIT xorg-x11-server-Xdmx MIT xorg-x11-server-Xephyr MIT xorg-x11-server-Xnest MIT xorg-x11-server-Xorg MIT xorg-x11-server-Xspice MIT xorg-x11-server-Xvfb MIT and GPLv2 xorg-x11-server-Xwayland MIT xorg-x11-utils MIT xorg-x11-xauth MIT xorg-x11-xbitmaps MIT xorg-x11-xinit MIT xorg-x11-xinit-session MIT xorg-x11-xkb-utils MIT xorriso GPLv2+ xrestop GPLv2+ xsane GPLv2+ and LGPLv2+ xsane-common GPLv2+ and LGPLv2+ xsane-gimp GPLv2+ and LGPLv2+ xsom CDDL-1.1 or GPLv2 with exceptions xterm MIT xterm-resize MIT xxhash BSD-2-Clause AND GPL-2.0-or-later xxhash-libs BSD-2-Clause xz-java Public Domain yajl ISC yara BSD-3-Clause yelp LGPLv2+ and ASL 2.0 and GPLv2+ yelp-libs LGPLv2+ and ASL 2.0 and GPLv2+ yelp-tools GPLv2+ yelp-xsl LGPLv2+ and GPLv2+ yp-tools GPLv2 ypbind GPLv2 ypserv GPLv2 zenity LGPLv2+ zsh-html MIT zstd BSD and GPLv2 zziplib LGPLv2+ or MPLv1.1 zziplib-utils LGPLv2+ or MPLv1.1
2.1. AppStream modules
The following table lists packages in the AppStream repository by module and stream. Note that not all packages in the AppStream repository are distributed within a module. For all packages in the AppStream repository, see Chapter 2, The AppStream repository.
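The Module and Stream columns in the table that follows map directly onto the yum module (or dnf module) workflow on RHEL 8. The commands below are a minimal illustrative sketch only; the nodejs module and its 18 stream are taken from the table purely as an example, and any module:stream pair listed in the table can be substituted.
# yum module list nodejs              (show the streams available for the module and their current state)
# yum module info nodejs:18           (show the profiles and packages carried by the 18 stream)
# yum module enable nodejs:18         (select the stream without installing any packages)
# yum module install nodejs:18        (install the default profile of the enabled stream)
# yum module reset nodejs             (clear the stream selection before switching streams)
Only one stream of a given module can be enabled at a time, which is why a reset is needed before switching between two streams of the same module, for example nodejs:18 and nodejs:20.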
Module Stream Packages 389-ds 1.4 389-ds-base, 389-ds-base-debuginfo, 389-ds-base-debugsource, 389-ds-base-devel, 389-ds-base-legacy-tools, 389-ds-base-legacy-tools-debuginfo, 389-ds-base-libs, 389-ds-base-libs-debuginfo, 389-ds-base-snmp, 389-ds-base-snmp-debuginfo, python3-lib389 ant 1.10 ant, ant-lib container-tools 1.0 buildah, buildah-debuginfo, buildah-debugsource, container-selinux, containernetworking-plugins, containernetworking-plugins-debuginfo, containernetworking-plugins-debugsource, containers-common, crit, criu, criu-debuginfo, criu-debugsource, fuse-overlayfs, fuse-overlayfs-debuginfo, fuse-overlayfs-debugsource, oci-systemd-hook, oci-systemd-hook-debuginfo, oci-systemd-hook-debugsource, oci-umount, oci-umount-debuginfo, oci-umount-debugsource, podman, podman-debuginfo, podman-debugsource, podman-docker, python3-criu, runc, runc-debuginfo, runc-debugsource, skopeo, skopeo-debuginfo, skopeo-debugsource, slirp4netns, slirp4netns-debuginfo, slirp4netns-debugsource container-tools 2.0 buildah, buildah-debuginfo, buildah-debugsource, buildah-tests, buildah-tests-debuginfo, cockpit-podman, conmon, container-selinux, containernetworking-plugins, containernetworking-plugins-debuginfo, containernetworking-plugins-debugsource, containers-common, crit, criu, criu-debuginfo, criu-debugsource, fuse-overlayfs, fuse-overlayfs-debuginfo, fuse-overlayfs-debugsource, podman, podman-debuginfo, podman-debugsource, podman-docker, podman-remote, podman-remote-debuginfo, podman-tests, python-podman-api, python3-criu, runc, runc-debuginfo, runc-debugsource, skopeo, skopeo-debuginfo, skopeo-debugsource, skopeo-tests, slirp4netns, slirp4netns-debuginfo, slirp4netns-debugsource, toolbox, udica container-tools 3.0 buildah, buildah-debuginfo, buildah-debugsource, buildah-tests, buildah-tests-debuginfo, cockpit-podman, conmon, conmon-debuginfo, conmon-debugsource, container-selinux, containernetworking-plugins, containernetworking-plugins-debuginfo, containernetworking-plugins-debugsource, containers-common, crit, criu, criu-debuginfo, criu-debugsource, crun, crun-debuginfo, crun-debugsource, fuse-overlayfs, fuse-overlayfs-debuginfo, fuse-overlayfs-debugsource, libslirp, libslirp-debuginfo, libslirp-debugsource, libslirp-devel, oci-seccomp-bpf-hook, oci-seccomp-bpf-hook-debuginfo, oci-seccomp-bpf-hook-debugsource, podman, podman-catatonit, podman-catatonit-debuginfo, podman-debuginfo, podman-debugsource, podman-docker, podman-plugins, podman-plugins-debuginfo, podman-remote, podman-remote-debuginfo, podman-tests, python3-criu, runc, runc-debuginfo, runc-debugsource, skopeo, skopeo-debuginfo, skopeo-debugsource, skopeo-tests, slirp4netns, slirp4netns-debuginfo, slirp4netns-debugsource, toolbox, toolbox-debuginfo, toolbox-debugsource, toolbox-tests, udica container-tools 4.0 aardvark-dns, buildah, buildah-debuginfo, buildah-debugsource, buildah-tests, buildah-tests-debuginfo, cockpit-podman, conmon, conmon-debuginfo, conmon-debugsource, container-selinux, containernetworking-plugins, containernetworking-plugins-debuginfo, containernetworking-plugins-debugsource, containers-common, crit, criu, criu-debuginfo, criu-debugsource, criu-devel, criu-libs, criu-libs-debuginfo, crun, crun-debuginfo, crun-debugsource, fuse-overlayfs, fuse-overlayfs-debuginfo, fuse-overlayfs-debugsource, libslirp, libslirp-debuginfo, libslirp-debugsource, libslirp-devel, netavark, oci-seccomp-bpf-hook, oci-seccomp-bpf-hook-debuginfo, oci-seccomp-bpf-hook-debugsource, podman, podman-catatonit, podman-catatonit-debuginfo, 
podman-debuginfo, podman-debugsource, podman-docker, podman-gvproxy, podman-gvproxy-debuginfo, podman-plugins, podman-plugins-debuginfo, podman-remote, podman-remote-debuginfo, podman-tests, python-podman, python3-criu, python3-podman, runc, runc-debuginfo, runc-debugsource, skopeo, skopeo-debuginfo, skopeo-debugsource, skopeo-tests, slirp4netns, slirp4netns-debuginfo, slirp4netns-debugsource, toolbox, toolbox-debuginfo, toolbox-debugsource, toolbox-tests, udica container-tools rhel8 aardvark-dns, buildah, buildah-debuginfo, buildah-debugsource, buildah-tests, buildah-tests-debuginfo, cockpit-podman, conmon, conmon-debuginfo, conmon-debugsource, container-selinux, containernetworking-plugins, containernetworking-plugins-debuginfo, containernetworking-plugins-debugsource, containers-common, crit, criu, criu-debuginfo, criu-debugsource, criu-devel, criu-libs, criu-libs-debuginfo, crun, crun-debuginfo, crun-debugsource, fuse-overlayfs, fuse-overlayfs-debuginfo, fuse-overlayfs-debugsource, libslirp, libslirp-debuginfo, libslirp-debugsource, libslirp-devel, netavark, oci-seccomp-bpf-hook, oci-seccomp-bpf-hook-debuginfo, oci-seccomp-bpf-hook-debugsource, podman, podman-catatonit, podman-catatonit-debuginfo, podman-debuginfo, podman-debugsource, podman-docker, podman-gvproxy, podman-gvproxy-debuginfo, podman-plugins, podman-plugins-debuginfo, podman-remote, podman-remote-debuginfo, podman-tests, python-podman, python3-criu, python3-podman, runc, runc-debuginfo, runc-debugsource, skopeo, skopeo-tests, slirp4netns, slirp4netns-debuginfo, slirp4netns-debugsource, toolbox, toolbox-debuginfo, toolbox-debugsource, toolbox-tests, udica eclipse rhel8 apache-commons-compress, apache-commons-jxpath, apiguardian, batik, batik-css, batik-util, eclipse, eclipse-debuginfo, eclipse-debugsource, eclipse-ecf, eclipse-ecf-core, eclipse-ecf-runtime, eclipse-emf, eclipse-emf-core, eclipse-emf-runtime, eclipse-emf-xsd, eclipse-equinox-osgi, eclipse-jdt, eclipse-p2-discovery, eclipse-pde, eclipse-platform, eclipse-platform-debuginfo, eclipse-swt, eclipse-swt-debuginfo, felix-gogo-command, felix-gogo-runtime, felix-gogo-shell, felix-scr, glassfish-annotation-api, glassfish-el, glassfish-el-api, glassfish-jsp, glassfish-jsp-api, glassfish-servlet-api, google-gson, hamcrest, hamcrest-core, icu4j, jetty, jetty-continuation, jetty-http, jetty-io, jetty-security, jetty-server, jetty-servlet, jetty-util, jsch, junit, junit5, jzlib, lucene, lucene-analysis, lucene-analyzers-smartcn, lucene-queries, lucene-queryparser, lucene-sandbox, objectweb-asm, opentest4j, sat4j, univocity-parsers, xml-commons-apis, xmlgraphics-commons, xz-java freeradius 3.0 freeradius, freeradius-debuginfo, freeradius-debugsource, freeradius-devel, freeradius-doc, freeradius-krb5, freeradius-krb5-debuginfo, freeradius-ldap, freeradius-ldap-debuginfo, freeradius-mysql, freeradius-mysql-debuginfo, freeradius-perl, freeradius-perl-debuginfo, freeradius-postgresql, freeradius-postgresql-debuginfo, freeradius-rest, freeradius-rest-debuginfo, freeradius-sqlite, freeradius-sqlite-debuginfo, freeradius-unixODBC, freeradius-unixODBC-debuginfo, freeradius-utils, freeradius-utils-debuginfo, python3-freeradius, python3-freeradius-debuginfo gimp 2.8 gimp, gimp-debuginfo, gimp-debugsource, gimp-devel, gimp-devel-tools, gimp-devel-tools-debuginfo, gimp-libs, gimp-libs-debuginfo, pygobject2, pygobject2-codegen, pygobject2-debuginfo, pygobject2-debugsource, pygobject2-devel, pygobject2-doc, pygtk2, pygtk2-codegen, pygtk2-debuginfo, pygtk2-debugsource, pygtk2-devel, 
pygtk2-doc, python2-cairo, python2-cairo-debuginfo, python2-cairo-devel, python2-pycairo, python2-pycairo-debugsource go-toolset rhel8 delve, delve-debuginfo, delve-debugsource, go-toolset, golang, golang-bin, golang-docs, golang-misc, golang-src, golang-tests httpd 2.4 httpd, httpd-debuginfo, httpd-debugsource, httpd-devel, httpd-filesystem, httpd-manual, httpd-tools, httpd-tools-debuginfo, mod_http2, mod_http2-debuginfo, mod_http2-debugsource, mod_ldap, mod_ldap-debuginfo, mod_md, mod_md-debuginfo, mod_md-debugsource, mod_proxy_html, mod_proxy_html-debuginfo, mod_session, mod_session-debuginfo, mod_ssl, mod_ssl-debuginfo idm client ipa, ipa-client, ipa-client-common, ipa-client-debuginfo, ipa-client-epn, ipa-client-samba, ipa-common, ipa-debuginfo, ipa-debugsource, ipa-healthcheck, ipa-healthcheck-core, ipa-python-compat, ipa-selinux, python-jwcrypto, python-qrcode, python-yubico, python3-ipaclient, python3-ipalib, python3-jwcrypto, python3-pyusb, python3-qrcode, python3-qrcode-core, python3-yubico, pyusb idm DL1 bind-dyndb-ldap, bind-dyndb-ldap-debuginfo, bind-dyndb-ldap-debugsource, custodia, ipa, ipa-client, ipa-client-common, ipa-client-debuginfo, ipa-client-epn, ipa-client-samba, ipa-common, ipa-debuginfo, ipa-debugsource, ipa-healthcheck, ipa-healthcheck-core, ipa-python-compat, ipa-selinux, ipa-server, ipa-server-common, ipa-server-debuginfo, ipa-server-dns, ipa-server-trust-ad, ipa-server-trust-ad-debuginfo, opendnssec, opendnssec-debuginfo, opendnssec-debugsource, python-jwcrypto, python-kdcproxy, python-qrcode, python-yubico, python3-custodia, python3-ipaclient, python3-ipalib, python3-ipaserver, python3-ipatests, python3-jwcrypto, python3-kdcproxy, python3-pyusb, python3-qrcode, python3-qrcode-core, python3-yubico, pyusb, slapi-nis, slapi-nis-debuginfo, slapi-nis-debugsource, softhsm, softhsm-debuginfo, softhsm-debugsource, softhsm-devel inkscape 0.92.3 inkscape, inkscape-debuginfo, inkscape-debugsource, inkscape-docs, inkscape-view, inkscape-view-debuginfo, python-scour, python2-scour javapackages-runtime 201801 javapackages-filesystem, javapackages-tools jaxb 4 jakarta-activation2, jaxb, jaxb-api4, jaxb-codemodel, jaxb-core, jaxb-dtd-parser, jaxb-istack-commons, jaxb-istack-commons-runtime, jaxb-istack-commons-tools, jaxb-relaxng-datatype, jaxb-rngom, jaxb-runtime, jaxb-txw2, jaxb-xjc, jaxb-xsom jmc rhel8 directory-maven-plugin, directory-maven-plugin-javadoc, ee4j-parent, HdrHistogram, HdrHistogram-javadoc, jaf, jaf-javadoc, jmc, jmc-core, jmc-core-javadoc, lz4-java, lz4-java-javadoc, owasp-java-encoder, owasp-java-encoder-javadoc libselinux-python 2.8 libselinux, libselinux-python, libselinux-python-debuginfo llvm-toolset rhel8 clang, clang-analyzer, clang-debuginfo, clang-debugsource, clang-devel, clang-devel-debuginfo, clang-libs, clang-libs-debuginfo, clang-resource-filesystem, clang-tools-extra, clang-tools-extra-debuginfo, clang-tools-extra-devel, compiler-rt, compiler-rt-debuginfo, compiler-rt-debugsource, git-clang-format, libomp, libomp-debuginfo, libomp-debugsource, libomp-devel, lld, lld-debuginfo, lld-debugsource, lld-devel, lld-libs, lld-libs-debuginfo, lldb, lldb-debuginfo, lldb-debugsource, lldb-devel, llvm, llvm-cmake-utils, llvm-debuginfo, llvm-debugsource, llvm-devel, llvm-devel-debuginfo, llvm-doc, llvm-googletest, llvm-libs, llvm-libs-debuginfo, llvm-static, llvm-test, llvm-test-debuginfo, llvm-toolset, python-lit, python3-clang, python3-lit, python3-lldb log4j 2 disruptor, jctools, log4j, log4j-jcl, log4j-slf4j, log4j-web mailman 2.1 mailman, 
mailman-debuginfo, mailman-debugsource mariadb 10.3 galera, galera-debuginfo, galera-debugsource, Judy, Judy-debuginfo, Judy-debugsource, mariadb, mariadb-backup, mariadb-backup-debuginfo, mariadb-common, mariadb-debuginfo, mariadb-debugsource, mariadb-devel, mariadb-embedded, mariadb-embedded-debuginfo, mariadb-embedded-devel, mariadb-errmsg, mariadb-gssapi-server, mariadb-gssapi-server-debuginfo, mariadb-oqgraph-engine, mariadb-oqgraph-engine-debuginfo, mariadb-server, mariadb-server-debuginfo, mariadb-server-galera, mariadb-server-utils, mariadb-server-utils-debuginfo, mariadb-test, mariadb-test-debuginfo mariadb 10.5 galera, galera-debuginfo, galera-debugsource, Judy, Judy-debuginfo, Judy-debugsource, mariadb, mariadb-backup, mariadb-backup-debuginfo, mariadb-common, mariadb-debuginfo, mariadb-debugsource, mariadb-devel, mariadb-embedded, mariadb-embedded-debuginfo, mariadb-embedded-devel, mariadb-errmsg, mariadb-gssapi-server, mariadb-gssapi-server-debuginfo, mariadb-oqgraph-engine, mariadb-oqgraph-engine-debuginfo, mariadb-pam, mariadb-pam-debuginfo, mariadb-server, mariadb-server-debuginfo, mariadb-server-galera, mariadb-server-utils, mariadb-server-utils-debuginfo, mariadb-test, mariadb-test-debuginfo mariadb 10.11 galera, galera-debuginfo, galera-debugsource, Judy, Judy-debuginfo, Judy-debugsource, mariadb, mariadb-backup, mariadb-backup-debuginfo, mariadb-common, mariadb-debuginfo, mariadb-debugsource, mariadb-devel, mariadb-embedded, mariadb-embedded-debuginfo, mariadb-embedded-devel, mariadb-errmsg, mariadb-gssapi-server, mariadb-gssapi-server-debuginfo, mariadb-oqgraph-engine, mariadb-oqgraph-engine-debuginfo, mariadb-pam, mariadb-pam-debuginfo, mariadb-server, mariadb-server-debuginfo, mariadb-server-galera, mariadb-server-utils, mariadb-server-utils-debuginfo, mariadb-test, mariadb-test-debuginfo maven 3.5 aopalliance, apache-commons-cli, apache-commons-codec, apache-commons-io, apache-commons-lang3, apache-commons-logging, atinject, cdi-api, geronimo-annotation, glassfish-el, glassfish-el-api, google-guice, guava20, hawtjni, hawtjni-runtime, httpcomponents-client, httpcomponents-core, jansi, jansi-native, jboss-interceptors-1.2-api, jcl-over-slf4j, jsoup, maven, maven-lib, maven-resolver, maven-resolver-api, maven-resolver-connector-basic, maven-resolver-impl, maven-resolver-spi, maven-resolver-transport-wagon, maven-resolver-util, maven-shared-utils, maven-wagon, maven-wagon-file, maven-wagon-http, maven-wagon-http-shared, maven-wagon-provider-api, plexus-cipher, plexus-classworlds, plexus-containers, plexus-containers-component-annotations, plexus-interpolation, plexus-sec-dispatcher, plexus-utils, sisu, sisu-inject, sisu-plexus, slf4j maven 3.6 aopalliance, apache-commons-cli, apache-commons-codec, apache-commons-io, apache-commons-lang3, atinject, cdi-api, geronimo-annotation, google-guice, guava, httpcomponents-client, httpcomponents-core, jansi, jcl-over-slf4j, jsoup, jsr-305, maven, maven-lib, maven-openjdk11, maven-openjdk17, maven-openjdk8, maven-resolver, maven-shared-utils, maven-wagon, plexus-cipher, plexus-classworlds, plexus-containers, plexus-containers-component-annotations, plexus-interpolation, plexus-sec-dispatcher, plexus-utils, sisu, slf4j maven 3.8 apache-commons-cli, apache-commons-codec, apache-commons-io, apache-commons-lang3, atinject, cdi-api, google-guice, guava, httpcomponents-client, httpcomponents-core, jakarta-annotations, jansi, jansi-debuginfo, jansi-debugsource, jcl-over-slf4j, jsr-305, maven, maven-lib, maven-openjdk11, maven-openjdk17, 
maven-openjdk21, maven-openjdk8, maven-resolver, maven-shared-utils, maven-wagon, plexus-cipher, plexus-classworlds, plexus-containers, plexus-containers-component-annotations, plexus-interpolation, plexus-sec-dispatcher, plexus-utils, sisu, slf4j mercurial 4.8 mercurial, mercurial-debuginfo, mercurial-debugsource, mercurial-hgk mercurial 6.2 mercurial, mercurial-chg, mercurial-chg-debuginfo, mercurial-debuginfo, mercurial-debugsource, mercurial-hgk mod_auth_openidc 2.3 cjose, cjose-debuginfo, cjose-debugsource, cjose-devel, mod_auth_openidc, mod_auth_openidc-debuginfo, mod_auth_openidc-debugsource mysql 8.0 mecab, mecab-debuginfo, mecab-debugsource, mecab-devel, mecab-ipadic, mecab-ipadic-EUCJP, mysql, mysql-common, mysql-debuginfo, mysql-debugsource, mysql-devel, mysql-devel-debuginfo, mysql-errmsg, mysql-libs, mysql-libs-debuginfo, mysql-server, mysql-server-debuginfo, mysql-test, mysql-test-debuginfo nginx 1.14 nginx, nginx-all-modules, nginx-debuginfo, nginx-debugsource, nginx-filesystem, nginx-mod-http-image-filter, nginx-mod-http-image-filter-debuginfo, nginx-mod-http-perl, nginx-mod-http-perl-debuginfo, nginx-mod-http-xslt-filter, nginx-mod-http-xslt-filter-debuginfo, nginx-mod-mail, nginx-mod-mail-debuginfo, nginx-mod-stream, nginx-mod-stream-debuginfo nginx 1.16 nginx, nginx-all-modules, nginx-debuginfo, nginx-debugsource, nginx-filesystem, nginx-mod-http-image-filter, nginx-mod-http-image-filter-debuginfo, nginx-mod-http-perl, nginx-mod-http-perl-debuginfo, nginx-mod-http-xslt-filter, nginx-mod-http-xslt-filter-debuginfo, nginx-mod-mail, nginx-mod-mail-debuginfo, nginx-mod-stream, nginx-mod-stream-debuginfo nginx 1.18 nginx, nginx-all-modules, nginx-debuginfo, nginx-debugsource, nginx-filesystem, nginx-mod-http-image-filter, nginx-mod-http-image-filter-debuginfo, nginx-mod-http-perl, nginx-mod-http-perl-debuginfo, nginx-mod-http-xslt-filter, nginx-mod-http-xslt-filter-debuginfo, nginx-mod-mail, nginx-mod-mail-debuginfo, nginx-mod-stream, nginx-mod-stream-debuginfo nginx 1.20 nginx, nginx-all-modules, nginx-debuginfo, nginx-debugsource, nginx-filesystem, nginx-mod-devel, nginx-mod-http-image-filter, nginx-mod-http-image-filter-debuginfo, nginx-mod-http-perl, nginx-mod-http-perl-debuginfo, nginx-mod-http-xslt-filter, nginx-mod-http-xslt-filter-debuginfo, nginx-mod-mail, nginx-mod-mail-debuginfo, nginx-mod-stream, nginx-mod-stream-debuginfo nginx 1.22 nginx, nginx-all-modules, nginx-debuginfo, nginx-debugsource, nginx-filesystem, nginx-mod-devel, nginx-mod-http-image-filter, nginx-mod-http-image-filter-debuginfo, nginx-mod-http-perl, nginx-mod-http-perl-debuginfo, nginx-mod-http-xslt-filter, nginx-mod-http-xslt-filter-debuginfo, nginx-mod-mail, nginx-mod-mail-debuginfo, nginx-mod-stream, nginx-mod-stream-debuginfo nginx 1.24 nginx, nginx-all-modules, nginx-debuginfo, nginx-debugsource, nginx-filesystem, nginx-mod-devel, nginx-mod-http-image-filter, nginx-mod-http-image-filter-debuginfo, nginx-mod-http-perl, nginx-mod-http-perl-debuginfo, nginx-mod-http-xslt-filter, nginx-mod-http-xslt-filter-debuginfo, nginx-mod-mail, nginx-mod-mail-debuginfo, nginx-mod-stream, nginx-mod-stream-debuginfo nodejs 10 nodejs, nodejs-debuginfo, nodejs-debugsource, nodejs-devel, nodejs-docs, nodejs-full-i18n, nodejs-nodemon, nodejs-packaging, npm nodejs 12 nodejs, nodejs-debuginfo, nodejs-debugsource, nodejs-devel, nodejs-docs, nodejs-full-i18n, nodejs-nodemon, nodejs-packaging, npm nodejs 14 nodejs, nodejs-debuginfo, nodejs-debugsource, nodejs-devel, nodejs-docs, nodejs-full-i18n, nodejs-nodemon, 
nodejs-packaging, npm nodejs 16 nodejs, nodejs-debuginfo, nodejs-debugsource, nodejs-devel, nodejs-docs, nodejs-full-i18n, nodejs-nodemon, nodejs-packaging, npm nodejs 18 nodejs, nodejs-debuginfo, nodejs-debugsource, nodejs-devel, nodejs-docs, nodejs-full-i18n, nodejs-nodemon, nodejs-packaging, nodejs-packaging-bundler, npm nodejs 20 nodejs, nodejs-debuginfo, nodejs-debugsource, nodejs-devel, nodejs-docs, nodejs-full-i18n, nodejs-nodemon, nodejs-packaging, nodejs-packaging-bundler, npm parfait 0.5 parfait, parfait-examples, parfait-javadoc, pcp-parfait-agent, si-units, si-units-javadoc, unit-api, unit-api-javadoc, uom-lib, uom-lib-javadoc, uom-parent, uom-se, uom-se-javadoc, uom-systems, uom-systems-javadoc perl 5.24 perl, perl-Algorithm-Diff, perl-Archive-Tar, perl-Archive-Zip, perl-Attribute-Handlers, perl-autodie, perl-B-Debug, perl-bignum, perl-Carp, perl-Compress-Bzip2, perl-Compress-Bzip2-debuginfo, perl-Compress-Bzip2-debugsource, perl-Compress-Raw-Bzip2, perl-Compress-Raw-Bzip2-debuginfo, perl-Compress-Raw-Bzip2-debugsource, perl-Compress-Raw-Zlib, perl-Compress-Raw-Zlib-debuginfo, perl-Compress-Raw-Zlib-debugsource, perl-Config-Perl-V, perl-constant, perl-core, perl-CPAN, perl-CPAN-Meta, perl-CPAN-Meta-Requirements, perl-CPAN-Meta-YAML, perl-Data-Dumper, perl-Data-Dumper-debuginfo, perl-Data-Dumper-debugsource, perl-Data-OptList, perl-Data-Section, perl-DB_File, perl-DB_File-debuginfo, perl-DB_File-debugsource, perl-debuginfo, perl-debugsource, perl-devel, perl-Devel-Peek, perl-Devel-Peek-debuginfo, perl-Devel-PPPort, perl-Devel-PPPort-debuginfo, perl-Devel-PPPort-debugsource, perl-Devel-SelfStubber, perl-Devel-Size, perl-Devel-Size-debuginfo, perl-Devel-Size-debugsource, perl-Digest, perl-Digest-MD5, perl-Digest-MD5-debuginfo, perl-Digest-MD5-debugsource, perl-Digest-SHA, perl-Digest-SHA-debuginfo, perl-Digest-SHA-debugsource, perl-Encode, perl-Encode-debuginfo, perl-Encode-debugsource, perl-Encode-devel, perl-encoding, perl-Env, perl-Errno, perl-experimental, perl-Exporter, perl-ExtUtils-CBuilder, perl-ExtUtils-Command, perl-ExtUtils-Embed, perl-ExtUtils-Install, perl-ExtUtils-MakeMaker, perl-ExtUtils-Manifest, perl-ExtUtils-Miniperl, perl-ExtUtils-MM-Utils, perl-ExtUtils-ParseXS, perl-Fedora-VSP, perl-File-Fetch, perl-File-HomeDir, perl-File-Path, perl-File-Temp, perl-File-Which, perl-Filter, perl-Filter-debuginfo, perl-Filter-debugsource, perl-Filter-Simple, perl-generators, perl-Getopt-Long, perl-homedir, perl-HTTP-Tiny, perl-inc-latest, perl-interpreter, perl-IO, perl-IO-Compress, perl-IO-debuginfo, perl-IO-Socket-IP, perl-IO-Zlib, perl-IPC-Cmd, perl-IPC-System-Simple, perl-IPC-SysV, perl-IPC-SysV-debuginfo, perl-IPC-SysV-debugsource, perl-JSON-PP, perl-libnet, perl-libnetcfg, perl-libs, perl-libs-debuginfo, perl-local-lib, perl-Locale-Codes, perl-Locale-Maketext, perl-Locale-Maketext-Simple, perl-macros, perl-Math-BigInt, perl-Math-BigInt-FastCalc, perl-Math-BigInt-FastCalc-debuginfo, perl-Math-BigInt-FastCalc-debugsource, perl-Math-BigRat, perl-Math-Complex, perl-Memoize, perl-MIME-Base64, perl-MIME-Base64-debuginfo, perl-MIME-Base64-debugsource, perl-Module-Build, perl-Module-CoreList, perl-Module-CoreList-tools, perl-Module-Load, perl-Module-Load-Conditional, perl-Module-Loaded, perl-Module-Metadata, perl-MRO-Compat, perl-Net-Ping, perl-open, perl-Package-Generator, perl-Params-Check, perl-Params-Util, perl-Params-Util-debuginfo, perl-Params-Util-debugsource, perl-parent, perl-PathTools, perl-PathTools-debuginfo, perl-PathTools-debugsource, perl-Perl-OSType, 
perl-perlfaq, perl-PerlIO-via-QuotedPrint, perl-Pod-Checker, perl-Pod-Escapes, perl-Pod-Html, perl-Pod-Parser, perl-Pod-Perldoc, perl-Pod-Simple, perl-Pod-Usage, perl-podlators, perl-Scalar-List-Utils, perl-Scalar-List-Utils-debuginfo, perl-Scalar-List-Utils-debugsource, perl-SelfLoader, perl-Socket, perl-Socket-debuginfo, perl-Socket-debugsource, perl-Software-License, perl-Storable, perl-Storable-debuginfo, perl-Storable-debugsource, perl-Sub-Exporter, perl-Sub-Install, perl-Sys-Syslog, perl-Sys-Syslog-debuginfo, perl-Sys-Syslog-debugsource, perl-Term-ANSIColor, perl-Term-Cap, perl-Test, perl-Test-Harness, perl-Test-Simple, perl-tests, perl-Text-Balanced, perl-Text-Diff, perl-Text-Glob, perl-Text-ParseWords, perl-Text-Tabs+Wrap, perl-Text-Template, perl-Thread-Queue, perl-threads, perl-threads-debuginfo, perl-threads-debugsource, perl-threads-shared, perl-threads-shared-debuginfo, perl-threads-shared-debugsource, perl-Time-HiRes, perl-Time-HiRes-debuginfo, perl-Time-HiRes-debugsource, perl-Time-Local, perl-Time-Piece, perl-Time-Piece-debuginfo, perl-Unicode-Collate, perl-Unicode-Collate-debuginfo, perl-Unicode-Collate-debugsource, perl-Unicode-Normalize, perl-Unicode-Normalize-debuginfo, perl-Unicode-Normalize-debugsource, perl-URI, perl-utils, perl-version, perl-version-debuginfo, perl-version-debugsource perl 5.26 perl, perl-Algorithm-Diff, perl-Archive-Tar, perl-Archive-Zip, perl-Attribute-Handlers, perl-autodie, perl-B-Debug, perl-bignum, perl-Carp, perl-Compress-Bzip2, perl-Compress-Raw-Bzip2, perl-Compress-Raw-Zlib, perl-Config-Perl-V, perl-constant, perl-CPAN, perl-CPAN-Meta, perl-CPAN-Meta-Requirements, perl-CPAN-Meta-YAML, perl-Data-Dumper, perl-Data-OptList, perl-Data-Section, perl-DB_File, perl-devel, perl-Devel-Peek, perl-Devel-PPPort, perl-Devel-SelfStubber, perl-Devel-Size, perl-Digest, perl-Digest-MD5, perl-Digest-SHA, perl-Encode, perl-Encode-devel, perl-encoding, perl-Env, perl-Errno, perl-experimental, perl-Exporter, perl-ExtUtils-CBuilder, perl-ExtUtils-Command, perl-ExtUtils-Embed, perl-ExtUtils-Install, perl-ExtUtils-MakeMaker, perl-ExtUtils-Manifest, perl-ExtUtils-Miniperl, perl-ExtUtils-MM-Utils, perl-ExtUtils-ParseXS, perl-Fedora-VSP, perl-File-Fetch, perl-File-HomeDir, perl-File-Path, perl-File-Temp, perl-File-Which, perl-Filter, perl-Filter-Simple, perl-generators, perl-Getopt-Long, perl-homedir, perl-HTTP-Tiny, perl-inc-latest, perl-interpreter, perl-IO, perl-IO-Compress, perl-IO-Socket-IP, perl-IO-Zlib, perl-IPC-Cmd, perl-IPC-System-Simple, perl-IPC-SysV, perl-JSON-PP, perl-libnet, perl-libnetcfg, perl-libs, perl-local-lib, perl-Locale-Codes, perl-Locale-Maketext, perl-Locale-Maketext-Simple, perl-macros, perl-Math-BigInt, perl-Math-BigInt-FastCalc, perl-Math-BigRat, perl-Math-Complex, perl-Memoize, perl-MIME-Base64, perl-Module-Build, perl-Module-CoreList, perl-Module-CoreList-tools, perl-Module-Load, perl-Module-Load-Conditional, perl-Module-Loaded, perl-Module-Metadata, perl-MRO-Compat, perl-Net-Ping, perl-open, perl-Package-Generator, perl-Params-Check, perl-Params-Util, perl-parent, perl-PathTools, perl-Perl-OSType, perl-perlfaq, perl-PerlIO-via-QuotedPrint, perl-Pod-Checker, perl-Pod-Escapes, perl-Pod-Html, perl-Pod-Parser, perl-Pod-Perldoc, perl-Pod-Simple, perl-Pod-Usage, perl-podlators, perl-Scalar-List-Utils, perl-SelfLoader, perl-Socket, perl-Software-License, perl-Storable, perl-Sub-Exporter, perl-Sub-Install, perl-Sys-Syslog, perl-Term-ANSIColor, perl-Term-Cap, perl-Test, perl-Test-Harness, perl-Test-Simple, perl-tests, perl-Text-Balanced, 
perl-Text-Diff, perl-Text-Glob, perl-Text-ParseWords, perl-Text-Tabs+Wrap, perl-Text-Template, perl-Thread-Queue, perl-threads, perl-threads-shared, perl-Time-HiRes, perl-Time-Local, perl-Time-Piece, perl-Unicode-Collate, perl-Unicode-Normalize, perl-URI, perl-utils, perl-version perl 5.30 perl, perl-Algorithm-Diff, perl-Archive-Tar, perl-Archive-Zip, perl-Attribute-Handlers, perl-autodie, perl-bignum, perl-Carp, perl-Compress-Bzip2, perl-Compress-Bzip2-debuginfo, perl-Compress-Bzip2-debugsource, perl-Compress-Raw-Bzip2, perl-Compress-Raw-Bzip2-debuginfo, perl-Compress-Raw-Bzip2-debugsource, perl-Compress-Raw-Zlib, perl-Compress-Raw-Zlib-debuginfo, perl-Compress-Raw-Zlib-debugsource, perl-Config-Perl-V, perl-constant, perl-CPAN, perl-CPAN-DistnameInfo, perl-CPAN-Meta, perl-CPAN-Meta-Requirements, perl-CPAN-Meta-YAML, perl-Data-Dumper, perl-Data-Dumper-debuginfo, perl-Data-Dumper-debugsource, perl-Data-OptList, perl-Data-Section, perl-DB_File, perl-DB_File-debuginfo, perl-DB_File-debugsource, perl-debuginfo, perl-debugsource, perl-devel, perl-Devel-Peek, perl-Devel-Peek-debuginfo, perl-Devel-PPPort, perl-Devel-PPPort-debuginfo, perl-Devel-PPPort-debugsource, perl-Devel-SelfStubber, perl-Devel-Size, perl-Devel-Size-debuginfo, perl-Devel-Size-debugsource, perl-Digest, perl-Digest-MD5, perl-Digest-MD5-debuginfo, perl-Digest-MD5-debugsource, perl-Digest-SHA, perl-Digest-SHA-debuginfo, perl-Digest-SHA-debugsource, perl-Encode, perl-Encode-debuginfo, perl-Encode-debugsource, perl-Encode-devel, perl-encoding, perl-Env, perl-Errno, perl-experimental, perl-Exporter, perl-ExtUtils-CBuilder, perl-ExtUtils-Command, perl-ExtUtils-Embed, perl-ExtUtils-Install, perl-ExtUtils-MakeMaker, perl-ExtUtils-Manifest, perl-ExtUtils-Miniperl, perl-ExtUtils-MM-Utils, perl-ExtUtils-ParseXS, perl-Fedora-VSP, perl-File-Fetch, perl-File-HomeDir, perl-File-Path, perl-File-Temp, perl-File-Which, perl-Filter, perl-Filter-debuginfo, perl-Filter-debugsource, perl-Filter-Simple, perl-generators, perl-Getopt-Long, perl-homedir, perl-HTTP-Tiny, perl-Importer, perl-inc-latest, perl-interpreter, perl-interpreter-debuginfo, perl-IO, perl-IO-Compress, perl-IO-debuginfo, perl-IO-Socket-IP, perl-IO-Zlib, perl-IPC-Cmd, perl-IPC-System-Simple, perl-IPC-SysV, perl-IPC-SysV-debuginfo, perl-IPC-SysV-debugsource, perl-JSON-PP, perl-libnet, perl-libnetcfg, perl-libs, perl-libs-debuginfo, perl-local-lib, perl-Locale-Maketext, perl-Locale-Maketext-Simple, perl-macros, perl-Math-BigInt, perl-Math-BigInt-FastCalc, perl-Math-BigInt-FastCalc-debuginfo, perl-Math-BigInt-FastCalc-debugsource, perl-Math-BigRat, perl-Math-Complex, perl-Memoize, perl-MIME-Base64, perl-MIME-Base64-debuginfo, perl-MIME-Base64-debugsource, perl-Module-Build, perl-Module-CoreList, perl-Module-CoreList-tools, perl-Module-Load, perl-Module-Load-Conditional, perl-Module-Loaded, perl-Module-Metadata, perl-MRO-Compat, perl-Net-Ping, perl-Object-HashBase, perl-Object-HashBase-tools, perl-open, perl-Package-Generator, perl-Params-Check, perl-Params-Util, perl-Params-Util-debuginfo, perl-Params-Util-debugsource, perl-parent, perl-PathTools, perl-PathTools-debuginfo, perl-PathTools-debugsource, perl-Perl-OSType, perl-perlfaq, perl-PerlIO-via-QuotedPrint, perl-Pod-Checker, perl-Pod-Escapes, perl-Pod-Html, perl-Pod-Parser, perl-Pod-Perldoc, perl-Pod-Simple, perl-Pod-Usage, perl-podlators, perl-Scalar-List-Utils, perl-Scalar-List-Utils-debuginfo, perl-Scalar-List-Utils-debugsource, perl-SelfLoader, perl-Socket, perl-Socket-debuginfo, perl-Socket-debugsource, perl-Software-License, 
perl-Storable, perl-Storable-debuginfo, perl-Storable-debugsource, perl-Sub-Exporter, perl-Sub-Install, perl-Sys-Syslog, perl-Sys-Syslog-debuginfo, perl-Sys-Syslog-debugsource, perl-Term-ANSIColor, perl-Term-Cap, perl-Term-Table, perl-Test, perl-Test-Harness, perl-Test-Simple, perl-tests, perl-Text-Balanced, perl-Text-Diff, perl-Text-Glob, perl-Text-ParseWords, perl-Text-Tabs+Wrap, perl-Text-Template, perl-Thread-Queue, perl-threads, perl-threads-debuginfo, perl-threads-debugsource, perl-threads-shared, perl-threads-shared-debuginfo, perl-threads-shared-debugsource, perl-Time-HiRes, perl-Time-HiRes-debuginfo, perl-Time-HiRes-debugsource, perl-Time-Local, perl-Time-Piece, perl-Time-Piece-debuginfo, perl-Unicode-Collate, perl-Unicode-Collate-debuginfo, perl-Unicode-Collate-debugsource, perl-Unicode-Normalize, perl-Unicode-Normalize-debuginfo, perl-Unicode-Normalize-debugsource, perl-URI, perl-utils, perl-version, perl-version-debuginfo, perl-version-debugsource perl 5.32 perl, perl-Algorithm-Diff, perl-Archive-Tar, perl-Archive-Zip, perl-Attribute-Handlers, perl-autodie, perl-AutoLoader, perl-AutoSplit, perl-autouse, perl-B, perl-B-debuginfo, perl-base, perl-Benchmark, perl-bignum, perl-blib, perl-Carp, perl-Class-Struct, perl-Compress-Bzip2, perl-Compress-Bzip2-debuginfo, perl-Compress-Bzip2-debugsource, perl-Compress-Raw-Bzip2, perl-Compress-Raw-Bzip2-debuginfo, perl-Compress-Raw-Bzip2-debugsource, perl-Compress-Raw-Lzma, perl-Compress-Raw-Lzma-debuginfo, perl-Compress-Raw-Lzma-debugsource, perl-Compress-Raw-Zlib, perl-Compress-Raw-Zlib-debuginfo, perl-Compress-Raw-Zlib-debugsource, perl-Config-Extensions, perl-Config-Perl-V, perl-constant, perl-CPAN, perl-CPAN-DistnameInfo, perl-CPAN-Meta, perl-CPAN-Meta-Requirements, perl-CPAN-Meta-YAML, perl-Data-Dumper, perl-Data-Dumper-debuginfo, perl-Data-Dumper-debugsource, perl-Data-OptList, perl-Data-Section, perl-DB_File, perl-DB_File-debuginfo, perl-DB_File-debugsource, perl-DBM_Filter, perl-debugger, perl-debuginfo, perl-debugsource, perl-deprecate, perl-devel, perl-Devel-Peek, perl-Devel-Peek-debuginfo, perl-Devel-PPPort, perl-Devel-PPPort-debuginfo, perl-Devel-PPPort-debugsource, perl-Devel-SelfStubber, perl-Devel-Size, perl-Devel-Size-debuginfo, perl-Devel-Size-debugsource, perl-diagnostics, perl-Digest, perl-Digest-MD5, perl-Digest-MD5-debuginfo, perl-Digest-MD5-debugsource, perl-Digest-SHA, perl-Digest-SHA-debuginfo, perl-Digest-SHA-debugsource, perl-DirHandle, perl-doc, perl-Dumpvalue, perl-DynaLoader, perl-DynaLoader-debuginfo, perl-Encode, perl-Encode-debuginfo, perl-Encode-debugsource, perl-Encode-devel, perl-Encode-Locale, perl-encoding, perl-encoding-warnings, perl-English, perl-Env, perl-Errno, perl-experimental, perl-Exporter, perl-ExtUtils-CBuilder, perl-ExtUtils-Command, perl-ExtUtils-Constant, perl-ExtUtils-Embed, perl-ExtUtils-Install, perl-ExtUtils-MakeMaker, perl-ExtUtils-Manifest, perl-ExtUtils-Miniperl, perl-ExtUtils-MM-Utils, perl-ExtUtils-ParseXS, perl-Fcntl, perl-Fcntl-debuginfo, perl-Fedora-VSP, perl-fields, perl-File-Basename, perl-File-Compare, perl-File-Copy, perl-File-DosGlob, perl-File-DosGlob-debuginfo, perl-File-Fetch, perl-File-Find, perl-File-HomeDir, perl-File-Path, perl-File-stat, perl-File-Temp, perl-File-Which, perl-FileCache, perl-FileHandle, perl-filetest, perl-Filter, perl-Filter-debuginfo, perl-Filter-debugsource, perl-Filter-Simple, perl-FindBin, perl-GDBM_File, perl-GDBM_File-debuginfo, perl-generators, perl-Getopt-Long, perl-Getopt-Std, perl-Hash-Util, perl-Hash-Util-debuginfo, 
perl-Hash-Util-FieldHash, perl-Hash-Util-FieldHash-debuginfo, perl-homedir, perl-HTTP-Tiny, perl-I18N-Collate, perl-I18N-Langinfo, perl-I18N-Langinfo-debuginfo, perl-I18N-LangTags, perl-if, perl-Importer, perl-inc-latest, perl-interpreter, perl-interpreter-debuginfo, perl-IO, perl-IO-Compress, perl-IO-Compress-Lzma, perl-IO-debuginfo, perl-IO-Socket-IP, perl-IO-Zlib, perl-IPC-Cmd, perl-IPC-Open3, perl-IPC-System-Simple, perl-IPC-SysV, perl-IPC-SysV-debuginfo, perl-IPC-SysV-debugsource, perl-JSON-PP, perl-less, perl-lib, perl-libnet, perl-libnetcfg, perl-libs, perl-libs-debuginfo, perl-local-lib, perl-locale, perl-Locale-Maketext, perl-Locale-Maketext-Simple, perl-macros, perl-Math-BigInt, perl-Math-BigInt-FastCalc, perl-Math-BigInt-FastCalc-debuginfo, perl-Math-BigInt-FastCalc-debugsource, perl-Math-BigRat, perl-Math-Complex, perl-Memoize, perl-meta-notation, perl-MIME-Base64, perl-MIME-Base64-debuginfo, perl-MIME-Base64-debugsource, perl-Module-Build, perl-Module-CoreList, perl-Module-CoreList-tools, perl-Module-Load, perl-Module-Load-Conditional, perl-Module-Loaded, perl-Module-Metadata, perl-mro, perl-MRO-Compat, perl-mro-debuginfo, perl-NDBM_File, perl-NDBM_File-debuginfo, perl-Net, perl-Net-Ping, perl-, perl-Object-HashBase, perl-Object-HashBase-tools, perl-ODBM_File, perl-ODBM_File-debuginfo, perl-Opcode, perl-Opcode-debuginfo, perl-open, perl-overload, perl-overloading, perl-Package-Generator, perl-Params-Check, perl-Params-Util, perl-Params-Util-debuginfo, perl-Params-Util-debugsource, perl-parent, perl-PathTools, perl-PathTools-debuginfo, perl-PathTools-debugsource, perl-Perl-OSType, perl-perlfaq, perl-PerlIO-via-QuotedPrint, perl-ph, perl-Pod-Checker, perl-Pod-Escapes, perl-Pod-Functions, perl-Pod-Html, perl-Pod-Parser, perl-Pod-Perldoc, perl-Pod-Simple, perl-Pod-Usage, perl-podlators, perl-POSIX, perl-POSIX-debuginfo, perl-Safe, perl-Scalar-List-Utils, perl-Scalar-List-Utils-debuginfo, perl-Scalar-List-Utils-debugsource, perl-Search-Dict, perl-SelectSaver, perl-SelfLoader, perl-sigtrap, perl-Socket, perl-Socket-debuginfo, perl-Socket-debugsource, perl-Software-License, perl-sort, perl-Storable, perl-Storable-debuginfo, perl-Storable-debugsource, perl-Sub-Exporter, perl-Sub-Install, perl-subs, perl-Symbol, perl-Sys-Hostname, perl-Sys-Hostname-debuginfo, perl-Sys-Syslog, perl-Sys-Syslog-debuginfo, perl-Sys-Syslog-debugsource, perl-Term-ANSIColor, perl-Term-Cap, perl-Term-Complete, perl-Term-ReadLine, perl-Term-Table, perl-Test, perl-Test-Harness, perl-Test-Simple, perl-Text-Abbrev, perl-Text-Balanced, perl-Text-Diff, perl-Text-Glob, perl-Text-ParseWords, perl-Text-Tabs+Wrap, perl-Text-Template, perl-Thread, perl-Thread-Queue, perl-Thread-Semaphore, perl-threads, perl-threads-debuginfo, perl-threads-debugsource, perl-threads-shared, perl-threads-shared-debuginfo, perl-threads-shared-debugsource, perl-Tie, perl-Tie-File, perl-Tie-Memoize, perl-Tie-RefHash, perl-Time, perl-Time-HiRes, perl-Time-HiRes-debuginfo, perl-Time-HiRes-debugsource, perl-Time-Local, perl-Time-Piece, perl-Time-Piece-debuginfo, perl-Unicode-Collate, perl-Unicode-Collate-debuginfo, perl-Unicode-Collate-debugsource, perl-Unicode-Normalize, perl-Unicode-Normalize-debuginfo, perl-Unicode-Normalize-debugsource, perl-Unicode-UCD, perl-URI, perl-User-pwent, perl-utils, perl-vars, perl-version, perl-version-debuginfo, perl-version-debugsource, perl-vmsish perl-App-cpanminus 1.7044 perl-App-cpanminus, perl-CPAN-DistnameInfo, perl-CPAN-Meta-Check, perl-File-pushd, perl-Module-CPANfile, perl-Parse-PMFile, 
perl-String-ShellQuote perl-DBD-MySQL 4.046 perl-DBD-MySQL, perl-DBD-MySQL-debuginfo, perl-DBD-MySQL-debugsource perl-DBD-Pg 3.7 perl-DBD-Pg, perl-DBD-Pg-debuginfo, perl-DBD-Pg-debugsource perl-DBD-SQLite 1.58 perl-DBD-SQLite, perl-DBD-SQLite-debuginfo, perl-DBD-SQLite-debugsource perl-DBI 1.641 perl-DBI, perl-DBI-debuginfo, perl-DBI-debugsource perl-FCGI 0.78 perl-FCGI, perl-FCGI-debuginfo, perl-FCGI-debugsource perl-IO-Socket-SSL 2.066 perl-IO-Socket-SSL, perl-Net-SSLeay, perl-Net-SSLeay-debuginfo, perl-Net-SSLeay-debugsource perl-libwww-perl 6.34 perl-Data-Dump, perl-Digest-HMAC, perl-Encode-Locale, perl-File-Listing, perl-HTML-Parser, perl-HTML-Parser-debuginfo, perl-HTML-Parser-debugsource, perl-HTML-Tagset, perl-HTTP-Cookies, perl-HTTP-Date, perl-HTTP-Message, perl-HTTP-Negotiate, perl-IO-HTML, perl-libwww-perl, perl-LWP-MediaTypes, perl-LWP-Protocol-https, perl-Mozilla-CA, perl-Net-HTTP, perl-NTLM, perl-TimeDate, perl-Try-Tiny, perl-WWW-RobotRules perl-YAML 1.24 perl-YAML php 7.2 apcu-panel, libzip, libzip-debuginfo, libzip-debugsource, libzip-devel, libzip-tools, libzip-tools-debuginfo, php, php-bcmath, php-bcmath-debuginfo, php-cli, php-cli-debuginfo, php-common, php-common-debuginfo, php-dba, php-dba-debuginfo, php-dbg, php-dbg-debuginfo, php-debuginfo, php-debugsource, php-devel, php-embedded, php-embedded-debuginfo, php-enchant, php-enchant-debuginfo, php-fpm, php-fpm-debuginfo, php-gd, php-gd-debuginfo, php-gmp, php-gmp-debuginfo, php-intl, php-intl-debuginfo, php-json, php-json-debuginfo, php-ldap, php-ldap-debuginfo, php-mbstring, php-mbstring-debuginfo, php-mysqlnd, php-mysqlnd-debuginfo, php-odbc, php-odbc-debuginfo, php-opcache, php-opcache-debuginfo, php-pdo, php-pdo-debuginfo, php-pear, php-pecl-apcu, php-pecl-apcu-debuginfo, php-pecl-apcu-debugsource, php-pecl-apcu-devel, php-pecl-zip, php-pecl-zip-debuginfo, php-pecl-zip-debugsource, php-pgsql, php-pgsql-debuginfo, php-process, php-process-debuginfo, php-recode, php-recode-debuginfo, php-snmp, php-snmp-debuginfo, php-soap, php-soap-debuginfo, php-xml, php-xml-debuginfo, php-xmlrpc, php-xmlrpc-debuginfo php 7.3 apcu-panel, libzip, libzip-debuginfo, libzip-debugsource, libzip-devel, libzip-tools, libzip-tools-debuginfo, php, php-bcmath, php-bcmath-debuginfo, php-cli, php-cli-debuginfo, php-common, php-common-debuginfo, php-dba, php-dba-debuginfo, php-dbg, php-dbg-debuginfo, php-debuginfo, php-debugsource, php-devel, php-embedded, php-embedded-debuginfo, php-enchant, php-enchant-debuginfo, php-fpm, php-fpm-debuginfo, php-gd, php-gd-debuginfo, php-gmp, php-gmp-debuginfo, php-intl, php-intl-debuginfo, php-json, php-json-debuginfo, php-ldap, php-ldap-debuginfo, php-mbstring, php-mbstring-debuginfo, php-mysqlnd, php-mysqlnd-debuginfo, php-odbc, php-odbc-debuginfo, php-opcache, php-opcache-debuginfo, php-pdo, php-pdo-debuginfo, php-pear, php-pecl-apcu, php-pecl-apcu-debuginfo, php-pecl-apcu-debugsource, php-pecl-apcu-devel, php-pecl-rrd, php-pecl-rrd-debuginfo, php-pecl-rrd-debugsource, php-pecl-xdebug, php-pecl-xdebug-debuginfo, php-pecl-xdebug-debugsource, php-pecl-zip, php-pecl-zip-debuginfo, php-pecl-zip-debugsource, php-pgsql, php-pgsql-debuginfo, php-process, php-process-debuginfo, php-recode, php-recode-debuginfo, php-snmp, php-snmp-debuginfo, php-soap, php-soap-debuginfo, php-xml, php-xml-debuginfo, php-xmlrpc, php-xmlrpc-debuginfo php 7.4 apcu-panel, libzip, libzip-debuginfo, libzip-debugsource, libzip-devel, libzip-tools, libzip-tools-debuginfo, php, php-bcmath, php-bcmath-debuginfo, php-cli, php-cli-debuginfo, 
php-common, php-common-debuginfo, php-dba, php-dba-debuginfo, php-dbg, php-dbg-debuginfo, php-debuginfo, php-debugsource, php-devel, php-embedded, php-embedded-debuginfo, php-enchant, php-enchant-debuginfo, php-ffi, php-ffi-debuginfo, php-fpm, php-fpm-debuginfo, php-gd, php-gd-debuginfo, php-gmp, php-gmp-debuginfo, php-intl, php-intl-debuginfo, php-json, php-json-debuginfo, php-ldap, php-ldap-debuginfo, php-mbstring, php-mbstring-debuginfo, php-mysqlnd, php-mysqlnd-debuginfo, php-odbc, php-odbc-debuginfo, php-opcache, php-opcache-debuginfo, php-pdo, php-pdo-debuginfo, php-pear, php-pecl-apcu, php-pecl-apcu-debuginfo, php-pecl-apcu-debugsource, php-pecl-apcu-devel, php-pecl-rrd, php-pecl-rrd-debuginfo, php-pecl-rrd-debugsource, php-pecl-xdebug, php-pecl-xdebug-debuginfo, php-pecl-xdebug-debugsource, php-pecl-zip, php-pecl-zip-debuginfo, php-pecl-zip-debugsource, php-pgsql, php-pgsql-debuginfo, php-process, php-process-debuginfo, php-snmp, php-snmp-debuginfo, php-soap, php-soap-debuginfo, php-xml, php-xml-debuginfo, php-xmlrpc, php-xmlrpc-debuginfo php 8.0 apcu-panel, libzip, libzip-debuginfo, libzip-debugsource, libzip-devel, libzip-tools, libzip-tools-debuginfo, php, php-bcmath, php-bcmath-debuginfo, php-cli, php-cli-debuginfo, php-common, php-common-debuginfo, php-dba, php-dba-debuginfo, php-dbg, php-dbg-debuginfo, php-debuginfo, php-debugsource, php-devel, php-embedded, php-embedded-debuginfo, php-enchant, php-enchant-debuginfo, php-ffi, php-ffi-debuginfo, php-fpm, php-fpm-debuginfo, php-gd, php-gd-debuginfo, php-gmp, php-gmp-debuginfo, php-intl, php-intl-debuginfo, php-ldap, php-ldap-debuginfo, php-mbstring, php-mbstring-debuginfo, php-mysqlnd, php-mysqlnd-debuginfo, php-odbc, php-odbc-debuginfo, php-opcache, php-opcache-debuginfo, php-pdo, php-pdo-debuginfo, php-pear, php-pecl-apcu, php-pecl-apcu-debuginfo, php-pecl-apcu-debugsource, php-pecl-apcu-devel, php-pecl-rrd, php-pecl-rrd-debuginfo, php-pecl-rrd-debugsource, php-pecl-xdebug3, php-pecl-xdebug3-debuginfo, php-pecl-xdebug3-debugsource, php-pecl-zip, php-pecl-zip-debuginfo, php-pecl-zip-debugsource, php-pgsql, php-pgsql-debuginfo, php-process, php-process-debuginfo, php-snmp, php-snmp-debuginfo, php-soap, php-soap-debuginfo, php-xml, php-xml-debuginfo php 8.2 apcu-panel, libzip, libzip-debuginfo, libzip-debugsource, libzip-devel, libzip-tools, libzip-tools-debuginfo, php, php-bcmath, php-bcmath-debuginfo, php-cli, php-cli-debuginfo, php-common, php-common-debuginfo, php-dba, php-dba-debuginfo, php-dbg, php-dbg-debuginfo, php-debuginfo, php-debugsource, php-devel, php-embedded, php-embedded-debuginfo, php-enchant, php-enchant-debuginfo, php-ffi, php-ffi-debuginfo, php-fpm, php-fpm-debuginfo, php-gd, php-gd-debuginfo, php-gmp, php-gmp-debuginfo, php-intl, php-intl-debuginfo, php-ldap, php-ldap-debuginfo, php-mbstring, php-mbstring-debuginfo, php-mysqlnd, php-mysqlnd-debuginfo, php-odbc, php-odbc-debuginfo, php-opcache, php-opcache-debuginfo, php-pdo, php-pdo-debuginfo, php-pear, php-pecl-apcu, php-pecl-apcu-debuginfo, php-pecl-apcu-debugsource, php-pecl-apcu-devel, php-pecl-rrd, php-pecl-rrd-debuginfo, php-pecl-rrd-debugsource, php-pecl-xdebug3, php-pecl-xdebug3-debuginfo, php-pecl-xdebug3-debugsource, php-pecl-zip, php-pecl-zip-debuginfo, php-pecl-zip-debugsource, php-pgsql, php-pgsql-debuginfo, php-process, php-process-debuginfo, php-snmp, php-snmp-debuginfo, php-soap, php-soap-debuginfo, php-xml, php-xml-debuginfo pki-core 10.6 idm-jss, idm-jss-debuginfo, idm-jss-javadoc, idm-ldapjdk, idm-ldapjdk-javadoc, idm-pki-acme, 
idm-pki-base, idm-pki-base-java, idm-pki-ca, idm-pki-kra, idm-pki-server, idm-pki-symkey, idm-pki-symkey-debuginfo, idm-pki-tools, idm-pki-tools-debuginfo, idm-tomcatjss, jss, jss-debugsource, ldapjdk, pki-core, pki-core-debuginfo, pki-core-debugsource, python3-idm-pki, resteasy, resteasy-javadoc, tomcatjss pki-deps 10.6 apache-commons-collections, apache-commons-lang, apache-commons-net, bea-stax, bea-stax-api, fasterxml-oss-parent, glassfish-fastinfoset, glassfish-jaxb, glassfish-jaxb-api, glassfish-jaxb-core, glassfish-jaxb-runtime, glassfish-jaxb-txw2, jackson-annotations, jackson-bom, jackson-core, jackson-databind, jackson-jaxrs-json-provider, jackson-jaxrs-providers, jackson-module-jaxb-annotations, jackson-modules-base, jackson-parent, jakarta-commons-httpclient, javassist, javassist-javadoc, pki-servlet-engine, relaxngDatatype, slf4j, slf4j-jdk14, stax-ex, velocity, xalan-j2, xerces-j2, xml-commons-apis, xml-commons-resolver, xmlstreambuffer, xsom pmdk 1_fileformat_v6 daxio, daxio-debuginfo, libpmem, libpmem-debug, libpmem-debug-debuginfo, libpmem-debuginfo, libpmem-devel, libpmemblk, libpmemblk-debug, libpmemblk-debug-debuginfo, libpmemblk-debuginfo, libpmemblk-devel, libpmemlog, libpmemlog-debug, libpmemlog-debug-debuginfo, libpmemlog-debuginfo, libpmemlog-devel, libpmemobj, libpmemobj-devel, libpmemobj-doc, libpmemobj-cpp, libpmemobj-debug, libpmemobj-debug-debuginfo, libpmemobj-debuginfo, libpmemobj-devel, libpmempool, libpmempool-debug, libpmempool-debug-debuginfo, libpmempool-debuginfo, libpmempool-devel, librpmem, librpmem-debug, librpmem-debug-debuginfo, librpmem-debuginfo, librpmem-devel, pmdk, pmdk-debuginfo, pmdk-debugsource, pmempool, pmempool-debuginfo, pmreorder, rpmemd, rpmemd-debuginfo postgresql 10 postgresql, postgresql-contrib, postgresql-contrib-debuginfo, postgresql-debuginfo, postgresql-debugsource, postgresql-docs, postgresql-docs-debuginfo, postgresql-plperl, postgresql-plperl-debuginfo, postgresql-plpython3, postgresql-plpython3-debuginfo, postgresql-pltcl, postgresql-pltcl-debuginfo, postgresql-server, postgresql-server-debuginfo, postgresql-server-devel, postgresql-server-devel-debuginfo, postgresql-static, postgresql-test, postgresql-test-debuginfo, postgresql-test-rpm-macros, postgresql-upgrade, postgresql-upgrade-debuginfo, postgresql-upgrade-devel, postgresql-upgrade-devel-debuginfo postgresql 12 pg_repack, pg_repack-debuginfo, pg_repack-debugsource, pgaudit, pgaudit-debuginfo, pgaudit-debugsource, postgres-decoderbufs, postgres-decoderbufs-debuginfo, postgres-decoderbufs-debugsource, postgresql, postgresql-contrib, postgresql-contrib-debuginfo, postgresql-debuginfo, postgresql-debugsource, postgresql-docs, postgresql-docs-debuginfo, postgresql-plperl, postgresql-plperl-debuginfo, postgresql-plpython3, postgresql-plpython3-debuginfo, postgresql-pltcl, postgresql-pltcl-debuginfo, postgresql-server, postgresql-server-debuginfo, postgresql-server-devel, postgresql-server-devel-debuginfo, postgresql-static, postgresql-test, postgresql-test-debuginfo, postgresql-test-rpm-macros, postgresql-upgrade, postgresql-upgrade-debuginfo, postgresql-upgrade-devel, postgresql-upgrade-devel-debuginfo postgresql 13 pg_repack, pg_repack-debuginfo, pg_repack-debugsource, pgaudit, pgaudit-debuginfo, pgaudit-debugsource, postgres-decoderbufs, postgres-decoderbufs-debuginfo, postgres-decoderbufs-debugsource, postgresql, postgresql-contrib, postgresql-contrib-debuginfo, postgresql-debuginfo, postgresql-debugsource, postgresql-docs, postgresql-docs-debuginfo, 
postgresql-plperl, postgresql-plperl-debuginfo, postgresql-plpython3, postgresql-plpython3-debuginfo, postgresql-pltcl, postgresql-pltcl-debuginfo, postgresql-server, postgresql-server-debuginfo, postgresql-server-devel, postgresql-server-devel-debuginfo, postgresql-static, postgresql-test, postgresql-test-debuginfo, postgresql-test-rpm-macros, postgresql-upgrade, postgresql-upgrade-debuginfo, postgresql-upgrade-devel, postgresql-upgrade-devel-debuginfo postgresql 15 pg_repack, pg_repack-debuginfo, pg_repack-debugsource, pgaudit, pgaudit-debuginfo, pgaudit-debugsource, postgres-decoderbufs, postgres-decoderbufs-debuginfo, postgres-decoderbufs-debugsource, postgresql, postgresql-contrib, postgresql-contrib-debuginfo, postgresql-debuginfo, postgresql-debugsource, postgresql-docs, postgresql-docs-debuginfo, postgresql-plperl, postgresql-plperl-debuginfo, postgresql-plpython3, postgresql-plpython3-debuginfo, postgresql-pltcl, postgresql-pltcl-debuginfo, postgresql-private-devel, postgresql-private-libs, postgresql-private-libs-debuginfo, postgresql-server, postgresql-server-debuginfo, postgresql-server-devel, postgresql-server-devel-debuginfo, postgresql-static, postgresql-test, postgresql-test-debuginfo, postgresql-test-rpm-macros, postgresql-upgrade, postgresql-upgrade-debuginfo, postgresql-upgrade-devel, postgresql-upgrade-devel-debuginfo postgresql 16 pg_repack, pg_repack-debuginfo, pg_repack-debugsource, pgaudit, pgaudit-debuginfo, pgaudit-debugsource, postgres-decoderbufs, postgres-decoderbufs-debuginfo, postgres-decoderbufs-debugsource, postgresql, postgresql-contrib, postgresql-contrib-debuginfo, postgresql-debuginfo, postgresql-debugsource, postgresql-docs, postgresql-docs-debuginfo, postgresql-plperl, postgresql-plperl-debuginfo, postgresql-plpython3, postgresql-plpython3-debuginfo, postgresql-pltcl, postgresql-pltcl-debuginfo, postgresql-private-devel, postgresql-private-libs, postgresql-private-libs-debuginfo, postgresql-server, postgresql-server-debuginfo, postgresql-server-devel, postgresql-server-devel-debuginfo, postgresql-static, postgresql-test, postgresql-test-debuginfo, postgresql-test-rpm-macros, postgresql-upgrade, postgresql-upgrade-debuginfo, postgresql-upgrade-devel, postgresql-upgrade-devel-debuginfo postgresql 9.6 postgresql, postgresql-contrib, postgresql-contrib-debuginfo, postgresql-debuginfo, postgresql-debugsource, postgresql-docs, postgresql-docs-debuginfo, postgresql-plperl, postgresql-plperl-debuginfo, postgresql-plpython3, postgresql-plpython3-debuginfo, postgresql-pltcl, postgresql-pltcl-debuginfo, postgresql-server, postgresql-server-debuginfo, postgresql-server-devel, postgresql-server-devel-debuginfo, postgresql-static, postgresql-test, postgresql-test-debuginfo, postgresql-test-rpm-macros python27 2.7 babel, Cython, Cython-debugsource, numpy, numpy-debugsource, pytest, python-attrs, python-backports, python-backports-ssl_match_hostname, python-chardet, python-coverage, python-coverage-debugsource, python-dns, python-docs, python-docutils, python-funcsigs, python-idna, python-ipaddress, python-jinja2, python-lxml, python-lxml-debugsource, python-markupsafe, python-mock, python-nose, python-nose-docs, python-pluggy, python-psycopg2, python-psycopg2-debuginfo, python-psycopg2-debugsource, python-psycopg2-doc, python-py, python-pygments, python-pymongo, python-pymongo-debuginfo, python-pymongo-debugsource, python-PyMySQL, python-pysocks, python-pytest-mock, python-requests, python-setuptools_scm, python-sqlalchemy, python-sqlalchemy-doc, python-urllib3, 
python-virtualenv, python-wheel, python2, python2-attrs, python2-babel, python2-backports, python2-backports-ssl_match_hostname, python2-bson, python2-bson-debuginfo, python2-chardet, python2-coverage, python2-coverage-debuginfo, python2-Cython, python2-Cython-debuginfo, python2-debug, python2-debuginfo, python2-debugsource, python2-devel, python2-dns, python2-docs, python2-docs-info, python2-docutils, python2-funcsigs, python2-idna, python2-ipaddress, python2-jinja2, python2-libs, python2-lxml, python2-lxml-debuginfo, python2-markupsafe, python2-mock, python2-nose, python2-numpy, python2-numpy-debuginfo, python2-numpy-doc, python2-numpy-f2py, python2-pip, python2-pip-wheel, python2-pluggy, python2-psycopg2, python2-psycopg2-debug, python2-psycopg2-debug-debuginfo, python2-psycopg2-debuginfo, python2-psycopg2-tests, python2-py, python2-pygments, python2-pymongo, python2-pymongo-debuginfo, python2-pymongo-gridfs, python2-PyMySQL, python2-pysocks, python2-pytest, python2-pytest-mock, python2-pytz, python2-pyyaml, python2-pyyaml-debuginfo, python2-requests, python2-rpm-macros, python2-scipy, python2-scipy-debuginfo, python2-setuptools, python2-setuptools-wheel, python2-setuptools_scm, python2-six, python2-sqlalchemy, python2-test, python2-tkinter, python2-tools, python2-urllib3, python2-virtualenv, python2-wheel, python2-wheel-wheel, pytz, PyYAML, PyYAML-debugsource, scipy, scipy-debugsource python36 3.6 python-distro, python-docs, python-docutils, python-nose, python-nose-docs, python-pygments, python-pymongo, python-pymongo-debuginfo, python-pymongo-debugsource, python-pymongo-doc, python-PyMySQL, python-sqlalchemy, python-sqlalchemy-doc, python-virtualenv, python-virtualenv-doc, python-wheel, python3-bson, python3-bson-debuginfo, python3-distro, python3-docs, python3-docutils, python3-nose, python3-pygments, python3-pymongo, python3-pymongo-debuginfo, python3-pymongo-gridfs, python3-PyMySQL, python3-scipy, python3-scipy-debuginfo, python3-sqlalchemy, python3-virtualenv, python3-wheel, python3-wheel-wheel, python36, python36-debug, python36-devel, python36-rpm-macros, scipy, scipy-debugsource python38 3.8 babel, Cython, Cython-debugsource, mod_wsgi, numpy, numpy-debugsource, python-asn1crypto, python-cffi, python-cffi-debugsource, python-chardet, python-cryptography, python-cryptography-debugsource, python-idna, python-jinja2, python-lxml, python-lxml-debugsource, python-markupsafe, python-markupsafe-debugsource, python-ply, python-psutil, python-psutil-debugsource, python-psycopg2, python-psycopg2-debugsource, python-pycparser, python-PyMySQL, python-pysocks, python-requests, python-urllib3, python-wheel, python38, python38-asn1crypto, python38-babel, python38-cffi, python38-cffi-debuginfo, python38-chardet, python38-cryptography, python38-cryptography-debuginfo, python38-Cython, python38-Cython-debuginfo, python38-debug, python38-debuginfo, python38-debugsource, python38-devel, python38-idle, python38-idna, python38-jinja2, python38-libs, python38-lxml, python38-lxml-debuginfo, python38-markupsafe, python38-markupsafe-debuginfo, python38-mod_wsgi, python38-numpy, python38-numpy-debuginfo, python38-numpy-doc, python38-numpy-f2py, python38-pip, python38-pip-wheel, python38-ply, python38-psutil, python38-psutil-debuginfo, python38-psycopg2, python38-psycopg2-debuginfo, python38-psycopg2-doc, python38-psycopg2-tests, python38-pycparser, python38-PyMySQL, python38-pysocks, python38-pytz, python38-pyyaml, python38-pyyaml-debuginfo, python38-requests, python38-rpm-macros, python38-scipy, 
python38-scipy-debuginfo, python38-setuptools, python38-setuptools-wheel, python38-six, python38-test, python38-tkinter, python38-urllib3, python38-wheel, python38-wheel-wheel, python3x-pip, python3x-setuptools, python3x-six, pytz, PyYAML, PyYAML-debugsource, scipy, scipy-debugsource python39 3.9 mod_wsgi, numpy, numpy-debugsource, python-cffi, python-cffi-debugsource, python-chardet, python-cryptography, python-cryptography-debugsource, python-idna, python-lxml, python-lxml-debugsource, python-ply, python-psutil, python-psutil-debugsource, python-psycopg2, python-psycopg2-debugsource, python-pycparser, python-PyMySQL, python-pysocks, python-requests, python-toml, python-urllib3, python-wheel, python39, python39-cffi, python39-cffi-debuginfo, python39-chardet, python39-cryptography, python39-cryptography-debuginfo, python39-debuginfo, python39-debugsource, python39-devel, python39-idle, python39-idna, python39-libs, python39-lxml, python39-lxml-debuginfo, python39-mod_wsgi, python39-numpy, python39-numpy-debuginfo, python39-numpy-doc, python39-numpy-f2py, python39-pip, python39-pip-wheel, python39-ply, python39-psutil, python39-psutil-debuginfo, python39-psycopg2, python39-psycopg2-debuginfo, python39-psycopg2-doc, python39-psycopg2-tests, python39-pycparser, python39-PyMySQL, python39-pysocks, python39-pyyaml, python39-pyyaml-debuginfo, python39-requests, python39-rpm-macros, python39-scipy, python39-scipy-debuginfo, python39-setuptools, python39-setuptools-wheel, python39-six, python39-test, python39-tkinter, python39-toml, python39-urllib3, python39-wheel, python39-wheel-wheel, python3x-pip, python3x-setuptools, python3x-six, PyYAML, PyYAML-debugsource, scipy, scipy-debugsource redis 5 redis, redis-debuginfo, redis-debugsource, redis-devel, redis-doc redis 6 redis, redis-debuginfo, redis-debugsource, redis-devel, redis-doc rhn-tools 1.0 cobbler, koan, osad, python3-koan, python3-osa-common, python3-osad, python3-rhn-virtualization-common, python3-rhn-virtualization-host, python3-rhncfg, python3-rhncfg-actions, python3-rhncfg-client, python3-rhncfg-management, python3-rhnpush, python3-spacewalk-abrt, python3-spacewalk-backend-libs, python3-spacewalk-koan, python3-spacewalk-oscap, python3-spacewalk-usix, rhn-custom-info, rhn-virtualization, rhn-virtualization-host, rhncfg, rhncfg-actions, rhncfg-client, rhncfg-management, rhnpush, spacewalk-abrt, spacewalk-backend, spacewalk-client-cert, spacewalk-koan, spacewalk-oscap, spacewalk-remote-utils, spacewalk-usix ruby 2.5 ruby, ruby-debuginfo, ruby-debugsource, ruby-devel, ruby-doc, ruby-irb, ruby-libs, ruby-libs-debuginfo, rubygem-abrt, rubygem-abrt-doc, rubygem-bigdecimal, rubygem-bigdecimal-debuginfo, rubygem-bson, rubygem-bson-debuginfo, rubygem-bson-debugsource, rubygem-bson-doc, rubygem-bundler, rubygem-bundler-doc, rubygem-did_you_mean, rubygem-io-console, rubygem-io-console-debuginfo, rubygem-json, rubygem-json-debuginfo, rubygem-minitest, rubygem-mongo, rubygem-mongo-doc, rubygem-mysql2, rubygem-mysql2-debuginfo, rubygem-mysql2-debugsource, rubygem-mysql2-doc, rubygem-net-telnet, rubygem-openssl, rubygem-openssl-debuginfo, rubygem-pg, rubygem-pg-debuginfo, rubygem-pg-debugsource, rubygem-pg-doc, rubygem-power_assert, rubygem-psych, rubygem-psych-debuginfo, rubygem-rake, rubygem-rdoc, rubygem-test-unit, rubygem-xmlrpc, rubygems, rubygems-devel ruby 2.6 ruby, ruby-debuginfo, ruby-debugsource, ruby-devel, ruby-doc, ruby-libs, ruby-libs-debuginfo, rubygem-abrt, rubygem-abrt-doc, rubygem-bigdecimal, rubygem-bigdecimal-debuginfo, 
rubygem-bson, rubygem-bson-debuginfo, rubygem-bson-debugsource, rubygem-bson-doc, rubygem-bundler, rubygem-did_you_mean, rubygem-io-console, rubygem-io-console-debuginfo, rubygem-irb, rubygem-json, rubygem-json-debuginfo, rubygem-minitest, rubygem-mongo, rubygem-mongo-doc, rubygem-mysql2, rubygem-mysql2-debuginfo, rubygem-mysql2-debugsource, rubygem-mysql2-doc, rubygem-net-telnet, rubygem-openssl, rubygem-openssl-debuginfo, rubygem-pg, rubygem-pg-debuginfo, rubygem-pg-debugsource, rubygem-pg-doc, rubygem-power_assert, rubygem-psych, rubygem-psych-debuginfo, rubygem-rake, rubygem-rdoc, rubygem-test-unit, rubygem-xmlrpc, rubygems, rubygems-devel ruby 2.7 ruby, ruby-debuginfo, ruby-debugsource, ruby-default-gems, ruby-devel, ruby-doc, ruby-libs, ruby-libs-debuginfo, rubygem-abrt, rubygem-abrt-doc, rubygem-bigdecimal, rubygem-bigdecimal-debuginfo, rubygem-bson, rubygem-bson-debuginfo, rubygem-bson-debugsource, rubygem-bson-doc, rubygem-bundler, rubygem-io-console, rubygem-io-console-debuginfo, rubygem-irb, rubygem-json, rubygem-json-debuginfo, rubygem-minitest, rubygem-mongo, rubygem-mongo-doc, rubygem-mysql2, rubygem-mysql2-debuginfo, rubygem-mysql2-debugsource, rubygem-mysql2-doc, rubygem-net-telnet, rubygem-openssl, rubygem-openssl-debuginfo, rubygem-pg, rubygem-pg-debuginfo, rubygem-pg-debugsource, rubygem-pg-doc, rubygem-power_assert, rubygem-psych, rubygem-psych-debuginfo, rubygem-rake, rubygem-rdoc, rubygem-test-unit, rubygem-xmlrpc, rubygems, rubygems-devel ruby 3.0 ruby, ruby-debuginfo, ruby-debugsource, ruby-default-gems, ruby-devel, ruby-doc, ruby-libs, ruby-libs-debuginfo, rubygem-abrt, rubygem-abrt-doc, rubygem-bigdecimal, rubygem-bigdecimal-debuginfo, rubygem-bundler, rubygem-io-console, rubygem-io-console-debuginfo, rubygem-irb, rubygem-json, rubygem-json-debuginfo, rubygem-minitest, rubygem-mysql2, rubygem-mysql2-debuginfo, rubygem-mysql2-debugsource, rubygem-mysql2-doc, rubygem-pg, rubygem-pg-debuginfo, rubygem-pg-debugsource, rubygem-pg-doc, rubygem-power_assert, rubygem-psych, rubygem-psych-debuginfo, rubygem-rake, rubygem-rbs, rubygem-rdoc, rubygem-rexml, rubygem-rss, rubygem-test-unit, rubygem-typeprof, rubygems, rubygems-devel ruby 3.1 ruby, ruby-bundled-gems, ruby-bundled-gems-debuginfo, ruby-debuginfo, ruby-debugsource, ruby-default-gems, ruby-devel, ruby-doc, ruby-libs, ruby-libs-debuginfo, rubygem-abrt, rubygem-abrt-doc, rubygem-bigdecimal, rubygem-bigdecimal-debuginfo, rubygem-bundler, rubygem-io-console, rubygem-io-console-debuginfo, rubygem-irb, rubygem-json, rubygem-json-debuginfo, rubygem-minitest, rubygem-mysql2, rubygem-mysql2-debuginfo, rubygem-mysql2-debugsource, rubygem-mysql2-doc, rubygem-pg, rubygem-pg-debuginfo, rubygem-pg-debugsource, rubygem-pg-doc, rubygem-power_assert, rubygem-psych, rubygem-psych-debuginfo, rubygem-rake, rubygem-rbs, rubygem-rbs-debuginfo, rubygem-rdoc, rubygem-rexml, rubygem-rss, rubygem-test-unit, rubygem-typeprof, rubygems, rubygems-devel ruby 3.3 ruby, ruby-bundled-gems, ruby-bundled-gems-debuginfo, ruby-debuginfo, ruby-debugsource, ruby-default-gems, ruby-devel, ruby-doc, ruby-libs, ruby-libs-debuginfo, rubygem-abrt, rubygem-abrt-doc, rubygem-bigdecimal, rubygem-bigdecimal-debuginfo, rubygem-bundler, rubygem-io-console, rubygem-io-console-debuginfo, rubygem-irb, rubygem-json, rubygem-json-debuginfo, rubygem-minitest, rubygem-mysql2, rubygem-mysql2-debuginfo, rubygem-mysql2-debugsource, rubygem-mysql2-doc, rubygem-pg, rubygem-pg-debuginfo, rubygem-pg-debugsource, rubygem-pg-doc, rubygem-power_assert, rubygem-psych, 
rubygem-psych-debuginfo, rubygem-racc, rubygem-racc-debuginfo, rubygem-rake, rubygem-rbs, rubygem-rbs-debuginfo, rubygem-rdoc, rubygem-rexml, rubygem-rss, rubygem-test-unit, rubygem-typeprof, rubygems, rubygems-devel rust-toolset rhel8 cargo, cargo-debuginfo, clippy, clippy-debuginfo, rust, rust-analyzer, rust-analyzer-debuginfo, rust-debugger-common, rust-debuginfo, rust-debugsource, rust-doc, rust-gdb, rust-lldb, rust-src, rust-std-static, rust-std-static-wasm32-unknown-unknown, rust-std-static-wasm32-wasi, rust-toolset, rustfmt, rustfmt-debuginfo satellite-5-client 1.0 dnf-plugin-spacewalk, python3-dnf-plugin-spacewalk, python3-rhn-check, python3-rhn-client-tools, python3-rhn-setup, python3-rhn-setup-gnome, python3-rhnlib, rhn-check, rhn-client-tools, rhn-setup, rhn-setup-gnome, rhnlib, rhnsd, rhnsd-debuginfo, rhnsd-debugsource scala 2.10 hawtjni, hawtjni-runtime, jansi, jansi-native, jline, scala, scala-apidoc, scala-swing squid 4 libecap, libecap-debuginfo, libecap-debugsource, libecap-devel, squid, squid-debuginfo, squid-debugsource subversion 1.10 libserf, libserf-debuginfo, libserf-debugsource, mod_dav_svn, mod_dav_svn-debuginfo, subversion, subversion-debuginfo, subversion-debugsource, subversion-devel, subversion-devel-debuginfo, subversion-gnome, subversion-gnome-debuginfo, subversion-javahl, subversion-libs, subversion-libs-debuginfo, subversion-perl, subversion-perl-debuginfo, subversion-tools, subversion-tools-debuginfo, utf8proc, utf8proc-debuginfo, utf8proc-debugsource subversion 1.14 libserf, libserf-debuginfo, libserf-debugsource, mod_dav_svn, mod_dav_svn-debuginfo, python3-subversion, python3-subversion-debuginfo, subversion, subversion-debuginfo, subversion-debugsource, subversion-devel, subversion-devel-debuginfo, subversion-gnome, subversion-gnome-debuginfo, subversion-javahl, subversion-libs, subversion-libs-debuginfo, subversion-perl, subversion-perl-debuginfo, subversion-tools, subversion-tools-debuginfo, utf8proc, utf8proc-debuginfo, utf8proc-debugsource swig 3.0 swig, swig-debuginfo, swig-debugsource, swig-doc, swig-gdb swig 4.0 swig, swig-debuginfo, swig-debugsource, swig-doc, swig-gdb swig 4.1 swig, swig-debuginfo, swig-debugsource, swig-doc, swig-gdb varnish 6 varnish, varnish-devel, varnish-docs, varnish-modules, varnish-modules-debuginfo, varnish-modules-debugsource virt rhel hivex, hivex-debuginfo, hivex-debugsource, hivex-devel, libguestfs, libguestfs-appliance, libguestfs-bash-completion, libguestfs-debuginfo, libguestfs-debugsource, libguestfs-devel, libguestfs-gfs2, libguestfs-gobject, libguestfs-gobject-debuginfo, libguestfs-gobject-devel, libguestfs-inspect-icons, libguestfs-java, libguestfs-java-debuginfo, libguestfs-java-devel, libguestfs-javadoc, libguestfs-man-pages-ja, libguestfs-man-pages-uk, libguestfs-rescue, libguestfs-rsync, libguestfs-tools, libguestfs-tools-c, libguestfs-tools-c-debuginfo, libguestfs-winsupport, libguestfs-xfs, libiscsi, libiscsi-debuginfo, libiscsi-debugsource, libiscsi-devel, libiscsi-utils, libiscsi-utils-debuginfo, libnbd, libnbd-bash-completion, libnbd-debuginfo, libnbd-debugsource, libnbd-devel, libtpms, libtpms-debuginfo, libtpms-debugsource, libtpms-devel, libvirt, libvirt-client, libvirt-client-debuginfo, libvirt-daemon, libvirt-daemon-config-network, libvirt-daemon-config-nwfilter, libvirt-daemon-debuginfo, libvirt-daemon-driver-interface, libvirt-daemon-driver-interface-debuginfo, libvirt-daemon-driver-network, libvirt-daemon-driver-network-debuginfo, libvirt-daemon-driver-nodedev, 
libvirt-daemon-driver-nodedev-debuginfo, libvirt-daemon-driver-nwfilter, libvirt-daemon-driver-nwfilter-debuginfo, libvirt-daemon-driver-qemu, libvirt-daemon-driver-qemu-debuginfo, libvirt-daemon-driver-secret, libvirt-daemon-driver-secret-debuginfo, libvirt-daemon-driver-storage, libvirt-daemon-driver-storage-core, libvirt-daemon-driver-storage-core-debuginfo, libvirt-daemon-driver-storage-disk, libvirt-daemon-driver-storage-disk-debuginfo, libvirt-daemon-driver-storage-gluster, libvirt-daemon-driver-storage-gluster-debuginfo, libvirt-daemon-driver-storage-iscsi, libvirt-daemon-driver-storage-iscsi-debuginfo, libvirt-daemon-driver-storage-iscsi-direct, libvirt-daemon-driver-storage-iscsi-direct-debuginfo, libvirt-daemon-driver-storage-logical, libvirt-daemon-driver-storage-logical-debuginfo, libvirt-daemon-driver-storage-mpath, libvirt-daemon-driver-storage-mpath-debuginfo, libvirt-daemon-driver-storage-rbd, libvirt-daemon-driver-storage-rbd-debuginfo, libvirt-daemon-driver-storage-scsi, libvirt-daemon-driver-storage-scsi-debuginfo, libvirt-daemon-kvm, libvirt-dbus, libvirt-dbus-debuginfo, libvirt-dbus-debugsource, libvirt-debuginfo, libvirt-debugsource, libvirt-devel, libvirt-docs, libvirt-libs, libvirt-libs-debuginfo, libvirt-lock-sanlock, libvirt-lock-sanlock-debuginfo, libvirt-nss, libvirt-nss-debuginfo, libvirt-python, libvirt-python-debugsource, libvirt-wireshark, libvirt-wireshark-debuginfo, lua-guestfs, lua-guestfs-debuginfo, nbdfuse, nbdfuse-debuginfo, nbdkit, nbdkit-bash-completion, nbdkit-basic-filters, nbdkit-basic-filters-debuginfo, nbdkit-basic-plugins, nbdkit-basic-plugins-debuginfo, nbdkit-curl-plugin, nbdkit-curl-plugin-debuginfo, nbdkit-debuginfo, nbdkit-debugsource, nbdkit-devel, nbdkit-example-plugins, nbdkit-example-plugins-debuginfo, nbdkit-gzip-filter, nbdkit-gzip-filter-debuginfo, nbdkit-gzip-plugin, nbdkit-gzip-plugin-debuginfo, nbdkit-linuxdisk-plugin, nbdkit-linuxdisk-plugin-debuginfo, nbdkit-nbd-plugin, nbdkit-nbd-plugin-debuginfo, nbdkit-python-plugin, nbdkit-python-plugin-debuginfo, nbdkit-server, nbdkit-server-debuginfo, nbdkit-ssh-plugin, nbdkit-ssh-plugin-debuginfo, nbdkit-tar-filter, nbdkit-tar-filter-debuginfo, nbdkit-tar-plugin, nbdkit-tar-plugin-debuginfo, nbdkit-tmpdisk-plugin, nbdkit-tmpdisk-plugin-debuginfo, nbdkit-vddk-plugin, nbdkit-vddk-plugin-debuginfo, nbdkit-xz-filter, nbdkit-xz-filter-debuginfo, netcf, netcf-debuginfo, netcf-debugsource, netcf-devel, netcf-libs, netcf-libs-debuginfo, perl-hivex, perl-hivex-debuginfo, perl-Sys-Guestfs, perl-Sys-Guestfs-debuginfo, perl-Sys-Virt, perl-Sys-Virt-debuginfo, perl-Sys-Virt-debugsource, python3-hivex, python3-hivex-debuginfo, python3-libguestfs, python3-libguestfs-debuginfo, python3-libnbd, python3-libnbd-debuginfo, python3-libvirt, python3-libvirt-debuginfo, qemu-guest-agent, qemu-guest-agent-debuginfo, qemu-img, qemu-img-debuginfo, qemu-kvm, qemu-kvm-block-curl, qemu-kvm-block-curl-debuginfo, qemu-kvm-block-gluster, qemu-kvm-block-gluster-debuginfo, qemu-kvm-block-iscsi, qemu-kvm-block-iscsi-debuginfo, qemu-kvm-block-rbd, qemu-kvm-block-rbd-debuginfo, qemu-kvm-block-ssh, qemu-kvm-block-ssh-debuginfo, qemu-kvm-common, qemu-kvm-common-debuginfo, qemu-kvm-core, qemu-kvm-core-debuginfo, qemu-kvm-debuginfo, qemu-kvm-debugsource, qemu-kvm-docs, qemu-kvm-hw-usbredir, qemu-kvm-hw-usbredir-debuginfo, qemu-kvm-ui-opengl, qemu-kvm-ui-opengl-debuginfo, qemu-kvm-ui-spice, qemu-kvm-ui-spice-debuginfo, ruby-hivex, ruby-hivex-debuginfo, ruby-libguestfs, ruby-libguestfs-debuginfo, seabios, seabios-bin, 
seavgabios-bin, sgabios, sgabios-bin, SLOF, supermin, supermin-debuginfo, supermin-debugsource, supermin-devel, swtpm, swtpm-debuginfo, swtpm-debugsource, swtpm-devel, swtpm-libs, swtpm-libs-debuginfo, swtpm-tools, swtpm-tools-debuginfo, swtpm-tools-pkcs11, virt-dib, virt-dib-debuginfo, virt-v2v, virt-v2v-bash-completion, virt-v2v-debuginfo, virt-v2v-debugsource, virt-v2v-man-pages-ja, virt-v2v-man-pages-uk
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/package_manifest/AppStream-repository
A.4. tuned
A.4. tuned Tuned is a tuning daemon that can adapt the operating system to perform better under certain workloads by setting a tuning profile. It can also be configured to react to changes in CPU and network use and adjust settings to improve performance in active devices and reduce power consumption in inactive devices. To configure dynamic tuning behavior, edit the dynamic_tuning parameter in the /etc/tuned/tuned-main.conf file. Tuned then periodically analyzes system statistics and uses them to update your system tuning settings. You can configure the time interval in seconds between these updates with the update_interval parameter. For further details about tuned, see the man page:
[ "man tuned" ]
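A minimal sketch of the relevant part of /etc/tuned/tuned-main.conf, tying together the dynamic_tuning and update_interval parameters described above; the values shown are illustrative rather than the defaults shipped with any particular release:
# /etc/tuned/tuned-main.conf (excerpt)
# Switch dynamic tuning on (1) or off (0)
dynamic_tuning = 1
# Seconds between re-evaluations of system statistics
update_interval = 30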
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-tool_reference-tuned
5. Developing Installer Add-ons
5. Developing Installer Add-ons 5.1. Introduction to Anaconda and Add-ons 5.1.1. Introduction to Anaconda Anaconda is the operating system installer used in Fedora, Red Hat Enterprise Linux, and their derivatives. It is a set of Python modules and scripts together with some additional files like Gtk widgets (written in C), systemd units, and dracut libraries. Together, they form a tool that allows users to set parameters of the resulting (target) system and then set such a system up on a machine. The installation process has four major steps: installation destination preparation (usually disk partitioning) package and data installation boot loader installation and configuration configuration of the newly installed system There are three ways you can control the installer and specify installation options. The most common approach is to use the graphical user interface (GUI). This interface is meant to allow users to install the system interactively with little or no configuration required before beginning the installation, and it should cover all common use cases, including setting up complicated partitioning layouts. The graphical interface also supports remote access over VNC , which allows you to use the GUI even on systems with no graphics card or even an attached monitor. However, there are still cases where this is not desired, but at the same time, you may want to perform an interactive installation. For these cases, a text mode (TUI) is available. The TUI works in a way similar to a monochrome line printer, which allows it to work even on serial consoles which do not support cursor movement, colors and other advanced features. The text mode is limited in that it only allows you to customize the most common options, such as network settings, language options or installation (package) source; advanced features such as manual partitioning are not available in this interface. The third way to install a system using Anaconda is by using a Kickstart file - a plain text file with shell-like syntax which can contain data to drive the installation process. A Kickstart file allows you to partially or completely automate the installation. A certain set of commands which configures all required areas is necessary to completely automate the installation; if one or more of the required commands is missing, the installation will require interaction. If all required commands are present, the installation will be performed in a completely automatic way, without any need for interaction. Kickstart provides the largest number of options, covering use cases where neither the TUI nor the GUI is sufficient. Every feature in Anaconda must always be supported in Kickstart; other interfaces follow only subsets of all available options, which allows them to remain clear. 5.1.2. Firstboot and Initial Setup The first boot of the newly installed system is traditionally considered a part of the installation process as well, because some parts of configuration such as user creation are often performed at this point. Previously, the Firstboot tool was used for this purpose, allowing you to register your newly installed Red Hat Enterprise Linux system or configure Kdump . However, Firstboot relies on tools which are no longer maintained, such as Gtk2 and the pygtk2 module. [1] For this reason, a new tool called Initial Setup was developed, which reuses code from Anaconda . This allows add-ons developed for Anaconda to be easily reused in Initial Setup .
This topic is further discussed in Section 5.6, "Writing an Anaconda add-on" . 5.1.3. Anaconda and Initial Setup Add-ons Installing a new operating system is a vastly complicated use case - each user may want to do something slightly different. Designing an installer for every corner case would cause it to be cluttered with rarely-used functionality. For this reason, when the installer was being rewritten into its current form, it gained support for add-ons. Anaconda add-ons can be used to add your own Kickstart commands and options as well as new configuration screens in the graphical and text-based user interface, depending on your specific use case. Each add-on must have Kickstart support; the GUI and TUI are optional, but can be very helpful. In current releases of Red Hat Enterprise Linux (7.1 and later) and Fedora [2] (21 and later), one add-on is included by default: The Kdump add-on, which adds support for configuring kernel crash dumping during the installation. This add-on has full support in Kickstart (using the %addon com_redhat_kdump command and its options) and is fully integrated as an additional screen in the text-based and graphical interfaces. You can develop other add-ons in the same way and add them to the default installer using procedures described further in this guide. 5.1.4. Additional Information The following links contain additional information about Anaconda and Initial Setup : The Anaconda page on the Fedora Project Wiki provides more information about the installer. Information about development of Anaconda into its current version is available at the Anaconda/NewInstaller Wiki page . The Kickstart Installations chapter of the Red Hat Enterprise Linux 7 Installation Guide provides full documentation of Kickstart, including a list of all supported commands and options. The Installing Using Anaconda chapter of the Red Hat Enterprise Linux 7 Installation Guide describes the installation process in the graphical and text user interfaces. For information about tools used for after-installation configuration, see Initial Setup and Firstboot . 5.2. Architecture of Anaconda Anaconda is a set of Python modules and scripts. It also uses several external packages and libraries, some of which were created specifically for the installer. Major components of this toolset include the following packages: pykickstart - used to parse and validate Kickstart files and also to provide a data structure which stores values which drive the installation yum - the package manager which handles installation of packages and resolving dependencies blivet - originally split from the anaconda package as pyanaconda.storage ; used to handle all activities related to storage management pyanaconda - package containing the core of the user interface and modules for functionality unique to Anaconda , such as keyboard and timezone selection, network configuration, and user creation, as well as a number of utilities and system-oriented functions python-meh - contains an exception handler which gathers and stores additional system information in case of a crash and passes this information to the libreport library, which itself is a part of the ABRT Project . The life cycle of data during the installation process is straightforward. If a Kickstart file is provided, it is processed by the pykickstart module and imported into memory as a tree-like structure. If no Kickstart file is provided, an empty tree-like structure is created instead.
If the installation is interactive (not all required Kickstart commands have been used), the structure is then updated with choices made by the user in the interactive interface. Once all required choices are made, the installation process begins and values stored in the structure are used to determine parameters of the installation. The values are also written as a Kickstart file which is saved in the /root/ directory on the installed system; therefore the installation can be replicated automatically by reusing this automatically generated Kickstart file. Elements of the tree-like structure are defined by the pykickstart package, but some of them can be overridden by modified versions from the pyanaconda.kickstart module. An important rule which governs this behavior is that there is no place outside this structure to store configuration data, and the installation process is data-driven and relies on transactions as much as possible. This enforces the following features: every feature of the installer must be supported in Kickstart there is a single, obvious point in the installation process where changes are written to the target system; before this point, no lasting changes (e.g. formatting storage) are made every change made manually in the user interface is reflected in the resulting Kickstart file and can be replicated The fact that the installation is data-driven means that installation and configuration logic lies within the methods of the items in the tree-like structure. Every item is set up (the setup method) to modify the runtime environment of the installation if necessary, and then executed (the execute method) to perform the changes on the target system. These methods are further described in Section 5.6, "Writing an Anaconda add-on" . 5.3. The Hub & Spoke model One of the notable differences between Anaconda and most other operating system installers is its non-linear nature, also known as the hub and spoke model. The hub and spoke model of Anaconda has several advantages, including: users are not forced to go through the screens in some strictly defined order users are not forced to visit every screen, whether or not they understand the options configured in it it is good for the transactional mode where all desired values can be set while nothing is actually happening to the underlying machine until a special button is clicked it provides a way to show an overview of the configured values it has great support for extensibility, because additional spokes can be put on hubs without the need to reorder anything or resolve complex ordering dependencies it can be used for both graphical and text mode of the installer The diagram below shows the installer layout as well as possible interactions between hubs and spokes (screens): Figure 2. Diagram of the hub and spoke model In the diagram, screens 2-13 are called normal spokes , and screens 1 and 14 are standalone spokes . Standalone spokes are a type of screen which should be used only in case it has to be visited before (or after) the following (or previous) standalone spoke or hub. This may be, for example, the Welcome screen at the beginning of the installation which prompts you to choose your language for the rest of the installation. Note Screens mentioned in the rest of this section are screens from the installer's graphical interface (GUI). Central points of the hub and spoke model are hubs .
There are two hubs by default: The Installation Summary hub which shows a summary of configured options before the installation begins The Configuration and Progress hub which appears after you click Begin Installation in Installation Summary , and which displays the progress of the installation process and allows you to configure additional options (set the root password and create a user account). Each spoke has several predefined properties which are reflected on the hub. These are: ready - states whether the spoke can be visited or not; for example, when the installer is configuring a package source, that spoke is not ready, is colored gray, and cannot be accessed until configuration is complete completed - marks the spoke as completed (all required values are set) or not mandatory - determines whether the spoke must be visited and confirmed by the user before continuing the installation; for example, the Installation Destination spoke must always be visited, even if you want to use automatic disk partitioning status - provides a short summary of values configured within the spoke (displayed under the spoke name in the hub) To make the user interface clearer, spokes are grouped together into categories . For example, the Localization category groups together spokes for keyboard layout selection, language support and time zone settings. Each spoke contains UI controls which display and allow you to modify values from one or more sub-trees of the in-memory tree-like structure which was discussed in Section 5.2, "Architecture of Anaconda" . As Section 5.6, "Writing an Anaconda add-on" explains, the same applies to spokes provided by add-ons. 5.4. Threads and Communication Some of the actions which need to be performed during the installation process, such as scanning disks for existing partitions or downloading package metadata, can take a long time. To prevent you from waiting and remain responsive if possible, Anaconda runs these actions in separate threads. The Gtk toolkit does not support element changes from multiple threads. The main event loop of Gtk runs in the main thread of the Anaconda process itself, and all code performing actions which involve the GUI must make sure that these actions are run in the main thread as well. The only supported way to do so is by using the GLib.idle_add , which is not always easy or desired. To alleviate this problem, several helper functions and decorators are defined in the pyanaconda.ui.gui.utils module. The most useful of those are the @gtk_action_wait and @gtk_action_nowait decorators. They change the decorated function or method in such a way that when this function or method is called, it is automatically queued into Gtk's main loop, run in the main thread, and the return value is either returned to the caller or dropped, respectively. As mentioned previously, one of the main reasons for using multiple threads is to allow the user to configure some screens while other screens which are currently busy (such as Installation Source when it downloads package metadata) configure themselves. Once the configuration is finished, the spoke which was previously busy needs to announce that it is now ready and not blocked; this is handled by a message queue called hubQ , which is being periodically checked in the main event loop. When a spoke becomes accessible, it sends a message to this queue announcing this change and that it should no longer be blocked. The same applies in a situation where a spoke needs to refresh its status or completion flag. 
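As a rough illustration of the @gtk_action_wait decorator described above, a spoke running work in a background thread might hand a widget update over to Gtk's main loop as sketched here; the function name and label widget are invented purely for the example:
from pyanaconda.ui.gui.utils import gtk_action_wait

@gtk_action_wait
def set_status_text(label, text):
    # This body is queued into Gtk's main loop and runs in the main thread,
    # so it is safe to touch the widget here; the calling background thread
    # receives the return value once the main loop has executed it.
    label.set_text(text)
    return True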
The Configuration and Progress hub has a different queue called progressQ which serves as a medium to transfer installation progress updates. These mechanisms are also needed for the text-based interface, where the situation is more complicated; there is no main loop in text mode, instead the majority of time in this mode is spent waiting for keyboard input. 5.5. Anaconda Add-on Structure An Anaconda add-on is a Python package containing a directory with an __init__.py and other source directories (subpackages) inside. Because Python allows importing each package name only once, the package top-level directory name must be unique. At the same time, the name can be arbitrary, because add-ons are loaded regardless of their name - the only requirement is that they must be placed in a specific directory. The suggested naming convention for add-ons is therefore similar to Java packages or D-Bus service names: prefix the add-on name with the reversed domain name of your organization, using underscores ( _ ) instead of dots so that the directory name is a valid identifier for a Python package. An example add-on name following these suggestions would therefore be e.g. com_example_hello_world . This convention follows the recommended naming scheme for Python package and module names. Important Make sure to create an __init__.py file in each directory. Directories missing this file are not considered valid Python packages. When writing an add-on, keep in mind that every function supported in the installer must be supported in Kickstart; GUI and TUI support is optional. Support for each interface (Kickstart, graphical interface and text interface) must be in a separate subpackage and these subpackages must be named ks for Kickstart, gui for the graphical interface and tui for the text-based interface. The gui and tui packages must also contain a spokes subpackage. [3] Names of modules inside these packages are arbitrary; the ks/ , gui/ and tui/ directories can contain Python modules with any name. A sample directory structure for an add-on which supports every interface (Kickstart, GUI and TUI) will look similar to the following: Example 2. Sample Add-on Structure Each package must contain at least one module with an arbitrary name defining classes inherited from one or more classes defined in the API. This is further discussed in Section 5.6, "Writing an Anaconda add-on" . All add-ons should follow Python's PEP 8 and PEP 257 guidelines for docstring conventions. There is no consensus on the format of the actual content of docstrings in Anaconda ; the only requirement is that they are human-readable. If you plan to use automatically generated documentation for your add-on, docstrings should follow the guidelines for the toolkit you use to accomplish this. 5.6. Writing an Anaconda add-on The sections below will demonstrate the process writing and testing a sample add-on called Hello World. This sample add-on will support all interfaces (Kickstart, GUI and TUI). Sources for this sample add-on are available on GitHub in the rhinstaller/hello-world-anaconda-addon repository; it is recommended to clone this repository or at least open the sources in the web interface. Another repository to review is rhinstaller/anaconda , which contains the installer source code; it will be referred to in several parts of this section as well. Before you begin developing the add-on itself, start by creating its directory structure as described in Section 5.5, "Anaconda Add-on Structure" . 
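As a concrete illustration of the rules from Section 5.5, a com_example_hello_world add-on supporting all three interfaces could be laid out as sketched below; the module and .glade file names under ks/, gui/ and tui/ are arbitrary and chosen here purely for illustration:
com_example_hello_world/
    __init__.py
    ks/
        __init__.py
        hello_world.py
    gui/
        __init__.py
        spokes/
            __init__.py
            hello_world.py
            hello_world.glade
    tui/
        __init__.py
        spokes/
            __init__.py
            hello_world.py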
Then, continue with Section 5.6.1, "Kickstart Support" , as Kickstart support is mandatory for all add-ons. After that, you can optionally continue with Section 5.6.2, "Graphical user interface" and Section 5.6.3, "Text User Interface" if needed. 5.6.1. Kickstart Support Kickstart support is always the first part of any add-on that should be developed. Other packages - support for the graphical and text-based interface - will depend on it. To begin, navigate to the com_example_hello_world/ks/ directory you have created previously, make sure it contains an __init__.py file, and add another Python script named hello_world.py . Unlike built-in Kickstart commands, add-ons are used in their own sections . Each use of an add-on in a Kickstart file begins with an %addon statement and is closed by %end . The %addon line also contains the name of the add-on (such as %addon com_example_hello_world ) and optionally a list of arguments, if the add-on supports them. An example use of an add-on in a Kickstart file looks like the example below: Example 3. Using an Add-on in a Kickstart File The key class for Kickstart support in add-ons is called AddonData . This class is defined in pyanaconda.addons and represents an object for parsing and storing data from a Kickstart file. Arguments are passed as a list to an instance of the add-on class inherited from the AddonData class. Anything between the first and last line is passed to the add-on's class one line at a time. To keep the example Hello World add-on simple, it will merge all lines in this block into a single line and separate the original lines with a space. The example add-on requires a class inherited from AddonData with a method for handling the list of arguments from the %addon line, and a method for handling lines inside the section. The pyanaconda/addons.py module contains two methods which can be used for this: handle_header - takes a list of arguments from the %addon line (and line numbers for error reporting) handle_line - takes a single line of content from between the %addon and %end statements The example below demonstrates a Hello World add-on which uses the methods described above: Example 4. Using handle_header and handle_line from pyanaconda.addons import AddonData from pykickstart.options import KSOptionParser # export HelloWorldData class to prevent Anaconda's collect method from taking # AddonData class instead of the HelloWorldData class # :see: pyanaconda.kickstart.AnacondaKSHandler.__init__ __all__ = ["HelloWorldData"] HELLO_FILE_PATH = "/root/hello_world_addon_output.txt" class HelloWorldData(AddonData): """ Class parsing and storing data for the Hello world addon. :see: pyanaconda.addons.AddonData """ def __init__(self, name): """ :param name: name of the addon :type name: str """ AddonData.__init__(self, name) self.text = "" self.reverse = False def handle_header(self, lineno, args): """ The handle_header method is called to parse additional arguments in the %addon section line. :param lineno: the current linenumber in the kickstart file :type lineno: int :param args: any additional arguments after %addon <name> :type args: list """ op = KSOptionParser() op.add_option("--reverse", action="store_true", default=False, dest="reverse", help="Reverse the display of the addon text") (opts, extra) = op.parse_args(args=args, lineno=lineno) # Reject any additoinal arguments. Since AddonData.handle_header # rejects any arguments, we can use it to create an error message # and raise an exception. 
if extra: AddonData.handle_header(self, lineno, extra) # Store the result of the option parsing self.reverse = opts.reverse def handle_line(self, line): """ The handle_line method that is called with every line from this addon's %addon section of the kickstart file. :param line: a single line from the %addon section :type line: str """ # simple example, we just append lines to the text attribute if self.text == "": self.text = line.strip() else: self.text += " " + line.strip() The example begins by importing necessary methods and defining an __all__ variable which is necessary to prevent Anaconda 's collect method from taking the AddonData class instead of the add-on-specific HelloWorldData . Then, the example shows a definition of the HelloWorldData class inherited from AddonData with its __init__ method calling the parent's __init__ and initializing the self.text attribute to an empty string and self.reverse to False . The self.reverse attribute is populated in the handle_header method, and self.text is populated in handle_line . The handle_header method uses an instance of the KSOptionParser provided by pykickstart to parse additional options used on the %addon line, and handle_line strips the content lines of white space at the beginning and end of each line, and appends them to self.text . The code above covers the first phase of the data life cycle in the installation process: it reads data from the Kickstart file. The next step is to use this data to drive the installation process. Two predefined methods are available for this purpose: setup - called before the installation transaction starts and used to make changes to the installation runtime environment execute - called at the end of the transaction and used to make changes to the target system To use these two methods, you must add some new imports and a constant to your module, as shown in the following example: Example 5. Importing the setup and execute Methods import os.path from pyanaconda.addons import AddonData from pyanaconda.constants import ROOT_PATH HELLO_FILE_PATH = "/root/hello_world_addon_output.txt" An updated example of the Hello World add-on with the setup and execute methods included is below: Example 6. Using the setup and execute Methods def setup(self, storage, ksdata, instclass, payload): """ The setup method that should make changes to the runtime environment according to the data stored in this object. :param storage: object storing storage-related information (disks, partitioning, bootloader, etc.) :type storage: blivet.Blivet instance :param ksdata: data parsed from the kickstart file and set in the installation process :type ksdata: pykickstart.base.BaseHandler instance :param instclass: distribution-specific information :type instclass: pyanaconda.installclass.BaseInstallClass :param payload: object managing packages and environment groups for the installation :type payload: any class inherited from the pyanaconda.packaging.Payload class """ # no actions needed in this addon pass def execute(self, storage, ksdata, instclass, users, payload): """ The execute method that should make changes to the installed system. It is called only once in the post-install setup phase.
:see: setup :param users: information about created users :type users: pyanaconda.users.Users instance """ hello_file_path = os.path.normpath(ROOT_PATH + HELLO_FILE_PATH) with open(hello_file_path, "w") as fobj: fobj.write("%s\n" % self.text) In the above example, the setup method does nothing; the Hello World add-on does not make any changes to the installation runtime environment. The execute method writes stored text into a file created in the target system's root ( / ) directory. The most important information in the above example is the number and meaning of the arguments passed to the two new methods; these are described in docstrings within the example. The final phase of the data life cycle, as well as the last part of the code needed in a module providing Kickstart support, is generating a new Kickstart file, which includes values set at installation time, at the end of the installation process as described in Section 5.2, "Architecture of Anaconda" . This is performed by calling the __str__ method recursively on the tree-like structure storing installation data, which means that the class inherited from AddonData must define its own __str__ method which returns its stored data in valid Kickstart syntax. It must be possible to parse this returned data again using pykickstart . In the Hello World example, the __str__ method will be similar to the following example: Example 7. Defining a __str__ Method def __str__(self): """ What should end up in the resulting kickstart file, i.e. the %addon section containing string representation of the stored data. """ addon_str = "%%addon %s" % self.name if self.reverse: addon_str += " --reverse" addon_str += "\n%s\n%%end" % self.text return addon_str Once your Kickstart support module contains all necessary methods ( handle_header , handle_line , setup , execute and __str__ ), it becomes a valid Anaconda add-on. You can continue with the following sections to add support for the graphical and text-based user interfaces, or you can continue with Section 5.7, "Deploying and testing an Anaconda add-on" and test the add-on. 5.6.2. Graphical user interface This section will describe adding support for the graphical user interface (GUI) to your add-on. Before you begin, make sure that your add-on already includes support for Kickstart as described in the previous section. Note Before you start developing add-ons with support for the graphical interface, make sure to install the anaconda-widgets and anaconda-widgets-devel packages, which contain Gtk widgets specific for Anaconda such as SpokeWindow . 5.6.2.1. Basic features Similarly to Kickstart support in add-ons, GUI support requires every part of the add-on to contain at least one module with a definition of a class inherited from a particular class defined by the API. In case of graphical support, the only recommended class is NormalSpoke , which is defined in pyanaconda.ui.gui.spokes . As the class name suggests, it is a class for the normal spoke type of screen as described in Section 5.3, "The Hub & Spoke model" .
To implement a new class inherited from NormalSpoke , you must define the following class attributes which are required by the API: builderObjects - lists all top-level objects from the spoke's .glade file that should be, with their children objects (recursively), exposed to the spoke - or should be an empty list if everything should be exposed to the spoke (not recommended) mainWidgetName - contains the id of the main window widget [4] as defined in the .glade file uiFile - contains the name of the .glade file category - contains the class of the category the spoke belongs to icon - contains the identifier of the icon that will be used for the spoke on the hub title defines the title that will be used for the spoke on the hub Example module with all required definitions is shown in the following example: Example 8. Defining Attributes Required for the Normalspoke Class # will never be translated _ = lambda x: x N_ = lambda x: x # the path to addons is in sys.path so we can import things from org_fedora_hello_world from org_fedora_hello_world.gui.categories.hello_world import HelloWorldCategory from pyanaconda.ui.gui.spokes import NormalSpoke # export only the spoke, no helper functions, classes or constants __all__ = ["HelloWorldSpoke"] class HelloWorldSpoke(NormalSpoke): """ Class for the Hello world spoke. This spoke will be in the Hello world category and thus on the Summary hub. It is a very simple example of a unit for the Anaconda's graphical user interface. :see: pyanaconda.ui.common.UIObject :see: pyanaconda.ui.common.Spoke :see: pyanaconda.ui.gui.GUIObject """ ### class attributes defined by API ### # list all top-level objects from the .glade file that should be exposed # to the spoke or leave empty to extract everything builderObjects = ["helloWorldSpokeWindow", "buttonImage"] # the name of the main window widget mainWidgetName = "helloWorldSpokeWindow" # name of the .glade file in the same directory as this source uiFile = "hello_world.glade" # category this spoke belongs to category = HelloWorldCategory # spoke icon (will be displayed on the hub) # preferred are the -symbolic icons as these are used in Anaconda's spokes icon = "face-cool-symbolic" # title of the spoke (will be displayed on the hub) title = N_("_HELLO WORLD") The __all__ attribute is used to export the spoke class, followed by the first lines of its definition including definitions of attributes mentioned above. The values of these attributes are referencing widgets defined in com_example_hello_world/gui/spokes/hello.glade file. Two other notable attributes are present. The first is category , which has its value imported from the HelloWorldCategory class from the com_example_hello_world.gui.categories module. The HelloWorldCategory class will be discussed later, but for now, note that the path to add-ons is in sys.path so that things can be imported from the com_example_hello_world package. The second notable attribute in the example is title , which contains two underscores in its definition. The first one is part of the N_ function name which marks the string for translation, but returns the non-translated version of the string (translation is done later). The second underscore marks the beginning of the title itself and makes the spoke reachable using the Alt + H keyboard shortcut. What usually follows the header of the class definition and the class attributes definitions is the constructor that initializes an instance of the class. 
In case of the Anaconda graphical interface objects there are two methods initializing a new instance: the __init__ method and the initialize method. The reason for two such functions is that the GUI objects may be created in memory at one time and fully initialized (which can take a longer time) at a different time. Therefore, the __init__ method should only call the parent's __init__ method and (for example) initialize non-GUI attributes. On the other hand, the initialize method that is called when the installer's graphical user interface initializes should finish the full initialization of the spoke. In the sample Hello World add-on, these two methods are defined as follows (note the number and description of the arguments passed to the __init__ method): Example 9. Defining the __init__ and initialize Methods def __init__(self, data, storage, payload, instclass): """ :see: pyanaconda.ui.common.Spoke.__init__ :param data: data object passed to every spoke to load/store data from/to it :type data: pykickstart.base.BaseHandler :param storage: object storing storage-related information (disks, partitioning, bootloader, etc.) :type storage: blivet.Blivet :param payload: object storing packaging-related information :type payload: pyanaconda.packaging.Payload :param instclass: distribution-specific information :type instclass: pyanaconda.installclass.BaseInstallClass """ NormalSpoke.__init__(self, data, storage, payload, instclass) def initialize(self): """ The initialize method that is called after the instance is created. The difference between __init__ and this method is that this may take a long time and thus could be called in a separated thread. :see: pyanaconda.ui.common.UIObject.initialize """ NormalSpoke.initialize(self) self._entry = self.builder.get_object("textEntry") Note the data parameter passed to the __init__ method. This is the in-memory tree-like representation of the Kickstart file where all data is stored. In one of the ancestors' __init__ methods it is stored in the self.data attribute, which allows all other methods in the class to read and modify the structure. Because the HelloWorldData class has already been defined in Section 5.6.1, "Kickstart Support" , there already is a subtree in self.data for this add-on, and its root (an instance of the class) is available as self.data.addons.com_example_hello_world . One of the other things an ancestor's __init__ does is initializing an instance of the GtkBuilder with the spoke's .glade file and storing it as self.builder . This is used in the initialize method to get the GtkTextEntry used to show and modify the text from the kickstart file's %addon section. The __init__ and initialize methods are both important when the spoke is created. However, the main role of the spoke is to be visited by an user who wants to change or review the values this spoke shows and sets. To enable this, three other methods are available: refresh - called when the spoke is about to be visited; This method refreshes the state of the spoke (mainly its UI elements) to make sure that current values stored in the self.data structure are displayed apply - called when the spoke is left and used to store values from UI elements back into the self.data structure execute - called when the spoke is left and used to perform any runtime changes based on the new state of the spoke These functions are implemented in the sample Hello World add-on in the following way: Example 10. 
Defining the refresh, apply and execute Methods def refresh(self): """ The refresh method that is called every time the spoke is displayed. It should update the UI elements according to the contents of self.data. :see: pyanaconda.ui.common.UIObject.refresh """ self._entry.set_text(self.data.addons.org_fedora_hello_world.text) def apply(self): """ The apply method that is called when the spoke is left. It should update the contents of self.data with values set in the GUI elements. """ self.data.addons.org_fedora_hello_world.text = self._entry.get_text() def execute(self): """ The excecute method that is called when the spoke is left. It is supposed to do all changes to the runtime environment according to the values set in the GUI elements. """ # nothing to do here pass You can use several additional methods to control the spoke's state: ready - determines whether the spoke is ready to be visited; if the value is false, the spoke is not accessible (e.g. the Package Selection spoke before a package source is configured) completed - determines if the spoke has been completed mandatory - determines if the spoke is mandatory or not (e.g. the Installation Destination spoke, which must be always visited, even if you want to use automatic partitioning) All of these attributes need to be dynamically determined based on the current state of the installation process. Below is a sample implementation of these methods in the Hello World add-on, which requires some value to be set in the text attribute of the HelloWorldData class: Example 11. Defining the ready, completed and mandatory Methods @property def ready(self): """ The ready property that tells whether the spoke is ready (can be visited) or not. The spoke is made (in)sensitive based on the returned value. :rtype: bool """ # this spoke is always ready return True @property def completed(self): """ The completed property that tells whether all mandatory items on the spoke are set, or not. The spoke will be marked on the hub as completed or uncompleted acording to the returned value. :rtype: bool """ return bool(self.data.addons.org_fedora_hello_world.text) @property def mandatory(self): """ The mandatory property that tells whether the spoke is mandatory to be completed to continue in the installation process. :rtype: bool """ # this is an optional spoke that is not mandatory to be completed return False After defining these properties, the spoke can control its accessibility and completeness, but it cannot provide a summary of the values configured within - you must visit the spoke to see how it is configured, which may not be desired. For this reason, an additional property called status exists; this property contains a single line of text with a short summary of configured values, which can then be displayed in the hub under the spoke title. The status property is defined in the Hello World example add-on as follows: Example 12. Defining the status Property @property def status(self): """ The status property that is a brief string describing the state of the spoke. It should describe whether all values are set and if possible also the values themselves. The returned value will appear on the hub below the spoke's title. 
:rtype: str """ text = self.data.addons.org_fedora_hello_world.text # If --reverse was specified in the kickstart, reverse the text if self.data.addons.org_fedora_hello_world.reverse: text = text[::-1] if text: return _("Text set: %s") % text else: return _("Text not set") After defining all properties described in this chapter, the add-on has full support for the graphical user interface as well as Kickstart. Note that the example demonstrated here is very simple and does not contain any controls; knowledge of Python Gtk programming is required to develop a functional, interactive spoke in the GUI. One notable restriction is that each spoke must have its own main window - an instance of the SpokeWindow widget. This widget, along with some other widgets specific to Anaconda , is found in the anaconda-widgets package. Other files required for development of add-ons with GUI support (such as Glade definitions) can be found in the anaconda-widgets-devel package. Once your graphical interface support module contains all necessary methods you can continue with the following section to add support for the text-based user interface, or you can continue with Section 5.7, "Deploying and testing an Anaconda add-on" and test the add-on. 5.6.2.2. Advanced features The pyanaconda package contains several helper and utility functions and constructs which may be used by hubs and spokes and which have not been covered in the section. Most of them are located in pyanaconda.ui.gui.utils . The sample Hello World add-on demonstrates usage of the englightbox content manager which is also used in Anaconda . This manager can put a window into a lightbox to increase its visibility and focus it and to prevent users interacting with the underlying window. To demonstrate this function, the sample add-on contains a button which opens a new dialog window; the dialog itself is a special HelloWorldDialog inheriting from the GUIObject class, which is defined in pyanaconda.ui.gui.__init__ . The dialog class defines the run method which runs and destroys an internal Gtk dialog accessible through the self.window attribute, which is populated using a mainWidgetName class attribute with the same meaning. Therefore, the code defining the dialog is very simple, as demonstrated in the following example: Example 13. Defining a englightbox Dialog # every GUIObject gets ksdata in __init__ dialog = HelloWorldDialog(self.data) # show dialog above the lightbox with enlightbox(self.window, dialog.window): dialog.run() The code above creates an instance of the dialog and then uses the enlightbox context manager to run the dialog within a lightbox. The context manager needs a reference to the window of the spoke and to the dialog's window to instantiate the lightbox for them. Another useful feature provided by Anaconda is the ability to define a spoke which will appear both during the installation and after the first reboot (in the Initial Setup utility described in Section 5.1.2, "Firstboot and Initial Setup" ). To make a spoke available in both Anaconda and Initial Setup , you must inherit the special FirstbootSpokeMixIn (or, more precisely, mixin) as the first inherited class defined in the pyanaconda.ui.common module. If you want to make a certain spoke available only in Initial Setup , you should instead inherit the FirstbootOnlySpokeMixIn class. There are many more advanced features provided by the pyanaconda package (like the @gtk_action_wait and @gtk_action_nowait decorators), but they are out of scope of this guide. 
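To complement Example 13 above, the following is a minimal sketch of what such a dialog class might look like. It is an illustration only: the widget id, the .glade file name, and the return value handling are assumptions rather than the exact code shipped with the sample add-on.

from pyanaconda.ui.gui import GUIObject

class HelloWorldDialog(GUIObject):
    """Illustrative dialog meant to be run inside a lightbox (see Example 13)."""

    # id of the top-level GtkDialog defined in the .glade file (illustrative name)
    builderObjects = ["helloWorldDialog"]
    mainWidgetName = "helloWorldDialog"
    uiFile = "hello_world.glade"

    def run(self):
        # self.window is populated from mainWidgetName by the GUIObject machinery;
        # run the internal Gtk dialog and destroy it once the user dismisses it
        rc = self.window.run()
        self.window.destroy()
        return rc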
Readers are recommended to go through the installer's sources for examples. 5.6.3. Text User Interface The third supported interface, after the Kickstart and graphical interfaces which have been discussed in the previous sections, is the text-based interface. This interface is more limited in its capabilities, but on some systems it may be the only choice for an interactive installation. For more information about differences between the text-based and graphical interface and about limitations of the TUI, see Section 5.1.1, "Introduction to Anaconda" . To add support for the text interface into your add-on, create a new set of subpackages under the tui directory as described in Section 5.5, "Anaconda Add-on Structure" . Text mode support in the installer is based on the simpleline utility, which only allows very simple user interaction. It does not support cursor movement (instead acting like a line printer) nor any visual enhancements like using different colors or fonts. Internally, there are three main classes in the simpleline toolkit: App , UIScreen and Widget . Widgets, which are units containing information to be shown (printed) on the screen, are placed on UIScreens, which are switched by a single instance of the App class. On top of the basic elements, there are hubs , spokes and dialogs , all containing various widgets in a way similar to the graphical interface. For an add-on, the most important classes are NormalTUISpoke and various other classes defined in the pyanaconda.ui.tui.spokes package. All of those classes are based on the TUIObject class, which itself is an equivalent of the GUIObject class discussed in the previous section. Each TUI spoke is a Python class inheriting from the NormalTUISpoke class, overriding special arguments and methods defined by the API. Because the text interface is simpler than the GUI, there are only two such arguments: title - determines the title of the spoke, same as the title argument in the GUI category - determines the category of the spoke as a string; the category name is not displayed anywhere, it is only used for grouping Note Categories are handled differently than in the GUI. [5] It is recommended to assign a pre-existing category to your new spoke. Creating a new category would require patching Anaconda , and brings little benefit. Each spoke is also expected to override several methods, namely __init__ , initialize , refresh , apply , execute , input , and prompt , and properties ( ready , completed , mandatory , and status ). All of these have already been described in Section 5.6.2, "Graphical user interface" . The example below shows the implementation of a simple TUI spoke in the Hello World sample add-on: Example 14. Defining a Simple TUI Spoke def __init__(self, app, data, storage, payload, instclass): """ :see: pyanaconda.ui.tui.base.UIScreen :see: pyanaconda.ui.tui.base.App :param app: reference to application which is a main class for TUI screen handling, it is responsible for mainloop control and keeping track of the stack where all TUI screens are scheduled :type app: instance of pyanaconda.ui.tui.base.App :param data: data object passed to every spoke to load/store data from/to it :type data: pykickstart.base.BaseHandler :param storage: object storing storage-related information (disks, partitioning, bootloader, etc.)
:type storage: blivet.Blivet :param payload: object storing packaging-related information :type payload: pyanaconda.packaging.Payload :param instclass: distribution-specific information :type instclass: pyanaconda.installclass.BaseInstallClass """ NormalTUISpoke.__init__(self, app, data, storage, payload, instclass) self._entered_text = "" def initialize(self): """ The initialize method that is called after the instance is created. The difference between __init__ and this method is that this may take a long time and thus could be called in a separated thread. :see: pyanaconda.ui.common.UIObject.initialize """ NormalTUISpoke.initialize(self) def refresh(self, args=None): """ The refresh method that is called every time the spoke is displayed. It should update the UI elements according to the contents of self.data. :see: pyanaconda.ui.common.UIObject.refresh :see: pyanaconda.ui.tui.base.UIScreen.refresh :param args: optional argument that may be used when the screen is scheduled (passed to App.switch_screen* methods) :type args: anything :return: whether this screen requests input or not :rtype: bool """ self._entered_text = self.data.addons.org_fedora_hello_world.text return True def apply(self): """ The apply method that is called when the spoke is left. It should update the contents of self.data with values set in the spoke. """ self.data.addons.org_fedora_hello_world.text = self._entered_text def execute(self): """ The excecute method that is called when the spoke is left. It is supposed to do all changes to the runtime environment according to the values set in the spoke. """ # nothing to do here pass def input(self, args, key): """ The input method that is called by the main loop on user's input. :param args: optional argument that may be used when the screen is scheduled (passed to App.switch_screen* methods) :type args: anything :param key: user's input :type key: unicode :return: if the input should not be handled here, return it, otherwise return True or False if the input was processed succesfully or not respectively :rtype: bool|unicode """ if key: self._entered_text = key # no other actions scheduled, apply changes self.apply() # close the current screen (remove it from the stack) self.close() return True def prompt(self, args=None): """ The prompt method that is called by the main loop to get the prompt for this screen. :param args: optional argument that can be passed to App.switch_screen* methods :type args: anything :return: text that should be used in the prompt for the input :rtype: unicode|None """ return _("Enter a new text or leave empty to use the old one: ") It is not necessary to override the __init__ method if it only calls the ancestor's __init__ , but the comments in the example describe the arguments passed to constructors of spoke classes in an understandable way. The initialize method sets up a default value for the internal attribute of the spoke, which is then updated by the refresh method and used by the apply method to update Kickstart data. The only differences in these two methods from their equivalents in the GUI is the return type of the refresh method ( bool instead of None ) and an additional args argument they take. The meaning of the returned value is explained in the comments - it tells the application (the App class instance) whether this spoke requires user input or not. The additional args argument is used for passing extra information to the spoke when scheduled. 
The execute method has the same purpose as the equivalent method in the GUI; in this case, the method does nothing. Methods input and prompt are specific to the text interface; there are no equivalents in Kickstart or GUI. These two methods are responsible for user interaction. The prompt method should return a prompt which will be displayed after the content of the spoke is printed. After a string is entered in reaction to the prompt, this string is passed to the input method for processing. The input method then processes the entered string and takes action depending on its type and value. The above example asks for any value and then stores it as an internal attribute ( key ). In more complicated add-ons, you typically need to perform some non-trivial actions, such as parse c as "continue" or r as "refresh", convert numbers into integers, show additional screens or toggle boolean values. The return value of the input method must be either the INPUT_PROCESSED or INPUT_DISCARDED constant (both of these are defined in the pyanaconda.constants_text module), or the input string itself (in case this input should be processed by a different screen). In contrast to the graphical mode, the apply method is not called automatically when leaving the spoke; it must be called explicitly from the input method. The same applies to closing (hiding) the spoke's screen, which is done by calling the close method. To show another screen (for example, if you need additional information which was entered in a different spoke), you can instantiate another TUIObject and call one of the self.app.switch_screen* methods of the App . Due to restrictions of the text-based interface, TUI spokes tend to have a very similar structure: a list of checkboxes or entries which should be checked or unchecked and populated by the user. The previous paragraphs show a way to implement a TUI spoke whose methods handle the printing and processing of the available and provided data. However, there is a different way to accomplish this using the EditTUISpoke class from the pyanaconda.ui.tui.spokes package. By inheriting this class, you can implement a typical TUI spoke by only specifying fields and attributes which should be set in it. The example below demonstrates this: Example 15.
may specify # a deep attribute) # CHECKING_RE specifies compiled RE used for deciding about # accepting/rejecting user's input # TYPE may be one of EditTUISpoke.CHECK or EditTUISpoke.PASSWORD used # instead of CHECKING_RE for simple checkboxes or password entries, # respectively # SHOW_FUNC is a function taking self and self.args and returning True or # False indicating whether the entry should be shown or not # SHOW is a boolean value that may be used instead of the SHOW_FUNC # # :see: pyanaconda.ui.tui.spokes.EditTUISpoke edit_fields = [ Entry("Simple checkbox", "checked", EditTUISpoke.CHECK, True), Entry("Always shown input", "shown_input", _valid_input, True), Entry("Conditioned input", "hidden_input", _valid_input, lambda self, args: bool(args.shown_input)), ] def __init__(self, app, data, storage, payload, instclass): EditTUISpoke.__init__(self, app, data, storage, payload, instclass) # just populate the self.args attribute to have a store for data # typically self.data or a subtree of self.data is used as self.args self.args = _EditData() @property def completed(self): # completed if user entered something non-empty to the Conditioned input return bool(self.args.hidden_input) @property def status(self): return "Hidden input %s" % ("entered" if self.args.hidden_input else "not entered") def apply(self): # nothing needed here, values are set in the self.args tree pass The auxiliary class _EditData serves as a data container which is used to store values entered by the user. The HelloWorldEditSpoke class defines a simple spoke with one checkbox and two entries, all of which are instances of the EditTUISpokeEntry class imported as the Entry class). The first one is shown every time the spoke is displayed, the second instance is only shown if the first one contains a non-empty value. For more information about the EditTUISpoke class, see the comments in the above example. 5.7. Deploying and testing an Anaconda add-on To test a new add-on, you must load it into the installation environment. Add-ons are collected from the /usr/share/anaconda/addons/ directory in the installation runtime environment; to add your own add-on into that directory, you must create a product.img file with the same directory structure and place it on your boot media. For specific instructions on unpacking an existing boot image, creating a product.img file and repackaging the image, see Section 2, "Working with ISO Images" . [1] While Firstboot is a legacy tool, it is still supported because of third-party modules written for it. [2] In Fedora, the add-on is disabled by default. You can enable it using the inst.kdump_addon=on option in the boot menu. [3] The gui package may also contain a categories subpackage if the add-on needs to define a new category, but this is not recommended. [4] an instance of the SpokeWindow widget which is a custom widget created for the Anaconda installer [5] which is likely to change in the future to sticking to the better (GUI) way
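Returning to Section 5.7 above: as a rough illustration of packaging the add-on, and assuming the gzip-compressed cpio format that Anaconda also uses for updates images together with the directory layout from Section 5.5, a product.img could be assembled with commands along these lines (paths are placeholders):

$ cd ~/product_img_root    # contains usr/share/anaconda/addons/com_example_hello_world
$ find . | cpio -c -o | gzip -9 > ../product.img

The exact procedure, including where to place the resulting file on the boot media, is covered in Section 2, "Working with ISO Images".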
[ "com_example_hello_world ├─ ks │ └─ __init__.py ├─ gui │ ├─ __init__.py │ └─ spokes │ └─ __init__.py └─ tui ├─ __init__.py └─ spokes └─ __init__.py", "%addon ADDON_NAME [arguments] first line second line %end", "from pyanaconda.addons import AddonData from pykickstart.options import KSOptionParser export HelloWorldData class to prevent Anaconda's collect method from taking AddonData class instead of the HelloWorldData class :see: pyanaconda.kickstart.AnacondaKSHandler.__init__ __all__ = [\"HelloWorldData\"] HELLO_FILE_PATH = \"/root/hello_world_addon_output.txt\" class HelloWorldData(AddonData): \"\"\" Class parsing and storing data for the Hello world addon. :see: pyanaconda.addons.AddonData \"\"\" def __init__(self, name): \"\"\" :param name: name of the addon :type name: str \"\"\" AddonData.__init__(self, name) self.text = \"\" self.reverse = False def handle_header(self, lineno, args): \"\"\" The handle_header method is called to parse additional arguments in the %addon section line. :param lineno: the current linenumber in the kickstart file :type lineno: int :param args: any additional arguments after %addon <name> :type args: list \"\"\" op = KSOptionParser() op.add_option(\"--reverse\", action=\"store_true\", default=False, dest=\"reverse\", help=\"Reverse the display of the addon text\") (opts, extra) = op.parse_args(args=args, lineno=lineno) # Reject any additoinal arguments. Since AddonData.handle_header # rejects any arguments, we can use it to create an error message # and raise an exception. if extra: AddonData.handle_header(self, lineno, extra) # Store the result of the option parsing self.reverse = opts.reverse def handle_line(self, line): \"\"\" The handle_line method that is called with every line from this addon's %addon section of the kickstart file. :param line: a single line from the %addon section :type line: str \"\"\" # simple example, we just append lines to the text attribute if self.text is \"\": self.text = line.strip() else: self.text += \" \" + line.strip()", "import os.path from pyanaconda.addons import AddonData from pyanaconda.constants import ROOT_PATH HELLO_FILE_PATH = \"/root/hello_world_addon_output.txt\"", "def setup(self, storage, ksdata, instclass, payload): \"\"\" The setup method that should make changes to the runtime environment according to the data stored in this object. :param storage: object storing storage-related information (disks, partitioning, bootloader, etc.) :type storage: blivet.Blivet instance :param ksdata: data parsed from the kickstart file and set in the installation process :type ksdata: pykickstart.base.BaseHandler instance :param instclass: distribution-specific information :type instclass: pyanaconda.installclass.BaseInstallClass :param payload: object managing packages and environment groups for the installation :type payload: any class inherited from the pyanaconda.packaging.Payload class \"\"\" # no actions needed in this addon pass def execute(self, storage, ksdata, instclass, users, payload): \"\"\" The execute method that should make changes to the installed system. It is called only once in the post-install setup phase. :see: setup :param users: information about created users :type users: pyanaconda.users.Users instance \"\"\" hello_file_path = os.path.normpath(ROOT_PATH + HELLO_FILE_PATH) with open(hello_file_path, \"w\") as fobj: fobj.write(\"%s\\n\" % self.text)", "def __str__(self): \"\"\" What should end up in the resulting kickstart file, i.e. 
the %addon section containing string representation of the stored data. \"\"\" addon_str = \"%%addon %s\" % self.name if self.reverse: addon_str += \"--reverse\" addon_str += \"\\n%s\\n%%end\" % self.text return addon_str", "will never be translated _ = lambda x: x N_ = lambda x: x the path to addons is in sys.path so we can import things from org_fedora_hello_world from org_fedora_hello_world.gui.categories.hello_world import HelloWorldCategory from pyanaconda.ui.gui.spokes import NormalSpoke export only the spoke, no helper functions, classes or constants __all__ = [\"HelloWorldSpoke\"] class HelloWorldSpoke(NormalSpoke): \"\"\" Class for the Hello world spoke. This spoke will be in the Hello world category and thus on the Summary hub. It is a very simple example of a unit for the Anaconda's graphical user interface. :see: pyanaconda.ui.common.UIObject :see: pyanaconda.ui.common.Spoke :see: pyanaconda.ui.gui.GUIObject \"\"\" ### class attributes defined by API ### # list all top-level objects from the .glade file that should be exposed # to the spoke or leave empty to extract everything builderObjects = [\"helloWorldSpokeWindow\", \"buttonImage\"] # the name of the main window widget mainWidgetName = \"helloWorldSpokeWindow\" # name of the .glade file in the same directory as this source uiFile = \"hello_world.glade\" # category this spoke belongs to category = HelloWorldCategory # spoke icon (will be displayed on the hub) # preferred are the -symbolic icons as these are used in Anaconda's spokes icon = \"face-cool-symbolic\" # title of the spoke (will be displayed on the hub) title = N_(\"_HELLO WORLD\")", "def __init__(self, data, storage, payload, instclass): \"\"\" :see: pyanaconda.ui.common.Spoke.__init__ :param data: data object passed to every spoke to load/store data from/to it :type data: pykickstart.base.BaseHandler :param storage: object storing storage-related information (disks, partitioning, bootloader, etc.) :type storage: blivet.Blivet :param payload: object storing packaging-related information :type payload: pyanaconda.packaging.Payload :param instclass: distribution-specific information :type instclass: pyanaconda.installclass.BaseInstallClass \"\"\" NormalSpoke.__init__(self, data, storage, payload, instclass) def initialize(self): \"\"\" The initialize method that is called after the instance is created. The difference between __init__ and this method is that this may take a long time and thus could be called in a separated thread. :see: pyanaconda.ui.common.UIObject.initialize \"\"\" NormalSpoke.initialize(self) self._entry = self.builder.get_object(\"textEntry\")", "def refresh(self): \"\"\" The refresh method that is called every time the spoke is displayed. It should update the UI elements according to the contents of self.data. :see: pyanaconda.ui.common.UIObject.refresh \"\"\" self._entry.set_text(self.data.addons.org_fedora_hello_world.text) def apply(self): \"\"\" The apply method that is called when the spoke is left. It should update the contents of self.data with values set in the GUI elements. \"\"\" self.data.addons.org_fedora_hello_world.text = self._entry.get_text() def execute(self): \"\"\" The excecute method that is called when the spoke is left. It is supposed to do all changes to the runtime environment according to the values set in the GUI elements. \"\"\" # nothing to do here pass", "@property def ready(self): \"\"\" The ready property that tells whether the spoke is ready (can be visited) or not. 
The spoke is made (in)sensitive based on the returned value. :rtype: bool \"\"\" # this spoke is always ready return True @property def completed(self): \"\"\" The completed property that tells whether all mandatory items on the spoke are set, or not. The spoke will be marked on the hub as completed or uncompleted acording to the returned value. :rtype: bool \"\"\" return bool(self.data.addons.org_fedora_hello_world.text) @property def mandatory(self): \"\"\" The mandatory property that tells whether the spoke is mandatory to be completed to continue in the installation process. :rtype: bool \"\"\" # this is an optional spoke that is not mandatory to be completed return False", "@property def status(self): \"\"\" The status property that is a brief string describing the state of the spoke. It should describe whether all values are set and if possible also the values themselves. The returned value will appear on the hub below the spoke's title. :rtype: str \"\"\" text = self.data.addons.org_fedora_hello_world.text # If --reverse was specified in the kickstart, reverse the text if self.data.addons.org_fedora_hello_world.reverse: text = text[::-1] if text: return _(\"Text set: %s\") % text else: return _(\"Text not set\")", "every GUIObject gets ksdata in __init__ dialog = HelloWorldDialog(self.data) show dialog above the lightbox with enlightbox(self.window, dialog.window): dialog.run()", "def __init__(self, app, data, storage, payload, instclass): \"\"\" :see: pyanaconda.ui.tui.base.UIScreen :see: pyanaconda.ui.tui.base.App :param app: reference to application which is a main class for TUI screen handling, it is responsible for mainloop control and keeping track of the stack where all TUI screens are scheduled :type app: instance of pyanaconda.ui.tui.base.App :param data: data object passed to every spoke to load/store data from/to it :type data: pykickstart.base.BaseHandler :param storage: object storing storage-related information (disks, partitioning, bootloader, etc.) :type storage: blivet.Blivet :param payload: object storing packaging-related information :type payload: pyanaconda.packaging.Payload :param instclass: distribution-specific information :type instclass: pyanaconda.installclass.BaseInstallClass \"\"\" NormalTUISpoke.__init__(self, app, data, storage, payload, instclass) self._entered_text = \"\" def initialize(self): \"\"\" The initialize method that is called after the instance is created. The difference between __init__ and this method is that this may take a long time and thus could be called in a separated thread. :see: pyanaconda.ui.common.UIObject.initialize \"\"\" NormalTUISpoke.initialize(self) def refresh(self, args=None): \"\"\" The refresh method that is called every time the spoke is displayed. It should update the UI elements according to the contents of self.data. :see: pyanaconda.ui.common.UIObject.refresh :see: pyanaconda.ui.tui.base.UIScreen.refresh :param args: optional argument that may be used when the screen is scheduled (passed to App.switch_screen* methods) :type args: anything :return: whether this screen requests input or not :rtype: bool \"\"\" self._entered_text = self.data.addons.org_fedora_hello_world.text return True def apply(self): \"\"\" The apply method that is called when the spoke is left. It should update the contents of self.data with values set in the spoke. \"\"\" self.data.addons.org_fedora_hello_world.text = self._entered_text def execute(self): \"\"\" The excecute method that is called when the spoke is left. 
It is supposed to do all changes to the runtime environment according to the values set in the spoke. \"\"\" # nothing to do here pass def input(self, args, key): \"\"\" The input method that is called by the main loop on user's input. :param args: optional argument that may be used when the screen is scheduled (passed to App.switch_screen* methods) :type args: anything :param key: user's input :type key: unicode :return: if the input should not be handled here, return it, otherwise return True or False if the input was processed succesfully or not respectively :rtype: bool|unicode \"\"\" if key: self._entered_text = key # no other actions scheduled, apply changes self.apply() # close the current screen (remove it from the stack) self.close() return True def prompt(self, args=None): \"\"\" The prompt method that is called by the main loop to get the prompt for this screen. :param args: optional argument that can be passed to App.switch_screen* methods :type args: anything :return: text that should be used in the prompt for the input :rtype: unicode|None \"\"\" return _(\"Enter a new text or leave empty to use the old one: \")", "class _EditData(object): \"\"\"Auxiliary class for storing data from the example EditSpoke\"\"\" def __init__(self): \"\"\"Trivial constructor just defining the fields that will store data\"\"\" self.checked = False self.shown_input = \"\" self.hidden_input = \"\" class HelloWorldEditSpoke(EditTUISpoke): \"\"\"Example class demonstrating usage of EditTUISpoke inheritance\"\"\" title = _(\"Hello World Edit\") category = \"localization\" # simple RE used to specify we only accept a single word as a valid input _valid_input = re.compile(r'\\w+') # special class attribute defining spoke's entries as: # Entry(TITLE, ATTRIBUTE, CHECKING_RE or TYPE, SHOW_FUNC or SHOW) # where: # TITLE specifies descriptive title of the entry # ATTRIBUTE specifies attribute of self.args that should be set to the # value entered by the user (may contain dots, i.e. may specify # a deep attribute) # CHECKING_RE specifies compiled RE used for deciding about # accepting/rejecting user's input # TYPE may be one of EditTUISpoke.CHECK or EditTUISpoke.PASSWORD used # instead of CHECKING_RE for simple checkboxes or password entries, # respectively # SHOW_FUNC is a function taking self and self.args and returning True or # False indicating whether the entry should be shown or not # SHOW is a boolean value that may be used instead of the SHOW_FUNC # # :see: pyanaconda.ui.tui.spokes.EditTUISpoke edit_fields = [ Entry(\"Simple checkbox\", \"checked\", EditTUISpoke.CHECK, True), Entry(\"Always shown input\", \"shown_input\", _valid_input, True), Entry(\"Conditioned input\", \"hidden_input\", _valid_input, lambda self, args: bool(args.shown_input)), ] def __init__(self, app, data, storage, payload, instclass): EditTUISpoke.__init__(self, app, data, storage, payload, instclass) # just populate the self.args attribute to have a store for data # typically self.data or a subtree of self.data is used as self.args self.args = _EditData() @property def completed(self): # completed if user entered something non-empty to the Conditioned input return bool(self.args.hidden_input) @property def status(self): return \"Hidden input %s\" % (\"entered\" if self.args.hidden_input else \"not entered\") def apply(self): # nothing needed here, values are set in the self.args tree pass" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/anaconda_customization_guide/sect-anaconda-addon-development
Chapter 7. Red Hat Enterprise Linux CoreOS (RHCOS)
Chapter 7. Red Hat Enterprise Linux CoreOS (RHCOS) 7.1. About RHCOS Red Hat Enterprise Linux CoreOS (RHCOS) represents the next generation of single-purpose container operating system technology by providing the quality standards of Red Hat Enterprise Linux (RHEL) with automated, remote upgrade features. RHCOS is supported only as a component of OpenShift Container Platform 4.9 for all OpenShift Container Platform machines. RHCOS is the only supported operating system for OpenShift Container Platform control plane, or master, machines. While RHCOS is the default operating system for all cluster machines, you can create compute machines, which are also known as worker machines, that use RHEL as their operating system. There are two general ways RHCOS is deployed in OpenShift Container Platform 4.9: If you install your cluster on infrastructure that the installation program provisions, RHCOS images are downloaded to the target platform during installation. Suitable Ignition config files, which control the RHCOS configuration, are also downloaded and used to deploy the machines. If you install your cluster on infrastructure that you manage, you must follow the installation documentation to obtain the RHCOS images, generate Ignition config files, and use the Ignition config files to provision your machines. 7.1.1. Key RHCOS features The following list describes key features of the RHCOS operating system: Based on RHEL : The underlying operating system consists primarily of RHEL components. The same quality, security, and control measures that support RHEL also support RHCOS. For example, RHCOS software is in RPM packages, and each RHCOS system starts up with a RHEL kernel and a set of services that are managed by the systemd init system. Controlled immutability : Although it contains RHEL components, RHCOS is designed to be managed more tightly than a default RHEL installation. Management is performed remotely from the OpenShift Container Platform cluster. When you set up your RHCOS machines, you can modify only a few system settings. This controlled immutability allows OpenShift Container Platform to store the latest state of RHCOS systems in the cluster so it is always able to create additional machines and perform updates based on the latest RHCOS configurations. CRI-O container runtime : Although RHCOS contains features for running the OCI- and libcontainer-formatted containers that Docker requires, it incorporates the CRI-O container engine instead of the Docker container engine. By focusing on features needed by Kubernetes platforms, such as OpenShift Container Platform, CRI-O can offer specific compatibility with different Kubernetes versions. CRI-O also offers a smaller footprint and reduced attack surface than is possible with container engines that offer a larger feature set. At the moment, CRI-O is the only engine available within OpenShift Container Platform clusters. Set of container tools : For tasks such as building, copying, and otherwise managing containers, RHCOS replaces the Docker CLI tool with a compatible set of container tools. The podman CLI tool supports many container runtime features, such as running, starting, stopping, listing, and removing containers and container images. The skopeo CLI tool can copy, authenticate, and sign images. You can use the crictl CLI tool to work with containers and pods from the CRI-O container engine. While direct use of these tools in RHCOS is discouraged, you can use them for debugging purposes.
rpm-ostree upgrades : RHCOS features transactional upgrades using the rpm-ostree system. Updates are delivered by means of container images and are part of the OpenShift Container Platform update process. When deployed, the container image is pulled, extracted, and written to disk, then the bootloader is modified to boot into the new version. The machine will reboot into the update in a rolling manner to ensure cluster capacity is minimally impacted. bootupd firmware and bootloader updater : Package managers and hybrid systems such as rpm-ostree do not update the firmware or the bootloader. With bootupd , RHCOS users have access to a cross-distribution, system-agnostic update tool that manages firmware and boot updates in UEFI and legacy BIOS boot modes that run on modern architectures, such as x86_64, ppc64le, and aarch64. For information about how to install bootupd , see the documentation for Updating the bootloader using bootupd . Updated through the Machine Config Operator : In OpenShift Container Platform, the Machine Config Operator handles operating system upgrades. Instead of upgrading individual packages, as is done with yum upgrades, rpm-ostree delivers upgrades of the OS as an atomic unit. The new OS deployment is staged during upgrades and goes into effect on the next reboot. If something goes wrong with the upgrade, a single rollback and reboot returns the system to the previous state. RHCOS upgrades in OpenShift Container Platform are performed during cluster updates. For RHCOS systems, the layout of the rpm-ostree file system has the following characteristics: /usr is where the operating system binaries and libraries are stored and is read-only. We do not support altering this. /etc , /boot , /var are writable on the system but only intended to be altered by the Machine Config Operator. /var/lib/containers is the graph storage location for storing container images. 7.1.2. Choosing how to configure RHCOS RHCOS is designed to deploy on an OpenShift Container Platform cluster with a minimal amount of user configuration. In its most basic form, this consists of: Starting with a provisioned infrastructure, such as on AWS, or provisioning the infrastructure yourself. Supplying a few pieces of information, such as credentials and cluster name, in an install-config.yaml file when running openshift-install . Because RHCOS systems in OpenShift Container Platform are designed to be fully managed from the OpenShift Container Platform cluster after that, directly changing an RHCOS machine is discouraged. Although limited direct access to RHCOS machines in the cluster can be accomplished for debugging purposes, you should not directly configure RHCOS systems. Instead, if you need to add or change features on your OpenShift Container Platform nodes, consider making changes in the following ways: Kubernetes workload objects, such as DaemonSet and Deployment : If you need to add services or other user-level features to your cluster, consider adding them as Kubernetes workload objects. Keeping those features outside of specific node configurations is the best way to reduce the risk of breaking the cluster on subsequent upgrades. Day-2 customizations : If possible, bring up a cluster without making any customizations to cluster nodes and make necessary node changes after the cluster is up. Those changes are easier to track later and less likely to break updates. Creating machine configs or modifying Operator custom resources are ways of making these customizations.
Day-1 customizations : For customizations that you must implement when the cluster first comes up, there are ways of modifying your cluster so changes are implemented on first boot. Day-1 customizations can be done through Ignition configs and manifest files during openshift-install or by adding boot options during ISO installs provisioned by the user. Here are examples of customizations you could do on day 1: Kernel arguments : If particular kernel features or tuning is needed on nodes when the cluster first boots. Disk encryption : If your security needs require that the root file system on the nodes are encrypted, such as with FIPS support. Kernel modules : If a particular hardware device, such as a network card or video card, does not have a usable module available by default in the Linux kernel. Chronyd : If you want to provide specific clock settings to your nodes, such as the location of time servers. To accomplish these tasks, you can augment the openshift-install process to include additional objects such as MachineConfig objects. Those procedures that result in creating machine configs can be passed to the Machine Config Operator after the cluster is up. Note The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 7.1.3. Choosing how to deploy RHCOS Differences between RHCOS installations for OpenShift Container Platform are based on whether you are deploying on an infrastructure provisioned by the installer or by the user: Installer-provisioned : Some cloud environments offer pre-configured infrastructures that allow you to bring up an OpenShift Container Platform cluster with minimal configuration. For these types of installations, you can supply Ignition configs that place content on each node so it is there when the cluster first boots. User-provisioned : If you are provisioning your own infrastructure, you have more flexibility in how you add content to a RHCOS node. For example, you could add kernel arguments when you boot the RHCOS ISO installer to install each system. However, in most cases where configuration is required on the operating system itself, it is best to provide that configuration through an Ignition config. The Ignition facility runs only when the RHCOS system is first set up. After that, Ignition configs can be supplied later using the machine config. 7.1.4. About Ignition Ignition is the utility that is used by RHCOS to manipulate disks during initial configuration. It completes common disk tasks, including partitioning disks, formatting partitions, writing files, and configuring users. 
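As a small illustration of the kind of tasks listed above, a minimal Ignition config that only writes a single file might look like the following. The path and contents are placeholders; in OpenShift Container Platform such configs are normally generated by the installation program or wrapped in MachineConfig objects rather than written by hand.

{
  "ignition": { "version": "3.2.0" },
  "storage": {
    "files": [
      {
        "path": "/etc/hostname",
        "mode": 420,
        "overwrite": true,
        "contents": { "source": "data:,worker-0" }
      }
    ]
  }
}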
On first boot, Ignition reads its configuration from the installation media or the location that you specify and applies the configuration to the machines. Whether you are installing your cluster or adding machines to it, Ignition always performs the initial configuration of the OpenShift Container Platform cluster machines. Most of the actual system setup happens on each machine itself. For each machine, Ignition takes the RHCOS image and boots the RHCOS kernel. Options on the kernel command line identify the type of deployment and the location of the Ignition-enabled initial RAM disk (initramfs). 7.1.4.1. How Ignition works To create machines by using Ignition, you need Ignition config files. The OpenShift Container Platform installation program creates the Ignition config files that you need to deploy your cluster. These files are based on the information that you provide to the installation program directly or through an install-config.yaml file. The way that Ignition configures machines is similar to how tools like cloud-init or Linux Anaconda kickstart configure systems, but with some important differences: Ignition runs from an initial RAM disk that is separate from the system you are installing to. Because of that, Ignition can repartition disks, set up file systems, and perform other changes to the machine's permanent file system. In contrast, cloud-init runs as part of a machine's init system when the system boots, so making foundational changes to things like disk partitions cannot be done as easily. With cloud-init, it is also difficult to reconfigure the boot process while you are in the middle of the node boot process. Ignition is meant to initialize systems, not change existing systems. After a machine initializes and the kernel is running from the installed system, the Machine Config Operator from the OpenShift Container Platform cluster completes all future machine configuration. Instead of completing a defined set of actions, Ignition implements a declarative configuration. It checks that all partitions, files, services, and other items are in place before the new machine starts. It then makes the changes, like copying files to disk that are necessary for the new machine to meet the specified configuration. After Ignition finishes configuring a machine, the kernel keeps running but discards the initial RAM disk and pivots to the installed system on disk. All of the new system services and other features start without requiring a system reboot. Because Ignition confirms that all new machines meet the declared configuration, you cannot have a partially configured machine. If a machine setup fails, the initialization process does not finish, and Ignition does not start the new machine. Your cluster will never contain partially configured machines. If Ignition cannot complete, the machine is not added to the cluster. You must add a new machine instead. This behavior prevents the difficult case of debugging a machine when the results of a failed configuration task are not known until something that depended on it fails at a later date. If there is a problem with an Ignition config that causes the setup of a machine to fail, Ignition will not try to use the same config to set up another machine. For example, a failure could result from an Ignition config made up of a parent and child config that both want to create the same file. A failure in such a case would prevent that Ignition config from being used again to set up other machines until the problem is resolved.
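The parent and child configs mentioned above are combined through Ignition's config merging mechanism. As an illustrative sketch (the URL is a placeholder), a parent config that pulls in a child config might look like this:

{
  "ignition": {
    "version": "3.2.0",
    "config": {
      "merge": [
        { "source": "https://example.com/child.ign" }
      ]
    }
  }
}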
If you have multiple Ignition config files, you get a union of that set of configs. Because Ignition is declarative, conflicts between the configs could cause Ignition to fail to set up the machine. The order of information in those files does not matter. Ignition will sort and implement each setting in ways that make the most sense. For example, if a file needs a directory several levels deep, if another file needs a directory along that path, the later file is created first. Ignition sorts and creates all files, directories, and links by depth. Because Ignition can start with a completely empty hard disk, it can do something cloud-init cannot do: set up systems on bare metal from scratch using features such as PXE boot. In the bare metal case, the Ignition config is injected into the boot partition so that Ignition can find it and configure the system correctly. 7.1.4.2. The Ignition sequence The Ignition process for an RHCOS machine in an OpenShift Container Platform cluster involves the following steps: The machine gets its Ignition config file. Control plane machines get their Ignition config files from the bootstrap machine, and worker machines get Ignition config files from a control plane machine. Ignition creates disk partitions, file systems, directories, and links on the machine. It supports RAID arrays but does not support LVM volumes. Ignition mounts the root of the permanent file system to the /sysroot directory in the initramfs and starts working in that /sysroot directory. Ignition configures all defined file systems and sets them up to mount appropriately at runtime. Ignition runs systemd temporary files to populate required files in the /var directory. Ignition runs the Ignition config files to set up users, systemd unit files, and other configuration files. Ignition unmounts all components in the permanent system that were mounted in the initramfs. Ignition starts up the init process of the new machine, which in turn starts up all other services on the machine that run during system boot. At the end of this process, the machine is ready to join the cluster and does not require a reboot. 7.2. Viewing Ignition configuration files To see the Ignition config file used to deploy the bootstrap machine, run the following command: USD openshift-install create ignition-configs --dir USDHOME/testconfig After you answer a few questions, the bootstrap.ign , master.ign , and worker.ign files appear in the directory you entered. To see the contents of the bootstrap.ign file, pipe it through the jq filter. Here's a snippet from that file: USD cat USDHOME/testconfig/bootstrap.ign | jq { "ignition": { "version": "3.2.0" }, "passwd": { "users": [ { "name": "core", "sshAuthorizedKeys": [ "ssh-rsa AAAAB3NzaC1yc...." ] } ] }, "storage": { "files": [ { "overwrite": false, "path": "/etc/motd", "user": { "name": "root" }, "append": [ { "source": "data:text/plain;charset=utf-8;base64,VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2VzIGFyZSByZWxlYXNlLWltYWdlLnNlcnZpY2UgZm9sbG93ZWQgYnkgYm9vdGt1YmUuc2VydmljZS4gVG8gd2F0Y2ggdGhlaXIgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IHJlbGVhc2UtaW1hZ2Uuc2VydmljZSAtdSBib290a3ViZS5zZXJ2aWNlCg==" } ], "mode": 420 }, ... To decode the contents of a file listed in the bootstrap.ign file, pipe the base64-encoded data string representing the contents of that file to the base64 -d command. 
Here's an example using the contents of the /etc/motd file added to the bootstrap machine from the output shown above: USD echo VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2VzIGFyZSByZWxlYXNlLWltYWdlLnNlcnZpY2UgZm9sbG93ZWQgYnkgYm9vdGt1YmUuc2VydmljZS4gVG8gd2F0Y2ggdGhlaXIgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IHJlbGVhc2UtaW1hZ2Uuc2VydmljZSAtdSBib290a3ViZS5zZXJ2aWNlCg== | base64 --decode Example output This is the bootstrap node; it will be destroyed when the master is fully up. The primary services are release-image.service followed by bootkube.service. To watch their status, run e.g. journalctl -b -f -u release-image.service -u bootkube.service Repeat those commands on the master.ign and worker.ign files to see the source of Ignition config files for each of those machine types. You should see a line like the following for the worker.ign , identifying how it gets its Ignition config from the bootstrap machine: "source": "https://api.myign.develcluster.example.com:22623/config/worker", Here are a few things you can learn from the bootstrap.ign file: Format: The format of the file is defined in the Ignition config spec . Files of the same format are used later by the MCO to merge changes into a machine's configuration. Contents: Because the bootstrap machine serves the Ignition configs for other machines, both master and worker machine Ignition config information is stored in the bootstrap.ign , along with the bootstrap machine's configuration. Size: The file is more than 1300 lines long, with path to various types of resources. The content of each file that will be copied to the machine is actually encoded into data URLs, which tends to make the content a bit clumsy to read. (Use the jq and base64 commands shown previously to make the content more readable.) Configuration: The different sections of the Ignition config file are generally meant to contain files that are just dropped into a machine's file system, rather than commands to modify existing files. For example, instead of having a section on NFS that configures that service, you would just add an NFS configuration file, which would then be started by the init process when the system comes up. users: A user named core is created, with your SSH key assigned to that user. This allows you to log in to the cluster with that user name and your credentials. storage: The storage section identifies files that are added to each machine. A few notable files include /root/.docker/config.json (which provides credentials your cluster needs to pull from container image registries) and a bunch of manifest files in /opt/openshift/manifests that are used to configure your cluster. systemd: The systemd section holds content used to create systemd unit files. Those files are used to start up services at boot time, as well as manage those services on running systems. Primitives: Ignition also exposes low-level primitives that other tools can build on. 7.3. Changing Ignition configs after installation Machine config pools manage a cluster of nodes and their corresponding machine configs. Machine configs contain configuration information for a cluster. 
To list all machine config pools that are known: USD oc get machineconfigpools Example output NAME CONFIG UPDATED UPDATING DEGRADED master master-1638c1aea398413bb918e76632f20799 False False False worker worker-2feef4f8288936489a5a832ca8efe953 False False False To list all machine configs: USD oc get machineconfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED OSIMAGEURL 00-master 4.0.0-0.150.0.0-dirty 3.2.0 16m 00-master-ssh 4.0.0-0.150.0.0-dirty 16m 00-worker 4.0.0-0.150.0.0-dirty 3.2.0 16m 00-worker-ssh 4.0.0-0.150.0.0-dirty 16m 01-master-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m 01-worker-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m master-1638c1aea398413bb918e76632f20799 4.0.0-0.150.0.0-dirty 3.2.0 16m worker-2feef4f8288936489a5a832ca8efe953 4.0.0-0.150.0.0-dirty 3.2.0 16m The Machine Config Operator acts somewhat differently than Ignition when it comes to applying these machine configs. The machine configs are read in order (from 00* to 99*). Labels inside the machine configs identify the type of node each is for (master or worker). If the same file appears in multiple machine config files, the last one wins. So, for example, any file that appears in a 99* file would replace the same file that appeared in a 00* file. The input MachineConfig objects are unioned into a "rendered" MachineConfig object, which will be used as a target by the operator and is the value you can see in the machine config pool. To see what files are being managed from a machine config, look for "Path:" inside a particular MachineConfig object. For example: USD oc describe machineconfigs 01-worker-container-runtime | grep Path: Example output Path: /etc/containers/registries.conf Path: /etc/containers/storage.conf Path: /etc/crio/crio.conf Be sure to give the machine config file a later name (such as 10-worker-container-runtime). Keep in mind that the content of each file is in URL-style data. Then apply the new machine config to the cluster.
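As a sketch of that workflow (not an installer-generated config), a hypothetical machine config that adds a file to worker nodes could look like the following; the name 10-worker-custom-motd, the file path, and the base64 payload (a short plain-text greeting) are illustrative assumptions.

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 10-worker-custom-motd
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - path: /etc/motd
          mode: 420
          overwrite: true
          contents:
            source: data:text/plain;charset=utf-8;base64,V2VsY29tZQo=

Applying it, for example with oc create -f 10-worker-custom-motd.yaml , hands it to the Machine Config Operator, which renders a new worker configuration and rolls it out. Because the name sorts after 00-worker , its version of the file wins over any earlier definition of the same path.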
[ "openshift-install create ignition-configs --dir USDHOME/testconfig", "cat USDHOME/testconfig/bootstrap.ign | jq { \"ignition\": { \"version\": \"3.2.0\" }, \"passwd\": { \"users\": [ { \"name\": \"core\", \"sshAuthorizedKeys\": [ \"ssh-rsa AAAAB3NzaC1yc....\" ] } ] }, \"storage\": { \"files\": [ { \"overwrite\": false, \"path\": \"/etc/motd\", \"user\": { \"name\": \"root\" }, \"append\": [ { \"source\": \"data:text/plain;charset=utf-8;base64,VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2VzIGFyZSByZWxlYXNlLWltYWdlLnNlcnZpY2UgZm9sbG93ZWQgYnkgYm9vdGt1YmUuc2VydmljZS4gVG8gd2F0Y2ggdGhlaXIgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IHJlbGVhc2UtaW1hZ2Uuc2VydmljZSAtdSBib290a3ViZS5zZXJ2aWNlCg==\" } ], \"mode\": 420 },", "echo VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2VzIGFyZSByZWxlYXNlLWltYWdlLnNlcnZpY2UgZm9sbG93ZWQgYnkgYm9vdGt1YmUuc2VydmljZS4gVG8gd2F0Y2ggdGhlaXIgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IHJlbGVhc2UtaW1hZ2Uuc2VydmljZSAtdSBib290a3ViZS5zZXJ2aWNlCg== | base64 --decode", "This is the bootstrap node; it will be destroyed when the master is fully up. The primary services are release-image.service followed by bootkube.service. To watch their status, run e.g. journalctl -b -f -u release-image.service -u bootkube.service", "\"source\": \"https://api.myign.develcluster.example.com:22623/config/worker\",", "USD oc get machineconfigpools", "NAME CONFIG UPDATED UPDATING DEGRADED master master-1638c1aea398413bb918e76632f20799 False False False worker worker-2feef4f8288936489a5a832ca8efe953 False False False", "oc get machineconfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED OSIMAGEURL 00-master 4.0.0-0.150.0.0-dirty 3.2.0 16m 00-master-ssh 4.0.0-0.150.0.0-dirty 16m 00-worker 4.0.0-0.150.0.0-dirty 3.2.0 16m 00-worker-ssh 4.0.0-0.150.0.0-dirty 16m 01-master-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m 01-worker-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m master-1638c1aea398413bb918e76632f20799 4.0.0-0.150.0.0-dirty 3.2.0 16m worker-2feef4f8288936489a5a832ca8efe953 4.0.0-0.150.0.0-dirty 3.2.0 16m", "oc describe machineconfigs 01-worker-container-runtime | grep Path:", "Path: /etc/containers/registries.conf Path: /etc/containers/storage.conf Path: /etc/crio/crio.conf" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/architecture/architecture-rhcos
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_python_client/making-open-source-more-inclusive
Chapter 6. Supported components
Chapter 6. Supported components For a full list of component versions that are supported in this release of Red Hat JBoss Core Services, see the Core Services Apache HTTP Server Component Details page. Before you attempt to access the Component Details page, you must ensure that you have an active Red Hat subscription and you are logged in to the Red Hat Customer Portal.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_apache_http_server_2.4.57_service_pack_6_release_notes/supported_components
Appendix D. Ceph File System client configuration reference
Appendix D. Ceph File System client configuration reference This section lists configuration options for Ceph File System (CephFS) FUSE clients. Set them in the Ceph configuration file under the [client] section. client_acl_type Description Set the ACL type. Currently, the only possible value is posix_acl to enable POSIX ACL, or an empty string. This option only takes effect when fuse_default_permissions is set to false . Type String Default "" (no ACL enforcement) client_cache_mid Description Set the client cache midpoint. The midpoint splits the least recently used lists into a hot and warm list. Type Float Default 0.75 client_cache_size Description Set the number of inodes that the client keeps in the metadata cache. Type Integer Default 16384 (16 MB) client_caps_release_delay Description Set the delay between capability releases in seconds. The delay sets how many seconds a client waits to release capabilities that it no longer needs in case the capabilities are needed for another user space operation. Type Integer Default 5 (seconds) client_debug_force_sync_read Description If set to true , clients read data directly from OSDs instead of using a local page cache. Type Boolean Default false client_dirsize_rbytes Description If set to true , use the recursive size of a directory (that is, total of all descendants). Type Boolean Default true client_max_inline_size Description Set the maximum size of inlined data stored in a file inode rather than in a separate data object in RADOS. This setting only applies if the inline_data flag is set on the MDS map. Type Integer Default 4096 client_metadata Description Comma-delimited strings for client metadata sent to each MDS, in addition to the automatically generated version, host name, and other metadata. Type String Default "" (no additional metadata) client_mount_gid Description Set the group ID of CephFS mount. Type Integer Default -1 client_mount_timeout Description Set the timeout for CephFS mount in seconds. Type Float Default 300.0 client_mount_uid Description Set the user ID of CephFS mount. Type Integer Default -1 client_mountpoint Description An alternative to the -r option of the ceph-fuse command. Type String Default / client_oc Description Enable object caching. Type Boolean Default true client_oc_max_dirty Description Set the maximum number of dirty bytes in the object cache. Type Integer Default 104857600 (100MB) client_oc_max_dirty_age Description Set the maximum age in seconds of dirty data in the object cache before writeback. Type Float Default 5.0 (seconds) client_oc_max_objects Description Set the maximum number of objects in the object cache. Type Integer Default 1000 client_oc_size Description Set how many bytes of data the client will cache. Type Integer Default 209715200 (200 MB) client_oc_target_dirty Description Set the target size of dirty data. Red Hat recommends keeping this number low. Type Integer Default 8388608 (8MB) client_permissions Description Check client permissions on all I/O operations. Type Boolean Default true client_quota_df Description Report root directory quota for the statfs operation. Type Boolean Default true client_readahead_max_bytes Description Set the maximum number of bytes that the kernel reads ahead for future read operations. Overridden by the client_readahead_max_periods setting. Type Integer Default 0 (unlimited) client_readahead_max_periods Description Set the number of file layout periods (object size * number of stripes) that the kernel reads ahead. 
Overrides the client_readahead_max_bytes setting. Type Integer Default 4 client_readahead_min Description Set the minimum number of bytes that the kernel reads ahead. Type Integer Default 131072 (128KB) client_snapdir Description Set the snapshot directory name. Type String Default ".snap" client_tick_interval Description Set the interval in seconds between capability renewal and other upkeep. Type Float Default 1.0 client_use_random_mds Description Choose a random MDS for each request. Type Boolean Default false fuse_default_permissions Description When set to false , the ceph-fuse utility does its own permissions checking, instead of relying on the permissions enforcement in FUSE. Set to false together with the client acl type=posix_acl option to enable POSIX ACL. Type Boolean Default true Developer Options These options are internal. They are listed here only to complete the list of options. client_debug_getattr_caps Description Check if the reply from the MDS contains required capabilities. Type Boolean Default false client_debug_inject_tick_delay Description Add an artificial delay between client ticks. Type Integer Default 0 client_inject_fixed_oldest_tid Description, Type Boolean Default false client_inject_release_failure Description, Type Boolean Default false client_trace Description The path to the trace file for all file operations. The output is designed to be used by the Ceph synthetic client. See the ceph-syn(8) manual page for details. Type String Default "" (disabled)
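As a usage sketch, a [client] section of the Ceph configuration file that tunes a few of these options might look like the following; the values shown are arbitrary illustrations, not Red Hat recommendations.

[client]
client_cache_size = 32768
client_oc_size = 419430400
client_readahead_max_periods = 8
client_acl_type = posix_acl
fuse_default_permissions = false

The last two lines go together: client_acl_type only takes effect when fuse_default_permissions is set to false, as noted in the option descriptions above.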
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/file_system_guide/ceph-file-system-client-configuration-reference_fs
Part I. Director installation and configuration
Part I. Director installation and configuration
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/director_installation_and_usage/director_installation_and_configuration
A.3. Fsync
A.3. Fsync Fsync is known as an I/O-expensive operation, but this is not completely true. Firefox used to call the sqlite library each time the user clicked on a link to go to a new page. Sqlite called fsync, and because of the file system settings (mainly ext3 with data-ordered mode), there was a long latency during which nothing happened. This could take a long time (up to 30 seconds) if another process was copying a large file at the same time. However, in other cases, where fsync was not used at all, problems emerged with the switch to the ext4 file system. Ext3 was set to data-ordered mode, which flushed memory every few seconds and saved it to a disk. But with ext4 and laptop_mode, the interval between saves was longer and data might get lost when the system was unexpectedly switched off. Now ext4 is patched, but we must still consider the design of our applications carefully, and use fsync as appropriate. The following simple example of reading and writing into a configuration file shows how a backup of a file can be made or how data can be lost: /* open and read configuration file e.g. ./myconfig */ fd = open("./myconfig", O_RDONLY); read(fd, myconfig_buf, sizeof(myconfig_buf)); close(fd); ... fd = open("./myconfig", O_WRONLY | O_TRUNC | O_CREAT, S_IRUSR | S_IWUSR); write(fd, myconfig_buf, sizeof(myconfig_buf)); close(fd); A better approach would be: /* open and read configuration file e.g. ./myconfig */ fd = open("./myconfig", O_RDONLY); read(fd, myconfig_buf, sizeof(myconfig_buf)); close(fd); ... fd = open("./myconfig.suffix", O_WRONLY | O_TRUNC | O_CREAT, S_IRUSR | S_IWUSR); write(fd, myconfig_buf, sizeof(myconfig_buf)); fsync(fd); /* paranoia - optional */ ... close(fd); rename("./myconfig", "./myconfig~"); /* paranoia - optional */ rename("./myconfig.suffix", "./myconfig");
[ "/* open and read configuration file e.g. ./myconfig */ fd = open(\"./myconfig\", O_RDONLY); read(fd, myconfig_buf, sizeof(myconfig_buf)); close(fd); fd = open(\"./myconfig\", O_WRONLY | O_TRUNC | O_CREAT, S_IRUSR | S_IWUSR); write(fd, myconfig_buf, sizeof(myconfig_buf)); close(fd);", "/* open and read configuration file e.g. ./myconfig */ fd = open(\"./myconfig\", O_RDONLY); read(fd, myconfig_buf, sizeof(myconfig_buf)); close(fd); fd = open(\"./myconfig.suffix\", O_WRONLY | O_TRUNC | O_CREAT, S_IRUSR | S_IWUSR write(fd, myconfig_buf, sizeof(myconfig_buf)); fsync(fd); /* paranoia - optional */ close(fd); rename(\"./myconfig\", \"./myconfig~\"); /* paranoia - optional */ rename(\"./myconfig.suffix\", \"./myconfig\");" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/power_management_guide/developer_tips-fsync
Chapter 7. Working with containers
Chapter 7. Working with containers 7.1. Understanding Containers The basic units of OpenShift Container Platform applications are called containers . Linux container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources. Many application instances can be running in containers on a single host without visibility into each others' processes, files, network, and so on. Typically, each container provides a single service (often called a "micro-service"), such as a web server or a database, though containers can be used for arbitrary workloads. The Linux kernel has been incorporating capabilities for container technologies for years. OpenShift Container Platform and Kubernetes add the ability to orchestrate containers across multi-host installations. 7.1.1. About containers and RHEL kernel memory Due to Red Hat Enterprise Linux (RHEL) behavior, a container on a node with high CPU usage might seem to consume more memory than expected. The higher memory consumption could be caused by the kmem_cache in the RHEL kernel. The RHEL kernel creates a kmem_cache for each cgroup. For added performance, the kmem_cache contains a cpu_cache , and a node cache for any NUMA nodes. These caches all consume kernel memory. The amount of memory stored in those caches is proportional to the number of CPUs that the system uses. As a result, a higher number of CPUs results in a greater amount of kernel memory being held in these caches. Higher amounts of kernel memory in these caches can cause OpenShift Container Platform containers to exceed the configured memory limits, resulting in the container being killed. To avoid losing containers due to kernel memory issues, ensure that the containers request sufficient memory. You can use the following formula to estimate the amount of memory consumed by the kmem_cache , where nproc is the number of processing units available that are reported by the nproc command. The lower limit of container requests should be this value plus the container memory requirements: USD(nproc) X 1/2 MiB 7.1.2. About the container engine and container runtime A container engine is a piece of software that processes user requests, including command line options and image pulls. The container engine uses a container runtime , also called a lower-level container runtime , to run and manage the components required to deploy and operate containers. You likely will not need to interact with the container engine or container runtime. Note The OpenShift Container Platform documentation uses the term container runtime to refer to the lower-level container runtime. Other documentation can refer to the container engine as the container runtime. OpenShift Container Platform uses CRI-O as the container engine and runC or crun as the container runtime. The default container runtime is crun. Both container runtimes adhere to the Open Container Initiative (OCI) runtime specifications. CRI-O is a Kubernetes-native container engine implementation that integrates closely with the operating system to deliver an efficient and optimized Kubernetes experience. The CRI-O container engine runs as a systemd service on each OpenShift Container Platform cluster node. crun, developed by Red Hat, is a fast and low-memory container runtime fully written in C. runC, developed by Docker and maintained by the Open Container Project, is a lightweight, portable container runtime written in Go. 
crun has several improvements over runC, including: Smaller binary Quicker processing Lower memory footprint runC has some benefits over crun, including: Most popular OCI container runtime. Longer tenure in production. Default container runtime of CRI-O. You can move between the two container runtimes as needed. For information on setting which container runtime to use, see Creating a ContainerRuntimeConfig CR to edit CRI-O parameters . 7.2. Using Init Containers to perform tasks before a pod is deployed OpenShift Container Platform provides init containers , which are specialized containers that run before application containers and can contain utilities or setup scripts not present in an app image. 7.2.1. Understanding Init Containers You can use an Init Container resource to perform tasks before the rest of a pod is deployed. A pod can have Init Containers in addition to application containers. Init containers allow you to reorganize setup scripts and binding code. An Init Container can: Contain and run utilities that are not desirable to include in the app Container image for security reasons. Contain utilities or custom code for setup that is not present in an app image. For example, there is no requirement to make an image FROM another image just to use a tool like sed, awk, python, or dig during setup. Use Linux namespaces so that they have different filesystem views from app containers, such as access to secrets that application containers are not able to access. Each Init Container must complete successfully before the next one is started. So, Init Containers provide an easy way to block or delay the startup of app containers until some set of preconditions are met. For example, the following are some ways you can use Init Containers: Wait for a service to be created with a shell command like: for i in {1..100}; do sleep 1; if dig myservice; then exit 0; fi; done; exit 1 Register this pod with a remote server from the downward API with a command like: USD curl -X POST http://USDMANAGEMENT_SERVICE_HOST:USDMANAGEMENT_SERVICE_PORT/register -d 'instance=USD()&ip=USD()' Wait for some time before starting the app Container with a command like sleep 60 . Clone a git repository into a volume. Place values into a configuration file and run a template tool to dynamically generate a configuration file for the main app Container. For example, place the POD_IP value in a configuration and generate the main app configuration file using Jinja. See the Kubernetes documentation for more information. 7.2.2. Creating Init Containers The following example outlines a simple pod which has two Init Containers. The first waits for myservice and the second waits for mydb . After both containers complete, the pod begins. Procedure Create the pod for the Init Container: Create a YAML file similar to the following: apiVersion: v1 kind: Pod metadata: name: myapp-pod labels: app: myapp spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: myapp-container image: registry.access.redhat.com/ubi9/ubi:latest command: ['sh', '-c', 'echo The app is running! 
&& sleep 3600'] securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] initContainers: - name: init-myservice image: registry.access.redhat.com/ubi9/ubi:latest command: ['sh', '-c', 'until getent hosts myservice; do echo waiting for myservice; sleep 2; done;'] securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: init-mydb image: registry.access.redhat.com/ubi9/ubi:latest command: ['sh', '-c', 'until getent hosts mydb; do echo waiting for mydb; sleep 2; done;'] securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] Create the pod: USD oc create -f myapp.yaml View the status of the pod: USD oc get pods Example output NAME READY STATUS RESTARTS AGE myapp-pod 0/1 Init:0/2 0 5s The pod status, Init:0/2 , indicates it is waiting for the two services. Create the myservice service. Create a YAML file similar to the following: kind: Service apiVersion: v1 metadata: name: myservice spec: ports: - protocol: TCP port: 80 targetPort: 9376 Create the pod: USD oc create -f myservice.yaml View the status of the pod: USD oc get pods Example output NAME READY STATUS RESTARTS AGE myapp-pod 0/1 Init:1/2 0 5s The pod status, Init:1/2 , indicates it is waiting for one service, in this case the mydb service. Create the mydb service: Create a YAML file similar to the following: kind: Service apiVersion: v1 metadata: name: mydb spec: ports: - protocol: TCP port: 80 targetPort: 9377 Create the pod: USD oc create -f mydb.yaml View the status of the pod: USD oc get pods Example output NAME READY STATUS RESTARTS AGE myapp-pod 1/1 Running 0 2m The pod status indicated that it is no longer waiting for the services and is running. 7.3. Using volumes to persist container data Files in a container are ephemeral. As such, when a container crashes or stops, the data is lost. You can use volumes to persist the data used by the containers in a pod. A volume is directory, accessible to the Containers in a pod, where data is stored for the life of the pod. 7.3.1. Understanding volumes Volumes are mounted file systems available to pods and their containers which may be backed by a number of host-local or network attached storage endpoints. Containers are not persistent by default; on restart, their contents are cleared. To ensure that the file system on the volume contains no errors and, if errors are present, to repair them when possible, OpenShift Container Platform invokes the fsck utility prior to the mount utility. This occurs when either adding a volume or updating an existing volume. The simplest volume type is emptyDir , which is a temporary directory on a single machine. Administrators may also allow you to request a persistent volume that is automatically attached to your pods. Note emptyDir volume storage may be restricted by a quota based on the pod's FSGroup, if the FSGroup parameter is enabled by your cluster administrator. 7.3.2. Working with volumes using the OpenShift Container Platform CLI You can use the CLI command oc set volume to add and remove volumes and volume mounts for any object that has a pod template like replication controllers or deployment configs. You can also list volumes in pods or any object that has a pod template. The oc set volume command uses the following general syntax: USD oc set volume <object_selection> <operation> <mandatory_parameters> <options> Object selection Specify one of the following for the object_selection parameter in the oc set volume command: Table 7.1. 
Object Selection Syntax Description Example <object_type> <name> Selects <name> of type <object_type> . deploymentConfig registry <object_type> / <name> Selects <name> of type <object_type> . deploymentConfig/registry <object_type> --selector= <object_label_selector> Selects resources of type <object_type> that matched the given label selector. deploymentConfig --selector="name=registry" <object_type> --all Selects all resources of type <object_type> . deploymentConfig --all -f or --filename= <file_name> File name, directory, or URL to file to use to edit the resource. -f registry-deployment-config.json Operation Specify --add or --remove for the operation parameter in the oc set volume command. Mandatory parameters Any mandatory parameters are specific to the selected operation and are discussed in later sections. Options Any options are specific to the selected operation and are discussed in later sections. 7.3.3. Listing volumes and volume mounts in a pod You can list volumes and volume mounts in pods or pod templates: Procedure To list volumes: USD oc set volume <object_type>/<name> [options] List volume supported options: Option Description Default --name Name of the volume. -c, --containers Select containers by name. It can also take wildcard '*' that matches any character. '*' For example: To list all volumes for pod p1 : USD oc set volume pod/p1 To list volume v1 defined on all deployment configs: USD oc set volume dc --all --name=v1 7.3.4. Adding volumes to a pod You can add volumes and volume mounts to a pod. Procedure To add a volume, a volume mount, or both to pod templates: USD oc set volume <object_type>/<name> --add [options] Table 7.2. Supported Options for Adding Volumes Option Description Default --name Name of the volume. Automatically generated, if not specified. -t, --type Name of the volume source. Supported values: emptyDir , hostPath , secret , configmap , persistentVolumeClaim or projected . emptyDir -c, --containers Select containers by name. It can also take wildcard '*' that matches any character. '*' -m, --mount-path Mount path inside the selected containers. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . --path Host path. Mandatory parameter for --type=hostPath . Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . --secret-name Name of the secret. Mandatory parameter for --type=secret . --configmap-name Name of the configmap. Mandatory parameter for --type=configmap . --claim-name Name of the persistent volume claim. Mandatory parameter for --type=persistentVolumeClaim . --source Details of volume source as a JSON string. Recommended if the desired volume source is not supported by --type . -o, --output Display the modified objects instead of updating them on the server. Supported values: json , yaml . --output-version Output the modified objects with the given version. api-version For example: To add a new volume source emptyDir to the registry DeploymentConfig object: USD oc set volume dc/registry --add Tip You can alternatively apply the following YAML to add the volume: Example 7.1. 
Sample deployment config with an added volume kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: registry namespace: registry spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: 1 - name: volume-pppsw emptyDir: {} containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP 1 Add the volume source emptyDir . To add volume v1 with secret secret1 for replication controller r1 and mount inside the containers at /data : USD oc set volume rc/r1 --add --name=v1 --type=secret --secret-name='secret1' --mount-path=/data Tip You can alternatively apply the following YAML to add the volume: Example 7.2. Sample replication controller with added volume and secret kind: ReplicationController apiVersion: v1 metadata: name: example-1 namespace: example spec: replicas: 0 selector: app: httpd deployment: example-1 deploymentconfig: example template: metadata: creationTimestamp: null labels: app: httpd deployment: example-1 deploymentconfig: example spec: volumes: 1 - name: v1 secret: secretName: secret1 defaultMode: 420 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest volumeMounts: 2 - name: v1 mountPath: /data 1 Add the volume and secret. 2 Add the container mount path. To add existing persistent volume v1 with claim name pvc1 to deployment configuration dc.json on disk, mount the volume on container c1 at /data , and update the DeploymentConfig object on the server: USD oc set volume -f dc.json --add --name=v1 --type=persistentVolumeClaim \ --claim-name=pvc1 --mount-path=/data --containers=c1 Tip You can alternatively apply the following YAML to add the volume: Example 7.3. Sample deployment config with persistent volume added kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example namespace: example spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: - name: volume-pppsw emptyDir: {} - name: v1 1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: 2 - name: v1 mountPath: /data 1 Add the persistent volume claim named `pvc1. 2 Add the container mount path. To add a volume v1 based on Git repository https://github.com/namespace1/project1 with revision 5125c45f9f563 for all replication controllers: USD oc set volume rc --all --add --name=v1 \ --source='{"gitRepo": { "repository": "https://github.com/namespace1/project1", "revision": "5125c45f9f563" }}' 7.3.5. Updating volumes and volume mounts in a pod You can modify the volumes and volume mounts in a pod. Procedure Updating existing volumes using the --overwrite option: USD oc set volume <object_type>/<name> --add --overwrite [options] For example: To replace existing volume v1 for replication controller r1 with existing persistent volume claim pvc1 : USD oc set volume rc/r1 --add --overwrite --name=v1 --type=persistentVolumeClaim --claim-name=pvc1 Tip You can alternatively apply the following YAML to replace the volume: Example 7.4. 
Sample replication controller with persistent volume claim named pvc1 kind: ReplicationController apiVersion: v1 metadata: name: example-1 namespace: example spec: replicas: 0 selector: app: httpd deployment: example-1 deploymentconfig: example template: metadata: labels: app: httpd deployment: example-1 deploymentconfig: example spec: volumes: - name: v1 1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: - name: v1 mountPath: /data 1 Set persistent volume claim to pvc1 . To change the DeploymentConfig object d1 mount point to /opt for volume v1 : USD oc set volume dc/d1 --add --overwrite --name=v1 --mount-path=/opt Tip You can alternatively apply the following YAML to change the mount point: Example 7.5. Sample deployment config with mount point set to opt . kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example namespace: example spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: - name: volume-pppsw emptyDir: {} - name: v2 persistentVolumeClaim: claimName: pvc1 - name: v1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: 1 - name: v1 mountPath: /opt 1 Set the mount point to /opt . 7.3.6. Removing volumes and volume mounts from a pod You can remove a volume or volume mount from a pod. Procedure To remove a volume from pod templates: USD oc set volume <object_type>/<name> --remove [options] Table 7.3. Supported options for removing volumes Option Description Default --name Name of the volume. -c, --containers Select containers by name. It can also take wildcard '*' that matches any character. '*' --confirm Indicate that you want to remove multiple volumes at once. -o, --output Display the modified objects instead of updating them on the server. Supported values: json , yaml . --output-version Output the modified objects with the given version. api-version For example: To remove a volume v1 from the DeploymentConfig object d1 : USD oc set volume dc/d1 --remove --name=v1 To unmount volume v1 from container c1 for the DeploymentConfig object d1 and remove the volume v1 if it is not referenced by any containers on d1 : USD oc set volume dc/d1 --remove --name=v1 --containers=c1 To remove all volumes for replication controller r1 : USD oc set volume rc/r1 --remove --confirm 7.3.7. Configuring volumes for multiple uses in a pod You can configure a volume to share one volume for multiple uses in a single pod using the volumeMounts.subPath property to specify a subPath value inside a volume instead of the volume's root. Note You cannot add a subPath parameter to an existing scheduled pod. 
Procedure To view the list of files in the volume, run the oc rsh command: USD oc rsh <pod> Example output sh-4.2USD ls /path/to/volume/subpath/mount example_file1 example_file2 example_file3 Specify the subPath : Example Pod spec with subPath parameter apiVersion: v1 kind: Pod metadata: name: my-site spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: mysql image: mysql volumeMounts: - mountPath: /var/lib/mysql name: site-data subPath: mysql 1 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: php image: php volumeMounts: - mountPath: /var/www/html name: site-data subPath: html 2 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: site-data persistentVolumeClaim: claimName: my-site-data 1 Databases are stored in the mysql folder. 2 HTML content is stored in the html folder. 7.4. Mapping volumes using projected volumes A projected volume maps several existing volume sources into the same directory. The following types of volume sources can be projected: Secrets Config Maps Downward API Note All sources are required to be in the same namespace as the pod. 7.4.1. Understanding projected volumes Projected volumes can map any combination of these volume sources into a single directory, allowing the user to: automatically populate a single volume with the keys from multiple secrets, config maps, and with downward API information, so that I can synthesize a single directory with various sources of information; populate a single volume with the keys from multiple secrets, config maps, and with downward API information, explicitly specifying paths for each item, so that I can have full control over the contents of that volume. Important When the RunAsUser permission is set in the security context of a Linux-based pod, the projected files have the correct permissions set, including container user ownership. However, when the Windows equivalent RunAsUsername permission is set in a Windows pod, the kubelet is unable to correctly set ownership on the files in the projected volume. Therefore, the RunAsUsername permission set in the security context of a Windows pod is not honored for Windows projected volumes running in OpenShift Container Platform. The following general scenarios show how you can use projected volumes. Config map, secrets, Downward API. Projected volumes allow you to deploy containers with configuration data that includes passwords. An application using these resources could be deploying Red Hat OpenStack Platform (RHOSP) on Kubernetes. The configuration data might have to be assembled differently depending on if the services are going to be used for production or for testing. If a pod is labeled with production or testing, the downward API selector metadata.labels can be used to produce the correct RHOSP configs. Config map + secrets. Projected volumes allow you to deploy containers involving configuration data and passwords. For example, you might execute a config map with some sensitive encrypted tasks that are decrypted using a vault password file. ConfigMap + Downward API. Projected volumes allow you to generate a config including the pod name (available via the metadata.name selector). This application can then pass the pod name along with requests to easily determine the source without using IP tracking. Secrets + Downward API. 
Projected volumes allow you to use a secret as a public key to encrypt the namespace of the pod (available via the metadata.namespace selector). This example allows the Operator to use the application to deliver the namespace information securely without using an encrypted transport. 7.4.1.1. Example Pod specs The following are examples of Pod specs for creating projected volumes. Pod with a secret, a Downward API, and a config map apiVersion: v1 kind: Pod metadata: name: volume-test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: container-test image: busybox volumeMounts: 1 - name: all-in-one mountPath: "/projected-volume" 2 readOnly: true 3 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: 4 - name: all-in-one 5 projected: defaultMode: 0400 6 sources: - secret: name: mysecret 7 items: - key: username path: my-group/my-username 8 - downwardAPI: 9 items: - path: "labels" fieldRef: fieldPath: metadata.labels - path: "cpu_limit" resourceFieldRef: containerName: container-test resource: limits.cpu - configMap: 10 name: myconfigmap items: - key: config path: my-group/my-config mode: 0777 11 1 Add a volumeMounts section for each container that needs the secret. 2 Specify a path to an unused directory where the secret will appear. 3 Set readOnly to true . 4 Add a volumes block to list each projected volume source. 5 Specify any name for the volume. 6 Set the execute permission on the files. 7 Add a secret. Enter the name of the secret object. Each secret you want to use must be listed. 8 Specify the path to the secrets file under the mountPath . Here, the secrets file is in /projected-volume/my-group/my-username . 9 Add a Downward API source. 10 Add a ConfigMap source. 11 Set the mode for the specific projection Note If there are multiple containers in the pod, each container needs a volumeMounts section, but only one volumes section is needed. Pod with multiple secrets with a non-default permission mode set apiVersion: v1 kind: Pod metadata: name: volume-test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: container-test image: busybox volumeMounts: - name: all-in-one mountPath: "/projected-volume" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: all-in-one projected: defaultMode: 0755 sources: - secret: name: mysecret items: - key: username path: my-group/my-username - secret: name: mysecret2 items: - key: password path: my-group/my-password mode: 511 Note The defaultMode can only be specified at the projected level and not for each volume source. However, as illustrated above, you can explicitly set the mode for each individual projection. 7.4.1.2. Pathing Considerations Collisions Between Keys when Configured Paths are Identical If you configure any keys with the same path, the pod spec will not be accepted as valid. 
In the following example, the specified path for mysecret and myconfigmap are the same: apiVersion: v1 kind: Pod metadata: name: volume-test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: container-test image: busybox volumeMounts: - name: all-in-one mountPath: "/projected-volume" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: all-in-one projected: sources: - secret: name: mysecret items: - key: username path: my-group/data - configMap: name: myconfigmap items: - key: config path: my-group/data Consider the following situations related to the volume file paths. Collisions Between Keys without Configured Paths The only run-time validation that can occur is when all the paths are known at pod creation, similar to the above scenario. Otherwise, when a conflict occurs the most recent specified resource will overwrite anything preceding it (this is true for resources that are updated after pod creation as well). Collisions when One Path is Explicit and the Other is Automatically Projected In the event that there is a collision due to a user specified path matching data that is automatically projected, the latter resource will overwrite anything preceding it as before 7.4.2. Configuring a Projected Volume for a Pod When creating projected volumes, consider the volume file path situations described in Understanding projected volumes . The following example shows how to use a projected volume to mount an existing secret volume source. The steps can be used to create a user name and password secrets from local files. You then create a pod that runs one container, using a projected volume to mount the secrets into the same shared directory. The user name and password values can be any valid string that is base64 encoded. The following example shows admin in base64: USD echo -n "admin" | base64 Example output YWRtaW4= The following example shows the password 1f2d1e2e67df in base64: USD echo -n "1f2d1e2e67df" | base64 Example output MWYyZDFlMmU2N2Rm Procedure To use a projected volume to mount an existing secret volume source. Create the secret: Create a YAML file similar to the following, replacing the password and user information as appropriate: apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque data: pass: MWYyZDFlMmU2N2Rm user: YWRtaW4= Use the following command to create the secret: USD oc create -f <secrets-filename> For example: USD oc create -f secret.yaml Example output secret "mysecret" created You can check that the secret was created using the following commands: USD oc get secret <secret-name> For example: USD oc get secret mysecret Example output NAME TYPE DATA AGE mysecret Opaque 2 17h USD oc get secret <secret-name> -o yaml For example: USD oc get secret mysecret -o yaml apiVersion: v1 data: pass: MWYyZDFlMmU2N2Rm user: YWRtaW4= kind: Secret metadata: creationTimestamp: 2017-05-30T20:21:38Z name: mysecret namespace: default resourceVersion: "2107" selfLink: /api/v1/namespaces/default/secrets/mysecret uid: 959e0424-4575-11e7-9f97-fa163e4bd54c type: Opaque Create a pod with a projected volume. 
Create a YAML file similar to the following, including a volumes section: kind: Pod metadata: name: test-projected-volume spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-projected-volume image: busybox args: - sleep - "86400" volumeMounts: - name: all-in-one mountPath: "/projected-volume" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: all-in-one projected: sources: - secret: name: mysecret 1 1 The name of the secret you created. Create the pod from the configuration file: USD oc create -f <your_yaml_file>.yaml For example: USD oc create -f secret-pod.yaml Example output pod "test-projected-volume" created Verify that the pod container is running, and then watch for changes to the pod: USD oc get pod <name> For example: USD oc get pod test-projected-volume The output should appear similar to the following: Example output NAME READY STATUS RESTARTS AGE test-projected-volume 1/1 Running 0 14s In another terminal, use the oc exec command to open a shell to the running container: USD oc exec -it <pod> <command> For example: USD oc exec -it test-projected-volume -- /bin/sh In your shell, verify that the projected-volumes directory contains your projected sources: / # ls Example output bin home root tmp dev proc run usr etc projected-volume sys var 7.5. Allowing containers to consume API objects The Downward API is a mechanism that allows containers to consume information about API objects without coupling to OpenShift Container Platform. Such information includes the pod's name, namespace, and resource values. Containers can consume information from the downward API using environment variables or a volume plugin. 7.5.1. Expose pod information to Containers using the Downward API The Downward API contains such information as the pod's name, project, and resource values. Containers can consume information from the downward API using environment variables or a volume plugin. Fields within the pod are selected using the FieldRef API type. FieldRef has two fields: Field Description fieldPath The path of the field to select, relative to the pod. apiVersion The API version to interpret the fieldPath selector within. Currently, the valid selectors in the v1 API include: Selector Description metadata.name The pod's name. This is supported in both environment variables and volumes. metadata.namespace The pod's namespace.This is supported in both environment variables and volumes. metadata.labels The pod's labels. This is only supported in volumes and not in environment variables. metadata.annotations The pod's annotations. This is only supported in volumes and not in environment variables. status.podIP The pod's IP. This is only supported in environment variables and not volumes. The apiVersion field, if not specified, defaults to the API version of the enclosing pod template. 7.5.2. Understanding how to consume container values using the downward API You containers can consume API values using environment variables or a volume plugin. Depending on the method you choose, containers can consume: Pod name Pod project/namespace Pod annotations Pod labels Annotations and labels are available using only a volume plugin. 7.5.2.1. 
Consuming container values using environment variables When using a container's environment variables, use the EnvVar type's valueFrom field (of type EnvVarSource ) to specify that the variable's value should come from a FieldRef source instead of the literal value specified by the value field. Only constant attributes of the pod can be consumed this way, as environment variables cannot be updated once a process is started in a way that allows the process to be notified that the value of a variable has changed. The fields supported using environment variables are: Pod name Pod project/namespace Procedure Create a new pod spec that contains the environment variables you want the container to consume: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never # ... Create the pod from the pod.yaml file: USD oc create -f pod.yaml Verification Check the container's logs for the MY_POD_NAME and MY_POD_NAMESPACE values: USD oc logs -p dapi-env-test-pod 7.5.2.2. Consuming container values using a volume plugin You containers can consume API values using a volume plugin. Containers can consume: Pod name Pod project/namespace Pod annotations Pod labels Procedure To use the volume plugin: Create a new pod spec that contains the environment variables you want the container to consume: Create a volume-pod.yaml file similar to the following: kind: Pod apiVersion: v1 metadata: labels: zone: us-east-coast cluster: downward-api-test-cluster1 rack: rack-123 name: dapi-volume-test-pod annotations: annotation1: "345" annotation2: "456" spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: volume-test-container image: gcr.io/google_containers/busybox command: ["sh", "-c", "cat /tmp/etc/pod_labels /tmp/etc/pod_annotations"] volumeMounts: - name: podinfo mountPath: /tmp/etc readOnly: false securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: podinfo downwardAPI: defaultMode: 420 items: - fieldRef: fieldPath: metadata.name path: pod_name - fieldRef: fieldPath: metadata.namespace path: pod_namespace - fieldRef: fieldPath: metadata.labels path: pod_labels - fieldRef: fieldPath: metadata.annotations path: pod_annotations restartPolicy: Never # ... Create the pod from the volume-pod.yaml file: USD oc create -f volume-pod.yaml Verification Check the container's logs and verify the presence of the configured fields: USD oc logs -p dapi-volume-test-pod Example output cluster=downward-api-test-cluster1 rack=rack-123 zone=us-east-coast annotation1=345 annotation2=456 kubernetes.io/config.source=api 7.5.3. Understanding how to consume container resources using the Downward API When creating pods, you can use the Downward API to inject information about computing resource requests and limits so that image and application authors can correctly create an image for specific environments. You can do this using environment variable or a volume plugin. 7.5.3.1. 
Consuming container resources using environment variables When creating pods, you can use the Downward API to inject information about computing resource requests and limits using environment variables. When creating the pod configuration, specify environment variables that correspond to the contents of the resources field in the spec.container field. Note If the resource limits are not included in the container configuration, the downward API defaults to the node's CPU and memory allocatable values. Procedure Create a new pod spec that contains the resources you want to inject: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox:1.24 command: [ "/bin/sh", "-c", "env" ] resources: requests: memory: "32Mi" cpu: "125m" limits: memory: "64Mi" cpu: "250m" env: - name: MY_CPU_REQUEST valueFrom: resourceFieldRef: resource: requests.cpu - name: MY_CPU_LIMIT valueFrom: resourceFieldRef: resource: limits.cpu - name: MY_MEM_REQUEST valueFrom: resourceFieldRef: resource: requests.memory - name: MY_MEM_LIMIT valueFrom: resourceFieldRef: resource: limits.memory # ... Create the pod from the pod.yaml file: USD oc create -f pod.yaml 7.5.3.2. Consuming container resources using a volume plugin When creating pods, you can use the Downward API to inject information about computing resource requests and limits using a volume plugin. When creating the pod configuration, use the spec.volumes.downwardAPI.items field to describe the desired resources that correspond to the spec.resources field. Note If the resource limits are not included in the container configuration, the Downward API defaults to the node's CPU and memory allocatable values. Procedure Create a new pod spec that contains the resources you want to inject: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: client-container image: gcr.io/google_containers/busybox:1.24 command: ["sh", "-c", "while true; do echo; if [[ -e /etc/cpu_limit ]]; then cat /etc/cpu_limit; fi; if [[ -e /etc/cpu_request ]]; then cat /etc/cpu_request; fi; if [[ -e /etc/mem_limit ]]; then cat /etc/mem_limit; fi; if [[ -e /etc/mem_request ]]; then cat /etc/mem_request; fi; sleep 5; done"] resources: requests: memory: "32Mi" cpu: "125m" limits: memory: "64Mi" cpu: "250m" volumeMounts: - name: podinfo mountPath: /etc readOnly: false volumes: - name: podinfo downwardAPI: items: - path: "cpu_limit" resourceFieldRef: containerName: client-container resource: limits.cpu - path: "cpu_request" resourceFieldRef: containerName: client-container resource: requests.cpu - path: "mem_limit" resourceFieldRef: containerName: client-container resource: limits.memory - path: "mem_request" resourceFieldRef: containerName: client-container resource: requests.memory # ... Create the pod from the volume-pod.yaml file: USD oc create -f volume-pod.yaml 7.5.4. Consuming secrets using the Downward API When creating pods, you can use the downward API to inject secrets so image and application authors can create an image for specific environments. 
Procedure Create a secret to inject: Create a secret.yaml file similar to the following: apiVersion: v1 kind: Secret metadata: name: mysecret data: password: <password> username: <username> type: kubernetes.io/basic-auth Create the secret object from the secret.yaml file: USD oc create -f secret.yaml Create a pod that references the username field from the above Secret object: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: - name: MY_SECRET_USERNAME valueFrom: secretKeyRef: name: mysecret key: username securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never # ... Create the pod from the pod.yaml file: USD oc create -f pod.yaml Verification Check the container's logs for the MY_SECRET_USERNAME value: USD oc logs -p dapi-env-test-pod 7.5.5. Consuming configuration maps using the Downward API When creating pods, you can use the Downward API to inject configuration map values so image and application authors can create an image for specific environments. Procedure Create a config map with the values to inject: Create a configmap.yaml file similar to the following: apiVersion: v1 kind: ConfigMap metadata: name: myconfigmap data: mykey: myvalue Create the config map from the configmap.yaml file: USD oc create -f configmap.yaml Create a pod that references the above config map: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: - name: MY_CONFIGMAP_VALUE valueFrom: configMapKeyRef: name: myconfigmap key: mykey securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Always # ... Create the pod from the pod.yaml file: USD oc create -f pod.yaml Verification Check the container's logs for the MY_CONFIGMAP_VALUE value: USD oc logs -p dapi-env-test-pod 7.5.6. Referencing environment variables When creating pods, you can reference the value of a previously defined environment variable by using the USD() syntax. If the environment variable reference can not be resolved, the value will be left as the provided string. Procedure Create a pod that references an existing environment variable: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: - name: MY_EXISTING_ENV value: my_value - name: MY_ENV_VAR_REF_ENV value: USD(MY_EXISTING_ENV) securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never # ... Create the pod from the pod.yaml file: USD oc create -f pod.yaml Verification Check the container's logs for the MY_ENV_VAR_REF_ENV value: USD oc logs -p dapi-env-test-pod 7.5.7. Escaping environment variable references When creating a pod, you can escape an environment variable reference by using a double dollar sign. The value will then be set to a single dollar sign version of the provided value. 
Procedure Create a pod that references an existing environment variable: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: - name: MY_NEW_ENV value: USDUSD(SOME_OTHER_ENV) securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never # ... Create the pod from the pod.yaml file: USD oc create -f pod.yaml Verification Check the container's logs for the MY_NEW_ENV value: USD oc logs -p dapi-env-test-pod 7.6. Copying files to or from an OpenShift Container Platform container You can use the CLI to copy local files to or from a remote directory in a container using the rsync command. 7.6.1. Understanding how to copy files The oc rsync command, or remote sync, is a useful tool for copying database archives to and from your pods for backup and restore purposes. You can also use oc rsync to copy source code changes into a running pod for development debugging, when the running pod supports hot reload of source files. USD oc rsync <source> <destination> [-c <container>] 7.6.1.1. Requirements Specifying the Copy Source The source argument of the oc rsync command must point to either a local directory or a pod directory. Individual files are not supported. When specifying a pod directory the directory name must be prefixed with the pod name: <pod name>:<dir> If the directory name ends in a path separator ( / ), only the contents of the directory are copied to the destination. Otherwise, the directory and its contents are copied to the destination. Specifying the Copy Destination The destination argument of the oc rsync command must point to a directory. If the directory does not exist, but rsync is used for copy, the directory is created for you. Deleting Files at the Destination The --delete flag may be used to delete any files in the remote directory that are not in the local directory. Continuous Syncing on File Change Using the --watch option causes the command to monitor the source path for any file system changes, and synchronizes changes when they occur. With this argument, the command runs forever. Synchronization occurs after short quiet periods to ensure a rapidly changing file system does not result in continuous synchronization calls. When using the --watch option, the behavior is effectively the same as manually invoking oc rsync repeatedly, including any arguments normally passed to oc rsync . Therefore, you can control the behavior via the same flags used with manual invocations of oc rsync , such as --delete . 7.6.2. Copying files to and from containers Support for copying local files to or from a container is built into the CLI. Prerequisites When working with oc rsync , note the following: rsync must be installed. The oc rsync command uses the local rsync tool, if present on the client machine and the remote container. If rsync is not found locally or in the remote container, a tar archive is created locally and sent to the container where the tar utility is used to extract the files. If tar is not available in the remote container, the copy will fail. The tar copy method does not provide the same functionality as oc rsync . For example, oc rsync creates the destination directory if it does not exist and only sends files that are different between the source and the destination. 
Note In Windows, the cwRsync client should be installed and added to the PATH for use with the oc rsync command. Procedure To copy a local directory to a pod directory: USD oc rsync <local-dir> <pod-name>:/<remote-dir> -c <container-name> For example: USD oc rsync /home/user/source devpod1234:/src -c user-container To copy a pod directory to a local directory: USD oc rsync devpod1234:/src /home/user/source Example output USD oc rsync devpod1234:/src/status.txt /home/user/ 7.6.3. Using advanced Rsync features The oc rsync command exposes fewer command line options than standard rsync . In the case that you want to use a standard rsync command line option that is not available in oc rsync , for example the --exclude-from=FILE option, it might be possible to use standard rsync 's --rsh ( -e ) option or RSYNC_RSH environment variable as a workaround, as follows: USD rsync --rsh='oc rsh' --exclude-from=<file_name> <local-dir> <pod-name>:/<remote-dir> or: Export the RSYNC_RSH variable: USD export RSYNC_RSH='oc rsh' Then, run the rsync command: USD rsync --exclude-from=<file_name> <local-dir> <pod-name>:/<remote-dir> Both of the above examples configure standard rsync to use oc rsh as its remote shell program to enable it to connect to the remote pod, and are an alternative to running oc rsync . 7.7. Executing remote commands in an OpenShift Container Platform container You can use the CLI to execute remote commands in an OpenShift Container Platform container. 7.7.1. Executing remote commands in containers Support for remote container command execution is built into the CLI. Procedure To run a command in a container: USD oc exec <pod> [-c <container>] -- <command> [<arg_1> ... <arg_n>] For example: USD oc exec mypod date Example output Thu Apr 9 02:21:53 UTC 2015 Important For security purposes , the oc exec command does not work when accessing privileged containers except when the command is executed by a cluster-admin user. 7.7.2. Protocol for initiating a remote command from a client Clients initiate the execution of a remote command in a container by issuing a request to the Kubernetes API server: /proxy/nodes/<node_name>/exec/<namespace>/<pod>/<container>?command=<command> In the above URL: <node_name> is the FQDN of the node. <namespace> is the project of the target pod. <pod> is the name of the target pod. <container> is the name of the target container. <command> is the desired command to be executed. For example: /proxy/nodes/node123.openshift.com/exec/myns/mypod/mycontainer?command=date Additionally, the client can add parameters to the request to indicate if: the client should send input to the remote container's command (stdin). the client's terminal is a TTY. the remote container's command should send output from stdout to the client. the remote container's command should send output from stderr to the client. After sending an exec request to the API server, the client upgrades the connection to one that supports multiplexed streams; the current implementation uses HTTP/2 . The client creates one stream each for stdin, stdout, and stderr. To distinguish among the streams, the client sets the streamType header on the stream to one of stdin , stdout , or stderr . The client closes all streams, the upgraded connection, and the underlying connection when it is finished with the remote command execution request. 7.8. Using port forwarding to access applications in a container OpenShift Container Platform supports port forwarding to pods. 7.8.1. 
Understanding port forwarding You can use the CLI to forward one or more local ports to a pod. This allows you to listen on a given or random port locally, and have data forwarded to and from given ports in the pod. Support for port forwarding is built into the CLI: USD oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>] The CLI listens on each local port specified by the user, forwarding using the protocol described below. Ports may be specified using the following formats: 5000 The client listens on port 5000 locally and forwards to 5000 in the pod. 6000:5000 The client listens on port 6000 locally and forwards to 5000 in the pod. :5000 or 0:5000 The client selects a free local port and forwards to 5000 in the pod. OpenShift Container Platform handles port-forward requests from clients. Upon receiving a request, OpenShift Container Platform upgrades the response and waits for the client to create port-forwarding streams. When OpenShift Container Platform receives a new stream, it copies data between the stream and the pod's port. Architecturally, there are options for forwarding to a pod's port. The supported OpenShift Container Platform implementation invokes nsenter directly on the node host to enter the pod's network namespace, then invokes socat to copy data between the stream and the pod's port. However, a custom implementation could include running a helper pod that then runs nsenter and socat , so that those binaries are not required to be installed on the host. 7.8.2. Using port forwarding You can use the CLI to port-forward one or more local ports to a pod. Procedure Use the following command to listen on the specified port in a pod: USD oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>] For example: Use the following command to listen on ports 5000 and 6000 locally and forward data to and from ports 5000 and 6000 in the pod: USD oc port-forward <pod> 5000 6000 Example output Forwarding from 127.0.0.1:5000 -> 5000 Forwarding from [::1]:5000 -> 5000 Forwarding from 127.0.0.1:6000 -> 6000 Forwarding from [::1]:6000 -> 6000 Use the following command to listen on port 8888 locally and forward to 5000 in the pod: USD oc port-forward <pod> 8888:5000 Example output Forwarding from 127.0.0.1:8888 -> 5000 Forwarding from [::1]:8888 -> 5000 Use the following command to listen on a free port locally and forward to 5000 in the pod: USD oc port-forward <pod> :5000 Example output Forwarding from 127.0.0.1:42390 -> 5000 Forwarding from [::1]:42390 -> 5000 Or: USD oc port-forward <pod> 0:5000 7.8.3. Protocol for initiating port forwarding from a client Clients initiate port forwarding to a pod by issuing a request to the Kubernetes API server: /proxy/nodes/<node_name>/portForward/<namespace>/<pod> In the above URL: <node_name> is the FQDN of the node. <namespace> is the namespace of the target pod. <pod> is the name of the target pod. For example: /proxy/nodes/node123.openshift.com/portForward/myns/mypod After sending a port forward request to the API server, the client upgrades the connection to one that supports multiplexed streams; the current implementation uses Hypertext Transfer Protocol Version 2 (HTTP/2) . The client creates a stream with the port header containing the target port in the pod. All data written to the stream is delivered via the kubelet to the target pod and port. Similarly, all data sent from the pod for that forwarded connection is delivered back to the same stream in the client.
The client closes all streams, the upgraded connection, and the underlying connection when it is finished with the port forwarding request. 7.9. Using sysctls in containers Sysctl settings are exposed through Kubernetes, allowing users to modify certain kernel parameters at runtime. Only sysctls that are namespaced can be set independently on pods. If a sysctl is not namespaced, called node-level , you must use another method of setting the sysctl, such as by using the Node Tuning Operator. Network sysctls are a special category of sysctl. Network sysctls include: System-wide sysctls, for example net.ipv4.ip_local_port_range , that are valid for all networking. You can set these independently for each pod on a node. Interface-specific sysctls, for example net.ipv4.conf.IFNAME.accept_local , that only apply to a specific additional network interface for a given pod. You can set these independently for each additional network configuration. You set these by using a configuration in the tuning-cni after the network interfaces are created. Moreover, only those sysctls considered safe are whitelisted by default; you can manually enable other unsafe sysctls on the node to be available to the user. If you are setting the sysctl and it is node-level, you can find information on this procedure in the section Using the Node Tuning Operator . 7.9.1. About sysctls In Linux, the sysctl interface allows an administrator to modify kernel parameters at runtime. Parameters are available from the /proc/sys/ virtual process file system. The parameters cover various subsystems, such as: kernel (common prefix: kernel. ) networking (common prefix: net. ) virtual memory (common prefix: vm. ) MDADM (common prefix: dev. ) More subsystems are described in Kernel documentation . To get a list of all parameters, run: USD sudo sysctl -a 7.9.2. Namespaced and node-level sysctls A number of sysctls are namespaced in the Linux kernels. This means that you can set them independently for each pod on a node. Being namespaced is a requirement for sysctls to be accessible in a pod context within Kubernetes. The following sysctls are known to be namespaced: kernel.shm* kernel.msg* kernel.sem fs.mqueue.* Additionally, most of the sysctls in the net.* group are known to be namespaced. Their namespace adoption differs based on the kernel version and distributor. Sysctls that are not namespaced are called node-level and must be set manually by the cluster administrator, either by means of the underlying Linux distribution of the nodes, such as by modifying the /etc/sysctls.conf file, or by using a daemon set with privileged containers. You can use the Node Tuning Operator to set node-level sysctls. Note Consider marking nodes with special sysctls as tainted. Only schedule pods onto them that need those sysctl settings. Use the taints and toleration feature to mark the nodes. 7.9.3. Safe and unsafe sysctls Sysctls are grouped into safe and unsafe sysctls. For system-wide sysctls to be considered safe, they must be namespaced. A namespaced sysctl ensures there is isolation between namespaces and therefore pods. If you set a sysctl for one pod it must not add any of the following: Influence any other pod on the node Harm the node health Gain CPU or memory resources outside of the resource limits of a pod Note Being namespaced alone is not sufficient for the sysctl to be considered safe. Any sysctl that is not added to the allowed list on OpenShift Container Platform is considered unsafe for OpenShift Container Platform. 
Unsafe sysctls are not allowed by default. For system-wide sysctls the cluster administrator must manually enable them on a per-node basis. Pods with disabled unsafe sysctls are scheduled but do not launch. Note You cannot manually enable interface-specific unsafe sysctls. OpenShift Container Platform adds the following system-wide and interface-specific safe sysctls to an allowed safe list: Table 7.4. System-wide safe sysctls sysctl Description kernel.shm_rmid_forced When set to 1 , all shared memory objects in current IPC namespace are automatically forced to use IPC_RMID. For more information, see shm_rmid_forced . net.ipv4.ip_local_port_range Defines the local port range that is used by TCP and UDP to choose the local port. The first number is the first port number, and the second number is the last local port number. If possible, it is better if these numbers have different parity (one even and one odd value). They must be greater than or equal to ip_unprivileged_port_start . The default values are 32768 and 60999 respectively. For more information, see ip_local_port_range . net.ipv4.tcp_syncookies When net.ipv4.tcp_syncookies is set, the kernel handles TCP SYN packets normally until the half-open connection queue is full, at which time, the SYN cookie functionality kicks in. This functionality allows the system to keep accepting valid connections, even if under a denial-of-service attack. For more information, see tcp_syncookies . net.ipv4.ping_group_range This restricts ICMP_PROTO datagram sockets to users in the group range. The default is 1 0 , meaning that nobody, not even root, can create ping sockets. For more information, see ping_group_range . net.ipv4.ip_unprivileged_port_start This defines the first unprivileged port in the network namespace. To disable all privileged ports, set this to 0 . Privileged ports must not overlap with the ip_local_port_range . For more information, see ip_unprivileged_port_start . net.ipv4.ip_local_reserved_ports Specify a range of comma-separated local ports that you want to reserve for applications or services. net.ipv4.tcp_keepalive_time Specify the interval in seconds before the first keepalive probe should be sent after a connection has become idle. net.ipv4.tcp_fin_timeout Specify the time in seconds that a connection remains in the FIN-WAIT-2 state before it is aborted. net.ipv4.tcp_keepalive_intvl Specify the interval in seconds between the keepalive probes. This value is multiplied by the tcp_keepalive_probes value to determine the total time required before it is decided that the connection is broken. net.ipv4.tcp_keepalive_probes Specify how many keepalive probes to send until it is determined that the connection is broken. Table 7.5. Interface-specific safe sysctls sysctl Description net.ipv4.conf.IFNAME.accept_redirects Accept IPv4 ICMP redirect messages. net.ipv4.conf.IFNAME.accept_source_route Accept IPv4 packets with strict source route (SRR) option. net.ipv4.conf.IFNAME.arp_accept Define behavior for gratuitous ARP frames with an IPv4 address that is not already present in the ARP table: 0 - Do not create new entries in the ARP table. 1 - Create new entries in the ARP table. net.ipv4.conf.IFNAME.arp_notify Define mode for notification of IPv4 address and device changes. net.ipv4.conf.IFNAME.disable_policy Disable IPSEC policy (SPD) for this IPv4 interface. net.ipv4.conf.IFNAME.secure_redirects Accept ICMP redirect messages only to gateways listed in the interface's current gateway list. 
net.ipv4.conf.IFNAME.send_redirects Send redirects is enabled only if the node acts as a router. That is, a host should not send an ICMP redirect message. It is used by routers to notify the host about a better routing path that is available for a particular destination. net.ipv6.conf.IFNAME.accept_ra Accept IPv6 Router advertisements; autoconfigure using them. It also determines whether or not to transmit router solicitations. Router solicitations are transmitted only if the functional setting is to accept router advertisements. net.ipv6.conf.IFNAME.accept_redirects Accept IPv6 ICMP redirect messages. net.ipv6.conf.IFNAME.accept_source_route Accept IPv6 packets with SRR option. net.ipv6.conf.IFNAME.arp_accept Define behavior for gratuitous ARP frames with an IPv6 address that is not already present in the ARP table: 0 - Do not create new entries in the ARP table. 1 - Create new entries in the ARP table. net.ipv6.conf.IFNAME.arp_notify Define mode for notification of IPv6 address and device changes. net.ipv6.neigh.IFNAME.base_reachable_time_ms This parameter controls the hardware address to IP mapping lifetime in the neighbour table for IPv6. net.ipv6.neigh.IFNAME.retrans_time_ms Set the retransmit timer for neighbor discovery messages. Note When setting these values using the tuning CNI plugin, use the value IFNAME literally. The interface name is represented by the IFNAME token, and is replaced with the actual name of the interface at runtime. 7.9.4. Updating the interface-specific safe sysctls list OpenShift Container Platform includes a predefined list of safe interface-specific sysctls . You can modify this list by updating the cni-sysctl-allowlist in the openshift-multus namespace. Important The support for updating the interface-specific safe sysctls list is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Follow this procedure to modify the predefined list of safe sysctls . This procedure describes how to extend the default allow list. Procedure View the existing predefined list by running the following command: USD oc get cm -n openshift-multus cni-sysctl-allowlist -oyaml Expected output apiVersion: v1 data: allowlist.conf: |- ^net.ipv4.conf.IFNAME.accept_redirectsUSD ^net.ipv4.conf.IFNAME.accept_source_routeUSD ^net.ipv4.conf.IFNAME.arp_acceptUSD ^net.ipv4.conf.IFNAME.arp_notifyUSD ^net.ipv4.conf.IFNAME.disable_policyUSD ^net.ipv4.conf.IFNAME.secure_redirectsUSD ^net.ipv4.conf.IFNAME.send_redirectsUSD ^net.ipv6.conf.IFNAME.accept_raUSD ^net.ipv6.conf.IFNAME.accept_redirectsUSD ^net.ipv6.conf.IFNAME.accept_source_routeUSD ^net.ipv6.conf.IFNAME.arp_acceptUSD ^net.ipv6.conf.IFNAME.arp_notifyUSD ^net.ipv6.neigh.IFNAME.base_reachable_time_msUSD ^net.ipv6.neigh.IFNAME.retrans_time_msUSD kind: ConfigMap metadata: annotations: kubernetes.io/description: | Sysctl allowlist for nodes. 
release.openshift.io/version: 4.18.0-0.nightly-2022-11-16-003434 creationTimestamp: "2022-11-17T14:09:27Z" name: cni-sysctl-allowlist namespace: openshift-multus resourceVersion: "2422" uid: 96d138a3-160e-4943-90ff-6108fa7c50c3 Edit the list by using the following command: USD oc edit cm -n openshift-multus cni-sysctl-allowlist -oyaml For example, to allow you to be able to implement stricter reverse path forwarding you need to add ^net.ipv4.conf.IFNAME.rp_filterUSD and ^net.ipv6.conf.IFNAME.rp_filterUSD to the list as shown here: # Please edit the object below. Lines beginning with a '#' will be ignored, # and an empty file will abort the edit. If an error occurs while saving this file will be # reopened with the relevant failures. # apiVersion: v1 data: allowlist.conf: |- ^net.ipv4.conf.IFNAME.accept_redirectsUSD ^net.ipv4.conf.IFNAME.accept_source_routeUSD ^net.ipv4.conf.IFNAME.arp_acceptUSD ^net.ipv4.conf.IFNAME.arp_notifyUSD ^net.ipv4.conf.IFNAME.disable_policyUSD ^net.ipv4.conf.IFNAME.secure_redirectsUSD ^net.ipv4.conf.IFNAME.send_redirectsUSD ^net.ipv4.conf.IFNAME.rp_filterUSD ^net.ipv6.conf.IFNAME.accept_raUSD ^net.ipv6.conf.IFNAME.accept_redirectsUSD ^net.ipv6.conf.IFNAME.accept_source_routeUSD ^net.ipv6.conf.IFNAME.arp_acceptUSD ^net.ipv6.conf.IFNAME.arp_notifyUSD ^net.ipv6.neigh.IFNAME.base_reachable_time_msUSD ^net.ipv6.neigh.IFNAME.retrans_time_msUSD ^net.ipv6.conf.IFNAME.rp_filterUSD Save the changes to the file and exit. Note The removal of sysctls is also supported. Edit the file, remove the sysctl or sysctls then save the changes and exit. Verification Follow this procedure to enforce stricter reverse path forwarding for IPv4. For more information on reverse path forwarding see Reverse Path Forwarding . Create a network attachment definition, such as reverse-path-fwd-example.yaml , with the following content: apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: tuningnad namespace: default spec: config: '{ "cniVersion": "0.4.0", "name": "tuningnad", "plugins": [{ "type": "bridge" }, { "type": "tuning", "sysctl": { "net.ipv4.conf.IFNAME.rp_filter": "1" } } ] }' Apply the yaml by running the following command: USD oc apply -f reverse-path-fwd-example.yaml Example output networkattachmentdefinition.k8.cni.cncf.io/tuningnad created Create a pod such as examplepod.yaml using the following YAML: apiVersion: v1 kind: Pod metadata: name: example labels: app: httpd namespace: default annotations: k8s.v1.cni.cncf.io/networks: tuningnad 1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: httpd image: 'image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest' ports: - containerPort: 8080 securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL 1 Specify the name of the configured NetworkAttachmentDefinition . Apply the yaml by running the following command: USD oc apply -f examplepod.yaml Verify that the pod is created by running the following command: USD oc get pod Example output NAME READY STATUS RESTARTS AGE example 1/1 Running 0 47s Log in to the pod by running the following command: USD oc rsh example Verify the value of the configured sysctl flag. For example, find the value net.ipv4.conf.net1.rp_filter by running the following command: sh-4.4# sysctl net.ipv4.conf.net1.rp_filter Expected output net.ipv4.conf.net1.rp_filter = 1 Additional resources Linux networking documentation 7.9.5. 
Starting a pod with safe sysctls You can set sysctls on pods using the pod's securityContext . The securityContext applies to all containers in the same pod. Safe sysctls are allowed by default. This example uses the pod securityContext to set the following safe sysctls: kernel.shm_rmid_forced net.ipv4.ip_local_port_range net.ipv4.tcp_syncookies net.ipv4.ping_group_range Warning To avoid destabilizing your operating system, modify sysctl parameters only after you understand their effects. Use this procedure to start a pod with the configured sysctl settings. Note In most cases you modify an existing pod definition and add the securityContext spec. Procedure Create a YAML file sysctl_pod.yaml that defines an example pod and add the securityContext spec, as shown in the following example: apiVersion: v1 kind: Pod metadata: name: sysctl-example namespace: default spec: containers: - name: podexample image: centos command: ["bin/bash", "-c", "sleep INF"] securityContext: runAsUser: 2000 1 runAsGroup: 3000 2 allowPrivilegeEscalation: false 3 capabilities: 4 drop: ["ALL"] securityContext: runAsNonRoot: true 5 seccompProfile: 6 type: RuntimeDefault sysctls: - name: kernel.shm_rmid_forced value: "1" - name: net.ipv4.ip_local_port_range value: "32770 60666" - name: net.ipv4.tcp_syncookies value: "0" - name: net.ipv4.ping_group_range value: "0 200000000" 1 runAsUser controls which user ID the container is run with. 2 runAsGroup controls which primary group ID the containers is run with. 3 allowPrivilegeEscalation determines if a pod can request to allow privilege escalation. If unspecified, it defaults to true. This boolean directly controls whether the no_new_privs flag gets set on the container process. 4 capabilities permit privileged actions without giving full root access. This policy ensures all capabilities are dropped from the pod. 5 runAsNonRoot: true requires that the container will run with a user with any UID other than 0. 6 RuntimeDefault enables the default seccomp profile for a pod or container workload. Create the pod by running the following command: USD oc apply -f sysctl_pod.yaml Verify that the pod is created by running the following command: USD oc get pod Example output NAME READY STATUS RESTARTS AGE sysctl-example 1/1 Running 0 14s Log in to the pod by running the following command: USD oc rsh sysctl-example Verify the values of the configured sysctl flags. For example, find the value kernel.shm_rmid_forced by running the following command: sh-4.4# sysctl kernel.shm_rmid_forced Expected output kernel.shm_rmid_forced = 1 7.9.6. Starting a pod with unsafe sysctls A pod with unsafe sysctls fails to launch on any node unless the cluster administrator explicitly enables unsafe sysctls for that node. As with node-level sysctls, use the taints and toleration feature or labels on nodes to schedule those pods onto the right nodes. The following example uses the pod securityContext to set a safe sysctl kernel.shm_rmid_forced and two unsafe sysctls, net.core.somaxconn and kernel.msgmax . There is no distinction between safe and unsafe sysctls in the specification. Warning To avoid destabilizing your operating system, modify sysctl parameters only after you understand their effects. 
The following example illustrates what happens when you add safe and unsafe sysctls to a pod specification: Procedure Create a YAML file sysctl-example-unsafe.yaml that defines an example pod and add the securityContext specification, as shown in the following example: apiVersion: v1 kind: Pod metadata: name: sysctl-example-unsafe spec: containers: - name: podexample image: centos command: ["bin/bash", "-c", "sleep INF"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: ["ALL"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault sysctls: - name: kernel.shm_rmid_forced value: "0" - name: net.core.somaxconn value: "1024" - name: kernel.msgmax value: "65536" Create the pod using the following command: USD oc apply -f sysctl-example-unsafe.yaml Verify that the pod is scheduled but does not deploy because unsafe sysctls are not allowed for the node using the following command: USD oc get pod Example output NAME READY STATUS RESTARTS AGE sysctl-example-unsafe 0/1 SysctlForbidden 0 14s 7.9.7. Enabling unsafe sysctls A cluster administrator can allow certain unsafe sysctls for very special situations such as high performance or real-time application tuning. If you want to use unsafe sysctls, a cluster administrator must enable them individually for a specific type of node. The sysctls must be namespaced. You can further control which sysctls are set in pods by specifying lists of sysctls or sysctl patterns in the allowedUnsafeSysctls field of the Security Context Constraints. The allowedUnsafeSysctls option controls specific needs such as high performance or real-time application tuning. Warning Due to their nature of being unsafe, the use of unsafe sysctls is at-your-own-risk and can lead to severe problems, such as improper behavior of containers, resource shortage, or breaking a node. Procedure List existing MachineConfig objects for your OpenShift Container Platform cluster to decide how to label your machine config by running the following command: USD oc get machineconfigpool Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-bfb92f0cd1684e54d8e234ab7423cc96 True False False 3 3 3 0 42m worker rendered-worker-21b6cb9a0f8919c88caf39db80ac1fce True False False 3 3 3 0 42m Add a label to the machine config pool where the containers with the unsafe sysctls will run by running the following command: USD oc label machineconfigpool worker custom-kubelet=sysctl Create a YAML file set-sysctl-worker.yaml that defines a KubeletConfig custom resource (CR): apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-kubelet spec: machineConfigPoolSelector: matchLabels: custom-kubelet: sysctl 1 kubeletConfig: allowedUnsafeSysctls: 2 - "kernel.msg*" - "net.core.somaxconn" 1 Specify the label from the machine config pool. 2 List the unsafe sysctls you want to allow. 
Create the object by running the following command: USD oc apply -f set-sysctl-worker.yaml Wait for the Machine Config Operator to generate the new rendered configuration and apply it to the machines by running the following command: USD oc get machineconfigpool worker -w After some minutes the UPDATING status changes from True to False: NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-f1704a00fc6f30d3a7de9a15fd68a800 False True False 3 2 2 0 71m worker rendered-worker-f1704a00fc6f30d3a7de9a15fd68a800 False True False 3 2 3 0 72m worker rendered-worker-0188658afe1f3a183ec8c4f14186f4d5 True False False 3 3 3 0 72m Create a YAML file sysctl-example-safe-unsafe.yaml that defines an example pod and add the securityContext spec, as shown in the following example: apiVersion: v1 kind: Pod metadata: name: sysctl-example-safe-unsafe spec: containers: - name: podexample image: centos command: ["bin/bash", "-c", "sleep INF"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: ["ALL"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault sysctls: - name: kernel.shm_rmid_forced value: "0" - name: net.core.somaxconn value: "1024" - name: kernel.msgmax value: "65536" Create the pod by running the following command: USD oc apply -f sysctl-example-safe-unsafe.yaml Expected output Warning: would violate PodSecurity "restricted:latest": forbidden sysctls (net.core.somaxconn, kernel.msgmax) pod/sysctl-example-safe-unsafe created Verify that the pod is created by running the following command: USD oc get pod Example output NAME READY STATUS RESTARTS AGE sysctl-example-safe-unsafe 1/1 Running 0 19s Log in to the pod by running the following command: USD oc rsh sysctl-example-safe-unsafe Verify the values of the configured sysctl flags. For example, find the value net.core.somaxconn by running the following command: sh-4.4# sysctl net.core.somaxconn Expected output net.core.somaxconn = 1024 The unsafe sysctl is now allowed and the value is set as defined in the securityContext spec of the updated pod specification. 7.9.8. Additional resources Configuring system controls by using the tuning CNI Using the Node Tuning Operator 7.10. Accessing faster builds with /dev/fuse You can configure your pods with the /dev/fuse device to access faster builds. 7.10.1. Configuring /dev/fuse on unprivileged pods As an alternative to the virtual filesystem, you can configure the /dev/fuse device to the io.kubernetes.cri-o.Devices annotation to access faster builds within unprivileged pods. Using /dev/fuse is secure, efficient, and scalable, and allows unprivileged users to mount an overlay filesystem as if the unprivileged pod was privileged. Procedure Create the pod. USD oc exec -ti no-priv -- /bin/bash USD cat >> Dockerfile <<EOF FROM registry.access.redhat.com/ubi9 EOF USD podman build . Implement /dev/fuse by adding the /dev/fuse device to the io.kubernetes.cri-o.Devices annotation. io.kubernetes.cri-o.Devices: "/dev/fuse" For example: apiVersion: v1 kind: Pod metadata: name: podman-pod annotations: io.kubernetes.cri-o.Devices: "/dev/fuse" Configure the /dev/fuse device in your pod specifications. spec: containers: - name: podman-container image: quay.io/podman/stable args: - sleep - "1000000" securityContext: runAsUser: 1000
[ "USD(nproc) X 1/2 MiB", "for i in {1..100}; do sleep 1; if dig myservice; then exit 0; fi; done; exit 1", "curl -X POST http://USDMANAGEMENT_SERVICE_HOST:USDMANAGEMENT_SERVICE_PORT/register -d 'instance=USD()&ip=USD()'", "apiVersion: v1 kind: Pod metadata: name: myapp-pod labels: app: myapp spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: myapp-container image: registry.access.redhat.com/ubi9/ubi:latest command: ['sh', '-c', 'echo The app is running! && sleep 3600'] securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] initContainers: - name: init-myservice image: registry.access.redhat.com/ubi9/ubi:latest command: ['sh', '-c', 'until getent hosts myservice; do echo waiting for myservice; sleep 2; done;'] securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: init-mydb image: registry.access.redhat.com/ubi9/ubi:latest command: ['sh', '-c', 'until getent hosts mydb; do echo waiting for mydb; sleep 2; done;'] securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc create -f myapp.yaml", "oc get pods", "NAME READY STATUS RESTARTS AGE myapp-pod 0/1 Init:0/2 0 5s", "kind: Service apiVersion: v1 metadata: name: myservice spec: ports: - protocol: TCP port: 80 targetPort: 9376", "oc create -f myservice.yaml", "oc get pods", "NAME READY STATUS RESTARTS AGE myapp-pod 0/1 Init:1/2 0 5s", "kind: Service apiVersion: v1 metadata: name: mydb spec: ports: - protocol: TCP port: 80 targetPort: 9377", "oc create -f mydb.yaml", "oc get pods", "NAME READY STATUS RESTARTS AGE myapp-pod 1/1 Running 0 2m", "oc set volume <object_selection> <operation> <mandatory_parameters> <options>", "oc set volume <object_type>/<name> [options]", "oc set volume pod/p1", "oc set volume dc --all --name=v1", "oc set volume <object_type>/<name> --add [options]", "oc set volume dc/registry --add", "kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: registry namespace: registry spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: 1 - name: volume-pppsw emptyDir: {} containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP", "oc set volume rc/r1 --add --name=v1 --type=secret --secret-name='secret1' --mount-path=/data", "kind: ReplicationController apiVersion: v1 metadata: name: example-1 namespace: example spec: replicas: 0 selector: app: httpd deployment: example-1 deploymentconfig: example template: metadata: creationTimestamp: null labels: app: httpd deployment: example-1 deploymentconfig: example spec: volumes: 1 - name: v1 secret: secretName: secret1 defaultMode: 420 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest volumeMounts: 2 - name: v1 mountPath: /data", "oc set volume -f dc.json --add --name=v1 --type=persistentVolumeClaim --claim-name=pvc1 --mount-path=/data --containers=c1", "kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example namespace: example spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: - name: volume-pppsw emptyDir: {} - name: v1 1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: 2 - name: v1 mountPath: /data", "oc set volume rc --all --add --name=v1 
--source='{\"gitRepo\": { \"repository\": \"https://github.com/namespace1/project1\", \"revision\": \"5125c45f9f563\" }}'", "oc set volume <object_type>/<name> --add --overwrite [options]", "oc set volume rc/r1 --add --overwrite --name=v1 --type=persistentVolumeClaim --claim-name=pvc1", "kind: ReplicationController apiVersion: v1 metadata: name: example-1 namespace: example spec: replicas: 0 selector: app: httpd deployment: example-1 deploymentconfig: example template: metadata: labels: app: httpd deployment: example-1 deploymentconfig: example spec: volumes: - name: v1 1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: - name: v1 mountPath: /data", "oc set volume dc/d1 --add --overwrite --name=v1 --mount-path=/opt", "kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example namespace: example spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: - name: volume-pppsw emptyDir: {} - name: v2 persistentVolumeClaim: claimName: pvc1 - name: v1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: 1 - name: v1 mountPath: /opt", "oc set volume <object_type>/<name> --remove [options]", "oc set volume dc/d1 --remove --name=v1", "oc set volume dc/d1 --remove --name=v1 --containers=c1", "oc set volume rc/r1 --remove --confirm", "oc rsh <pod>", "sh-4.2USD ls /path/to/volume/subpath/mount example_file1 example_file2 example_file3", "apiVersion: v1 kind: Pod metadata: name: my-site spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: mysql image: mysql volumeMounts: - mountPath: /var/lib/mysql name: site-data subPath: mysql 1 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: php image: php volumeMounts: - mountPath: /var/www/html name: site-data subPath: html 2 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: site-data persistentVolumeClaim: claimName: my-site-data", "apiVersion: v1 kind: Pod metadata: name: volume-test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: container-test image: busybox volumeMounts: 1 - name: all-in-one mountPath: \"/projected-volume\" 2 readOnly: true 3 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: 4 - name: all-in-one 5 projected: defaultMode: 0400 6 sources: - secret: name: mysecret 7 items: - key: username path: my-group/my-username 8 - downwardAPI: 9 items: - path: \"labels\" fieldRef: fieldPath: metadata.labels - path: \"cpu_limit\" resourceFieldRef: containerName: container-test resource: limits.cpu - configMap: 10 name: myconfigmap items: - key: config path: my-group/my-config mode: 0777 11", "apiVersion: v1 kind: Pod metadata: name: volume-test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: container-test image: busybox volumeMounts: - name: all-in-one mountPath: \"/projected-volume\" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: all-in-one projected: defaultMode: 0755 sources: - secret: name: mysecret items: - key: username path: my-group/my-username - secret: name: mysecret2 items: - 
key: password path: my-group/my-password mode: 511", "apiVersion: v1 kind: Pod metadata: name: volume-test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: container-test image: busybox volumeMounts: - name: all-in-one mountPath: \"/projected-volume\" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: all-in-one projected: sources: - secret: name: mysecret items: - key: username path: my-group/data - configMap: name: myconfigmap items: - key: config path: my-group/data", "echo -n \"admin\" | base64", "YWRtaW4=", "echo -n \"1f2d1e2e67df\" | base64", "MWYyZDFlMmU2N2Rm", "apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque data: pass: MWYyZDFlMmU2N2Rm user: YWRtaW4=", "oc create -f <secrets-filename>", "oc create -f secret.yaml", "secret \"mysecret\" created", "oc get secret <secret-name>", "oc get secret mysecret", "NAME TYPE DATA AGE mysecret Opaque 2 17h", "oc get secret <secret-name> -o yaml", "oc get secret mysecret -o yaml", "apiVersion: v1 data: pass: MWYyZDFlMmU2N2Rm user: YWRtaW4= kind: Secret metadata: creationTimestamp: 2017-05-30T20:21:38Z name: mysecret namespace: default resourceVersion: \"2107\" selfLink: /api/v1/namespaces/default/secrets/mysecret uid: 959e0424-4575-11e7-9f97-fa163e4bd54c type: Opaque", "kind: Pod metadata: name: test-projected-volume spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-projected-volume image: busybox args: - sleep - \"86400\" volumeMounts: - name: all-in-one mountPath: \"/projected-volume\" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: all-in-one projected: sources: - secret: name: mysecret 1", "oc create -f <your_yaml_file>.yaml", "oc create -f secret-pod.yaml", "pod \"test-projected-volume\" created", "oc get pod <name>", "oc get pod test-projected-volume", "NAME READY STATUS RESTARTS AGE test-projected-volume 1/1 Running 0 14s", "oc exec -it <pod> <command>", "oc exec -it test-projected-volume -- /bin/sh", "/ # ls", "bin home root tmp dev proc run usr etc projected-volume sys var", "apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never", "oc create -f pod.yaml", "oc logs -p dapi-env-test-pod", "kind: Pod apiVersion: v1 metadata: labels: zone: us-east-coast cluster: downward-api-test-cluster1 rack: rack-123 name: dapi-volume-test-pod annotations: annotation1: \"345\" annotation2: \"456\" spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: volume-test-container image: gcr.io/google_containers/busybox command: [\"sh\", \"-c\", \"cat /tmp/etc/pod_labels /tmp/etc/pod_annotations\"] volumeMounts: - name: podinfo mountPath: /tmp/etc readOnly: false securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: podinfo downwardAPI: defaultMode: 420 items: - fieldRef: fieldPath: metadata.name path: pod_name - fieldRef: fieldPath: metadata.namespace path: pod_namespace - fieldRef: fieldPath: metadata.labels 
path: pod_labels - fieldRef: fieldPath: metadata.annotations path: pod_annotations restartPolicy: Never", "oc create -f volume-pod.yaml", "oc logs -p dapi-volume-test-pod", "cluster=downward-api-test-cluster1 rack=rack-123 zone=us-east-coast annotation1=345 annotation2=456 kubernetes.io/config.source=api", "apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox:1.24 command: [ \"/bin/sh\", \"-c\", \"env\" ] resources: requests: memory: \"32Mi\" cpu: \"125m\" limits: memory: \"64Mi\" cpu: \"250m\" env: - name: MY_CPU_REQUEST valueFrom: resourceFieldRef: resource: requests.cpu - name: MY_CPU_LIMIT valueFrom: resourceFieldRef: resource: limits.cpu - name: MY_MEM_REQUEST valueFrom: resourceFieldRef: resource: requests.memory - name: MY_MEM_LIMIT valueFrom: resourceFieldRef: resource: limits.memory", "oc create -f pod.yaml", "apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: client-container image: gcr.io/google_containers/busybox:1.24 command: [\"sh\", \"-c\", \"while true; do echo; if [[ -e /etc/cpu_limit ]]; then cat /etc/cpu_limit; fi; if [[ -e /etc/cpu_request ]]; then cat /etc/cpu_request; fi; if [[ -e /etc/mem_limit ]]; then cat /etc/mem_limit; fi; if [[ -e /etc/mem_request ]]; then cat /etc/mem_request; fi; sleep 5; done\"] resources: requests: memory: \"32Mi\" cpu: \"125m\" limits: memory: \"64Mi\" cpu: \"250m\" volumeMounts: - name: podinfo mountPath: /etc readOnly: false volumes: - name: podinfo downwardAPI: items: - path: \"cpu_limit\" resourceFieldRef: containerName: client-container resource: limits.cpu - path: \"cpu_request\" resourceFieldRef: containerName: client-container resource: requests.cpu - path: \"mem_limit\" resourceFieldRef: containerName: client-container resource: limits.memory - path: \"mem_request\" resourceFieldRef: containerName: client-container resource: requests.memory", "oc create -f volume-pod.yaml", "apiVersion: v1 kind: Secret metadata: name: mysecret data: password: <password> username: <username> type: kubernetes.io/basic-auth", "oc create -f secret.yaml", "apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_SECRET_USERNAME valueFrom: secretKeyRef: name: mysecret key: username securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never", "oc create -f pod.yaml", "oc logs -p dapi-env-test-pod", "apiVersion: v1 kind: ConfigMap metadata: name: myconfigmap data: mykey: myvalue", "oc create -f configmap.yaml", "apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_CONFIGMAP_VALUE valueFrom: configMapKeyRef: name: myconfigmap key: mykey securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Always", "oc create -f pod.yaml", "oc logs -p dapi-env-test-pod", "apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: 
MY_EXISTING_ENV value: my_value - name: MY_ENV_VAR_REF_ENV value: USD(MY_EXISTING_ENV) securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never", "oc create -f pod.yaml", "oc logs -p dapi-env-test-pod", "apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_NEW_ENV value: USDUSD(SOME_OTHER_ENV) securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never", "oc create -f pod.yaml", "oc logs -p dapi-env-test-pod", "oc rsync <source> <destination> [-c <container>]", "<pod name>:<dir>", "oc rsync <local-dir> <pod-name>:/<remote-dir> -c <container-name>", "oc rsync /home/user/source devpod1234:/src -c user-container", "oc rsync devpod1234:/src /home/user/source", "oc rsync devpod1234:/src/status.txt /home/user/", "rsync --rsh='oc rsh' --exclude-from=<file_name> <local-dir> <pod-name>:/<remote-dir>", "export RSYNC_RSH='oc rsh'", "rsync --exclude-from=<file_name> <local-dir> <pod-name>:/<remote-dir>", "oc exec <pod> [-c <container>] -- <command> [<arg_1> ... <arg_n>]", "oc exec mypod date", "Thu Apr 9 02:21:53 UTC 2015", "/proxy/nodes/<node_name>/exec/<namespace>/<pod>/<container>?command=<command>", "/proxy/nodes/node123.openshift.com/exec/myns/mypod/mycontainer?command=date", "oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>]", "oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>]", "oc port-forward <pod> 5000 6000", "Forwarding from 127.0.0.1:5000 -> 5000 Forwarding from [::1]:5000 -> 5000 Forwarding from 127.0.0.1:6000 -> 6000 Forwarding from [::1]:6000 -> 6000", "oc port-forward <pod> 8888:5000", "Forwarding from 127.0.0.1:8888 -> 5000 Forwarding from [::1]:8888 -> 5000", "oc port-forward <pod> :5000", "Forwarding from 127.0.0.1:42390 -> 5000 Forwarding from [::1]:42390 -> 5000", "oc port-forward <pod> 0:5000", "/proxy/nodes/<node_name>/portForward/<namespace>/<pod>", "/proxy/nodes/node123.openshift.com/portForward/myns/mypod", "sudo sysctl -a", "oc get cm -n openshift-multus cni-sysctl-allowlist -oyaml", "apiVersion: v1 data: allowlist.conf: |- ^net.ipv4.conf.IFNAME.accept_redirectsUSD ^net.ipv4.conf.IFNAME.accept_source_routeUSD ^net.ipv4.conf.IFNAME.arp_acceptUSD ^net.ipv4.conf.IFNAME.arp_notifyUSD ^net.ipv4.conf.IFNAME.disable_policyUSD ^net.ipv4.conf.IFNAME.secure_redirectsUSD ^net.ipv4.conf.IFNAME.send_redirectsUSD ^net.ipv6.conf.IFNAME.accept_raUSD ^net.ipv6.conf.IFNAME.accept_redirectsUSD ^net.ipv6.conf.IFNAME.accept_source_routeUSD ^net.ipv6.conf.IFNAME.arp_acceptUSD ^net.ipv6.conf.IFNAME.arp_notifyUSD ^net.ipv6.neigh.IFNAME.base_reachable_time_msUSD ^net.ipv6.neigh.IFNAME.retrans_time_msUSD kind: ConfigMap metadata: annotations: kubernetes.io/description: | Sysctl allowlist for nodes. release.openshift.io/version: 4.18.0-0.nightly-2022-11-16-003434 creationTimestamp: \"2022-11-17T14:09:27Z\" name: cni-sysctl-allowlist namespace: openshift-multus resourceVersion: \"2422\" uid: 96d138a3-160e-4943-90ff-6108fa7c50c3", "oc edit cm -n openshift-multus cni-sysctl-allowlist -oyaml", "Please edit the object below. Lines beginning with a '#' will be ignored, and an empty file will abort the edit. If an error occurs while saving this file will be reopened with the relevant failures. 
# apiVersion: v1 data: allowlist.conf: |- ^net.ipv4.conf.IFNAME.accept_redirectsUSD ^net.ipv4.conf.IFNAME.accept_source_routeUSD ^net.ipv4.conf.IFNAME.arp_acceptUSD ^net.ipv4.conf.IFNAME.arp_notifyUSD ^net.ipv4.conf.IFNAME.disable_policyUSD ^net.ipv4.conf.IFNAME.secure_redirectsUSD ^net.ipv4.conf.IFNAME.send_redirectsUSD ^net.ipv4.conf.IFNAME.rp_filterUSD ^net.ipv6.conf.IFNAME.accept_raUSD ^net.ipv6.conf.IFNAME.accept_redirectsUSD ^net.ipv6.conf.IFNAME.accept_source_routeUSD ^net.ipv6.conf.IFNAME.arp_acceptUSD ^net.ipv6.conf.IFNAME.arp_notifyUSD ^net.ipv6.neigh.IFNAME.base_reachable_time_msUSD ^net.ipv6.neigh.IFNAME.retrans_time_msUSD ^net.ipv6.conf.IFNAME.rp_filterUSD", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: tuningnad namespace: default spec: config: '{ \"cniVersion\": \"0.4.0\", \"name\": \"tuningnad\", \"plugins\": [{ \"type\": \"bridge\" }, { \"type\": \"tuning\", \"sysctl\": { \"net.ipv4.conf.IFNAME.rp_filter\": \"1\" } } ] }'", "oc apply -f reverse-path-fwd-example.yaml", "networkattachmentdefinition.k8.cni.cncf.io/tuningnad created", "apiVersion: v1 kind: Pod metadata: name: example labels: app: httpd namespace: default annotations: k8s.v1.cni.cncf.io/networks: tuningnad 1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: httpd image: 'image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest' ports: - containerPort: 8080 securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL", "oc apply -f examplepod.yaml", "oc get pod", "NAME READY STATUS RESTARTS AGE example 1/1 Running 0 47s", "oc rsh example", "sh-4.4# sysctl net.ipv4.conf.net1.rp_filter", "net.ipv4.conf.net1.rp_filter = 1", "apiVersion: v1 kind: Pod metadata: name: sysctl-example namespace: default spec: containers: - name: podexample image: centos command: [\"bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 1 runAsGroup: 3000 2 allowPrivilegeEscalation: false 3 capabilities: 4 drop: [\"ALL\"] securityContext: runAsNonRoot: true 5 seccompProfile: 6 type: RuntimeDefault sysctls: - name: kernel.shm_rmid_forced value: \"1\" - name: net.ipv4.ip_local_port_range value: \"32770 60666\" - name: net.ipv4.tcp_syncookies value: \"0\" - name: net.ipv4.ping_group_range value: \"0 200000000\"", "oc apply -f sysctl_pod.yaml", "oc get pod", "NAME READY STATUS RESTARTS AGE sysctl-example 1/1 Running 0 14s", "oc rsh sysctl-example", "sh-4.4# sysctl kernel.shm_rmid_forced", "kernel.shm_rmid_forced = 1", "apiVersion: v1 kind: Pod metadata: name: sysctl-example-unsafe spec: containers: - name: podexample image: centos command: [\"bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault sysctls: - name: kernel.shm_rmid_forced value: \"0\" - name: net.core.somaxconn value: \"1024\" - name: kernel.msgmax value: \"65536\"", "oc apply -f sysctl-example-unsafe.yaml", "oc get pod", "NAME READY STATUS RESTARTS AGE sysctl-example-unsafe 0/1 SysctlForbidden 0 14s", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-bfb92f0cd1684e54d8e234ab7423cc96 True False False 3 3 3 0 42m worker rendered-worker-21b6cb9a0f8919c88caf39db80ac1fce True False False 3 3 3 0 42m", "oc label machineconfigpool worker custom-kubelet=sysctl", "apiVersion: 
machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-kubelet spec: machineConfigPoolSelector: matchLabels: custom-kubelet: sysctl 1 kubeletConfig: allowedUnsafeSysctls: 2 - \"kernel.msg*\" - \"net.core.somaxconn\"", "oc apply -f set-sysctl-worker.yaml", "oc get machineconfigpool worker -w", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-f1704a00fc6f30d3a7de9a15fd68a800 False True False 3 2 2 0 71m worker rendered-worker-f1704a00fc6f30d3a7de9a15fd68a800 False True False 3 2 3 0 72m worker rendered-worker-0188658afe1f3a183ec8c4f14186f4d5 True False False 3 3 3 0 72m", "apiVersion: v1 kind: Pod metadata: name: sysctl-example-safe-unsafe spec: containers: - name: podexample image: centos command: [\"bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault sysctls: - name: kernel.shm_rmid_forced value: \"0\" - name: net.core.somaxconn value: \"1024\" - name: kernel.msgmax value: \"65536\"", "oc apply -f sysctl-example-safe-unsafe.yaml", "Warning: would violate PodSecurity \"restricted:latest\": forbidden sysctls (net.core.somaxconn, kernel.msgmax) pod/sysctl-example-safe-unsafe created", "oc get pod", "NAME READY STATUS RESTARTS AGE sysctl-example-safe-unsafe 1/1 Running 0 19s", "oc rsh sysctl-example-safe-unsafe", "sh-4.4# sysctl net.core.somaxconn", "net.core.somaxconn = 1024", "oc exec -ti no-priv -- /bin/bash", "cat >> Dockerfile <<EOF FROM registry.access.redhat.com/ubi9 EOF", "podman build .", "io.kubernetes.cri-o.Devices: \"/dev/fuse\"", "apiVersion: v1 kind: Pod metadata: name: podman-pod annotations: io.kubernetes.cri-o.Devices: \"/dev/fuse\"", "spec: containers: - name: podman-container image: quay.io/podman/stable args: - sleep - \"1000000\" securityContext: runAsUser: 1000" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/nodes/working-with-containers
5.8.4. Multiple NFS Mounts
5.8.4. Multiple NFS Mounts When mounting multiple mounts from the same NFS export, attempting to override the SELinux context of each mount with a different context causes subsequent mount commands to fail. In the following example, the NFS server has a single export, /export , which has two subdirectories, web/ and database/ . The following commands attempt two mounts from a single NFS export, and try to override the context for each one: The second mount command fails, and the following is logged to /var/log/messages : To mount multiple mounts from a single NFS export, with each mount having a different context, use the -o nosharecache,context options. The following example mounts multiple mounts from a single NFS export, with a different context for each mount (allowing a single service access to each one): In this example, server:/export/web is mounted locally to /local/web/ , with all files being labeled with the httpd_sys_content_t type, allowing Apache HTTP Server access. server:/export/database is mounted locally to /local/database , with all files being labeled with the mysqld_db_t type, allowing MySQL access. These type changes are not written to disk. Important The nosharecache option allows you to mount the same subdirectory of an export multiple times with different contexts (for example, mounting /export/web multiple times). Do not mount the same subdirectory from an export multiple times with different contexts, as this creates an overlapping mount, where files are accessible under two different contexts.
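To make the two differently labeled mounts persist across reboots, the same options can be placed in /etc/fstab. The following is a minimal sketch that reuses the example server and mount points from above; verify the exact option syntax against the mount(8) and nfs(5) manual pages for your release before relying on it:

server:/export/web       /local/web       nfs   nosharecache,context="system_u:object_r:httpd_sys_content_t:s0"   0 0
server:/export/database  /local/database  nfs   nosharecache,context="system_u:object_r:mysqld_db_t:s0"           0 0

As with the mount commands shown in this section, the context= option only affects how the files are labeled while mounted; nothing is written back to the files on the NFS server.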
[ "~]# mount server:/export/web /local/web -o context=\"system_u:object_r:httpd_sys_content_t:s0\" ~]# mount server:/export/database /local/database -o context=\"system_u:object_r:mysqld_db_t:s0\"", "kernel: SELinux: mount invalid. Same superblock, different security settings for (dev 0:15, type nfs)", "~]# mount server:/export/web /local/web -o nosharecache,context=\"system_u:object_r:httpd_sys_content_t:s0\" ~]# mount server:/export/database /local/database -o \\ nosharecache,context=\"system_u:object_r:mysqld_db_t:s0\"" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-mounting_file_systems-multiple_nfs_mounts
Chapter 5. Requests
Chapter 5. Requests This section enumerates all the requests that are available in the API. POST / affinitylabels GET / affinitylabels GET / affinitylabels / {label:id} PUT / affinitylabels / {label:id} DELETE / affinitylabels / {label:id} POST / affinitylabels / {label:id} / hosts GET / affinitylabels / {label:id} / hosts DELETE / affinitylabels / {label:id} / hosts / {host:id} GET / affinitylabels / {label:id} / hosts / {host:id} POST / affinitylabels / {label:id} / vms GET / affinitylabels / {label:id} / vms DELETE / affinitylabels / {label:id} / vms / {vm:id} GET / affinitylabels / {label:id} / vms / {vm:id} POST / bookmarks GET / bookmarks GET / bookmarks / {bookmark:id} PUT / bookmarks / {bookmark:id} DELETE / bookmarks / {bookmark:id} GET / clusterlevels GET / clusterlevels / {level:id} GET / clusterlevels / {level:id} / clusterfeatures GET / clusterlevels / {level:id} / clusterfeatures / {feature:id} POST / clusters GET / clusters GET / clusters / {cluster:id} PUT / clusters / {cluster:id} DELETE / clusters / {cluster:id} POST / clusters / {cluster:id} / affinitygroups GET / clusters / {cluster:id} / affinitygroups GET / clusters / {cluster:id} / affinitygroups / {group:id} PUT / clusters / {cluster:id} / affinitygroups / {group:id} DELETE / clusters / {cluster:id} / affinitygroups / {group:id} POST / clusters / {cluster:id} / affinitygroups / {group:id} / vms GET / clusters / {cluster:id} / affinitygroups / {group:id} / vms DELETE / clusters / {cluster:id} / affinitygroups / {group:id} / vms / {vm:id} POST / clusters / {cluster:id} / cpuprofiles GET / clusters / {cluster:id} / cpuprofiles GET / clusters / {cluster:id} / cpuprofiles / {profile:id} DELETE / clusters / {cluster:id} / cpuprofiles / {profile:id} GET / clusters / {cluster:id} / enabledfeatures POST / clusters / {cluster:id} / enabledfeatures GET / clusters / {cluster:id} / enabledfeatures / {feature:id} DELETE / clusters / {cluster:id} / enabledfeatures / {feature:id} GET / clusters / {cluster:id} / externalnetworkproviders GET / clusters / {cluster:id} / glusterhooks GET / clusters / {cluster:id} / glusterhooks / {hook:id} DELETE / clusters / {cluster:id} / glusterhooks / {hook:id} POST / clusters / {cluster:id} / glusterhooks / {hook:id} / disable POST / clusters / {cluster:id} / glusterhooks / {hook:id} / enable POST / clusters / {cluster:id} / glusterhooks / {hook:id} / resolve POST / clusters / {cluster:id} / glustervolumes GET / clusters / {cluster:id} / glustervolumes GET / clusters / {cluster:id} / glustervolumes / {volume:id} DELETE / clusters / {cluster:id} / glustervolumes / {volume:id} POST / clusters / {cluster:id} / glustervolumes / {volume:id} / getprofilestatistics POST / clusters / {cluster:id} / glustervolumes / {volume:id} / glusterbricks GET / clusters / {cluster:id} / glustervolumes / {volume:id} / glusterbricks DELETE / clusters / {cluster:id} / glustervolumes / {volume:id} / glusterbricks POST / clusters / {cluster:id} / glustervolumes / {volume:id} / glusterbricks / activate POST / clusters / {cluster:id} / glustervolumes / {volume:id} / glusterbricks / migrate POST / clusters / {cluster:id} / glustervolumes / {volume:id} / glusterbricks / stopmigrate GET / clusters / {cluster:id} / glustervolumes / {volume:id} / glusterbricks / {brick:id} DELETE / clusters / {cluster:id} / glustervolumes / {volume:id} / glusterbricks / {brick:id} POST / clusters / {cluster:id} / glustervolumes / {volume:id} / glusterbricks / {brick:id} / replace GET / clusters / {cluster:id} / glustervolumes / {volume:id} / 
glusterbricks / {brick:id} / statistics GET / clusters / {cluster:id} / glustervolumes / {volume:id} / glusterbricks / {brick:id} / statistics / {statistic:id} POST / clusters / {cluster:id} / glustervolumes / {volume:id} / rebalance POST / clusters / {cluster:id} / glustervolumes / {volume:id} / resetalloptions POST / clusters / {cluster:id} / glustervolumes / {volume:id} / resetoption POST / clusters / {cluster:id} / glustervolumes / {volume:id} / setoption POST / clusters / {cluster:id} / glustervolumes / {volume:id} / start POST / clusters / {cluster:id} / glustervolumes / {volume:id} / startprofile GET / clusters / {cluster:id} / glustervolumes / {volume:id} / statistics GET / clusters / {cluster:id} / glustervolumes / {volume:id} / statistics / {statistic:id} POST / clusters / {cluster:id} / glustervolumes / {volume:id} / stop POST / clusters / {cluster:id} / glustervolumes / {volume:id} / stopprofile POST / clusters / {cluster:id} / glustervolumes / {volume:id} / stoprebalance GET / clusters / {cluster:id} / networkfilters GET / clusters / {cluster:id} / networkfilters / {networkfilter:id} POST / clusters / {cluster:id} / networks GET / clusters / {cluster:id} / networks GET / clusters / {cluster:id} / networks / {network:id} DELETE / clusters / {cluster:id} / networks / {network:id} PUT / clusters / {cluster:id} / networks / {network:id} POST / clusters / {cluster:id} / permissions GET / clusters / {cluster:id} / permissions GET / clusters / {cluster:id} / permissions / {permission:id} DELETE / clusters / {cluster:id} / permissions / {permission:id} POST / clusters / {cluster:id} / resetemulatedmachine POST / clusters / {cluster:id} / syncallnetworks POST / cpuprofiles GET / cpuprofiles GET / cpuprofiles / {profile:id} PUT / cpuprofiles / {profile:id} DELETE / cpuprofiles / {profile:id} POST / cpuprofiles / {profile:id} / permissions GET / cpuprofiles / {profile:id} / permissions GET / cpuprofiles / {profile:id} / permissions / {permission:id} DELETE / cpuprofiles / {profile:id} / permissions / {permission:id} POST / datacenters GET / datacenters GET / datacenters / {datacenter:id} PUT / datacenters / {datacenter:id} DELETE / datacenters / {datacenter:id} POST / datacenters / {datacenter:id} / clusters GET / datacenters / {datacenter:id} / clusters GET / datacenters / {datacenter:id} / clusters / {cluster:id} PUT / datacenters / {datacenter:id} / clusters / {cluster:id} DELETE / datacenters / {datacenter:id} / clusters / {cluster:id} POST / datacenters / {datacenter:id} / clusters / {cluster:id} / affinitygroups GET / datacenters / {datacenter:id} / clusters / {cluster:id} / affinitygroups GET / datacenters / {datacenter:id} / clusters / {cluster:id} / affinitygroups / {group:id} PUT / datacenters / {datacenter:id} / clusters / {cluster:id} / affinitygroups / {group:id} DELETE / datacenters / {datacenter:id} / clusters / {cluster:id} / affinitygroups / {group:id} POST / datacenters / {datacenter:id} / clusters / {cluster:id} / affinitygroups / {group:id} / vms GET / datacenters / {datacenter:id} / clusters / {cluster:id} / affinitygroups / {group:id} / vms DELETE / datacenters / {datacenter:id} / clusters / {cluster:id} / affinitygroups / {group:id} / vms / {vm:id} POST / datacenters / {datacenter:id} / clusters / {cluster:id} / cpuprofiles GET / datacenters / {datacenter:id} / clusters / {cluster:id} / cpuprofiles GET / datacenters / {datacenter:id} / clusters / {cluster:id} / cpuprofiles / {profile:id} DELETE / datacenters / {datacenter:id} / clusters / {cluster:id} / cpuprofiles 
/ {profile:id} GET / datacenters / {datacenter:id} / clusters / {cluster:id} / enabledfeatures POST / datacenters / {datacenter:id} / clusters / {cluster:id} / enabledfeatures GET / datacenters / {datacenter:id} / clusters / {cluster:id} / enabledfeatures / {feature:id} DELETE / datacenters / {datacenter:id} / clusters / {cluster:id} / enabledfeatures / {feature:id} GET / datacenters / {datacenter:id} / clusters / {cluster:id} / externalnetworkproviders GET / datacenters / {datacenter:id} / clusters / {cluster:id} / glusterhooks GET / datacenters / {datacenter:id} / clusters / {cluster:id} / glusterhooks / {hook:id} DELETE / datacenters / {datacenter:id} / clusters / {cluster:id} / glusterhooks / {hook:id} POST / datacenters / {datacenter:id} / clusters / {cluster:id} / glusterhooks / {hook:id} / disable POST / datacenters / {datacenter:id} / clusters / {cluster:id} / glusterhooks / {hook:id} / enable POST / datacenters / {datacenter:id} / clusters / {cluster:id} / glusterhooks / {hook:id} / resolve POST / datacenters / {datacenter:id} / clusters / {cluster:id} / glustervolumes GET / datacenters / {datacenter:id} / clusters / {cluster:id} / glustervolumes GET / datacenters / {datacenter:id} / clusters / {cluster:id} / glustervolumes / {volume:id} DELETE / datacenters / {datacenter:id} / clusters / {cluster:id} / glustervolumes / {volume:id} POST / datacenters / {datacenter:id} / clusters / {cluster:id} / glustervolumes / {volume:id} / getprofilestatistics POST / datacenters / {datacenter:id} / clusters / {cluster:id} / glustervolumes / {volume:id} / glusterbricks GET / datacenters / {datacenter:id} / clusters / {cluster:id} / glustervolumes / {volume:id} / glusterbricks DELETE / datacenters / {datacenter:id} / clusters / {cluster:id} / glustervolumes / {volume:id} / glusterbricks POST / datacenters / {datacenter:id} / clusters / {cluster:id} / glustervolumes / {volume:id} / glusterbricks / activate POST / datacenters / {datacenter:id} / clusters / {cluster:id} / glustervolumes / {volume:id} / glusterbricks / migrate POST / datacenters / {datacenter:id} / clusters / {cluster:id} / glustervolumes / {volume:id} / glusterbricks / stopmigrate GET / datacenters / {datacenter:id} / clusters / {cluster:id} / glustervolumes / {volume:id} / glusterbricks / {brick:id} DELETE / datacenters / {datacenter:id} / clusters / {cluster:id} / glustervolumes / {volume:id} / glusterbricks / {brick:id} POST / datacenters / {datacenter:id} / clusters / {cluster:id} / glustervolumes / {volume:id} / glusterbricks / {brick:id} / replace GET / datacenters / {datacenter:id} / clusters / {cluster:id} / glustervolumes / {volume:id} / glusterbricks / {brick:id} / statistics GET / datacenters / {datacenter:id} / clusters / {cluster:id} / glustervolumes / {volume:id} / glusterbricks / {brick:id} / statistics / {statistic:id} POST / datacenters / {datacenter:id} / clusters / {cluster:id} / glustervolumes / {volume:id} / rebalance POST / datacenters / {datacenter:id} / clusters / {cluster:id} / glustervolumes / {volume:id} / resetalloptions POST / datacenters / {datacenter:id} / clusters / {cluster:id} / glustervolumes / {volume:id} / resetoption POST / datacenters / {datacenter:id} / clusters / {cluster:id} / glustervolumes / {volume:id} / setoption POST / datacenters / {datacenter:id} / clusters / {cluster:id} / glustervolumes / {volume:id} / start POST / datacenters / {datacenter:id} / clusters / {cluster:id} / glustervolumes / {volume:id} / startprofile GET / datacenters / {datacenter:id} / clusters / {cluster:id} / 
glustervolumes / {volume:id} / statistics GET / datacenters / {datacenter:id} / clusters / {cluster:id} / glustervolumes / {volume:id} / statistics / {statistic:id} POST / datacenters / {datacenter:id} / clusters / {cluster:id} / glustervolumes / {volume:id} / stop POST / datacenters / {datacenter:id} / clusters / {cluster:id} / glustervolumes / {volume:id} / stopprofile POST / datacenters / {datacenter:id} / clusters / {cluster:id} / glustervolumes / {volume:id} / stoprebalance GET / datacenters / {datacenter:id} / clusters / {cluster:id} / networkfilters GET / datacenters / {datacenter:id} / clusters / {cluster:id} / networkfilters / {networkfilter:id} POST / datacenters / {datacenter:id} / clusters / {cluster:id} / networks GET / datacenters / {datacenter:id} / clusters / {cluster:id} / networks GET / datacenters / {datacenter:id} / clusters / {cluster:id} / networks / {network:id} DELETE / datacenters / {datacenter:id} / clusters / {cluster:id} / networks / {network:id} PUT / datacenters / {datacenter:id} / clusters / {cluster:id} / networks / {network:id} POST / datacenters / {datacenter:id} / clusters / {cluster:id} / permissions GET / datacenters / {datacenter:id} / clusters / {cluster:id} / permissions GET / datacenters / {datacenter:id} / clusters / {cluster:id} / permissions / {permission:id} DELETE / datacenters / {datacenter:id} / clusters / {cluster:id} / permissions / {permission:id} POST / datacenters / {datacenter:id} / clusters / {cluster:id} / resetemulatedmachine POST / datacenters / {datacenter:id} / clusters / {cluster:id} / syncallnetworks POST / datacenters / {datacenter:id} / iscsibonds GET / datacenters / {datacenter:id} / iscsibonds GET / datacenters / {datacenter:id} / iscsibonds / {iscsibond:id} PUT / datacenters / {datacenter:id} / iscsibonds / {iscsibond:id} DELETE / datacenters / {datacenter:id} / iscsibonds / {iscsibond:id} POST / datacenters / {datacenter:id} / iscsibonds / {iscsibond:id} / networks GET / datacenters / {datacenter:id} / iscsibonds / {iscsibond:id} / networks GET / datacenters / {datacenter:id} / iscsibonds / {iscsibond:id} / networks / {network:id} PUT / datacenters / {datacenter:id} / iscsibonds / {iscsibond:id} / networks / {network:id} DELETE / datacenters / {datacenter:id} / iscsibonds / {iscsibond:id} / networks / {network:id} POST / datacenters / {datacenter:id} / iscsibonds / {iscsibond:id} / networks / {network:id} / networklabels GET / datacenters / {datacenter:id} / iscsibonds / {iscsibond:id} / networks / {network:id} / networklabels GET / datacenters / {datacenter:id} / iscsibonds / {iscsibond:id} / networks / {network:id} / networklabels / {label:id} DELETE / datacenters / {datacenter:id} / iscsibonds / {iscsibond:id} / networks / {network:id} / networklabels / {label:id} POST / datacenters / {datacenter:id} / iscsibonds / {iscsibond:id} / networks / {network:id} / permissions GET / datacenters / {datacenter:id} / iscsibonds / {iscsibond:id} / networks / {network:id} / permissions GET / datacenters / {datacenter:id} / iscsibonds / {iscsibond:id} / networks / {network:id} / permissions / {permission:id} DELETE / datacenters / {datacenter:id} / iscsibonds / {iscsibond:id} / networks / {network:id} / permissions / {permission:id} POST / datacenters / {datacenter:id} / iscsibonds / {iscsibond:id} / networks / {network:id} / vnicprofiles GET / datacenters / {datacenter:id} / iscsibonds / {iscsibond:id} / networks / {network:id} / vnicprofiles GET / datacenters / {datacenter:id} / iscsibonds / {iscsibond:id} / networks / {network:id} 
/ vnicprofiles / {profile:id} DELETE / datacenters / {datacenter:id} / iscsibonds / {iscsibond:id} / networks / {network:id} / vnicprofiles / {profile:id} POST / datacenters / {datacenter:id} / iscsibonds / {iscsibond:id} / networks / {network:id} / vnicprofiles / {profile:id} / permissions GET / datacenters / {datacenter:id} / iscsibonds / {iscsibond:id} / networks / {network:id} / vnicprofiles / {profile:id} / permissions GET / datacenters / {datacenter:id} / iscsibonds / {iscsibond:id} / networks / {network:id} / vnicprofiles / {profile:id} / permissions / {permission:id} DELETE / datacenters / {datacenter:id} / iscsibonds / {iscsibond:id} / networks / {network:id} / vnicprofiles / {profile:id} / permissions / {permission:id} POST / datacenters / {datacenter:id} / iscsibonds / {iscsibond:id} / storageserverconnections GET / datacenters / {datacenter:id} / iscsibonds / {iscsibond:id} / storageserverconnections GET / datacenters / {datacenter:id} / iscsibonds / {iscsibond:id} / storageserverconnections / {storageconnection:id} PUT / datacenters / {datacenter:id} / iscsibonds / {iscsibond:id} / storageserverconnections / {storageconnection:id} DELETE / datacenters / {datacenter:id} / iscsibonds / {iscsibond:id} / storageserverconnections / {storageconnection:id} POST / datacenters / {datacenter:id} / networks GET / datacenters / {datacenter:id} / networks GET / datacenters / {datacenter:id} / networks / {network:id} DELETE / datacenters / {datacenter:id} / networks / {network:id} PUT / datacenters / {datacenter:id} / networks / {network:id} POST / datacenters / {datacenter:id} / permissions GET / datacenters / {datacenter:id} / permissions GET / datacenters / {datacenter:id} / permissions / {permission:id} DELETE / datacenters / {datacenter:id} / permissions / {permission:id} POST / datacenters / {datacenter:id} / qoss GET / datacenters / {datacenter:id} / qoss GET / datacenters / {datacenter:id} / qoss / {qos:id} PUT / datacenters / {datacenter:id} / qoss / {qos:id} DELETE / datacenters / {datacenter:id} / qoss / {qos:id} POST / datacenters / {datacenter:id} / quotas GET / datacenters / {datacenter:id} / quotas GET / datacenters / {datacenter:id} / quotas / {quota:id} PUT / datacenters / {datacenter:id} / quotas / {quota:id} DELETE / datacenters / {datacenter:id} / quotas / {quota:id} POST / datacenters / {datacenter:id} / quotas / {quota:id} / permissions GET / datacenters / {datacenter:id} / quotas / {quota:id} / permissions GET / datacenters / {datacenter:id} / quotas / {quota:id} / permissions / {permission:id} DELETE / datacenters / {datacenter:id} / quotas / {quota:id} / permissions / {permission:id} POST / datacenters / {datacenter:id} / quotas / {quota:id} / quotaclusterlimits GET / datacenters / {datacenter:id} / quotas / {quota:id} / quotaclusterlimits GET / datacenters / {datacenter:id} / quotas / {quota:id} / quotaclusterlimits / {limit:id} DELETE / datacenters / {datacenter:id} / quotas / {quota:id} / quotaclusterlimits / {limit:id} POST / datacenters / {datacenter:id} / quotas / {quota:id} / quotastoragelimits GET / datacenters / {datacenter:id} / quotas / {quota:id} / quotastoragelimits GET / datacenters / {datacenter:id} / quotas / {quota:id} / quotastoragelimits / {limit:id} DELETE / datacenters / {datacenter:id} / quotas / {quota:id} / quotastoragelimits / {limit:id} POST / datacenters / {datacenter:id} / storagedomains GET / datacenters / {datacenter:id} / storagedomains GET / datacenters / {datacenter:id} / storagedomains / {storagedomain:id} DELETE / datacenters / 
{datacenter:id} / storagedomains / {storagedomain:id} POST / datacenters / {datacenter:id} / storagedomains / {storagedomain:id} / activate POST / datacenters / {datacenter:id} / storagedomains / {storagedomain:id} / deactivate POST / datacenters / {datacenter:id} / storagedomains / {storagedomain:id} / disks GET / datacenters / {datacenter:id} / storagedomains / {storagedomain:id} / disks PUT / datacenters / {datacenter:id} / storagedomains / {storagedomain:id} / disks / {disk:id} GET / datacenters / {datacenter:id} / storagedomains / {storagedomain:id} / disks / {disk:id} DELETE / datacenters / {datacenter:id} / storagedomains / {storagedomain:id} / disks / {disk:id} POST / datacenters / {datacenter:id} / storagedomains / {storagedomain:id} / disks / {disk:id} / copy POST / datacenters / {datacenter:id} / storagedomains / {storagedomain:id} / disks / {disk:id} / export POST / datacenters / {datacenter:id} / storagedomains / {storagedomain:id} / disks / {disk:id} / move POST / datacenters / {datacenter:id} / storagedomains / {storagedomain:id} / disks / {disk:id} / permissions GET / datacenters / {datacenter:id} / storagedomains / {storagedomain:id} / disks / {disk:id} / permissions GET / datacenters / {datacenter:id} / storagedomains / {storagedomain:id} / disks / {disk:id} / permissions / {permission:id} DELETE / datacenters / {datacenter:id} / storagedomains / {storagedomain:id} / disks / {disk:id} / permissions / {permission:id} POST / datacenters / {datacenter:id} / storagedomains / {storagedomain:id} / disks / {disk:id} / register POST / datacenters / {datacenter:id} / storagedomains / {storagedomain:id} / disks / {disk:id} / sparsify GET / datacenters / {datacenter:id} / storagedomains / {storagedomain:id} / disks / {disk:id} / statistics GET / datacenters / {datacenter:id} / storagedomains / {storagedomain:id} / disks / {disk:id} / statistics / {statistic:id} POST / diskprofiles GET / diskprofiles GET / diskprofiles / {diskprofile:id} PUT / diskprofiles / {diskprofile:id} DELETE / diskprofiles / {diskprofile:id} POST / diskprofiles / {diskprofile:id} / permissions GET / diskprofiles / {diskprofile:id} / permissions GET / diskprofiles / {diskprofile:id} / permissions / {permission:id} DELETE / diskprofiles / {diskprofile:id} / permissions / {permission:id} POST / disks GET / disks PUT / disks / {disk:id} GET / disks / {disk:id} DELETE / disks / {disk:id} POST / disks / {disk:id} / copy POST / disks / {disk:id} / export POST / disks / {disk:id} / move POST / disks / {disk:id} / permissions GET / disks / {disk:id} / permissions GET / disks / {disk:id} / permissions / {permission:id} DELETE / disks / {disk:id} / permissions / {permission:id} POST / disks / {disk:id} / reduce POST / disks / {disk:id} / refreshlun POST / disks / {disk:id} / sparsify GET / disks / {disk:id} / statistics GET / disks / {disk:id} / statistics / {statistic:id} GET / domains GET / domains / {domain:id} GET / domains / {domain:id} / groups GET / domains / {domain:id} / groups / {group:id} GET / domains / {domain:id} / users GET / domains / {domain:id} / users / {user:id} POST / events GET / events POST / events / undelete GET / events / {event:id} DELETE / events / {event:id} POST / externalhostproviders GET / externalhostproviders GET / externalhostproviders / {provider:id} PUT / externalhostproviders / {provider:id} DELETE / externalhostproviders / {provider:id} GET / externalhostproviders / {provider:id} / certificates GET / externalhostproviders / {provider:id} / certificates / {certificate:id} GET / 
externalhostproviders / {provider:id} / computeresources GET / externalhostproviders / {provider:id} / computeresources / {resource:id} GET / externalhostproviders / {provider:id} / discoveredhosts GET / externalhostproviders / {provider:id} / discoveredhosts / {host:id} GET / externalhostproviders / {provider:id} / hostgroups GET / externalhostproviders / {provider:id} / hostgroups / {group:id} GET / externalhostproviders / {provider:id} / hosts GET / externalhostproviders / {provider:id} / hosts / {host:id} POST / externalhostproviders / {provider:id} / importcertificates POST / externalhostproviders / {provider:id} / testconnectivity POST / externalvmimports POST / groups GET / groups GET / groups / {group:id} DELETE / groups / {group:id} POST / groups / {group:id} / permissions GET / groups / {group:id} / permissions GET / groups / {group:id} / permissions / {permission:id} DELETE / groups / {group:id} / permissions / {permission:id} GET / groups / {group:id} / roles GET / groups / {group:id} / roles / {role:id} DELETE / groups / {group:id} / roles / {role:id} PUT / groups / {group:id} / roles / {role:id} POST / groups / {group:id} / roles / {role:id} / permits GET / groups / {group:id} / roles / {role:id} / permits GET / groups / {group:id} / roles / {role:id} / permits / {permit:id} DELETE / groups / {group:id} / roles / {role:id} / permits / {permit:id} POST / groups / {group:id} / tags GET / groups / {group:id} / tags GET / groups / {group:id} / tags / {tag:id} DELETE / groups / {group:id} / tags / {tag:id} POST / hosts GET / hosts GET / hosts / {host:id} PUT / hosts / {host:id} DELETE / hosts / {host:id} POST / hosts / {host:id} / activate POST / hosts / {host:id} / affinitylabels GET / hosts / {host:id} / affinitylabels GET / hosts / {host:id} / affinitylabels / {label:id} DELETE / hosts / {host:id} / affinitylabels / {label:id} POST / hosts / {host:id} / approve POST / hosts / {host:id} / commitnetconfig POST / hosts / {host:id} / deactivate GET / hosts / {host:id} / devices GET / hosts / {host:id} / devices / {device:id} POST / hosts / {host:id} / enrollcertificate GET / hosts / {host:id} / externalnetworkproviderconfigurations GET / hosts / {host:id} / externalnetworkproviderconfigurations / {configuration:id} POST / hosts / {host:id} / fence POST / hosts / {host:id} / fenceagents GET / hosts / {host:id} / fenceagents GET / hosts / {host:id} / fenceagents / {agent:id} PUT / hosts / {host:id} / fenceagents / {agent:id} DELETE / hosts / {host:id} / fenceagents / {agent:id} POST / hosts / {host:id} / forceselectspm GET / hosts / {host:id} / hooks GET / hosts / {host:id} / hooks / {hook:id} POST / hosts / {host:id} / install POST / hosts / {host:id} / iscsidiscover POST / hosts / {host:id} / iscsilogin GET / hosts / {host:id} / katelloerrata GET / hosts / {host:id} / katelloerrata / {katelloerratum:id} POST / hosts / {host:id} / networkattachments GET / hosts / {host:id} / networkattachments GET / hosts / {host:id} / networkattachments / {attachment:id} PUT / hosts / {host:id} / networkattachments / {attachment:id} DELETE / hosts / {host:id} / networkattachments / {attachment:id} GET / hosts / {host:id} / nics GET / hosts / {host:id} / nics / {nic:id} GET / hosts / {host:id} / nics / {nic:id} / linklayerdiscoveryprotocolelements POST / hosts / {host:id} / nics / {nic:id} / networkattachments GET / hosts / {host:id} / nics / {nic:id} / networkattachments GET / hosts / {host:id} / nics / {nic:id} / networkattachments / {attachment:id} PUT / hosts / {host:id} / nics / {nic:id} / 
networkattachments / {attachment:id} DELETE / hosts / {host:id} / nics / {nic:id} / networkattachments / {attachment:id} POST / hosts / {host:id} / nics / {nic:id} / networklabels GET / hosts / {host:id} / nics / {nic:id} / networklabels GET / hosts / {host:id} / nics / {nic:id} / networklabels / {label:id} DELETE / hosts / {host:id} / nics / {nic:id} / networklabels / {label:id} GET / hosts / {host:id} / nics / {nic:id} / statistics GET / hosts / {host:id} / nics / {nic:id} / statistics / {statistic:id} POST / hosts / {host:id} / nics / {nic:id} / updatevirtualfunctionsconfiguration POST / hosts / {host:id} / nics / {nic:id} / virtualfunctionallowedlabels GET / hosts / {host:id} / nics / {nic:id} / virtualfunctionallowedlabels GET / hosts / {host:id} / nics / {nic:id} / virtualfunctionallowedlabels / {label:id} DELETE / hosts / {host:id} / nics / {nic:id} / virtualfunctionallowedlabels / {label:id} POST / hosts / {host:id} / nics / {nic:id} / virtualfunctionallowednetworks GET / hosts / {host:id} / nics / {nic:id} / virtualfunctionallowednetworks GET / hosts / {host:id} / nics / {nic:id} / virtualfunctionallowednetworks / {network:id} DELETE / hosts / {host:id} / nics / {nic:id} / virtualfunctionallowednetworks / {network:id} GET / hosts / {host:id} / numanodes GET / hosts / {host:id} / numanodes / {node:id} GET / hosts / {host:id} / numanodes / {node:id} / statistics GET / hosts / {host:id} / numanodes / {node:id} / statistics / {statistic:id} POST / hosts / {host:id} / permissions GET / hosts / {host:id} / permissions GET / hosts / {host:id} / permissions / {permission:id} DELETE / hosts / {host:id} / permissions / {permission:id} POST / hosts / {host:id} / refresh POST / hosts / {host:id} / setupnetworks GET / hosts / {host:id} / statistics GET / hosts / {host:id} / statistics / {statistic:id} GET / hosts / {host:id} / storage GET / hosts / {host:id} / storage / {storage:id} POST / hosts / {host:id} / storageconnectionextensions GET / hosts / {host:id} / storageconnectionextensions GET / hosts / {host:id} / storageconnectionextensions / {storageconnectionextension:id} PUT / hosts / {host:id} / storageconnectionextensions / {storageconnectionextension:id} DELETE / hosts / {host:id} / storageconnectionextensions / {storageconnectionextension:id} POST / hosts / {host:id} / syncallnetworks POST / hosts / {host:id} / tags GET / hosts / {host:id} / tags GET / hosts / {host:id} / tags / {tag:id} DELETE / hosts / {host:id} / tags / {tag:id} GET / hosts / {host:id} / unmanagednetworks GET / hosts / {host:id} / unmanagednetworks / {unmanagednetwork:id} DELETE / hosts / {host:id} / unmanagednetworks / {unmanagednetwork:id} POST / hosts / {host:id} / unregisteredstoragedomainsdiscover POST / hosts / {host:id} / upgrade POST / hosts / {host:id} / upgradecheck GET / icons GET / icons / {icon:id} POST / imagetransfers GET / imagetransfers GET / imagetransfers / {imagetransfer:id} POST / imagetransfers / {imagetransfer:id} / cancel POST / imagetransfers / {imagetransfer:id} / extend POST / imagetransfers / {imagetransfer:id} / finalize POST / imagetransfers / {imagetransfer:id} / pause POST / imagetransfers / {imagetransfer:id} / resume POST / instancetypes GET / instancetypes GET / instancetypes / {instancetype:id} PUT / instancetypes / {instancetype:id} DELETE / instancetypes / {instancetype:id} POST / instancetypes / {instancetype:id} / graphicsconsoles GET / instancetypes / {instancetype:id} / graphicsconsoles GET / instancetypes / {instancetype:id} / graphicsconsoles / {console:id} DELETE / 
instancetypes / {instancetype:id} / graphicsconsoles / {console:id} POST / instancetypes / {instancetype:id} / nics GET / instancetypes / {instancetype:id} / nics GET / instancetypes / {instancetype:id} / nics / {nic:id} PUT / instancetypes / {instancetype:id} / nics / {nic:id} DELETE / instancetypes / {instancetype:id} / nics / {nic:id} POST / instancetypes / {instancetype:id} / watchdogs GET / instancetypes / {instancetype:id} / watchdogs GET / instancetypes / {instancetype:id} / watchdogs / {watchdog:id} PUT / instancetypes / {instancetype:id} / watchdogs / {watchdog:id} DELETE / instancetypes / {instancetype:id} / watchdogs / {watchdog:id} POST / jobs GET / jobs GET / jobs / {job:id} POST / jobs / {job:id} / clear POST / jobs / {job:id} / end POST / jobs / {job:id} / steps GET / jobs / {job:id} / steps GET / jobs / {job:id} / steps / {step:id} POST / jobs / {job:id} / steps / {step:id} / end GET / jobs / {job:id} / steps / {step:id} / statistics GET / jobs / {job:id} / steps / {step:id} / statistics / {statistic:id} GET / katelloerrata GET / katelloerrata / {katelloerratum:id} POST / macpools GET / macpools GET / macpools / {macpool:id} PUT / macpools / {macpool:id} DELETE / macpools / {macpool:id} GET / networkfilters GET / networkfilters / {networkfilter:id} POST / networks GET / networks GET / networks / {network:id} PUT / networks / {network:id} DELETE / networks / {network:id} POST / networks / {network:id} / networklabels GET / networks / {network:id} / networklabels GET / networks / {network:id} / networklabels / {label:id} DELETE / networks / {network:id} / networklabels / {label:id} POST / networks / {network:id} / permissions GET / networks / {network:id} / permissions GET / networks / {network:id} / permissions / {permission:id} DELETE / networks / {network:id} / permissions / {permission:id} POST / networks / {network:id} / vnicprofiles GET / networks / {network:id} / vnicprofiles GET / networks / {network:id} / vnicprofiles / {profile:id} DELETE / networks / {network:id} / vnicprofiles / {profile:id} POST / networks / {network:id} / vnicprofiles / {profile:id} / permissions GET / networks / {network:id} / vnicprofiles / {profile:id} / permissions GET / networks / {network:id} / vnicprofiles / {profile:id} / permissions / {permission:id} DELETE / networks / {network:id} / vnicprofiles / {profile:id} / permissions / {permission:id} POST / openstackimageproviders GET / openstackimageproviders GET / openstackimageproviders / {provider:id} PUT / openstackimageproviders / {provider:id} DELETE / openstackimageproviders / {provider:id} GET / openstackimageproviders / {provider:id} / certificates GET / openstackimageproviders / {provider:id} / certificates / {certificate:id} GET / openstackimageproviders / {provider:id} / images GET / openstackimageproviders / {provider:id} / images / {image:id} POST / openstackimageproviders / {provider:id} / images / {image:id} / import POST / openstackimageproviders / {provider:id} / importcertificates POST / openstackimageproviders / {provider:id} / testconnectivity POST / openstacknetworkproviders GET / openstacknetworkproviders GET / openstacknetworkproviders / {provider:id} PUT / openstacknetworkproviders / {provider:id} DELETE / openstacknetworkproviders / {provider:id} GET / openstacknetworkproviders / {provider:id} / certificates GET / openstacknetworkproviders / {provider:id} / certificates / {certificate:id} POST / openstacknetworkproviders / {provider:id} / importcertificates GET / openstacknetworkproviders / {provider:id} / networks 
GET / openstacknetworkproviders / {provider:id} / networks / {network:id} POST / openstacknetworkproviders / {provider:id} / networks / {network:id} / import POST / openstacknetworkproviders / {provider:id} / networks / {network:id} / subnets GET / openstacknetworkproviders / {provider:id} / networks / {network:id} / subnets GET / openstacknetworkproviders / {provider:id} / networks / {network:id} / subnets / {subnet:id} DELETE / openstacknetworkproviders / {provider:id} / networks / {network:id} / subnets / {subnet:id} POST / openstacknetworkproviders / {provider:id} / testconnectivity POST / openstackvolumeproviders GET / openstackvolumeproviders GET / openstackvolumeproviders / {provider:id} PUT / openstackvolumeproviders / {provider:id} DELETE / openstackvolumeproviders / {provider:id} POST / openstackvolumeproviders / {provider:id} / authenticationkeys GET / openstackvolumeproviders / {provider:id} / authenticationkeys GET / openstackvolumeproviders / {provider:id} / authenticationkeys / {key:id} PUT / openstackvolumeproviders / {provider:id} / authenticationkeys / {key:id} DELETE / openstackvolumeproviders / {provider:id} / authenticationkeys / {key:id} GET / openstackvolumeproviders / {provider:id} / certificates GET / openstackvolumeproviders / {provider:id} / certificates / {certificate:id} POST / openstackvolumeproviders / {provider:id} / importcertificates POST / openstackvolumeproviders / {provider:id} / testconnectivity GET / openstackvolumeproviders / {provider:id} / volumetypes GET / openstackvolumeproviders / {provider:id} / volumetypes / {type:id} GET / operatingsystems GET / operatingsystems / {operatingsystem:id} GET / options / {option:id} POST / permissions GET / permissions GET / permissions / {permission:id} DELETE / permissions / {permission:id} POST / roles GET / roles GET / roles / {role:id} DELETE / roles / {role:id} PUT / roles / {role:id} POST / roles / {role:id} / permits GET / roles / {role:id} / permits GET / roles / {role:id} / permits / {permit:id} DELETE / roles / {role:id} / permits / {permit:id} POST / schedulingpolicies GET / schedulingpolicies GET / schedulingpolicies / {policy:id} PUT / schedulingpolicies / {policy:id} DELETE / schedulingpolicies / {policy:id} POST / schedulingpolicies / {policy:id} / balances GET / schedulingpolicies / {policy:id} / balances GET / schedulingpolicies / {policy:id} / balances / {balance:id} DELETE / schedulingpolicies / {policy:id} / balances / {balance:id} POST / schedulingpolicies / {policy:id} / filters GET / schedulingpolicies / {policy:id} / filters GET / schedulingpolicies / {policy:id} / filters / {filter:id} DELETE / schedulingpolicies / {policy:id} / filters / {filter:id} POST / schedulingpolicies / {policy:id} / weights GET / schedulingpolicies / {policy:id} / weights GET / schedulingpolicies / {policy:id} / weights / {weight:id} DELETE / schedulingpolicies / {policy:id} / weights / {weight:id} GET / schedulingpolicyunits GET / schedulingpolicyunits / {unit:id} DELETE / schedulingpolicyunits / {unit:id} POST / storageconnections GET / storageconnections GET / storageconnections / {storageconnection:id} PUT / storageconnections / {storageconnection:id} DELETE / storageconnections / {storageconnection:id} POST / storagedomains GET / storagedomains GET / storagedomains / {storagedomain:id} PUT / storagedomains / {storagedomain:id} DELETE / storagedomains / {storagedomain:id} POST / storagedomains / {storagedomain:id} / diskprofiles GET / storagedomains / {storagedomain:id} / diskprofiles GET / storagedomains / 
{storagedomain:id} / diskprofiles / {profile:id} DELETE / storagedomains / {storagedomain:id} / diskprofiles / {profile:id} POST / storagedomains / {storagedomain:id} / disks GET / storagedomains / {storagedomain:id} / disks PUT / storagedomains / {storagedomain:id} / disks / {disk:id} GET / storagedomains / {storagedomain:id} / disks / {disk:id} DELETE / storagedomains / {storagedomain:id} / disks / {disk:id} POST / storagedomains / {storagedomain:id} / disks / {disk:id} / copy POST / storagedomains / {storagedomain:id} / disks / {disk:id} / export POST / storagedomains / {storagedomain:id} / disks / {disk:id} / move POST / storagedomains / {storagedomain:id} / disks / {disk:id} / permissions GET / storagedomains / {storagedomain:id} / disks / {disk:id} / permissions GET / storagedomains / {storagedomain:id} / disks / {disk:id} / permissions / {permission:id} DELETE / storagedomains / {storagedomain:id} / disks / {disk:id} / permissions / {permission:id} POST / storagedomains / {storagedomain:id} / disks / {disk:id} / reduce POST / storagedomains / {storagedomain:id} / disks / {disk:id} / sparsify GET / storagedomains / {storagedomain:id} / disks / {disk:id} / statistics GET / storagedomains / {storagedomain:id} / disks / {disk:id} / statistics / {statistic:id} GET / storagedomains / {storagedomain:id} / disksnapshots GET / storagedomains / {storagedomain:id} / disksnapshots / {snapshot:id} DELETE / storagedomains / {storagedomain:id} / disksnapshots / {snapshot:id} GET / storagedomains / {storagedomain:id} / files GET / storagedomains / {storagedomain:id} / files / {file:id} GET / storagedomains / {storagedomain:id} / images GET / storagedomains / {storagedomain:id} / images / {image:id} POST / storagedomains / {storagedomain:id} / images / {image:id} / import POST / storagedomains / {storagedomain:id} / isattached POST / storagedomains / {storagedomain:id} / permissions GET / storagedomains / {storagedomain:id} / permissions GET / storagedomains / {storagedomain:id} / permissions / {permission:id} DELETE / storagedomains / {storagedomain:id} / permissions / {permission:id} POST / storagedomains / {storagedomain:id} / reduceluns POST / storagedomains / {storagedomain:id} / refreshluns POST / storagedomains / {storagedomain:id} / storageconnections GET / storagedomains / {storagedomain:id} / storageconnections GET / storagedomains / {storagedomain:id} / storageconnections / {connection:id} DELETE / storagedomains / {storagedomain:id} / storageconnections / {connection:id} GET / storagedomains / {storagedomain:id} / templates GET / storagedomains / {storagedomain:id} / templates / {template:id} DELETE / storagedomains / {storagedomain:id} / templates / {template:id} GET / storagedomains / {storagedomain:id} / templates / {template:id} / disks GET / storagedomains / {storagedomain:id} / templates / {template:id} / disks / {disk:id} POST / storagedomains / {storagedomain:id} / templates / {template:id} / import POST / storagedomains / {storagedomain:id} / templates / {template:id} / register POST / storagedomains / {storagedomain:id} / updateovfstore GET / storagedomains / {storagedomain:id} / vms GET / storagedomains / {storagedomain:id} / vms / {vm:id} DELETE / storagedomains / {storagedomain:id} / vms / {vm:id} GET / storagedomains / {storagedomain:id} / vms / {vm:id} / diskattachments GET / storagedomains / {storagedomain:id} / vms / {vm:id} / diskattachments / {attachment:id} GET / storagedomains / {storagedomain:id} / vms / {vm:id} / disks GET / storagedomains / {storagedomain:id} / 
vms / {vm:id} / disks / {disk:id} POST / storagedomains / {storagedomain:id} / vms / {vm:id} / import POST / storagedomains / {storagedomain:id} / vms / {vm:id} / register POST / tags GET / tags GET / tags / {tag:id} PUT / tags / {tag:id} DELETE / tags / {tag:id} POST / templates GET / templates GET / templates / {template:id} PUT / templates / {template:id} DELETE / templates / {template:id} GET / templates / {template:id} / cdroms GET / templates / {template:id} / cdroms / {cdrom:id} GET / templates / {template:id} / diskattachments GET / templates / {template:id} / diskattachments / {attachment:id} DELETE / templates / {template:id} / diskattachments / {attachment:id} POST / templates / {template:id} / export POST / templates / {template:id} / graphicsconsoles GET / templates / {template:id} / graphicsconsoles GET / templates / {template:id} / graphicsconsoles / {console:id} DELETE / templates / {template:id} / graphicsconsoles / {console:id} POST / templates / {template:id} / nics GET / templates / {template:id} / nics GET / templates / {template:id} / nics / {nic:id} PUT / templates / {template:id} / nics / {nic:id} DELETE / templates / {template:id} / nics / {nic:id} POST / templates / {template:id} / permissions GET / templates / {template:id} / permissions GET / templates / {template:id} / permissions / {permission:id} DELETE / templates / {template:id} / permissions / {permission:id} POST / templates / {template:id} / tags GET / templates / {template:id} / tags GET / templates / {template:id} / tags / {tag:id} DELETE / templates / {template:id} / tags / {tag:id} POST / templates / {template:id} / watchdogs GET / templates / {template:id} / watchdogs GET / templates / {template:id} / watchdogs / {watchdog:id} PUT / templates / {template:id} / watchdogs / {watchdog:id} DELETE / templates / {template:id} / watchdogs / {watchdog:id} POST / users GET / users GET / users / {user:id} DELETE / users / {user:id} GET / users / {user:id} / groups POST / users / {user:id} / permissions GET / users / {user:id} / permissions GET / users / {user:id} / permissions / {permission:id} DELETE / users / {user:id} / permissions / {permission:id} GET / users / {user:id} / roles GET / users / {user:id} / roles / {role:id} DELETE / users / {user:id} / roles / {role:id} PUT / users / {user:id} / roles / {role:id} POST / users / {user:id} / roles / {role:id} / permits GET / users / {user:id} / roles / {role:id} / permits GET / users / {user:id} / roles / {role:id} / permits / {permit:id} DELETE / users / {user:id} / roles / {role:id} / permits / {permit:id} POST / users / {user:id} / sshpublickeys GET / users / {user:id} / sshpublickeys GET / users / {user:id} / sshpublickeys / {key:id} PUT / users / {user:id} / sshpublickeys / {key:id} DELETE / users / {user:id} / sshpublickeys / {key:id} POST / users / {user:id} / tags GET / users / {user:id} / tags GET / users / {user:id} / tags / {tag:id} DELETE / users / {user:id} / tags / {tag:id} POST / vmpools GET / vmpools GET / vmpools / {pool:id} PUT / vmpools / {pool:id} DELETE / vmpools / {pool:id} POST / vmpools / {pool:id} / allocatevm POST / vmpools / {pool:id} / permissions GET / vmpools / {pool:id} / permissions GET / vmpools / {pool:id} / permissions / {permission:id} DELETE / vmpools / {pool:id} / permissions / {permission:id} POST / vms GET / vms GET / vms / {vm:id} PUT / vms / {vm:id} DELETE / vms / {vm:id} POST / vms / {vm:id} / affinitylabels GET / vms / {vm:id} / affinitylabels GET / vms / {vm:id} / affinitylabels / {label:id} DELETE / vms / {vm:id} 
/ affinitylabels / {label:id} GET / vms / {vm:id} / applications GET / vms / {vm:id} / applications / {application:id} POST / vms / {vm:id} / cancelmigration POST / vms / {vm:id} / cdroms GET / vms / {vm:id} / cdroms GET / vms / {vm:id} / cdroms / {cdrom:id} PUT / vms / {vm:id} / cdroms / {cdrom:id} POST / vms / {vm:id} / clone POST / vms / {vm:id} / commitsnapshot POST / vms / {vm:id} / detach POST / vms / {vm:id} / diskattachments GET / vms / {vm:id} / diskattachments GET / vms / {vm:id} / diskattachments / {attachment:id} DELETE / vms / {vm:id} / diskattachments / {attachment:id} PUT / vms / {vm:id} / diskattachments / {attachment:id} POST / vms / {vm:id} / export POST / vms / {vm:id} / freezefilesystems POST / vms / {vm:id} / graphicsconsoles GET / vms / {vm:id} / graphicsconsoles GET / vms / {vm:id} / graphicsconsoles / {console:id} DELETE / vms / {vm:id} / graphicsconsoles / {console:id} POST / vms / {vm:id} / graphicsconsoles / {console:id} / proxyticket POST / vms / {vm:id} / graphicsconsoles / {console:id} / remoteviewerconnectionfile POST / vms / {vm:id} / graphicsconsoles / {console:id} / ticket POST / vms / {vm:id} / hostdevices GET / vms / {vm:id} / hostdevices GET / vms / {vm:id} / hostdevices / {device:id} DELETE / vms / {vm:id} / hostdevices / {device:id} GET / vms / {vm:id} / katelloerrata GET / vms / {vm:id} / katelloerrata / {katelloerratum:id} POST / vms / {vm:id} / logon POST / vms / {vm:id} / maintenance POST / vms / {vm:id} / migrate POST / vms / {vm:id} / nics GET / vms / {vm:id} / nics GET / vms / {vm:id} / nics / {nic:id} PUT / vms / {vm:id} / nics / {nic:id} DELETE / vms / {vm:id} / nics / {nic:id} POST / vms / {vm:id} / nics / {nic:id} / activate POST / vms / {vm:id} / nics / {nic:id} / deactivate GET / vms / {vm:id} / nics / {nic:id} / networkfilterparameters POST / vms / {vm:id} / nics / {nic:id} / networkfilterparameters GET / vms / {vm:id} / nics / {nic:id} / networkfilterparameters / {parameter:id} PUT / vms / {vm:id} / nics / {nic:id} / networkfilterparameters / {parameter:id} DELETE / vms / {vm:id} / nics / {nic:id} / networkfilterparameters / {parameter:id} GET / vms / {vm:id} / nics / {nic:id} / reporteddevices GET / vms / {vm:id} / nics / {nic:id} / reporteddevices / {reporteddevice:id} GET / vms / {vm:id} / nics / {nic:id} / statistics GET / vms / {vm:id} / nics / {nic:id} / statistics / {statistic:id} POST / vms / {vm:id} / numanodes GET / vms / {vm:id} / numanodes GET / vms / {vm:id} / numanodes / {node:id} PUT / vms / {vm:id} / numanodes / {node:id} DELETE / vms / {vm:id} / numanodes / {node:id} POST / vms / {vm:id} / permissions GET / vms / {vm:id} / permissions GET / vms / {vm:id} / permissions / {permission:id} DELETE / vms / {vm:id} / permissions / {permission:id} POST / vms / {vm:id} / previewsnapshot POST / vms / {vm:id} / reboot POST / vms / {vm:id} / reordermacaddresses GET / vms / {vm:id} / reporteddevices GET / vms / {vm:id} / reporteddevices / {reporteddevice:id} GET / vms / {vm:id} / sessions GET / vms / {vm:id} / sessions / {session:id} POST / vms / {vm:id} / shutdown POST / vms / {vm:id} / snapshots GET / vms / {vm:id} / snapshots GET / vms / {vm:id} / snapshots / {snapshot:id} DELETE / vms / {vm:id} / snapshots / {snapshot:id} GET / vms / {vm:id} / snapshots / {snapshot:id} / cdroms GET / vms / {vm:id} / snapshots / {snapshot:id} / cdroms / {cdrom:id} GET / vms / {vm:id} / snapshots / {snapshot:id} / disks GET / vms / {vm:id} / snapshots / {snapshot:id} / disks / {disk:id} GET / vms / {vm:id} / snapshots / {snapshot:id} / nics GET / 
vms / {vm:id} / snapshots / {snapshot:id} / nics / {nic:id} POST / vms / {vm:id} / snapshots / {snapshot:id} / restore POST / vms / {vm:id} / start GET / vms / {vm:id} / statistics GET / vms / {vm:id} / statistics / {statistic:id} POST / vms / {vm:id} / stop POST / vms / {vm:id} / suspend POST / vms / {vm:id} / tags GET / vms / {vm:id} / tags GET / vms / {vm:id} / tags / {tag:id} DELETE / vms / {vm:id} / tags / {tag:id} POST / vms / {vm:id} / thawfilesystems POST / vms / {vm:id} / ticket POST / vms / {vm:id} / undosnapshot POST / vms / {vm:id} / watchdogs GET / vms / {vm:id} / watchdogs GET / vms / {vm:id} / watchdogs / {watchdog:id} PUT / vms / {vm:id} / watchdogs / {watchdog:id} DELETE / vms / {vm:id} / watchdogs / {watchdog:id} POST / vnicprofiles GET / vnicprofiles GET / vnicprofiles / {profile:id} PUT / vnicprofiles / {profile:id} DELETE / vnicprofiles / {profile:id} POST / vnicprofiles / {profile:id} / permissions GET / vnicprofiles / {profile:id} / permissions GET / vnicprofiles / {profile:id} / permissions / {permission:id} DELETE / vnicprofiles / {profile:id} / permissions / {permission:id}
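As a brief illustration of how one of the requests listed above might be invoked, the following curl command retrieves the clusters collection. The Manager host name (engine.example.com) and the admin@internal credentials are placeholders, and the use of basic authentication is only one option; confirm the base path and your authentication method (for example, OAuth tokens) against your own deployment and the rest of this guide:

curl -k -u "admin@internal:password" -H "Accept: application/xml" "https://engine.example.com/ovirt-engine/api/clusters"

The same pattern applies to the other verbs in the listing: a GET on a collection or on a specific {id} reads resources, a POST to a collection creates a resource from an XML or JSON body (sent with a matching Content-Type header), a PUT updates an existing resource, and a DELETE removes it.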
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/rest_api_guide/requests
Chapter 3. Configure RH-SSO
Chapter 3. Configure RH-SSO The RH-SSO installation process is outside the scope of this guide. It is assumed you have already installed RH-SSO on a node that is situated independently from the Red Hat OpenStack Platform director deployment. The RH-SSO URL will be identified by the $FED_RHSSO_URL variable. RH-SSO supports multi-tenancy, and uses realms to allow for separation between projects. As a result, RH-SSO operations always occur within the context of a realm. This guide uses the site-specific variable $FED_RHSSO_REALM to identify the RH-SSO realm being used. The RH-SSO realm can either be created ahead of time (as would be typical when RH-SSO is administered by an IT group), or the keycloak-httpd-client-install tool can create it for you if you have administrator privileges on the RH-SSO server. 3.1. Configure the RH-SSO Realm Once the RH-SSO realm is available, use the RH-SSO web console to configure that realm for user federation against IdM: Select $FED_RHSSO_REALM from the drop-down list in the upper left corner. Select User Federation from the left side Configure panel. From the Add provider ... drop down list in the upper right corner of the User Federation panel, select ldap . Fill in the following fields with these values, and be sure to substitute any $FED_ site-specific variable: Property Value Console Display Name Red Hat IDM Edit Mode READ_ONLY Sync Registrations Off Vendor Red Hat Directory Server Username LDAP attribute uid RDN LDAP attribute uid UUID LDAP attribute ipaUniqueID User Object Classes inetOrgPerson, organizationalPerson Connection URL LDAPS://$FED_IPA_HOST Users DN cn=users,cn=accounts,$FED_IPA_BASE_DN Authentication Type simple Bind DN uid=rhsso,cn=sysaccounts,cn=etc,$FED_IPA_BASE_DN Bind Credential $FED_IPA_RHSSO_SERVICE_PASSWD Use the Test connection and Test authentication buttons to check that user federation is working. Click Save at the bottom of the User Federation panel to save the new user federation provider. Click on the Mappers tab at the top of the Red Hat IDM user federation page you just created. Create a mapper to retrieve the user's group information; this means that a user's group memberships will be returned in the SAML assertion. You will be using group membership later to provide authorization in OpenStack. Click on the Create button in the upper right hand corner of the Mappers page. On the Add user federation mapper page, select group-ldap-mapper from the Mapper Type drop down list, and give it the name Group Mapper . Fill in the following fields with these values, and be sure to substitute any $FED_ site-specific variable. Property Value LDAP Groups DN cn=groups,cn=accounts,$FED_IPA_BASE_DN Group Name LDAP Attribute cn Group Object Classes groupOfNames Membership LDAP Attribute member Membership Attribute Type DN Mode READ_ONLY User Groups Retrieve Strategy GET_GROUPS_FROM_USER_MEMBEROF_ATTRIBUTE Click Save . 3.2. Add User Attributes for SAML Assertion The SAML assertion can send to keystone the properties that are bound to the user (for example, user metadata); these are called attributes in SAML. You will need to configure RH-SSO to return the required attributes in the assertion. Then, when keystone receives the SAML assertion, it will map those attributes into user metadata in a manner which keystone can then process. The process of mapping IdP attributes into keystone data is called Federated Mapping and will be covered later in this guide (see Section 4.21, "Create the Mapping File and Upload to Keystone" ).
RH-SSO calls the process of adding returned attributes Protocol Mapping . Protocol mapping is a property of the RH-SSO client (for example, the service provider (SP) added to the RH-SSO realm). Adding a given attribute to SAML follows a similar process in each case. In the RH-SSO administration web console: Select $FED_RHSSO_REALM from the drop-down list in the upper left corner. Select Clients from the left side Configure panel. Select the SP client that was set up by keycloak-httpd-client-install . It will be identified by its SAML EntityId . Select the Mappers tab from the horizontal list of tabs appearing at the top of the client panel. In the Mappers panel in the upper right are two buttons: Create and Add Builtin . Use one of these buttons to add a protocol mapper to the client. You can add any required attributes, but for this exercise you will only need the list of groups the user is a member of (because group membership is how you will authorize the user). 3.3. Add Group Information to the Assertion Click on the Create button in the Mappers panel. In the Create Protocol Mapper panel select Group list from the Mapper type drop-down list. Enter Group List as a name in the Name field. Enter groups as the name of the SAML attribute in the Group attribute name field. Note This is the name of the attribute as it will appear in the SAML assertion. When the keystone mapper searches for names in the Remote section of the mapping declaration, it is the SAML attribute names it is looking for. Whenever you add an attribute in RH-SSO to be passed in the assertion you will need to specify the SAML attribute name; it is the RH-SSO protocol mapper where that name is defined. In the SAML Attribute NameFormat field select Basic . In the Single Group Attribute toggle box select On . Click Save at the bottom of the panel. Note keycloak-httpd-client-install adds a group mapper when it runs.
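The groups attribute defined above is what the keystone federated mapping will consume. As a preview of the format covered in Section 4.21, a mapping rule for this setup might look roughly like the following sketch; the remote attribute names (MELLON_NAME_ID, MELLON_groups) assume that mod_auth_mellon fronts keystone and prefixes assertion attributes with MELLON_, and the group name openstack-users is purely illustrative:

[
    {
        "local": [
            {
                "user": {
                    "name": "{0}"
                },
                "group": {
                    "domain": {"name": "Default"},
                    "name": "federated_users"
                }
            }
        ],
        "remote": [
            {"type": "MELLON_NAME_ID"},
            {"type": "MELLON_groups", "any_one_of": ["openstack-users"]}
        ]
    }
]

The authoritative mapping file for your deployment is the one you create and upload in Section 4.21; this sketch only shows where the SAML attribute name configured in the protocol mapper reappears on the keystone side.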
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/federate_with_identity_service/configure_rh_sso
8.202. sos
8.202. sos 8.202.1. RHBA-2013:1688 - sos bug fix and enhancement update An updated sos package that fixes several bugs and adds various enhancements is now available for Red Hat Enterprise Linux 6. The sos package contains a set of tools that gather information from system hardware, logs and configuration files. The information can then be used for diagnostic purposes and debugging. Bug Fixes BZ#876309 The SELinux plug-in used some commands that are obsolete on modern Linux distributions so the sosreport utility was unable to collect some information from the SELinux tools and diagnostics. The plug-in has been updated to reflect changes in the SELinux tools and diagnostics so that sosreport is now able to collect more information from these components. BZ# 883811 versions of sos did not mask passwords in libvirt's XML configuration files and output of the corosync-obctl command so the passwords may have been disclosed to recipients of sosreport data. This update modifies the respective libvirt and corosync plug-ins so that passwords are now left out when collecting sos data from the aforementioned sources. BZ#888488 Previously, when executing external commands, sos always used the user's environment settings unless the environment was explicitly specified by a plug-in used for collecting sos data. Consequently, the collected output was a subject to locale and custom settings of the user running the sosreport command, which could be undesirable for further processing of the data. With this update, sos runs all external commands with a consistent LC_ALL setting and the command output is now collected using the C locale. BZ#888589 The sosreport utility previously verified all installed packages by default, which was highly demanding on CPU and memory usage. To avoid this situation, the rpm plug-in now contains a fixed list of packages to verify, including core system packages such as the kernel packages. BZ# 888724 versions of sos did not preserve the permissions of the collected files. File permissions in sosreport archives could have been inconsistent with file permissions on the host system, potentially misleading the user. With this update, sos preserves ownership and permissions of the collected files. BZ#913201 The sosreport utility previously could cause unexpected RPC failures on the local system by attempting to copy RPC channel files from the proc file system (/proc/net/rpc/*/channel). These files are now blacklisted from collection and the sosreport command can no longer interfere with active RPC communications. BZ#924925 The openswan plug-in previously collected output of the "ipesec barf" command to obtain VPN related diagnostic information. This could cause sosreport to appear unresponsive when running on systems that contained accounts with large UIDs and had installed a version of openswan affected by bug 771612. With this update, the ipsec barf command is no longer run by default, and the problem can no longer occur in this scenario, unless the barf functionality is explicitly enabled from the command line. BZ#947424 The devicemapper plug-in used an obsolete syntax to obtain information from the udev subsystem. The plug-in called the "udevinfo" command instead of the actual command, "udevadm info". This has been fixed with this update, and the correct property data can now be collected for the relevant block device types. BZ# 966602 The sosreport command incorrectly assumed that the tar program would always write data on standard output by default. 
Consequently, when the TAPE environment variable was set, data may have been unexpectedly written to a tape device, or to another location referenced by this variable. The sosreport utility has been modified to always call the tar command with the "-f" option, forcing data to be written to standard output. Users who set the TAPE variable in their environment can run sosreport without the risk of overwriting data on existing tape devices. BZ#986301 Previous versions of sos allowed passwords from luci configuration files to be collected by the cluster module so the passwords may have been disclosed to recipients of sosreport data. This update modifies the cluster module so that luci passwords are now left out from the collected data. BZ#986973 Previous versions of the sos package called the "wbinfo -u" command to collect user information from domains visible to the system Winbind configuration. However, the wbinfo command may have used very large amounts of memory and CPU time on large Active Directory installations with many trusted domains. As a consequence, sosreport appeared to be unresponsive and may have triggered out-of-memory conditions for other processes. The sosreport command has been modified to use the "--domain='.'" switch with the wbinfo command, which restricts data collection to the local domain. The problem no longer occurs in the described scenario. BZ#987103 Previous versions of sos collected the file /etc/krb5.keytab on systems where Kerberos authentication is configured. This file contains encrypted keys and is of limited diagnostic value. A summary of entries in the file is now obtained using the klist command instead. Enhancements BZ# 868711 The output of the "gluster volume geo-replication-status" command may be important for debugging problems related to Gluster geographic replication. Therefore, the gluster plug-in now collects this diagnostic output by default. BZ#907861 The ID mapping daemon (idmapd) controls identity mappings used by NFS services and may be important for diagnostic and troubleshooting efforts. Therefore, the idmapd.conf configuration file is now collected on NFS client and server hosts, and can be analyzed as part of the sosreport data. BZ#924338 The sosreport utility now allows collecting configuration files for the Open Hardware Platform Interface (OpenHPI) components. BZ# 924839 The sosreport utility now collects kernel log data (dmesg logs) from vmcore dump files that are found on the system. BZ# 989292 The sos package now supports collection of unified cluster diagnostic data with the crm_report tool. Users of sos are advised to upgrade to this updated package, which fixes these bugs and adds these enhancements.
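For example, the keytab summary that the updated plug-in now gathers is roughly what the klist utility prints for the system keytab; the exact options that sos passes are not listed in this note, so the following invocation is only illustrative:
klist -ket /etc/krb5.keytab
This lists the principal names, key version numbers, timestamps, and encryption types in /etc/krb5.keytab without exposing the encrypted key material itself.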
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/sos
Chapter 12. Monitor Red Hat JBoss Data Grid Applications in Red Hat JBoss EAP
Chapter 12. Monitor Red Hat JBoss Data Grid Applications in Red Hat JBoss EAP Red Hat JBoss Data Grid library applications (in the form of WAR or EAR files) can be deployed within JBoss Enterprise Application Platform 6 (or better) and then monitored using JBoss Operations Network. 12.1. Prerequisites The following are prerequisites to monitor a Red Hat JBoss Data Grid library application in JBoss Enterprise Application Platform: Install and configure JBoss Enterprise Application Platform 6 (or better). Install and configure JBoss Operations Network 3.2.2 (or better). Install and configure the JBoss Data Grid (6.3 or better) Library mode plug-in.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/chap-monitor_red_hat_jboss_data_grid_applications_in_red_hat_jboss_eap
Chapter 1. Overview of images
Chapter 1. Overview of images 1.1. Understanding containers, images, and image streams Containers, images, and image streams are important concepts to understand when you set out to create and manage containerized software. An image holds a set of software that is ready to run, while a container is a running instance of a container image. An image stream provides a way of storing different versions of the same basic image. Those different versions are represented by different tags on the same image name. 1.2. Images Containers in OpenShift Container Platform are based on OCI- or Docker-formatted container images . An image is a binary that includes all of the requirements for running a single container, as well as metadata describing its needs and capabilities. You can think of it as a packaging technology. Containers only have access to resources defined in the image unless you give the container additional access when creating it. By deploying the same image in multiple containers across multiple hosts and load balancing between them, OpenShift Container Platform can provide redundancy and horizontal scaling for a service packaged into an image. You can use the podman or docker CLI directly to build images, but OpenShift Container Platform also supplies builder images that assist with creating new images by adding your code or configuration to existing images. Because applications develop over time, a single image name can actually refer to many different versions of the same image. Each different image is referred to uniquely by its hash, a long hexadecimal number such as fd44297e2ddb050ec4f... , which is usually shortened to 12 characters, such as fd44297e2ddb . You can create , manage , and use container images. 1.3. Image registry An image registry is a content server that can store and serve container images. For example: registry.redhat.io A registry contains a collection of one or more image repositories, which contain one or more tagged images. Red Hat provides a registry at registry.redhat.io for subscribers. OpenShift Container Platform can also supply its own internal registry for managing custom container images. 1.4. Image repository An image repository is a collection of related container images and tags identifying them. For example, the OpenShift Container Platform Jenkins images are in the repository: docker.io/openshift/jenkins-2-centos7 1.5. Image tags An image tag is a label applied to a container image in a repository that distinguishes a specific image from other images in an image stream. Typically, the tag represents a version number of some sort. For example, here :v3.11.59-2 is the tag: registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2 You can add additional tags to an image. For example, an image might be assigned the tags :v3.11.59-2 and :latest . OpenShift Container Platform provides the oc tag command, which is similar to the docker tag command, but operates on image streams instead of directly on images. 1.6. Image IDs An image ID is a SHA (Secure Hash Algorithm) code that can be used to pull an image. A SHA image ID cannot change. A specific SHA identifier always references the exact same container image content. For example: docker.io/openshift/jenkins-2-centos7@sha256:ab312bda324 1.7. Containers The basic units of OpenShift Container Platform applications are called containers. 
Linux container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources. The word container is defined as a specific running or paused instance of a container image. Many application instances can be running in containers on a single host without visibility into each other's processes, files, network, and so on. Typically, each container provides a single service, often called a micro-service, such as a web server or a database, though containers can be used for arbitrary workloads. The Linux kernel has been incorporating capabilities for container technologies for years. The Docker project developed a convenient management interface for Linux containers on a host. More recently, the Open Container Initiative has developed open standards for container formats and container runtimes. OpenShift Container Platform and Kubernetes add the ability to orchestrate OCI- and Docker-formatted containers across multi-host installations. Though you do not directly interact with container runtimes when using OpenShift Container Platform, understanding their capabilities and terminology is important for understanding their role in OpenShift Container Platform and how your applications function inside of containers. Tools such as podman can be used to replace docker command-line tools for running and managing containers directly. Using podman, you can experiment with containers separately from OpenShift Container Platform. 1.8. Why use imagestreams An image stream and its associated tags provide an abstraction for referencing container images from within OpenShift Container Platform. The image stream and its tags allow you to see what images are available and ensure that you are using the specific image you need even if the image in the repository changes. Image streams do not contain actual image data, but present a single virtual view of related images, similar to an image repository. You can configure builds and deployments to watch an image stream for notifications when new images are added and react by performing a build or deployment, respectively. For example, if a deployment is using a certain image and a new version of that image is created, a deployment could be automatically performed to pick up the new version of the image. However, if the image stream tag used by the deployment or build is not updated, then even if the container image in the container image registry is updated, the build or deployment continues using the previous, presumably known-good image. The source images can be stored in any of the following: OpenShift Container Platform's integrated registry. An external registry, for example registry.redhat.io or Quay.io. Other image streams in the OpenShift Container Platform cluster. When you define an object that references an image stream tag, such as a build or deployment configuration, you point to an image stream tag and not the repository. When you build or deploy your application, OpenShift Container Platform queries the repository using the image stream tag to locate the associated ID of the image and uses that exact image. The image stream metadata is stored in the etcd instance along with other cluster information. Using image streams has several significant benefits: You can tag, roll back a tag, and quickly deal with images, without having to re-push using the command line. You can trigger builds and deployments when a new image is pushed to the registry.
Also, OpenShift Container Platform has generic triggers for other resources, such as Kubernetes objects. You can mark a tag for periodic re-import. If the source image has changed, that change is picked up and reflected in the image stream, which triggers the build or deployment flow, depending upon the build or deployment configuration. You can share images using fine-grained access control and quickly distribute images across your teams. If the source image changes, the image stream tag still points to a known-good version of the image, ensuring that your applications do not break unexpectedly. You can configure security around who can view and use the images through permissions on the image stream objects. Users that lack permission to read or list images on the cluster level can still retrieve the images tagged in a project using image streams. You can manage image streams, use image streams with Kubernetes resources, and trigger updates on image stream updates. 1.9. Image stream tags An image stream tag is a named pointer to an image in an image stream. An image stream tag is similar to a container image tag. 1.10. Image stream images An image stream image allows you to retrieve a specific container image from a particular image stream where it is tagged. An image stream image is an API resource object that pulls together some metadata about a particular image SHA identifier. 1.11. Image stream triggers An image stream trigger causes a specific action when an image stream tag changes. For example, importing can cause the value of the tag to change, which causes a trigger to fire when there are deployments, builds, or other resources listening for those. 1.12. How you can use the Cluster Samples Operator During the initial startup, the Operator creates the default samples resource to initiate the creation of the image streams and templates. You can use the Cluster Samples Operator to manage the sample image streams and templates stored in the openshift namespace. As a cluster administrator, you can use the Cluster Samples Operator to: Configure the Operator. Use the Operator with an alternate registry. 1.13. About templates A template is a definition of an object to be replicated. You can use templates to build and deploy configurations. 1.14. How you can use Ruby on Rails As a developer, you can use Ruby on Rails to: Write your application: Set up a database. Create a welcome page. Configure your application for OpenShift Container Platform. Store your application in Git. Deploy your application in OpenShift Container Platform: Create the database service. Create the frontend service. Create a route for your application.
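To tie the image stream tag concepts above together, the following oc tag invocation creates or updates an image stream tag from an external image; the project and stream names are hypothetical examples:
oc tag docker.io/openshift/jenkins-2-centos7:latest myproject/jenkins:latest
Builds or deployments in myproject that reference the jenkins:latest image stream tag then resolve it to the exact image the tag points to, as described in the image streams sections above.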
[ "registry.redhat.io", "docker.io/openshift/jenkins-2-centos7", "registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2", "docker.io/openshift/jenkins-2-centos7@sha256:ab312bda324" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/images/overview-of-images
Chapter 5. Uninstalling OpenShift Data Foundation
Chapter 5. Uninstalling OpenShift Data Foundation 5.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledge base article on Uninstalling OpenShift Data Foundation .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_using_amazon_web_services/uninstalling_openshift_data_foundation
Chapter 5. Exporting applications
Chapter 5. Exporting applications As a developer, you can export your application in the ZIP file format. Based on your needs, import the exported application to another project in the same cluster or a different cluster by using the Import YAML option in the +Add view. Exporting your application helps you to reuse your application resources and saves you time. 5.1. Prerequisites You have installed the gitops-primer Operator from the OperatorHub. Note The Export application option is disabled in the Topology view even after installing the gitops-primer Operator. You have created an application in the Topology view to enable Export application. 5.2. Procedure In the Developer perspective, perform one of the following steps: Navigate to the +Add view and click Export application in the Application portability tile. Navigate to the Topology view and click Export application. Click OK in the Export Application dialog box. A notification opens to confirm that the export of resources from your project has started. Optional steps that you might need to perform in the following scenarios: If you have started exporting an incorrect application, click Export application Cancel Export. If your export is already in progress and you want to start a fresh export, click Export application Restart Export. If you want to view logs associated with exporting an application, click Export application and the View Logs link. After a successful export, click Download in the dialog box to download application resources in ZIP format onto your machine.
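As an alternative to the Import YAML option, the downloaded archive can also be unpacked and applied from the command line; the file, directory, and project names below are hypothetical placeholders, and the exported manifests might need adjustment for the target project:
unzip export.zip -d exported-app
oc apply -f exported-app/ -n target-project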
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/building_applications/odc-exporting-applications
Chapter 14. Project networking with IPv6
Chapter 14. Project networking with IPv6 14.1. IPv6 subnet options When you create IPv6 subnets in a Red Hat OpenStack Platform (RHOSP) project network, you can specify address mode and Router Advertisement mode to obtain a particular result as described in the following table. Note RHOSP does not support IPv6 prefix delegation in ML2/OVN deployments. You must set the Global Unicast Address prefix manually. RA Mode Address Mode Result ipv6_ra_mode=not set ipv6-address-mode=slaac The instance receives an IPv6 address from the external router (not managed by OpenStack Networking) using Stateless Address Autoconfiguration (SLAAC). Note OpenStack Networking supports only EUI-64 IPv6 address assignment for SLAAC. This allows for simplified IPv6 networking, as hosts self-assign addresses based on the base 64-bits plus the MAC address. You cannot create subnets with a different netmask and address_assign_type of SLAAC. ipv6_ra_mode=not set ipv6-address-mode=dhcpv6-stateful The instance receives an IPv6 address and optional information from OpenStack Networking (dnsmasq) using DHCPv6 stateful. ipv6_ra_mode=not set ipv6-address-mode=dhcpv6-stateless The instance receives an IPv6 address from the external router using SLAAC, and optional information from OpenStack Networking (dnsmasq) using DHCPv6 stateless. ipv6_ra_mode=slaac ipv6-address-mode=not-set The instance uses SLAAC to receive an IPv6 address from OpenStack Networking (radvd). ipv6_ra_mode=dhcpv6-stateful ipv6-address-mode=not-set The instance receives an IPv6 address and optional information from an external DHCPv6 server using DHCPv6 stateful. ipv6_ra_mode=dhcpv6-stateless ipv6-address-mode=not-set The instance receives an IPv6 address from OpenStack Networking (radvd) using SLAAC, and optional information from an external DHCPv6 server using DHCPv6 stateless. ipv6_ra_mode=slaac ipv6-address-mode=slaac The instance receives an IPv6 address from OpenStack Networking (radvd) using SLAAC. ipv6_ra_mode=dhcpv6-stateful ipv6-address-mode=dhcpv6-stateful The instance receives an IPv6 address from OpenStack Networking (dnsmasq) using DHCPv6 stateful, and optional information from OpenStack Networking (dnsmasq) using DHCPv6 stateful. ipv6_ra_mode=dhcpv6-stateless ipv6-address-mode=dhcpv6-stateless The instance receives an IPv6 address from OpenStack Networking (radvd) using SLAAC, and optional information from OpenStack Networking (dnsmasq) using DHCPv6 stateless. 14.2. Create an IPv6 subnet using Stateful DHCPv6 You can create an IPv6 subnet in a Red Hat OpenStack Platform (RHOSP) project network. For example, you can create an IPv6 subnet using Stateful DHCPv6 in a network named database-servers in a project named QA. Procedure Retrieve the project ID of the project where you want to create the IPv6 subnet. These values are unique between OpenStack deployments, so your values differ from the values in this example. Retrieve a list of all networks present in OpenStack Networking (neutron), and note the name of the network where you want to host the IPv6 subnet: Include the project ID, network name, and ipv6 address mode in the openstack subnet create command: Validation steps Validate this configuration by reviewing the network list.
Note that the entry for database-servers now reflects the newly created IPv6 subnet: Result As a result of this configuration, instances that the QA project creates can receive a DHCP IPv6 address when added to the database-servers subnet: Additional resources To find the Router Advertisement mode and address mode combinations to achieve a particular result in an IPv6 subnet, see IPv6 subnet options in the Networking Guide .
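For comparison, a subnet that uses SLAAC for both Router Advertisements and address assignment (the ipv6_ra_mode=slaac, ipv6-address-mode=slaac row in the table above) could be created with a command such as the following. The prefix and subnet name are examples only; the prefix must be a /64 because SLAAC addresses are derived with EUI-64:
openstack subnet create --ip-version 6 --ipv6-ra-mode slaac --ipv6-address-mode slaac --network database-servers --subnet-range fdf8:f53b:82e4::/64 slaac-subnet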
[ "openstack project list +----------------------------------+----------+ | ID | Name | +----------------------------------+----------+ | 25837c567ed5458fbb441d39862e1399 | QA | | f59f631a77264a8eb0defc898cb836af | admin | | 4e2e1951e70643b5af7ed52f3ff36539 | demo | | 8561dff8310e4cd8be4b6fd03dc8acf5 | services | +----------------------------------+----------+", "openstack network list +--------------------------------------+------------------+-------------------------------------------------------------+ | id | name | subnets | +--------------------------------------+------------------+-------------------------------------------------------------+ | 8357062a-0dc2-4146-8a7f-d2575165e363 | private | c17f74c4-db41-4538-af40-48670069af70 10.0.0.0/24 | | 31d61f7d-287e-4ada-ac29-ed7017a54542 | public | 303ced03-6019-4e79-a21c-1942a460b920 172.24.4.224/28 | | 6aff6826-4278-4a35-b74d-b0ca0cbba340 | database-servers | | +--------------------------------------+------------------+-------------------------------------------------------------+", "openstack subnet create --ip-version 6 --ipv6-address-mode dhcpv6-stateful --project 25837c567ed5458fbb441d39862e1399 --network database-servers --subnet-range fdf8:f53b:82e4::53/125 subnet_name Created a new subnet: +-------------------+--------------------------------------------------------------+ | Field | Value | +-------------------+--------------------------------------------------------------+ | allocation_pools | {\"start\": \"fdf8:f53b:82e4::52\", \"end\": \"fdf8:f53b:82e4::56\"} | | cidr | fdf8:f53b:82e4::53/125 | | dns_nameservers | | | enable_dhcp | True | | gateway_ip | fdf8:f53b:82e4::51 | | host_routes | | | id | cdfc3398-997b-46eb-9db1-ebbd88f7de05 | | ip_version | 6 | | ipv6_address_mode | dhcpv6-stateful | | ipv6_ra_mode | | | name | | | network_id | 6aff6826-4278-4a35-b74d-b0ca0cbba340 | | tenant_id | 25837c567ed5458fbb441d39862e1399 | +-------------------+--------------------------------------------------------------+", "openstack network list +--------------------------------------+------------------+-------------------------------------------------------------+ | id | name | subnets | +--------------------------------------+------------------+-------------------------------------------------------------+ | 6aff6826-4278-4a35-b74d-b0ca0cbba340 | database-servers | cdfc3398-997b-46eb-9db1-ebbd88f7de05 fdf8:f53b:82e4::50/125 | | 8357062a-0dc2-4146-8a7f-d2575165e363 | private | c17f74c4-db41-4538-af40-48670069af70 10.0.0.0/24 | | 31d61f7d-287e-4ada-ac29-ed7017a54542 | public | 303ced03-6019-4e79-a21c-1942a460b920 172.24.4.224/28 | +--------------------------------------+------------------+-------------------------------------------------------------+", "openstack server list +--------------------------------------+------------+--------+------------+-------------+-------------------------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+------------+--------+------------+-------------+-------------------------------------+ | fad04b7a-75b5-4f96-aed9-b40654b56e03 | corp-vm-01 | ACTIVE | - | Running | database-servers=fdf8:f53b:82e4::52 | +--------------------------------------+------------+--------+------------+-------------+-------------------------------------+" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/networking_guide/proj-network-ipv6_rhosp-network
Data Grid Operator Guide
Data Grid Operator Guide Red Hat Data Grid 8.4 Create Data Grid clusters on OpenShift Red Hat Customer Content Services
[ "cp redhat-datagrid-cli kubectl-infinispan", "plugin list The following compatible plugins are available: /path/to/kubectl-infinispan", "infinispan --help", "infinispan install --channel=8.4.x --source=redhat-operators --source-namespace=openshift-marketplace", "get pods -n openshift-operators | grep infinispan-operator NAME READY STATUS infinispan-operator-<id> 1/1 Running", "new-project USD{INSTALL_NAMESPACE} 1 new-project USD{WATCH_NAMESPACE} 2", "apply -f - << EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: datagrid namespace: USD{INSTALL_NAMESPACE} EOF", "apply -f - << EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: datagrid namespace: USD{INSTALL_NAMESPACE} spec: targetNamespaces: - USD{WATCH_NAMESPACE} EOF", "apply -f - << EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: datagrid-operator namespace: USD{INSTALL_NAMESPACE} spec: channel: 8.4.x installPlanApproval: Automatic name: datagrid source: redhat-operators sourceNamespace: openshift-marketplace EOF", "get pods -n USD{INSTALL_NAMESPACE} NAME READY STATUS infinispan-operator-<id> 1/1 Running", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 2 version: <Data Grid_version> service: type: DataGrid", "infinispan create cluster --replicas=3 -Pservice.type=DataGrid infinispan", "get pods -w", "infinispan delete cluster infinispan", "get infinispan -o yaml", "conditions: - message: 'View: [infinispan-0, infinispan-1]' status: \"True\" type: wellFormed", "wait --for condition=wellFormed --timeout=240s infinispan/infinispan", "logs infinispan-0 | grep ISPN000094", "INFO [org.infinispan.CLUSTER] (MSC service thread 1-2) ISPN000094: Received new cluster view for channel infinispan: [infinispan-0|0] (1) [infinispan-0] INFO [org.infinispan.CLUSTER] (jgroups-3,infinispan-0) ISPN000094: Received new cluster view for channel infinispan: [infinispan-0|1] (2) [infinispan-0, infinispan-1]", "cat > cr_minimal.yaml<<EOF apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 2 version: <Data Grid_version> service: type: DataGrid EOF", "apply -f my_infinispan.yaml", "get pods -w", "spec: replicas: 0", "get infinispan infinispan -o=jsonpath='{.status.replicasWantedAtRestart}'", "spec: replicas: 6", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-config namespace: rhdg-namespace data: infinispan-config.xml: > <infinispan> <!-- Custom configuration goes here. 
--> </infinispan>", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-config namespace: rhdg-namespace data: infinispan-config.yaml: > infinispan: # Custom configuration goes here.", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-config namespace: rhdg-namespace data: infinispan-config.json: > { \"infinispan\": { } }", "apply -f cluster-config.yaml", "spec: configMapName: \"cluster-config\"", "<infinispan> <cache-container> <distributed-cache-configuration name=\"base-template\"> <expiration lifespan=\"5000\"/> </distributed-cache-configuration> <distributed-cache-configuration name=\"extended-template\" configuration=\"base-template\"> <encoding media-type=\"application/x-protostream\"/> <expiration lifespan=\"10000\" max-idle=\"1000\"/> </distributed-cache-configuration> </cache-container> </infinispan>", "infinispan: cacheContainer: caches: base-template: distributedCacheConfiguration: expiration: lifespan: \"5000\" extended-template: distributedCacheConfiguration: configuration: \"base-template\" encoding: mediaType: \"application/x-protostream\" expiration: lifespan: \"10000\" maxIdle: \"1000\"", "{ \"infinispan\" : { \"cache-container\" : { \"caches\" : { \"base-template\" : { \"distributed-cache-configuration\" : { \"expiration\" : { \"lifespan\" : \"5000\" } } }, \"extended-template\" : { \"distributed-cache-configuration\" : { \"configuration\" : \"base-template\", \"encoding\": { \"media-type\": \"application/x-protostream\" }, \"expiration\" : { \"lifespan\" : \"10000\", \"max-idle\" : \"1000\" } } } } } } }", "apiVersion: v1 kind: ConfigMap metadata: name: logging-config namespace: rhdg-namespace data: infinispan-config.xml: > <infinispan> <!-- Add custom Data Grid configuration if required. --> <!-- You can provide either Data Grid configuration, logging configuration, or both. --> </infinispan> log4j.xml: > <?xml version=\"1.0\" encoding=\"UTF-8\"?> <Configuration name=\"ServerConfig\" monitorInterval=\"60\" shutdownHook=\"disable\"> <Appenders> <!-- Colored output on the console --> <Console name=\"STDOUT\"> <PatternLayout pattern=\"%d{HH:mm:ss,SSS} %-5p (%t) [%c] %m%throwable%n\"/> </Console> </Appenders> <Loggers> <Root level=\"INFO\"> <AppenderRef ref=\"STDOUT\" level=\"TRACE\"/> </Root> <Logger name=\"org.infinispan\" level=\"TRACE\"/> </Loggers> </Configuration>", "apiVersion: v1 kind: Secret metadata: name: user-secret type: Opaque data: postgres_cred: sensitive-value mysql_cred: sensitive-value2", "apply -f user-secret.yaml", "spec: security: credentialStoreSecretName: user-secret", "<credential-store> <credential-reference store=\"credentials\" alias=\"postgres_cred\"/> </credential-store>", "spec: version: 8.4.6-1 upgrades: type: Shutdown", "spec: version: 8.4.6-1 upgrades: type: HotRodRolling", "get infinispan <cr_name> -o yaml", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 2 version: <Data Grid_version> service: type: DataGrid", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan annotations: infinispan.org/monitoring: 'true' spec: replicas: 6 version: 8.4.6-1 upgrades: type: Shutdown service: type: DataGrid container: storage: 2Gi # The ephemeralStorage and storageClassName fields are mutually exclusive. 
ephemeralStorage: false storageClassName: my-storage-class sites: local: name: azure expose: type: LoadBalancer locations: - name: azure url: openshift://api.azure.host:6443 secretName: azure-token - name: aws clusterName: infinispan namespace: rhdg-namespace url: openshift://api.aws.host:6443 secretName: aws-token security: endpointSecretName: endpoint-identities endpointEncryption: type: Secret certSecretName: tls-secret container: extraJvmOpts: \"-XX:NativeMemoryTracking=summary\" cpu: \"2000m:1000m\" memory: \"2Gi:1Gi\" logging: categories: org.infinispan: debug org.jgroups: debug org.jgroups.protocols.TCP: error org.jgroups.protocols.relay.RELAY2: error expose: type: LoadBalancer configMapName: \"my-cluster-config\" configListener: enabled: true affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 podAffinityTerm: labelSelector: matchLabels: app: infinispan-pod clusterName: infinispan infinispan_cr: infinispan topologyKey: \"kubernetes.io/hostname\"", "spec: service: type: DataGrid container: storage: 2Gi ephemeralStorage: true", "spec: service: type: DataGrid container: storage: 2Gi storageClassName: my-storage-class", "spec: container: cpu: \"2000m:1000m\" memory: \"2Gi:1Gi\"", "spec: container: extraJvmOpts: \"-<option>=<value>\" routerExtraJvmOpts: \"-<option>=<value>\" cliExtraJvmOpts: \"-<option>=<value>\"", "spec: service: container: readinessProbe: failureThreshold: 1 initialDelaySeconds: 1 periodSeconds: 1 successThreshold: 1 timeoutSeconds: 1 livenessProbe: failureThreshold: 1 initialDelaySeconds: 1 periodSeconds: 1 successThreshold: 1 timeoutSeconds: 1 startupProbe: failureThreshold: 1 initialDelaySeconds: 1 periodSeconds: 1 successThreshold: 1 timeoutSeconds: 1", "apiVersion: scheduling.k8s.io/v1 kind: PriorityClass metadata: name: high-priority value: 1000000 globalDefault: false description: \"Use this priority class for high priority service pods only.\"", "create -f high-priority.yaml", "kind: Infinispan spec: scheduling: affinity: priorityClassName: \"high-priority\"", "spec: logging: categories: org.infinispan: debug org.jgroups: debug", "logs -f USDPOD_NAME", "extraJvmOpts: \"-Xlog:gc*:stdout:time,level,tags\"", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 2 version: <Data Grid_version> service: type: Cache", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan annotations: infinispan.org/monitoring: 'true' spec: replicas: 2 version: 8.4.6-1 upgrades: type: Shutdown service: type: Cache replicationFactor: 2 autoscale: maxMemUsagePercent: 70 maxReplicas: 5 minMemUsagePercent: 30 minReplicas: 2 security: endpointSecretName: endpoint-identities endpointEncryption: type: Secret certSecretName: tls-secret container: extraJvmOpts: \"-XX:NativeMemoryTracking=summary\" cpu: \"2000m:1000m\" memory: \"2Gi:1Gi\" logging: categories: org.infinispan: trace org.jgroups: trace expose: type: LoadBalancer affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 podAffinityTerm: labelSelector: matchLabels: app: infinispan-pod clusterName: infinispan infinispan_cr: infinispan topologyKey: \"kubernetes.io/hostname\"", "spec: service: type: Cache autoscale: disabled: false maxMemUsagePercent: 70 maxReplicas: 5 minMemUsagePercent: 30 minReplicas: 2", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: annotations: infinispan.org/targetAnnotations: service-annotation1, service-annotation2 infinispan.org/podTargetAnnotations: pod-annotation1, 
pod-annotation2 infinispan.org/routerAnnotations: router-annotation1, router-annotation2 service-annotation1: value service-annotation2: value pod-annotation1: value pod-annotation2: value router-annotation1: value router-annotation2: value", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: annotations: infinispan.org/targetLabels: service-label1, service-label2 infinispan.org/podTargetLabels: pod-label1, pod-label2 labels: service-label1: value service-label2: value pod-label1: value pod-label2: value # The operator does not attach these labels to resources. my-label: my-value environment: development", "edit subscription datagrid -n openshift-operators", "spec: config: env: - name: INFINISPAN_OPERATOR_TARGET_LABELS value: | {\"service-label1\":\"value\", service-label1\":\"value\"} - name: INFINISPAN_OPERATOR_POD_TARGET_LABELS value: | {\"pod-label1\":\"value\", \"pod-label2\":\"value\"} - name: INFINISPAN_OPERATOR_TARGET_ANNOTATIONS value: | {\"service-annotation1\":\"value\", \"service-annotation2\":\"value\"} - name: INFINISPAN_OPERATOR_POD_TARGET_ANNOTATIONS value: | {\"pod-annotation1\":\"value\", \"pod-annotation2\":\"value\"}", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: datagrid namespace: openshift-operators spec: channel: 8.4.x installPlanApproval: Automatic name: datagrid source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: ADDITIONAL_VARS value: \"[\\\"VAR_NAME\\\", \\\"ANOTHER_VAR\\\"]\" - name: VAR_NAME value: USD(VAR_NAME_VALUE) - name: ANOTHER_VAR value: USD(ANOTHER_VAR_VALUE)", "kind: Subscription spec: config: env: - name: ADDITIONAL_VARS value: \"[\\\"TZ\\\"]\" - name: TZ value: \"JST-9\"", "apply -f subscription-datagrid.yaml", "get subscription datagrid -n openshift-operators -o jsonpath='{.spec.config.env[*].name}'", "edit subscription datagrid -n openshift-operators", "get secret infinispan-generated-secret", "get secret infinispan-generated-secret -o jsonpath=\"{.data.identities\\.yaml}\" | base64 --decode", "credentials: - username: myfirstusername password: changeme-one - username: mysecondusername password: changeme-two", "create secret generic --from-file=identities.yaml connect-secret", "spec: security: endpointSecretName: connect-secret", "patch secret infinispan-generated-operator-secret -p='{\"stringData\":{\"password\": \"supersecretoperatorpassword\"}}'", "spec: security: endpointAuthentication: false", "spec: security: endpointEncryption: type: Secret certSecretName: tls-secret clientCert: Validate clientCertSecretName: infinispan-client-cert-secret", "apiVersion: v1 kind: Secret metadata: name: infinispan-client-cert-secret type: Opaque stringData: truststore-password: changme data: truststore.p12: \"<base64_encoded_PKCS12_trust_store>\"", "apiVersion: v1 kind: Secret metadata: name: infinispan-client-cert-secret type: Opaque stringData: truststore-password: changme data: trust.ca: \"<base64_encoded_CA_certificate>\" trust.cert.client1: \"<base64_encoded_client_certificate>\" trust.cert.client2: \"<base64_encoded_client_certificate>\"", "spec: security: endpointEncryption: type: Service certServiceName: service.beta.openshift.io certSecretName: infinispan-cert-secret", "get secret infinispan-cert-secret -o jsonpath='{.data.tls\\.crt}' | base64 --decode > tls.crt", "spec: security: endpointEncryption: type: None", "apply -f tls_secret.yaml", "spec: security: endpointEncryption: type: Secret certSecretName: tls-secret", "apiVersion: v1 kind: Secret metadata: name: tls-secret type: 
Opaque stringData: alias: server password: changeme data: keystore.p12: \"MIIKDgIBAzCCCdQGCSqGSIb3DQEHA...\"", "apiVersion: v1 kind: Secret metadata: name: tls-secret type: Opaque data: tls.key: \"LS0tLS1CRUdJTiBQUk ...\" tls.crt: \"LS0tLS1CRUdJTiBDRVl ...\"", "spec: security: authorization: enabled: true", "credentials: - username: admin password: changeme - username: my-user-1 password: changeme roles: - admin - username: my-user-2 password: changeme roles: - monitor", "delete secret connect-secret --ignore-not-found create secret generic --from-file=identities.yaml connect-secret", "spec: security: endpointSecretName: connect-secret", "spec: security: authorization: enabled: true roles: - name: my-role-1 permissions: - ALL - name: my-role-2 permissions: - READ - WRITE", "metadata: name: infinispan", "get services", "spec: expose: type: LoadBalancer port: 65535", "get services | grep external", "spec: expose: type: NodePort nodePort: 30000", "get services | grep external", "spec: expose: type: Route host: www.example.org", "get routes", "create sa -n <namespace> lon", "policy add-role-to-user view -n <namespace> -z lon", "adm policy add-cluster-role-to-user cluster-reader -z lon -n <namespace>", "apiVersion: v1 kind: Secret metadata: name: ispn-xsite-sa-token 1 annotations: kubernetes.io/service-account.name: \"<service-account>\" 2 type: kubernetes.io/service-account-token", "-n <namespace> create -f sa-token.yaml", "-n <namespace> get secrets ispn-xsite-sa-token -o jsonpath=\"{.data.token}\" | base64 -d", "-n <namespace> create secret generic <token-secret> --from-literal=token=<token>", "-n <namespace> create token <service-account>", "-n <namespace> create secret generic <token-secret> --from-literal=token=<token>", "-n <namespace> delete secrets <token-secret>", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 3 version: <Data Grid_version> service: type: DataGrid sites: local: name: LON expose: type: LoadBalancer port: 65535 maxRelayNodes: 1 locations: - name: NYC clusterName: <nyc_cluster_name> namespace: <nyc_cluster_namespace> url: openshift://api.rhdg-nyc.openshift-aws.myhost.com:6443 secretName: nyc-token logging: categories: org.jgroups.protocols.TCP: error org.jgroups.protocols.relay.RELAY2: error", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: nyc-cluster spec: replicas: 2 version: <Data Grid_version> service: type: DataGrid sites: local: name: NYC expose: type: LoadBalancer port: 65535 maxRelayNodes: 1 locations: - name: LON clusterName: infinispan namespace: rhdg-namespace url: openshift://api.rhdg-lon.openshift-aws.myhost.com:6443 secretName: lon-token logging: categories: org.jgroups.protocols.TCP: error org.jgroups.protocols.relay.RELAY2: error", "spec: logging: categories: org.jgroups.protocols.TCP: error org.jgroups.protocols.relay.RELAY2: error", "get infinispan -o yaml", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 3 version: <Data Grid_version> service: type: DataGrid sites: local: name: LON expose: type: LoadBalancer port: 65535 maxRelayNodes: 1 locations: - name: NYC url: infinispan+xsite://infinispan-nyc.myhost.com:7900 logging: categories: org.jgroups.protocols.TCP: error org.jgroups.protocols.relay.RELAY2: error", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 2 version: <Data Grid_version> service: type: DataGrid sites: local: name: NYC expose: type: LoadBalancer port: 65535 maxRelayNodes: 1 locations: - name: 
LON url: infinispan+xsite://infinispan-lon.myhost.com logging: categories: org.jgroups.protocols.TCP: error org.jgroups.protocols.relay.RELAY2: error", "spec: logging: categories: org.jgroups.protocols.TCP: error org.jgroups.protocols.relay.RELAY2: error", "get infinispan -o yaml", "spec: service: type: DataGrid sites: local: name: LON discovery: launchGossipRouter: true memory: \"2Gi:1Gi\" cpu: \"2000m:1000m\"", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 3 service: type: DataGrid sites: local: name: LON discovery: launchGossipRouter: false locations: - name: NYC url: infinispan+xsite://infinispan-nyc.myhost.com:7900", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 3 service: type: DataGrid sites: local: name: NYC locations: - name: LON", "spec: service: type: DataGrid sites: local: name: LON expose: type: LoadBalancer maxRelayNodes: 1 locations: - name: NYC clusterName: <nyc_cluster_name> namespace: <nyc_cluster_namespace> url: openshift://api.site-b.devcluster.openshift.com:6443 secretName: nyc-token", "spec: service: type: DataGrid sites: local: name: LON expose: type: LoadBalancer port: 65535 maxRelayNodes: 1 locations: - name: NYC url: infinispan+xsite://infinispan-nyc.myhost.com:7900", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 2 version: <Data Grid_version> expose: type: LoadBalancer service: type: DataGrid sites: local: name: SiteA # encryption: protocol: TLSv1.3 transportKeyStore: secretName: transport-tls-secret alias: transport filename: keystore.p12 routerKeyStore: secretName: router-tls-secret alias: router filename: keystore.p12 trustStore: secretName: truststore-tls-secret filename: truststore.p12 locations: #", "apiVersion: v1 kind: Secret metadata: name: tls-secret type: Opaque stringData: password: changeme type: pkcs12 data: <file-name>: \"MIIKDgIBAzCCCdQGCSqGSIb3DQEHA...\"", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: example-clustera spec: replicas: 1 expose: type: LoadBalancer service: type: DataGrid sites: local: name: SiteA expose: type: ClusterIP maxRelayNodes: 1 locations: - name: SiteB clusterName: example-clusterb namespace: cluster-namespace", "get infinispan -o yaml", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan annotations: infinispan.org/monitoring: 'true'", "vendor_cache_manager_default_cluster_size", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan annotations: infinispan.org/monitoring: 'false'", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan annotations: infinispan.org/targetLabels: \"label1,label2,label3\" infinispan.org/serviceMonitorTargetLabels: \"label1,label2\"", "apiVersion: v1 kind: ServiceAccount metadata: name: infinispan-monitoring", "apply -f service-account.yaml", "adm policy add-cluster-role-to-user cluster-monitoring-view -z infinispan-monitoring", "serviceaccounts get-token infinispan-monitoring", "apiVersion: integreatly.org/v1alpha1 kind: GrafanaDataSource metadata: name: grafanadatasource spec: name: datasource.yaml datasources: - access: proxy editable: true isDefault: true jsonData: httpHeaderName1: Authorization timeInterval: 5s tlsSkipVerify: true name: Prometheus secureJsonData: httpHeaderValue1: >- Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6Imc4O type: prometheus url: 'https://thanos-querier.openshift-monitoring.svc.cluster.local:9091'", "apply -f grafana-datasource.yaml", "apiVersion: v1 kind: 
ConfigMap metadata: name: infinispan-operator-config data: grafana.dashboard.namespace: infinispan grafana.dashboard.name: infinispan grafana.dashboard.monitoring.key: middleware", "apply -f infinispan-operator-config.yaml", "get routes grafana-route -o jsonpath=https://\"{.spec.host}\"", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: jmx: enabled: true", "get secret infinispan-generated-operator-secret -o jsonpath=\"{.data.identities\\.yaml}\" | base64 --decode", "apiVersion: operator.cryostat.io/v1beta1 kind: Cryostat metadata: name: cryostat-sample spec: minimal: false enableCertManager: true", "wait -n <namespace> --for=condition=MainDeploymentAvailable cryostat/cryostat-sample", "-n <namespace> get cryostat cryostat-sample", "get secret infinispan-generated-operator-secret -o jsonpath=\"{.data.identities\\.yaml}\" | base64 --decode", "target.labels['infinispan_cr'] == '<cluster_name>'", "spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 podAffinityTerm: labelSelector: matchLabels: app: infinispan-pod clusterName: <cluster_name> infinispan_cr: <cluster_name> topologyKey: \"kubernetes.io/hostname\"", "spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: app: infinispan-pod clusterName: <cluster_name> infinispan_cr: <cluster_name> topologyKey: \"topology.kubernetes.io/hostname\"", "spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 podAffinityTerm: labelSelector: matchLabels: app: infinispan-pod clusterName: <cluster_name> infinispan_cr: <cluster_name> topologyKey: \"topology.kubernetes.io/zone\" - weight: 90 podAffinityTerm: labelSelector: matchLabels: app: infinispan-pod clusterName: <cluster_name> infinispan_cr: <cluster_name> topologyKey: \"kubernetes.io/hostname\"", "spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: app: infinispan-pod clusterName: <cluster_name> infinispan_cr: <cluster_name> topologyKey: \"topology.kubernetes.io/zone\"", "apply -f mycache.yaml cache.infinispan.org/mycachedefinition created", "apiVersion: infinispan.org/v2alpha1 kind: Cache metadata: name: mycachedefinition spec: clusterName: infinispan name: myXMLcache template: <distributed-cache mode=\"SYNC\" statistics=\"true\"><encoding media-type=\"application/x-protostream\"/><persistence><file-store/></persistence></distributed-cache>", "apiVersion: infinispan.org/v2alpha1 kind: Cache metadata: name: mycachedefinition spec: clusterName: infinispan name: myYAMLcache template: |- distributedCache: mode: \"SYNC\" owners: \"2\" statistics: \"true\" encoding: mediaType: \"application/x-protostream\" persistence: fileStore: ~", "spec: updates: strategy: recreate", "apply -f mycache.yaml", "<distributed-cache name=\"persistent-cache\" mode=\"SYNC\"> <encoding media-type=\"application/x-protostream\"/> <persistence> <file-store/> </persistence> </distributed-cache>", "[//containers/default]> create cache --template=default mycache", "<distributed-cache name=\"default\" mode=\"SYNC\" owners=\"2\"> <memory storage=\"OFF_HEAP\" max-size=\"<maximum_size_in_bytes>\" when-full=\"REMOVE\" /> <partition-handling when-split=\"ALLOW_READ_WRITES\" merge-policy=\"REMOVE_ALL\"/> </distributed-cache>", "apiVersion: infinispan.org/v2alpha1 kind: Batch metadata: name: mybatch spec: cluster: infinispan config: | create cache --template=org.infinispan.DIST_SYNC mycache put --cache=mycache hello 
world put --cache=mycache hola mundo", "apply -f mybatch.yaml", "wait --for=jsonpath='{.status.phase}'=Succeeded Batch/mybatch", "mkdir -p /tmp/mybatch", "cat > /tmp/mybatch/mycache.xml<<EOF <distributed-cache name=\"mycache\" mode=\"SYNC\"> <encoding media-type=\"application/x-protostream\"/> <memory max-count=\"1000000\" when-full=\"REMOVE\"/> </distributed-cache> EOF", "create cache mycache --file=/etc/batch/mycache.xml put --cache=mycache hello world put --cache=mycache hola mundo", "ls /tmp/mybatch batch mycache.xml", "create configmap mybatch-config-map --from-file=/tmp/mybatch", "cat > mybatch.yaml<<EOF apiVersion: infinispan.org/v2alpha1 kind: Batch metadata: name: mybatch spec: cluster: infinispan configMap: mybatch-config-map EOF", "apply -f mybatch.yaml", "wait --for=jsonpath='{.status.phase}'=Succeeded Batch/mybatch", "echo \"creating caches...\" create cache sessions --file=/etc/batch/infinispan-prod-sessions.xml create cache tokens --file=/etc/batch/infinispan-prod-tokens.xml create cache people --file=/etc/batch/infinispan-prod-people.xml create cache books --file=/etc/batch/infinispan-prod-books.xml create cache authors --file=/etc/batch/infinispan-prod-authors.xml echo \"list caches in the cluster\" ls caches", "echo \"creating caches...\" create cache mytemplate --file=/etc/batch/mycache.xml create cache sessions --template=mytemplate create cache tokens --template=mytemplate echo \"list caches in the cluster\" ls caches", "echo \"creating counters...\" create counter --concurrency-level=1 --initial-value=5 --storage=PERSISTENT --type=weak mycounter1 create counter --initial-value=3 --storage=PERSISTENT --type=strong mycounter2 create counter --initial-value=13 --storage=PERSISTENT --type=strong --upper-bound=10 mycounter3 echo \"list counters in the cluster\" ls counters", "echo \"creating schema...\" schema --upload=person.proto person.proto schema --upload=book.proto book.proto schema --upload=author.proto book.proto echo \"list Protobuf schema\" ls schemas", "echo \"creating tasks...\" task upload --file=/etc/batch/myfirstscript.js myfirstscript task upload --file=/etc/batch/mysecondscript.js mysecondscript task upload --file=/etc/batch/mythirdscript.js mythirdscript echo \"list tasks\" ls tasks", "apiVersion: infinispan.org/v2alpha1 kind: Backup metadata: name: my-backup spec: cluster: source-cluster volume: storage: 1Gi storageClassName: my-storage-class", "spec: resources: templates: - distributed-sync-prod - distributed-sync-dev caches: - cache-one - cache-two counters: - counter-name protoSchemas: - authors.proto - books.proto tasks: - wordStream.js", "spec: resources: caches: - \"*\" protoSchemas: - \"*\"", "apply -f my-backup.yaml", "ISPN005044: Backup file created 'my-backup.zip'", "describe Backup my-backup", "apiVersion: infinispan.org/v2alpha1 kind: Restore metadata: name: my-restore spec: backup: my-backup cluster: target-cluster", "spec: resources: templates: - distributed-sync-prod - distributed-sync-dev caches: - cache-one - cache-two counters: - counter-name protoSchemas: - authors.proto - books.proto tasks: - wordStream.js", "apply -f my-restore.yaml", "ISPN005045: Restore 'my-backup' complete", "logs <backup|restore_pod_name>", "project rhdg-namespace", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: datagrid-libs spec: accessModes: - ReadWriteMany resources: requests: storage: 100Mi", "apply -f datagrid-libs.yaml", "apiVersion: v1 kind: Pod metadata: name: datagrid-libs-pod spec: securityContext: fsGroup: 2000 volumes: - name: 
lib-pv-storage persistentVolumeClaim: claimName: datagrid-libs containers: - name: lib-pv-container image: registry.redhat.io/datagrid/datagrid-8-rhel8:8.4 volumeMounts: - mountPath: /tmp/libs name: lib-pv-storage", "apply -f datagrid-libs-pod.yaml wait --for=condition=ready --timeout=2m pod/datagrid-libs-pod", "cp --no-preserve=true libs datagrid-libs-pod:/tmp/", "delete pod datagrid-libs-pod", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 2 dependencies: volumeClaimName: datagrid-libs service: type: DataGrid", "apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 2 dependencies: artifacts: - url: http://example.com:8080/path hash: sha256:596408848b56b5a23096baa110cd8b633c9a9aef2edd6b38943ade5b4edcd686 service: type: DataGrid", "{ \"specversion\": \"1.0\", \"source\": \"/infinispan/<cluster_name>/<cache_name>\", \"type\": \"org.infinispan.entry.created\", \"time\": \"<timestamp>\", \"subject\": \"<key-name>\", \"id\": \"key-name:CommandInvocation:node-name:0\", \"data\": { \"property\": \"value\" } }", "spec: cloudEvents: acks: \"1\" bootstrapServers: my-cluster-kafka-bootstrap_1.<namespace_1>.svc:9092,my-cluster-kafka-bootstrap_2.<namespace_2>.svc:9092 cacheEntriesTopic: target-topic", "metadata: name: infinispan", "infinispan shell <cluster_name>", "metadata: name: infinispan", "import org.infinispan.client.hotrod.configuration.ConfigurationBuilder; import org.infinispan.client.hotrod.configuration.SaslQop; import org.infinispan.client.hotrod.impl.ConfigurationProperties; ConfigurationBuilder builder = new ConfigurationBuilder(); builder.addServer() .host(\"USDHOSTNAME\") .port(ConfigurationProperties.DEFAULT_HOTROD_PORT) .security().authentication() .username(\"username\") .password(\"changeme\") .realm(\"default\") .saslQop(SaslQop.AUTH) .saslMechanism(\"SCRAM-SHA-512\") .ssl() .sniHostName(\"USDSERVICE_HOSTNAME\") .trustStoreFileName(\"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\") .trustStoreType(\"pem\");", "Connection infinispan.client.hotrod.server_list=USDHOSTNAME:USDPORT Authentication infinispan.client.hotrod.use_auth=true infinispan.client.hotrod.auth_username=developer infinispan.client.hotrod.auth_password=USDPASSWORD infinispan.client.hotrod.auth_server_name=USDCLUSTER_NAME infinispan.client.hotrod.sasl_properties.javax.security.sasl.qop=auth infinispan.client.hotrod.sasl_mechanism=SCRAM-SHA-512 Encryption infinispan.client.hotrod.sni_host_name=USDSERVICE_HOSTNAME infinispan.client.hotrod.trust_store_file_name=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt infinispan.client.hotrod.trust_store_type=pem", "import org.infinispan.client.hotrod.configuration.ClientIntelligence; import org.infinispan.client.hotrod.configuration.ConfigurationBuilder; import org.infinispan.client.hotrod.configuration.SaslQop; ConfigurationBuilder builder = new ConfigurationBuilder(); builder.addServer() .host(\"USDHOSTNAME\") .port(\"USDPORT\") .security().authentication() .username(\"username\") .password(\"changeme\") .realm(\"default\") .saslQop(SaslQop.AUTH) .saslMechanism(\"SCRAM-SHA-512\") .ssl() .sniHostName(\"USDSERVICE_HOSTNAME\") //Create a client trust store with tls.crt from your project. 
.trustStoreFileName(\"/path/to/truststore.pkcs12\") .trustStorePassword(\"trust_store_password\") .trustStoreType(\"PCKS12\"); builder.clientIntelligence(ClientIntelligence.BASIC);", "Connection infinispan.client.hotrod.server_list=USDHOSTNAME:USDPORT Client intelligence infinispan.client.hotrod.client_intelligence=BASIC Authentication infinispan.client.hotrod.use_auth=true infinispan.client.hotrod.auth_username=developer infinispan.client.hotrod.auth_password=USDPASSWORD infinispan.client.hotrod.auth_server_name=USDCLUSTER_NAME infinispan.client.hotrod.sasl_properties.javax.security.sasl.qop=auth infinispan.client.hotrod.sasl_mechanism=SCRAM-SHA-512 Encryption infinispan.client.hotrod.sni_host_name=USDSERVICE_HOSTNAME Create a client trust store with tls.crt from your project. infinispan.client.hotrod.trust_store_file_name=/path/to/truststore.pkcs12 infinispan.client.hotrod.trust_store_password=trust_store_password infinispan.client.hotrod.trust_store_type=PCKS12", "import org.infinispan.client.hotrod.configuration.ConfigurationBuilder; ConfigurationBuilder builder = new ConfigurationBuilder(); builder.security() .authentication() .saslMechanism(\"EXTERNAL\") .ssl() .keyStoreFileName(\"/path/to/keystore\") .keyStorePassword(\"keystorepassword\".toCharArray()) .keyStoreType(\"PCKS12\");", "import org.infinispan.client.hotrod.DefaultTemplate; import org.infinispan.client.hotrod.RemoteCache; import org.infinispan.client.hotrod.RemoteCacheManager; builder.remoteCache(\"my-cache\") .templateName(DefaultTemplate.DIST_SYNC); builder.remoteCache(\"another-cache\") .configuration(\"<infinispan><cache-container><distributed-cache name=\\\"another-cache\\\"><encoding media-type=\\\"application/x-protostream\\\"/></distributed-cache></cache-container></infinispan>\"); try (RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build())) { // Get a remote cache that does not exist. // Rather than return null, create the cache from a template. RemoteCache<String, String> cache = cacheManager.getCache(\"my-cache\"); // Store a value. cache.put(\"hello\", \"world\"); // Retrieve the value and print it. System.out.printf(\"key = %s\\n\", cache.get(\"hello\"));", "import org.infinispan.client.hotrod.RemoteCacheManager; import org.infinispan.commons.configuration.XMLStringConfiguration; private void createCacheWithXMLConfiguration() { String cacheName = \"CacheWithXMLConfiguration\"; String xml = String.format(\"<distributed-cache name=\\\"%s\\\">\" + \"<encoding media-type=\\\"application/x-protostream\\\"/>\" + \"<locking isolation=\\\"READ_COMMITTED\\\"/>\" + \"<transaction mode=\\\"NON_XA\\\"/>\" + \"<expiration lifespan=\\\"60000\\\" interval=\\\"20000\\\"/>\" + \"</distributed-cache>\" , cacheName); manager.administration().getOrCreateCache(cacheName, new XMLStringConfiguration(xml)); System.out.println(\"Cache with configuration exists or is created.\"); }", "Add cache configuration infinispan.client.hotrod.cache.my-cache.template_name=org.infinispan.DIST_SYNC infinispan.client.hotrod.cache.another-cache.configuration=<infinispan><cache-container><distributed-cache name=\\\"another-cache\\\"/></cache-container></infinispan> infinispan.client.hotrod.cache.my-other-cache.configuration_uri=file:/path/to/configuration.xml" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html-single/data_grid_operator_guide/index
Chapter 7. OpenStack Cloud Controller Manager reference guide
Chapter 7. OpenStack Cloud Controller Manager reference guide 7.1. The OpenStack Cloud Controller Manager Beginning with OpenShift Container Platform 4.12, clusters that run on Red Hat OpenStack Platform (RHOSP) were switched from the legacy OpenStack cloud provider to the external OpenStack Cloud Controller Manager (CCM). This change follows the move in Kubernetes from in-tree, legacy cloud providers to external cloud providers that are implemented by using the Cloud Controller Manager . To preserve user-defined configurations for the legacy cloud provider, existing configurations are mapped to new ones as part of the migration process. The CCM Operator searches for a configuration called cloud-provider-config in the openshift-config namespace. Note The config map name cloud-provider-config is not statically configured. It is derived from the spec.cloudConfig.name value in the infrastructure/cluster CRD. Found configurations are synchronized to the cloud-conf config map in the openshift-cloud-controller-manager namespace. As part of this synchronization, the OpenStack CCM Operator alters the new config map such that its properties are compatible with the external cloud provider. The file is changed in the following ways: The [Global] secret-name , [Global] secret-namespace , and [Global] kubeconfig-path options are removed. They do not apply to the external cloud provider. The [Global] use-clouds , [Global] clouds-file , and [Global] cloud options are added. The entire [BlockStorage] section is removed. External cloud providers no longer perform storage operations. Block storage configuration is managed by the Cinder CSI driver. Additionally, the CCM Operator enforces a number of default options. Values for these options are always overridden as follows: [Global] use-clouds = true clouds-file = /etc/openstack/secret/clouds.yaml cloud = openstack ... [LoadBalancer] enabled = true The clouds-file value, /etc/openstack/secret/clouds.yaml , is mapped to the openstack-cloud-credentials config in the openshift-cloud-controller-manager namespace. You can modify the RHOSP cloud in this file as you do any other clouds.yaml file. 7.2. The OpenStack Cloud Controller Manager (CCM) config map An OpenStack CCM config map defines how your cluster interacts with your RHOSP cloud. By default, this configuration is stored under the cloud.conf key in the cloud-conf config map in the openshift-cloud-controller-manager namespace. Important The cloud-conf config map is generated from the cloud-provider-config config map in the openshift-config namespace. To change the settings that are described by the cloud-conf config map, modify the cloud-provider-config config map. As part of this synchronization, the CCM Operator overrides some options. For more information, see "The RHOSP Cloud Controller Manager". For example: An example cloud-conf config map apiVersion: v1 data: cloud.conf: | [Global] 1 secret-name = openstack-credentials secret-namespace = kube-system region = regionOne [LoadBalancer] enabled = True kind: ConfigMap metadata: creationTimestamp: "2022-12-20T17:01:08Z" name: cloud-conf namespace: openshift-cloud-controller-manager resourceVersion: "2519" uid: cbbeedaf-41ed-41c2-9f37-4885732d3677 1 Set global options by using a clouds.yaml file rather than modifying the config map. The following options are present in the config map. Except when indicated otherwise, they are mandatory for clusters that run on RHOSP. 7.2.1.
Load balancer options CCM supports several load balancer options for deployments that use Octavia. Note Neutron-LBaaS support is deprecated. Option Description enabled Whether or not to enable the LoadBalancer type of services integration. The default value is true . floating-network-id Optional. The external network used to create floating IP addresses for load balancer virtual IP addresses (VIPs). If there are multiple external networks in the cloud, this option must be set or the user must specify loadbalancer.openstack.org/floating-network-id in the service annotation. floating-subnet-id Optional. The external network subnet used to create floating IP addresses for the load balancer VIP. Can be overridden by the service annotation loadbalancer.openstack.org/floating-subnet-id . floating-subnet Optional. A name pattern (glob or regular expression if starting with ~ ) for the external network subnet used to create floating IP addresses for the load balancer VIP. Can be overridden by the service annotation loadbalancer.openstack.org/floating-subnet . If multiple subnets match the pattern, the first one with available IP addresses is used. floating-subnet-tags Optional. Tags for the external network subnet used to create floating IP addresses for the load balancer VIP. Can be overridden by the service annotation loadbalancer.openstack.org/floating-subnet-tags . If multiple subnets match these tags, the first one with available IP addresses is used. If the RHOSP network is configured with sharing disabled, for example, with the --no-share flag used during creation, this option is unsupported. Set the network to share to use this option. lb-method The load balancing algorithm used to create the load balancer pool. For the Amphora provider the value can be ROUND_ROBIN , LEAST_CONNECTIONS , or SOURCE_IP . The default value is ROUND_ROBIN . For the OVN provider, only the SOURCE_IP_PORT algorithm is supported. For the Amphora provider, if using the LEAST_CONNECTIONS or SOURCE_IP methods, configure the create-monitor option as true in the cloud-provider-config config map on the openshift-config namespace and ETP:Local on the load-balancer type service to allow balancing algorithm enforcement in the client to service endpoint connections. lb-provider Optional. Used to specify the provider of the load balancer, for example, amphora or octavia . Only the Amphora and Octavia providers are supported. lb-version Optional. The load balancer API version. Only "v2" is supported. subnet-id The ID of the Networking service subnet on which load balancer VIPs are created. For dual stack deployments, leave this option unset. The OpenStack cloud provider automatically selects which subnet to use for a load balancer. network-id The ID of the Networking service network on which load balancer VIPs are created. Unnecessary if subnet-id is set. If this property is not set, the network is automatically selected based on the network that cluster nodes use. create-monitor Whether or not to create a health monitor for the service load balancer. A health monitor is required for services that declare externalTrafficPolicy: Local . The default value is false . This option is unsupported if you use RHOSP earlier than version 17 with the ovn provider. monitor-delay The interval in seconds by which probes are sent to members of the load balancer. The default value is 5 . monitor-max-retries The number of successful checks that are required to change the operating status of a load balancer member to ONLINE . 
The valid range is 1 to 10 , and the default value is 1 . monitor-timeout The time in seconds that a monitor waits to connect to the back end before it times out. The default value is 3 . internal-lb Whether or not to create an internal load balancer without floating IP addresses. The default value is false . LoadBalancerClass "ClassName" This is a config section that comprises a set of options: floating-network-id floating-subnet-id floating-subnet floating-subnet-tags network-id subnet-id The behavior of these options is the same as that of the identically named options in the load balancer section of the CCM config file. You can set the ClassName value by specifying the service annotation loadbalancer.openstack.org/class . max-shared-lb The maximum number of services that can share a load balancer. The default value is 2 . 7.2.2. Options that the Operator overrides The CCM Operator overrides the following options, which you might recognize from configuring RHOSP. Do not configure them yourself. They are included in this document for informational purposes only. Option Description auth-url The RHOSP Identity service URL. For example, http://128.110.154.166/identity . os-endpoint-type The type of endpoint to use from the service catalog. username The Identity service user name. password The Identity service user password. domain-id The Identity service user domain ID. domain-name The Identity service user domain name. tenant-id The Identity service project ID. Leave this option unset if you are using Identity service application credentials. In version 3 of the Identity API, which changed the identifier tenant to project , the value of tenant-id is automatically mapped to the project construct in the API. tenant-name The Identity service project name. tenant-domain-id The Identity service project domain ID. tenant-domain-name The Identity service project domain name. user-domain-id The Identity service user domain ID. user-domain-name The Identity service user domain name. use-clouds Whether or not to fetch authorization credentials from a clouds.yaml file. Options set in this section are prioritized over values read from the clouds.yaml file. CCM searches for the file in the following places: The value of the clouds-file option. A file path stored in the environment variable OS_CLIENT_CONFIG_FILE . The directory pkg/openstack . The directory ~/.config/openstack . The directory /etc/openstack . clouds-file The file path of a clouds.yaml file. It is used if the use-clouds option is set to true . cloud The named cloud in the clouds.yaml file that you want to use. It is used if the use-clouds option is set to true .
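To tie the two tables together: the load balancer options in section 7.2.1 are the ones you can set yourself, and you set them in the cloud-provider-config config map in the openshift-config namespace rather than in the generated cloud-conf config map. The following fragment is purely illustrative and is not taken from the product documentation: the option names come from the table in section 7.2.1, while the values and the network ID are hypothetical placeholders that must be replaced with values from your own RHOSP cloud.
[LoadBalancer]
lb-provider = amphora
lb-method = ROUND_ROBIN
create-monitor = true
monitor-delay = 5
monitor-timeout = 3
monitor-max-retries = 1
floating-network-id = <external-network-UUID>
After the Operator synchronizes the change, the resulting values appear under the cloud.conf key of the cloud-conf config map shown earlier.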
[ "[Global] use-clouds = true clouds-file = /etc/openstack/secret/clouds.yaml cloud = openstack [LoadBalancer] enabled = true", "apiVersion: v1 data: cloud.conf: | [Global] 1 secret-name = openstack-credentials secret-namespace = kube-system region = regionOne [LoadBalancer] enabled = True kind: ConfigMap metadata: creationTimestamp: \"2022-12-20T17:01:08Z\" name: cloud-conf namespace: openshift-cloud-controller-manager resourceVersion: \"2519\" uid: cbbeedaf-41ed-41c2-9f37-4885732d3677" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_openstack/installing-openstack-cloud-config-reference
Chapter 1. OpenID Connect (OIDC) Bearer token authentication
Chapter 1. OpenID Connect (OIDC) Bearer token authentication Secure HTTP access to Jakarta REST (formerly known as JAX-RS) endpoints in your application with Bearer token authentication by using the Quarkus OpenID Connect (OIDC) extension. 1.1. Overview of the Bearer token authentication mechanism in Quarkus Quarkus supports the Bearer token authentication mechanism through the Quarkus OpenID Connect (OIDC) extension. The bearer tokens are issued by OIDC and OAuth 2.0 compliant authorization servers, such as Keycloak . Bearer token authentication is the process of authorizing HTTP requests based on the existence and validity of a bearer token. The bearer token provides information about the subject of the call, which is used to determine whether or not an HTTP resource can be accessed. The following diagrams outline the Bearer token authentication mechanism in Quarkus: Figure 1.1. Bearer token authentication mechanism in Quarkus with application The Quarkus service retrieves verification keys from the OIDC provider. The verification keys are used to verify the bearer access token signatures. The Quarkus user accesses the application (SPA). The application uses Authorization Code Flow to authenticate the user and retrieve tokens from the OIDC provider. The application uses the access token to retrieve the service data from the Quarkus service. The Quarkus service verifies the bearer access token signature by using the verification keys, checks the token expiry date and other claims, allows the request to proceed if the token is valid, and returns the service response to the application. The application returns the same data to the Quarkus user. Figure 1.2. Bearer token authentication mechanism in Quarkus with Java or command line client The Quarkus service retrieves verification keys from the OIDC provider. The verification keys are used to verify the bearer access token signatures. The client uses client_credentials that requires client id and secret or password grant, which requires client id, secret, username, and password to retrieve the access token from the OIDC provider. The client uses the access token to retrieve the service data from the Quarkus service. The Quarkus service verifies the bearer access token signature by using the verification keys, checks the token expiry date and other claims, allows the request to proceed if the token is valid, and returns the service response to the client. If you need to authenticate and authorize users by using OIDC authorization code flow, see the Quarkus OpenID Connect authorization code flow mechanism for protecting web applications guide. Also, if you use Keycloak and bearer tokens, see the Quarkus Using Keycloak to centralize authorization guide. To learn about how you can protect service applications by using OIDC Bearer token authentication, see the following tutorial: Protect a service application by using OpenID Connect (OIDC) Bearer token authentication . For information about how to support multiple tenants, see the Quarkus Using OpenID Connect Multi-Tenancy guide. 1.1.1. 
Accessing JWT claims If you need to access JWT token claims, you can inject JsonWebToken : package org.acme.security.openid.connect; import org.eclipse.microprofile.jwt.JsonWebToken; import jakarta.inject.Inject; import jakarta.annotation.security.RolesAllowed; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; @Path("/api/admin") public class AdminResource { @Inject JsonWebToken jwt; @GET @RolesAllowed("admin") @Produces(MediaType.TEXT_PLAIN) public String admin() { return "Access for subject " + jwt.getSubject() + " is granted"; } } Injection of JsonWebToken is supported in @ApplicationScoped , @Singleton , and @RequestScoped scopes. However, the use of @RequestScoped is required if the individual claims are injected as simple types. For more information, see the Supported injection scopes section of the Quarkus "Using JWT RBAC" guide. 1.1.2. UserInfo If you must request a UserInfo JSON object from the OIDC UserInfo endpoint, set quarkus.oidc.authentication.user-info-required=true . A request is sent to the OIDC provider UserInfo endpoint, and an io.quarkus.oidc.UserInfo (a simple javax.json.JsonObject wrapper) object is created. io.quarkus.oidc.UserInfo can be injected or accessed as a SecurityIdentity userinfo attribute. quarkus.oidc.authentication.user-info-required is automatically enabled if one of these conditions is met: if quarkus.oidc.roles.source is set to userinfo or quarkus.oidc.token.verify-access-token-with-user-info is set to true or quarkus.oidc.authentication.id-token-required is set to false , the current OIDC tenant must support a UserInfo endpoint in these cases. if io.quarkus.oidc.UserInfo injection point is detected but only if the current OIDC tenant supports a UserInfo endpoint. 1.1.3. Configuration metadata The current tenant's discovered OpenID Connect Configuration Metadata is represented by io.quarkus.oidc.OidcConfigurationMetadata and can be injected or accessed as a SecurityIdentity configuration-metadata attribute. The default tenant's OidcConfigurationMetadata is injected if the endpoint is public. 1.1.4. Token claims and SecurityIdentity roles You can map SecurityIdentity roles from the verified JWT access tokens as follows: If the quarkus.oidc.roles.role-claim-path property is set, and matching array or string claims are found, then the roles are extracted from these claims. For example, customroles , customroles/array , scope , "http://namespace-qualified-custom-claim"/roles , "http://namespace-qualified-roles" . If a groups claim is available, then its value is used. If a realm_access/roles or resource_access/client_id/roles (where client_id is the value of the quarkus.oidc.client-id property) claim is available, then its value is used. This check supports the tokens issued by Keycloak. For example, the following JWT token has a complex groups claim that contains a roles array that includes roles: { "iss": "https://server.example.com", "sub": "24400320", "upn": "[email protected]", "preferred_username": "jdoe", "exp": 1311281970, "iat": 1311280970, "groups": { "roles": [ "microprofile_jwt_user" ], } } You must map the microprofile_jwt_user role to SecurityIdentity roles, and you can do so with this configuration: quarkus.oidc.roles.role-claim-path=groups/roles . If the token is opaque (binary), then a scope property from the remote token introspection response is used. 
If UserInfo is the source of the roles, then set quarkus.oidc.authentication.user-info-required=true and quarkus.oidc.roles.source=userinfo , and if needed, set quarkus.oidc.roles.role-claim-path . Additionally, a custom SecurityIdentityAugmentor can also be used to add the roles. For more information, see the Security identity customization section of the Quarkus "Security tips and tricks" guide. You can also map SecurityIdentity roles created from token claims to deployment-specific roles by using the HTTP Security policy . 1.1.5. Token scopes and SecurityIdentity permissions SecurityIdentity permissions are mapped in the form of io.quarkus.security.StringPermission from the scope parameter of the source of the roles and using the same claim separator. import java.util.List; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import org.eclipse.microprofile.jwt.Claims; import org.eclipse.microprofile.jwt.JsonWebToken; import io.quarkus.security.PermissionsAllowed; @Path("/service") public class ProtectedResource { @Inject JsonWebToken accessToken; @PermissionsAllowed("email") 1 @GET @Path("/email") public Boolean isUserEmailAddressVerifiedByUser() { return accessToken.getClaim(Claims.email_verified.name()); } @PermissionsAllowed("orders_read") 2 @GET @Path("/order") public List<Order> listOrders() { return List.of(new Order("1")); } public static class Order { String id; public Order() { } public Order(String id) { this.id = id; } public String getId() { return id; } public void setId() { this.id = id; } } } 1 Only requests with OpenID Connect scope email will be granted access. 2 The read access is limited to the client requests with the orders_read scope. For more information about the io.quarkus.security.PermissionsAllowed annotation, see the Permission annotation section of the "Authorization of web endpoints" guide. 1.1.6. Token verification and introspection If the token is a JWT token, then, by default, it is verified with a JsonWebKey (JWK) key from a local JsonWebKeySet , retrieved from the OIDC provider's JWK endpoint. The token's key identifier ( kid ) header value is used to find the matching JWK key. If no matching JWK is available locally, then JsonWebKeySet is refreshed by fetching the current key set from the JWK endpoint. The JsonWebKeySet refresh can be repeated only after the quarkus.oidc.token.forced-jwk-refresh-interval expires. The default expiry time is 10 minutes. If no matching JWK is available after the refresh, the JWT token is sent to the OIDC provider's token introspection endpoint. If the token is opaque, which means it can be a binary token or an encrypted JWT token, then it is always sent to the OIDC provider's token introspection endpoint. If you work only with JWT tokens and expect a matching JsonWebKey to always be available, for example, after refreshing a key set, you must disable token introspection, as shown in the following example: quarkus.oidc.token.allow-jwt-introspection=false quarkus.oidc.token.allow-opaque-token-introspection=false There might be cases where JWT tokens must be verified through introspection only, which can be forced by configuring an introspection endpoint address only. 
The following properties configuration shows you an example of how you can achieve this with Keycloak: quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.discovery-enabled=false # Token Introspection endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/tokens/introspect quarkus.oidc.introspection-path=/protocol/openid-connect/tokens/introspect There are advantages and disadvantages to indirectly enforcing the introspection of JWT tokens remotely. An advantage is that you eliminate the need for two remote calls: a remote OIDC metadata discovery call followed by another remote call to fetch the verification keys that will not be used. A disadvantage is that you need to know the introspection endpoint address and configure it manually. The alternative approach is to allow the default option of OIDC metadata discovery but also require that only the remote JWT introspection is performed, as shown in the following example: quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.token.require-jwt-introspection-only=true An advantage of this approach is that the configuration is simpler and easier to understand. A disadvantage is that a remote OIDC metadata discovery call is required to discover an introspection endpoint address, even though the verification keys will not be fetched. The io.quarkus.oidc.TokenIntrospection , a simple jakarta.json.JsonObject wrapper object, will be created. It can be injected or accessed as a SecurityIdentity introspection attribute, providing either the JWT or opaque token has been successfully introspected. 1.1.7. Token introspection and UserInfo cache All opaque access tokens must be remotely introspected. Sometimes, JWT access tokens might also have to be introspected. If UserInfo is also required, the same access token is used in a subsequent remote call to the OIDC provider. So, if UserInfo is required, and the current access token is opaque, two remote calls are made for every such token; one remote call to introspect the token and another to get UserInfo . If the token is JWT, only a single remote call to get UserInfo is needed, unless it also has to be introspected. The cost of making up to two remote calls for every incoming bearer or code flow access token can sometimes be problematic. If this is the case in production, consider caching the token introspection and UserInfo data for a short period, for example, 3 or 5 minutes. quarkus-oidc provides quarkus.oidc.TokenIntrospectionCache and quarkus.oidc.UserInfoCache interfaces, usable for @ApplicationScoped cache implementation. Use @ApplicationScoped cache implementation to store and retrieve quarkus.oidc.TokenIntrospection and/or quarkus.oidc.UserInfo objects, as outlined in the following example: @ApplicationScoped @Alternative @Priority(1) public class CustomIntrospectionUserInfoCache implements TokenIntrospectionCache, UserInfoCache { ... } Each OIDC tenant can either permit or deny the storing of its quarkus.oidc.TokenIntrospection data, quarkus.oidc.UserInfo data, or both with boolean quarkus.oidc."tenant".allow-token-introspection-cache and quarkus.oidc."tenant".allow-user-info-cache properties. Additionally, quarkus-oidc provides a simple default memory-based token cache, which implements both quarkus.oidc.TokenIntrospectionCache and quarkus.oidc.UserInfoCache interfaces. 
You can configure and activate the default OIDC token cache as follows: # 'max-size' is 0 by default, so the cache can be activated by setting 'max-size' to a positive value: quarkus.oidc.token-cache.max-size=1000 # 'time-to-live' specifies how long a cache entry can be valid for and will be used by a cleanup timer: quarkus.oidc.token-cache.time-to-live=3M # 'clean-up-timer-interval' is not set by default, so the cleanup timer can be activated by setting 'clean-up-timer-interval': quarkus.oidc.token-cache.clean-up-timer-interval=1M The default cache uses a token as a key, and each entry can have TokenIntrospection , UserInfo , or both. It will only keep up to a max-size number of entries. If the cache is already full when a new entry is to be added, an attempt is made to find a space by removing a single expired entry. Additionally, the cleanup timer, if activated, periodically checks for expired entries and removes them. You can experiment with the default cache implementation or register a custom one. 1.1.8. JSON Web Token claim verification After the bearer JWT token's signature has been verified and its expires at ( exp ) claim has been checked, the iss ( issuer ) claim value is verified . By default, the iss claim value is compared to the issuer property, which might have been discovered in the well-known provider configuration. However, if the quarkus.oidc.token.issuer property is set, then the iss claim value is compared to it instead. In some cases, this iss claim verification might not work. For example, if the discovered issuer property contains an internal HTTP/IP address while the token iss claim value contains an external HTTP/IP address. Or when a discovered issuer property contains the template tenant variable, but the token iss claim value has the complete tenant-specific issuer value. In such cases, consider skipping the issuer verification by setting quarkus.oidc.token.issuer=any . Only skip the issuer verification if no other options are available: If you are using Keycloak and observe the issuer verification errors caused by the different host addresses, configure Keycloak with a KEYCLOAK_FRONTEND_URL property to ensure the same host address is used. If the iss property is tenant-specific in a multitenant deployment, use the SecurityIdentity tenant-id attribute to check that the issuer is correct in the endpoint or the custom Jakarta filter. For example: import jakarta.inject.Inject; import jakarta.ws.rs.container.ContainerRequestContext; import jakarta.ws.rs.container.ContainerRequestFilter; import jakarta.ws.rs.core.Response; import jakarta.ws.rs.ext.Provider; import org.eclipse.microprofile.jwt.JsonWebToken; import io.quarkus.oidc.OidcConfigurationMetadata; import io.quarkus.security.identity.SecurityIdentity; @Provider public class IssuerValidator implements ContainerRequestFilter { @Inject OidcConfigurationMetadata configMetadata; @Inject JsonWebToken jwt; @Inject SecurityIdentity identity; public void filter(ContainerRequestContext requestContext) { String issuer = configMetadata.getIssuer().replace("{tenant-id}", identity.getAttribute("tenant-id")); if (!issuer.equals(jwt.getIssuer())) { requestContext.abortWith(Response.status(401).build()); } } } Note Consider using the quarkus.oidc.token.audience property to verify the token aud ( audience ) claim value. 1.1.9. Jose4j Validator You can register a custom Jose4j Validator to customize the JWT claim verification process, before org.eclipse.microprofile.jwt.JsonWebToken is initialized. 
For example: package org.acme.security.openid.connect; import static org.eclipse.microprofile.jwt.Claims.iss; import io.quarkus.arc.Unremovable; import jakarta.enterprise.context.ApplicationScoped; import org.jose4j.jwt.MalformedClaimException; import org.jose4j.jwt.consumer.JwtContext; import org.jose4j.jwt.consumer.Validator; @Unremovable @ApplicationScoped public class IssuerValidator implements Validator { 1 @Override public String validate(JwtContext jwtContext) throws MalformedClaimException { if (jwtContext.getJwtClaims().hasClaim(iss.name()) && "my-issuer".equals(jwtContext.getJwtClaims().getClaimValueAsString(iss.name()))) { return "wrong issuer"; 2 } return null; 3 } } 1 Register Jose4j Validator to verify JWT tokens for all OIDC tenants. 2 Return the claim verification error description. 3 Return null to confirm that this Validator has successfully verified the token. Tip Use a @quarkus.oidc.TenantFeature annotation to bind a custom Validator to a specific OIDC tenant only. 1.1.10. Cross-origin resource sharing If you plan to use your OIDC service application from a application running on a different domain, you must configure cross-origin resource sharing (CORS). For more information, see the CORS filter section of the "Cross-origin resource sharing" guide. 1.1.11. Provider endpoint configuration An OIDC service application needs to know the OIDC provider's token, JsonWebKey (JWK) set, and possibly UserInfo and introspection endpoint addresses. By default, they are discovered by adding a /.well-known/openid-configuration path to the configured quarkus.oidc.auth-server-url . Alternatively, if the discovery endpoint is not available, or if you want to save on the discovery endpoint round-trip, you can disable the discovery and configure them with relative path values. For example: quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.discovery-enabled=false # Token endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/token quarkus.oidc.token-path=/protocol/openid-connect/token # JWK set endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/certs quarkus.oidc.jwks-path=/protocol/openid-connect/certs # UserInfo endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/userinfo quarkus.oidc.user-info-path=/protocol/openid-connect/userinfo # Token Introspection endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/tokens/introspect quarkus.oidc.introspection-path=/protocol/openid-connect/tokens/introspect 1.1.12. Token propagation For information about bearer access token propagation to the downstream services, see the Token propagation section of the Quarkus "OpenID Connect (OIDC) and OAuth2 client and filters reference" guide. 1.1.13. JWT token certificate chain In some cases, JWT bearer tokens have an x5c header which represents an X509 certificate chain whose leaf certificate contains a public key that must be used to verify this token's signature. Before this public key can be accepted to verify the signature, the certificate chain must be validated first. The certificate chain validation involves several steps: Confirm that every certificate but the root one is signed by the parent certificate. Confirm the chain's root certificate is also imported in the truststore. Validate the chain's leaf certificate. If a common name of the leaf certificate is configured then a common name of the chain's leaf certificate must match it. 
Otherwise the chain's leaf certificate must also be available in the truststore, unless one or more custom TokenCertificateValidator implementations are registered. quarkus.oidc.TokenCertificateValidator can be used to add a custom certificate chain validation step. It can be used by all tenants expecting tokens with the certificate chain or bound to specific OIDC tenants with the @quarkus.oidc.TenantFeature annotation. For example, here is how you can configure Quarkus OIDC to verify the token's certificate chain, without using quarkus.oidc.TokenCertificateValidator : quarkus.oidc.certificate-chain.trust-store-file=truststore-rootcert.p12 1 quarkus.oidc.certificate-chain.trust-store-password=storepassword quarkus.oidc.certificate-chain.leaf-certificate-name=www.quarkusio.com 2 1 The truststore must contain the certificate chain's root certificate. 2 The certificate chain's leaf certificate must have a common name equal to www.quarkusio.com . If this property is not configured then the truststore must contain the certificate chain's leaf certificate unless one or more custom TokenCertificateValidator implementations are registered. You can add a custom certificate chain validation step by registering a custom quarkus.oidc.TokenCertificateValidator , for example: package io.quarkus.it.keycloak; import java.security.cert.CertificateException; import java.security.cert.X509Certificate; import java.util.List; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.arc.Unremovable; import io.quarkus.oidc.OidcTenantConfig; import io.quarkus.oidc.TokenCertificateValidator; import io.quarkus.oidc.runtime.TrustStoreUtils; import io.vertx.core.json.JsonObject; @ApplicationScoped @Unremovable public class BearerGlobalTokenChainValidator implements TokenCertificateValidator { @Override public void validate(OidcTenantConfig oidcConfig, List<X509Certificate> chain, String tokenClaims) throws CertificateException { String rootCertificateThumbprint = TrustStoreUtils.calculateThumprint(chain.get(chain.size() - 1)); JsonObject claims = new JsonObject(tokenClaims); if (!rootCertificateThumbprint.equals(claims.getString("root-certificate-thumbprint"))) { 1 throw new CertificateException("Invalid root certificate"); } } } 1 Confirm that the certificate chain's root certificate is bound to the custom JWT token's claim. 1.1.14. OIDC provider client authentication quarkus.oidc.runtime.OidcProviderClient is used when a remote request to an OIDC provider is required. If introspection of the Bearer token is necessary, then OidcProviderClient must authenticate to the OIDC provider. For more information about supported authentication options, see the OIDC provider client authentication section in the Quarkus "OpenID Connect authorization code flow mechanism for protecting web applications" guide. 1.1.15. Testing Note If you have to test Quarkus OIDC service endpoints that require Keycloak authorization , follow the Test Keycloak authorization section. You can begin testing by adding the following dependencies to your test project: Using Maven: <dependency> <groupId>io.rest-assured</groupId> <artifactId>rest-assured</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-junit5</artifactId> <scope>test</scope> </dependency> Using Gradle: testImplementation("io.rest-assured:rest-assured") testImplementation("io.quarkus:quarkus-junit5") 1.1.15.1.
WireMock Add the following dependencies to your test project: Using Maven: <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-test-oidc-server</artifactId> <scope>test</scope> </dependency> Using Gradle: testImplementation("io.quarkus:quarkus-test-oidc-server") Prepare the REST test endpoint and set application.properties . For example: # keycloak.url is set by OidcWiremockTestResource quarkus.oidc.auth-server-url=USD{keycloak.url:replaced-by-test-resource}/realms/quarkus/ quarkus.oidc.client-id=quarkus-service-app quarkus.oidc.application-type=service Finally, write the test code. For example: import static org.hamcrest.Matchers.equalTo; import java.util.Set; import org.junit.jupiter.api.Test; import io.quarkus.test.common.QuarkusTestResource; import io.quarkus.test.junit.QuarkusTest; import io.quarkus.test.oidc.server.OidcWiremockTestResource; import io.restassured.RestAssured; import io.smallrye.jwt.build.Jwt; @QuarkusTest @QuarkusTestResource(OidcWiremockTestResource.class) public class BearerTokenAuthorizationTest { @Test public void testBearerToken() { RestAssured.given().auth().oauth2(getAccessToken("alice", Set.of("user"))) .when().get("/api/users/me") .then() .statusCode(200) // The test endpoint returns the name extracted from the injected `SecurityIdentity` principal. .body("userName", equalTo("alice")); } private String getAccessToken(String userName, Set<String> groups) { return Jwt.preferredUserName(userName) .groups(groups) .issuer("https://server.example.com") .audience("https://service.example.com") .sign(); } } The quarkus-test-oidc-server extension includes a signing RSA private key file in a JSON Web Key ( JWK ) format and points to it with a smallrye.jwt.sign.key.location configuration property. It allows you to sign the token by using a no-argument sign() operation. Testing your quarkus-oidc service application with OidcWiremockTestResource provides the best coverage because even the communication channel is tested against the WireMock HTTP stubs. If you need to run a test with WireMock stubs that are not yet supported by OidcWiremockTestResource , you can inject a WireMockServer instance into the test class, as shown in the following example: Note OidcWiremockTestResource does not work with @QuarkusIntegrationTest against Docker containers because the WireMock server runs in the JVM that runs the test, which is inaccessible from the Docker container that runs the Quarkus application. package io.quarkus.it.keycloak; import static com.github.tomakehurst.wiremock.client.WireMock.matching; import static org.hamcrest.Matchers.equalTo; import org.junit.jupiter.api.Test; import com.github.tomakehurst.wiremock.WireMockServer; import com.github.tomakehurst.wiremock.client.WireMock; import io.quarkus.test.junit.QuarkusTest; import io.quarkus.test.oidc.server.OidcWireMock; import io.restassured.RestAssured; @QuarkusTest public class CustomOidcWireMockStubTest { @OidcWireMock WireMockServer wireMockServer; @Test public void testInvalidBearerToken() { wireMockServer.stubFor(WireMock.post("/auth/realms/quarkus/protocol/openid-connect/token/introspect") .withRequestBody(matching(".*token=invalid_token.*")) .willReturn(WireMock.aResponse().withStatus(400))); RestAssured.given().auth().oauth2("invalid_token").when() .get("/api/users/me/bearer") .then() .statusCode(401) .header("WWW-Authenticate", equalTo("Bearer")); } } 1.1.16. 
OidcTestClient If you use SaaS OIDC providers, such as Auth0 , and want to run tests against the test (development) domain or to run tests against a remote Keycloak test realm, if you already have quarkus.oidc.auth-server-url configured, you can use OidcTestClient . For example, suppose you have the following configuration: %test.quarkus.oidc.auth-server-url=https://dev-123456.eu.auth0.com/ %test.quarkus.oidc.client-id=test-auth0-client %test.quarkus.oidc.credentials.secret=secret To start, add the same dependency, quarkus-test-oidc-server , as described in the WireMock section. Next, write the test code as follows: package org.acme; import org.junit.jupiter.api.AfterAll; import static io.restassured.RestAssured.given; import static org.hamcrest.CoreMatchers.is; import java.util.Map; import org.junit.jupiter.api.Test; import io.quarkus.test.junit.QuarkusTest; import io.quarkus.test.oidc.client.OidcTestClient; @QuarkusTest public class GreetingResourceTest { static OidcTestClient oidcTestClient = new OidcTestClient(); @AfterAll public static void close() { oidcTestClient.close(); } @Test public void testHelloEndpoint() { given() .auth().oauth2(getAccessToken("alice", "alice")) .when().get("/hello") .then() .statusCode(200) .body(is("Hello, Alice")); } private String getAccessToken(String name, String secret) { return oidcTestClient.getAccessToken(name, secret, Map.of("audience", "https://dev-123456.eu.auth0.com/api/v2/", "scope", "profile")); } } This test code acquires a token by using a password grant from the test Auth0 domain, which has registered an application with the client id test-auth0-client , and created the user alice with password alice . For a test like this to work, the test Auth0 application must have the password grant enabled. This example code also shows how to pass additional parameters. For Auth0 , these are the audience and scope parameters. 1.1.16.1. Dev Services for Keycloak The preferred approach for integration testing against Keycloak is Dev Services for Keycloak . Dev Services for Keycloak will start and initialize a test container. Then, it will create a quarkus realm and a quarkus-app client ( secret secret) and add alice ( admin and user roles) and bob ( user role) users, where all of these properties can be customized. First, add the following dependency, which provides a utility class io.quarkus.test.keycloak.client.KeycloakTestClient that you can use in tests for acquiring the access tokens: Using Maven: <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-test-keycloak-server</artifactId> <scope>test</scope> </dependency> Using Gradle: testImplementation("io.quarkus:quarkus-test-keycloak-server") Next, prepare your application.properties configuration file. You can start with an empty application.properties file because Dev Services for Keycloak registers quarkus.oidc.auth-server-url and points it to the running test container, quarkus.oidc.client-id=quarkus-app , and quarkus.oidc.credentials.secret=secret .
However, if you have already configured the required quarkus-oidc properties, then you only need to associate quarkus.oidc.auth-server-url with the prod profile for `Dev Services for Keycloak`to start a container, as shown in the following example: %prod.quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus If a custom realm file has to be imported into Keycloak before running the tests, configure Dev Services for Keycloak as follows: %prod.quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.keycloak.devservices.realm-path=quarkus-realm.json Finally, write your test, which will be executed in JVM mode, as shown in the following examples: Example of a test executed in JVM mode: package org.acme.security.openid.connect; import io.quarkus.test.junit.QuarkusTest; import io.quarkus.test.keycloak.client.KeycloakTestClient; import io.restassured.RestAssured; import org.junit.jupiter.api.Test; @QuarkusTest public class BearerTokenAuthenticationTest { KeycloakTestClient keycloakClient = new KeycloakTestClient(); @Test public void testAdminAccess() { RestAssured.given().auth().oauth2(getAccessToken("alice")) .when().get("/api/admin") .then() .statusCode(200); RestAssured.given().auth().oauth2(getAccessToken("bob")) .when().get("/api/admin") .then() .statusCode(403); } protected String getAccessToken(String userName) { return keycloakClient.getAccessToken(userName); } } Example of a test executed in native mode: package org.acme.security.openid.connect; import io.quarkus.test.junit.QuarkusIntegrationTest; @QuarkusIntegrationTest public class NativeBearerTokenAuthenticationIT extends BearerTokenAuthenticationTest { } For more information about initializing and configuring Dev Services for Keycloak, see the Dev Services for Keycloak guide. 1.1.16.2. Local public key You can use a local inlined public key for testing your quarkus-oidc service applications, as shown in the following example: quarkus.oidc.client-id=test quarkus.oidc.public-key=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAlivFI8qB4D0y2jy0CfEqFyy46R0o7S8TKpsx5xbHKoU1VWg6QkQm+ntyIv1p4kE1sPEQO73+HY8+Bzs75XwRTYL1BmR1w8J5hmjVWjc6R2BTBGAYRPFRhor3kpM6ni2SPmNNhurEAHw7TaqszP5eUF/F9+KEBWkwVta+PZ37bwqSE4sCb1soZFrVz/UT/LF4tYpuVYt3YbqToZ3pZOZ9AX2o1GCG3xwOjkc4x0W7ezbQZdC9iftPxVHR8irOijJRRjcPDtA6vPKpzLl6CyYnsIYPd99ltwxTHjr3npfv/3Lw50bAkbT4HeLFxTx4flEoZLKO/g0bAoV2uqBhkA9xnQIDAQAB smallrye.jwt.sign.key.location=/privateKey.pem To generate JWT tokens, copy privateKey.pem from the integration-tests/oidc-tenancy in the main Quarkus repository and use a test code similar to the one in the preceding WireMock section. You can use your own test keys, if preferred. This approach provides limited coverage compared to the WireMock approach. For example, the remote communication code is not covered. 1.1.16.3. 
TestSecurity annotation You can use @TestSecurity and @OidcSecurity annotations to test the service application endpoint code, which depends on either one, or all three, of the following injections: JsonWebToken UserInfo OidcConfigurationMetadata First, add the following dependency: Using Maven: <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-test-security-oidc</artifactId> <scope>test</scope> </dependency> Using Gradle: testImplementation("io.quarkus:quarkus-test-security-oidc") Write the test code as outlined in the following example: import static org.hamcrest.Matchers.is; import org.junit.jupiter.api.Test; import io.quarkus.test.common.http.TestHTTPEndpoint; import io.quarkus.test.junit.QuarkusTest; import io.quarkus.test.security.TestSecurity; import io.quarkus.test.security.oidc.Claim; import io.quarkus.test.security.oidc.ConfigMetadata; import io.quarkus.test.security.oidc.OidcSecurity; import io.quarkus.test.security.oidc.UserInfo; import io.restassured.RestAssured; @QuarkusTest @TestHTTPEndpoint(ProtectedResource.class) public class TestSecurityAuthTest { @Test @TestSecurity(user = "userOidc", roles = "viewer") public void testOidc() { RestAssured.when().get("test-security-oidc").then() .body(is("userOidc:viewer")); } @Test @TestSecurity(user = "userOidc", roles = "viewer") @OidcSecurity(claims = { @Claim(key = "email", value = "[email protected]") }, userinfo = { @UserInfo(key = "sub", value = "subject") }, config = { @ConfigMetadata(key = "issuer", value = "issuer") }) public void testOidcWithClaimsUserInfoAndMetadata() { RestAssured.when().get("test-security-oidc-claims-userinfo-metadata").then() .body(is("userOidc:viewer:[email protected]:subject:issuer")); } } The ProtectedResource class, which is used in this code example, might look like this: import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import io.quarkus.oidc.OidcConfigurationMetadata; import io.quarkus.oidc.UserInfo; import io.quarkus.security.Authenticated; import org.eclipse.microprofile.jwt.JsonWebToken; @Path("/service") @Authenticated public class ProtectedResource { @Inject JsonWebToken accessToken; @Inject UserInfo userInfo; @Inject OidcConfigurationMetadata configMetadata; @GET @Path("test-security-oidc") public String testSecurityOidc() { return accessToken.getName() + ":" + accessToken.getGroups().iterator().next(); } @GET @Path("test-security-oidc-claims-userinfo-metadata") public String testSecurityOidcWithClaimsUserInfoMetadata() { return accessToken.getName() + ":" + accessToken.getGroups().iterator().next() + ":" + accessToken.getClaim("email") + ":" + userInfo.getString("sub") + ":" + configMetadata.get("issuer"); } } You must always use the @TestSecurity annotation. Its user property is returned as JsonWebToken.getName() and its roles property is returned as JsonWebToken.getGroups() . The @OidcSecurity annotation is optional and you can use it to set the additional token claims and the UserInfo and OidcConfigurationMetadata properties. Additionally, if the quarkus.oidc.token.issuer property is configured, it is used as an OidcConfigurationMetadata issuer property value.
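The ProtectedResource endpoint above is only guarded with @Authenticated, so any authenticated test identity can reach it. If an endpoint additionally enforces a role check, the same @TestSecurity mechanism also covers the negative path. The following sketch is an assumption-based illustration, not part of the original example: it presumes a hypothetical method on the same resource annotated with @RolesAllowed("viewer") and exposed at the viewer-only sub-path.
@Test
@TestSecurity(user = "userOidc", roles = "tester") // identity that lacks the required 'viewer' role
public void testViewerEndpointIsForbiddenWithoutRole() {
    // Assumes a hypothetical endpoint guarded with @RolesAllowed("viewer") at /service/viewer-only.
    RestAssured.when().get("viewer-only").then()
        .statusCode(403);
}
Because the test class is annotated with @TestHTTPEndpoint(ProtectedResource.class), the relative path viewer-only resolves against the /service root path, and the role mismatch is expected to produce a 403 response.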
If you work with opaque tokens, you can test them as shown in the following code example: import static org.hamcrest.Matchers.is; import org.junit.jupiter.api.Test; import io.quarkus.test.common.http.TestHTTPEndpoint; import io.quarkus.test.junit.QuarkusTest; import io.quarkus.test.security.TestSecurity; import io.quarkus.test.security.oidc.OidcSecurity; import io.quarkus.test.security.oidc.TokenIntrospection; import io.restassured.RestAssured; @QuarkusTest @TestHTTPEndpoint(ProtectedResource.class) public class TestSecurityAuthTest { @Test @TestSecurity(user = "userOidc", roles = "viewer") @OidcSecurity(introspectionRequired = true, introspection = { @TokenIntrospection(key = "email", value = "[email protected]") } ) public void testOidcWithClaimsUserInfoAndMetadata() { RestAssured.when().get("test-security-oidc-opaque-token").then() .body(is("userOidc:viewer:userOidc:viewer:[email protected]")); } } The ProtectedResource class, which is used in this code example, might look like this: import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import io.quarkus.oidc.TokenIntrospection; import io.quarkus.security.Authenticated; import io.quarkus.security.identity.SecurityIdentity; @Path("/service") @Authenticated public class ProtectedResource { @Inject SecurityIdentity securityIdentity; @Inject TokenIntrospection introspection; @GET @Path("test-security-oidc-opaque-token") public String testSecurityOidcOpaqueToken() { return securityIdentity.getPrincipal().getName() + ":" + securityIdentity.getRoles().iterator().next() + ":" + introspection.getString("username") + ":" + introspection.getString("scope") + ":" + introspection.getString("email"); } } The @TestSecurity , user , and roles attributes are available as TokenIntrospection , username , and scope properties. Use io.quarkus.test.security.oidc.TokenIntrospection to add the additional introspection response properties, such as an email , and so on. Tip @TestSecurity and @OidcSecurity can be combined in a meta-annotation, as outlined in the following example: @Retention(RetentionPolicy.RUNTIME) @Target({ ElementType.METHOD }) @TestSecurity(user = "userOidc", roles = "viewer") @OidcSecurity(introspectionRequired = true, introspection = { @TokenIntrospection(key = "email", value = "[email protected]") } ) public @interface TestSecurityMetaAnnotation { } This is particularly useful if multiple test methods must use the same set of security settings. 1.1.17. Check errors in the logs To see more details about token verification errors, enable io.quarkus.oidc.runtime.OidcProvider and TRACE level logging: quarkus.log.category."io.quarkus.oidc.runtime.OidcProvider".level=TRACE quarkus.log.category."io.quarkus.oidc.runtime.OidcProvider".min-level=TRACE To see more details about OidcProvider client initialization errors, enable io.quarkus.oidc.runtime.OidcRecorder and TRACE level logging as follows: quarkus.log.category."io.quarkus.oidc.runtime.OidcRecorder".level=TRACE quarkus.log.category."io.quarkus.oidc.runtime.OidcRecorder".min-level=TRACE 1.1.18. External and internal access to OIDC providers The OIDC provider's externally accessible token endpoint and other endpoints might have different HTTP(S) URLs compared to the URLs that are auto-discovered or configured relative to the quarkus.oidc.auth-server-url internal URL. For example, suppose your SPA acquires a token from an external token endpoint address and sends it to Quarkus as a bearer token. In that case, the endpoint might report an issuer verification failure.
In such cases, if you work with Keycloak, start it with the KEYCLOAK_FRONTEND_URL system property set to the externally accessible base URL. If you work with other OIDC providers, refer to your provider's documentation. 1.1.19. Using the client-id property The quarkus.oidc.client-id property identifies the OIDC client that requested the current bearer token. The OIDC client can be an SPA application running in a browser or a Quarkus web-app confidential client application propagating the access token to the Quarkus service application. This property is required if the service application is expected to introspect the tokens remotely, which is always the case for the opaque tokens. This property is optional for local JSON Web Token (JWT) verification only. Setting the quarkus.oidc.client-id property is encouraged even if the endpoint does not require access to the remote introspection endpoint. This is because when client-id is set, it can be used to verify the token audience. It will also be included in logs when the token verification fails, enabling better traceability of tokens issued to specific clients and analysis over a longer period. For example, if your OIDC provider sets a token audience, consider the following configuration pattern: # Set client-id quarkus.oidc.client-id=quarkus-app # Token audience claim must contain 'quarkus-app' quarkus.oidc.token.audience=USD{quarkus.oidc.client-id} If you set quarkus.oidc.client-id , but your endpoint does not require remote access to one of the OIDC provider endpoints (introspection, token acquisition, and so on), do not set a client secret with quarkus.oidc.credentials or similar properties because it will not be used. Note Quarkus web-app applications always require the quarkus.oidc.client-id property. 1.2. Authentication after an HTTP request has completed Sometimes, SecurityIdentity for a given token must be created when there is no active HTTP request context. The quarkus-oidc extension provides io.quarkus.oidc.TenantIdentityProvider to convert a token to a SecurityIdentity instance. For example, one situation when you must verify the token after the HTTP request has completed is when you are processing messages with Vert.x event bus . The example below uses the 'product-order' message within different CDI request contexts. Therefore, an injected SecurityIdentity would not correctly represent the verified identity and be anonymous. package org.acme.quickstart.oidc; import static jakarta.ws.rs.core.HttpHeaders.AUTHORIZATION; import jakarta.inject.Inject; import jakarta.ws.rs.HeaderParam; import jakarta.ws.rs.POST; import jakarta.ws.rs.Path; import io.vertx.core.eventbus.EventBus; @Path("order") public class OrderResource { @Inject EventBus eventBus; @POST public void order(String product, @HeaderParam(AUTHORIZATION) String bearer) { String rawToken = bearer.substring("Bearer ".length()); 1 eventBus.publish("product-order", new Product(product, rawToken)); } public static class Product { public String product; public String customerAccessToken; public Product() { } public Product(String product, String customerAccessToken) { this.product = product; this.customerAccessToken = customerAccessToken; } } } 1 At this point, the token is not verified when proactive authentication is disabled. 
package org.acme.quickstart.oidc; import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Inject; import io.quarkus.oidc.AccessTokenCredential; import io.quarkus.oidc.Tenant; import io.quarkus.oidc.TenantIdentityProvider; import io.quarkus.security.identity.SecurityIdentity; import io.quarkus.vertx.ConsumeEvent; import io.smallrye.common.annotation.Blocking; @ApplicationScoped public class OrderService { @Tenant("tenantId") @Inject TenantIdentityProvider identityProvider; @Inject TenantIdentityProvider defaultIdentityProvider; 1 @Blocking @ConsumeEvent("product-order") void processOrder(OrderResource.Product product) { AccessTokenCredential tokenCredential = new AccessTokenCredential(product.customerAccessToken); SecurityIdentity securityIdentity = identityProvider.authenticate(tokenCredential).await().indefinitely(); 2 ... } } 1 For the default tenant, the Tenant qualifier is optional. 2 Executes token verification and converts the token to a SecurityIdentity . Note When the provider is used during an HTTP request, the tenant configuration can be resolved as described in the Using OpenID Connect Multi-Tenancy guide. However, when there is no active HTTP request, you must select the tenant explicitly with the io.quarkus.oidc.Tenant qualifier. Warning Dynamic tenant configuration resolution is currently not supported. Authentication that requires a dynamic tenant will fail. 1.3. OIDC request filters You can filter OIDC requests made by Quarkus to the OIDC provider by registering one or more OidcRequestFilter implementations, which can update or add new request headers, and log requests. For more information, see OIDC request filters . 1.4. References OIDC configuration properties Protect a service application by using OIDC Bearer token authentication Keycloak documentation OpenID Connect JSON Web Token OpenID Connect and OAuth2 client and filters reference guide Dev Services for Keycloak Sign and encrypt JWT tokens with SmallRye JWT Build Choosing between OpenID Connect, SmallRye JWT, and OAuth2 authentication mechanisms Combining authentication mechanisms Quarkus Security overview Using OpenID Connect Multi-Tenancy
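As a closing illustration of section 1.2: TenantIdentityProvider is not limited to Vert.x event-bus consumers; the same pattern applies to any code that runs outside an active HTTP request, such as a scheduled job. The sketch below is an assumption-heavy example rather than part of the Quarkus documentation: it presumes the quarkus-scheduler extension is present, that the raw bearer token was stored together with the queued work item while the original HTTP request was being handled, and that the fetchNextQueuedToken() helper is hypothetical.
package org.acme.quickstart.oidc;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;

import io.quarkus.oidc.AccessTokenCredential;
import io.quarkus.oidc.TenantIdentityProvider;
import io.quarkus.scheduler.Scheduled;
import io.quarkus.security.identity.SecurityIdentity;

@ApplicationScoped
public class QueuedOrderProcessor {

    @Inject
    TenantIdentityProvider identityProvider; // default tenant, so no @Tenant qualifier is needed

    @Scheduled(every = "10s")
    void processQueuedOrders() {
        // Hypothetical helper: returns the raw token stored with the queued work item, or null.
        String rawToken = fetchNextQueuedToken();
        if (rawToken == null) {
            return;
        }
        // Same verification path as in section 1.2: the token is verified and converted
        // to a SecurityIdentity even though there is no active HTTP request context.
        SecurityIdentity identity = identityProvider
                .authenticate(new AccessTokenCredential(rawToken))
                .await().indefinitely();
        // ... authorize and process the queued work with the verified identity
    }

    String fetchNextQueuedToken() {
        return null; // placeholder for whatever storage the application uses
    }
}
As with the event-bus example, the tenant must be selectable without an HTTP request; dynamic tenant configuration resolution is not supported in this situation.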
[ "package org.acme.security.openid.connect; import org.eclipse.microprofile.jwt.JsonWebToken; import jakarta.inject.Inject; import jakarta.annotation.security.RolesAllowed; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; @Path(\"/api/admin\") public class AdminResource { @Inject JsonWebToken jwt; @GET @RolesAllowed(\"admin\") @Produces(MediaType.TEXT_PLAIN) public String admin() { return \"Access for subject \" + jwt.getSubject() + \" is granted\"; } }", "{ \"iss\": \"https://server.example.com\", \"sub\": \"24400320\", \"upn\": \"[email protected]\", \"preferred_username\": \"jdoe\", \"exp\": 1311281970, \"iat\": 1311280970, \"groups\": { \"roles\": [ \"microprofile_jwt_user\" ], } }", "import java.util.List; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import org.eclipse.microprofile.jwt.Claims; import org.eclipse.microprofile.jwt.JsonWebToken; import io.quarkus.security.PermissionsAllowed; @Path(\"/service\") public class ProtectedResource { @Inject JsonWebToken accessToken; @PermissionsAllowed(\"email\") 1 @GET @Path(\"/email\") public Boolean isUserEmailAddressVerifiedByUser() { return accessToken.getClaim(Claims.email_verified.name()); } @PermissionsAllowed(\"orders_read\") 2 @GET @Path(\"/order\") public List<Order> listOrders() { return List.of(new Order(\"1\")); } public static class Order { String id; public Order() { } public Order(String id) { this.id = id; } public String getId() { return id; } public void setId() { this.id = id; } } }", "quarkus.oidc.token.allow-jwt-introspection=false quarkus.oidc.token.allow-opaque-token-introspection=false", "quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.discovery-enabled=false Token Introspection endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/tokens/introspect quarkus.oidc.introspection-path=/protocol/openid-connect/tokens/introspect", "quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.token.require-jwt-introspection-only=true", "@ApplicationScoped @Alternative @Priority(1) public class CustomIntrospectionUserInfoCache implements TokenIntrospectionCache, UserInfoCache { }", "'max-size' is 0 by default, so the cache can be activated by setting 'max-size' to a positive value: quarkus.oidc.token-cache.max-size=1000 'time-to-live' specifies how long a cache entry can be valid for and will be used by a cleanup timer: quarkus.oidc.token-cache.time-to-live=3M 'clean-up-timer-interval' is not set by default, so the cleanup timer can be activated by setting 'clean-up-timer-interval': quarkus.oidc.token-cache.clean-up-timer-interval=1M", "import jakarta.inject.Inject; import jakarta.ws.rs.container.ContainerRequestContext; import jakarta.ws.rs.container.ContainerRequestFilter; import jakarta.ws.rs.core.Response; import jakarta.ws.rs.ext.Provider; import org.eclipse.microprofile.jwt.JsonWebToken; import io.quarkus.oidc.OidcConfigurationMetadata; import io.quarkus.security.identity.SecurityIdentity; @Provider public class IssuerValidator implements ContainerRequestFilter { @Inject OidcConfigurationMetadata configMetadata; @Inject JsonWebToken jwt; @Inject SecurityIdentity identity; public void filter(ContainerRequestContext requestContext) { String issuer = configMetadata.getIssuer().replace(\"{tenant-id}\", identity.getAttribute(\"tenant-id\")); if (!issuer.equals(jwt.getIssuer())) { requestContext.abortWith(Response.status(401).build()); } } }", "package 
org.acme.security.openid.connect; import static org.eclipse.microprofile.jwt.Claims.iss; import io.quarkus.arc.Unremovable; import jakarta.enterprise.context.ApplicationScoped; import org.jose4j.jwt.MalformedClaimException; import org.jose4j.jwt.consumer.JwtContext; import org.jose4j.jwt.consumer.Validator; @Unremovable @ApplicationScoped public class IssuerValidator implements Validator { 1 @Override public String validate(JwtContext jwtContext) throws MalformedClaimException { if (jwtContext.getJwtClaims().hasClaim(iss.name()) && \"my-issuer\".equals(jwtContext.getJwtClaims().getClaimValueAsString(iss.name()))) { return \"wrong issuer\"; 2 } return null; 3 } }", "quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.discovery-enabled=false Token endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/token quarkus.oidc.token-path=/protocol/openid-connect/token JWK set endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/certs quarkus.oidc.jwks-path=/protocol/openid-connect/certs UserInfo endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/userinfo quarkus.oidc.user-info-path=/protocol/openid-connect/userinfo Token Introspection endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/tokens/introspect quarkus.oidc.introspection-path=/protocol/openid-connect/tokens/introspect", "quarkus.oidc.certificate-chain.trust-store-file=truststore-rootcert.p12 1 quarkus.oidc.certificate-chain.trust-store-password=storepassword quarkus.oidc.certificate-chain.leaf-certificate-name=www.quarkusio.com 2", "package io.quarkus.it.keycloak; import java.security.cert.CertificateException; import java.security.cert.X509Certificate; import java.util.List; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.arc.Unremovable; import io.quarkus.oidc.OidcTenantConfig; import io.quarkus.oidc.TokenCertificateValidator; import io.quarkus.oidc.runtime.TrustStoreUtils; import io.vertx.core.json.JsonObject; @ApplicationScoped @Unremovable public class BearerGlobalTokenChainValidator implements TokenCertificateValidator { @Override public void validate(OidcTenantConfig oidcConfig, List<X509Certificate> chain, String tokenClaims) throws CertificateException { String rootCertificateThumbprint = TrustStoreUtils.calculateThumprint(chain.get(chain.size() - 1)); JsonObject claims = new JsonObject(tokenClaims); if (!rootCertificateThumbprint.equals(claims.getString(\"root-certificate-thumbprint\"))) { 1 throw new CertificateException(\"Invalid root certificate\"); } } }", "<dependency> <groupId>io.rest-assured</groupId> <artifactId>rest-assured</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-junit5</artifactId> <scope>test</scope> </dependency>", "testImplementation(\"io.rest-assured:rest-assured\") testImplementation(\"io.quarkus:quarkus-junit5\")", "<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-test-oidc-server</artifactId> <scope>test</scope> </dependency>", "testImplementation(\"io.quarkus:quarkus-test-oidc-server\")", "keycloak.url is set by OidcWiremockTestResource quarkus.oidc.auth-server-url=USD{keycloak.url:replaced-by-test-resource}/realms/quarkus/ quarkus.oidc.client-id=quarkus-service-app quarkus.oidc.application-type=service", "import static org.hamcrest.Matchers.equalTo; import java.util.Set; import org.junit.jupiter.api.Test; import io.quarkus.test.common.QuarkusTestResource; import io.quarkus.test.junit.QuarkusTest; import 
io.quarkus.test.oidc.server.OidcWiremockTestResource; import io.restassured.RestAssured; import io.smallrye.jwt.build.Jwt; @QuarkusTest @QuarkusTestResource(OidcWiremockTestResource.class) public class BearerTokenAuthorizationTest { @Test public void testBearerToken() { RestAssured.given().auth().oauth2(getAccessToken(\"alice\", Set.of(\"user\"))) .when().get(\"/api/users/me\") .then() .statusCode(200) // The test endpoint returns the name extracted from the injected `SecurityIdentity` principal. .body(\"userName\", equalTo(\"alice\")); } private String getAccessToken(String userName, Set<String> groups) { return Jwt.preferredUserName(userName) .groups(groups) .issuer(\"https://server.example.com\") .audience(\"https://service.example.com\") .sign(); } }", "package io.quarkus.it.keycloak; import static com.github.tomakehurst.wiremock.client.WireMock.matching; import static org.hamcrest.Matchers.equalTo; import org.junit.jupiter.api.Test; import com.github.tomakehurst.wiremock.WireMockServer; import com.github.tomakehurst.wiremock.client.WireMock; import io.quarkus.test.junit.QuarkusTest; import io.quarkus.test.oidc.server.OidcWireMock; import io.restassured.RestAssured; @QuarkusTest public class CustomOidcWireMockStubTest { @OidcWireMock WireMockServer wireMockServer; @Test public void testInvalidBearerToken() { wireMockServer.stubFor(WireMock.post(\"/auth/realms/quarkus/protocol/openid-connect/token/introspect\") .withRequestBody(matching(\".*token=invalid_token.*\")) .willReturn(WireMock.aResponse().withStatus(400))); RestAssured.given().auth().oauth2(\"invalid_token\").when() .get(\"/api/users/me/bearer\") .then() .statusCode(401) .header(\"WWW-Authenticate\", equalTo(\"Bearer\")); } }", "%test.quarkus.oidc.auth-server-url=https://dev-123456.eu.auth0.com/ %test.quarkus.oidc.client-id=test-auth0-client %test.quarkus.oidc.credentials.secret=secret", "package org.acme; import org.junit.jupiter.api.AfterAll; import static io.restassured.RestAssured.given; import static org.hamcrest.CoreMatchers.is; import java.util.Map; import org.junit.jupiter.api.Test; import io.quarkus.test.junit.QuarkusTest; import io.quarkus.test.oidc.client.OidcTestClient; @QuarkusTest public class GreetingResourceTest { static OidcTestClient oidcTestClient = new OidcTestClient(); @AfterAll public static void close() { oidcTestClient.close(); } @Test public void testHelloEndpoint() { given() .auth().oauth2(getAccessToken(\"alice\", \"alice\")) .when().get(\"/hello\") .then() .statusCode(200) .body(is(\"Hello, Alice\")); } private String getAccessToken(String name, String secret) { return oidcTestClient.getAccessToken(name, secret, Map.of(\"audience\", \"https://dev-123456.eu.auth0.com/api/v2/\", \"scope\", \"profile\")); } }", "<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-test-keycloak-server</artifactId> <scope>test</scope> </dependency>", "testImplementation(\"io.quarkus:quarkus-test-keycloak-server\")", "%prod.quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus", "%prod.quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.keycloak.devservices.realm-path=quarkus-realm.json", "package org.acme.security.openid.connect; import io.quarkus.test.junit.QuarkusTest; import io.quarkus.test.keycloak.client.KeycloakTestClient; import io.restassured.RestAssured; import org.junit.jupiter.api.Test; @QuarkusTest public class BearerTokenAuthenticationTest { KeycloakTestClient keycloakClient = new KeycloakTestClient(); @Test public void testAdminAccess() { 
RestAssured.given().auth().oauth2(getAccessToken(\"alice\")) .when().get(\"/api/admin\") .then() .statusCode(200); RestAssured.given().auth().oauth2(getAccessToken(\"bob\")) .when().get(\"/api/admin\") .then() .statusCode(403); } protected String getAccessToken(String userName) { return keycloakClient.getAccessToken(userName); } }", "package org.acme.security.openid.connect; import io.quarkus.test.junit.QuarkusIntegrationTest; @QuarkusIntegrationTest public class NativeBearerTokenAuthenticationIT extends BearerTokenAuthenticationTest { }", "quarkus.oidc.client-id=test quarkus.oidc.public-key=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAlivFI8qB4D0y2jy0CfEqFyy46R0o7S8TKpsx5xbHKoU1VWg6QkQm+ntyIv1p4kE1sPEQO73+HY8+Bzs75XwRTYL1BmR1w8J5hmjVWjc6R2BTBGAYRPFRhor3kpM6ni2SPmNNhurEAHw7TaqszP5eUF/F9+KEBWkwVta+PZ37bwqSE4sCb1soZFrVz/UT/LF4tYpuVYt3YbqToZ3pZOZ9AX2o1GCG3xwOjkc4x0W7ezbQZdC9iftPxVHR8irOijJRRjcPDtA6vPKpzLl6CyYnsIYPd99ltwxTHjr3npfv/3Lw50bAkbT4HeLFxTx4flEoZLKO/g0bAoV2uqBhkA9xnQIDAQAB smallrye.jwt.sign.key.location=/privateKey.pem", "<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-test-security-oidc</artifactId> <scope>test</scope> </dependency>", "testImplementation(\"io.quarkus:quarkus-test-security-oidc\")", "import static org.hamcrest.Matchers.is; import org.junit.jupiter.api.Test; import io.quarkus.test.common.http.TestHTTPEndpoint; import io.quarkus.test.junit.QuarkusTest; import io.quarkus.test.security.TestSecurity; import io.quarkus.test.security.oidc.Claim; import io.quarkus.test.security.oidc.ConfigMetadata; import io.quarkus.test.security.oidc.OidcSecurity; import io.quarkus.test.security.oidc.UserInfo; import io.restassured.RestAssured; @QuarkusTest @TestHTTPEndpoint(ProtectedResource.class) public class TestSecurityAuthTest { @Test @TestSecurity(user = \"userOidc\", roles = \"viewer\") public void testOidc() { RestAssured.when().get(\"test-security-oidc\").then() .body(is(\"userOidc:viewer\")); } @Test @TestSecurity(user = \"userOidc\", roles = \"viewer\") @OidcSecurity(claims = { @Claim(key = \"email\", value = \"[email protected]\") }, userinfo = { @UserInfo(key = \"sub\", value = \"subject\") }, config = { @ConfigMetadata(key = \"issuer\", value = \"issuer\") }) public void testOidcWithClaimsUserInfoAndMetadata() { RestAssured.when().get(\"test-security-oidc-claims-userinfo-metadata\").then() .body(is(\"userOidc:viewer:[email protected]:subject:issuer\")); } }", "import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import io.quarkus.oidc.OidcConfigurationMetadata; import io.quarkus.oidc.UserInfo; import io.quarkus.security.Authenticated; import org.eclipse.microprofile.jwt.JsonWebToken; @Path(\"/service\") @Authenticated public class ProtectedResource { @Inject JsonWebToken accessToken; @Inject UserInfo userInfo; @Inject OidcConfigurationMetadata configMetadata; @GET @Path(\"test-security-oidc\") public String testSecurityOidc() { return accessToken.getName() + \":\" + accessToken.getGroups().iterator().next(); } @GET @Path(\"test-security-oidc-claims-userinfo-metadata\") public String testSecurityOidcWithClaimsUserInfoMetadata() { return accessToken.getName() + \":\" + accessToken.getGroups().iterator().next() + \":\" + accessToken.getClaim(\"email\") + \":\" + userInfo.getString(\"sub\") + \":\" + configMetadata.get(\"issuer\"); } }", "import static org.hamcrest.Matchers.is; import org.junit.jupiter.api.Test; import io.quarkus.test.common.http.TestHTTPEndpoint; import io.quarkus.test.junit.QuarkusTest; import 
io.quarkus.test.security.TestSecurity; import io.quarkus.test.security.oidc.OidcSecurity; import io.quarkus.test.security.oidc.TokenIntrospection; import io.restassured.RestAssured; @QuarkusTest @TestHTTPEndpoint(ProtectedResource.class) public class TestSecurityAuthTest { @Test @TestSecurity(user = \"userOidc\", roles = \"viewer\") @OidcSecurity(introspectionRequired = true, introspection = { @TokenIntrospection(key = \"email\", value = \"[email protected]\") } ) public void testOidcWithClaimsUserInfoAndMetadata() { RestAssured.when().get(\"test-security-oidc-opaque-token\").then() .body(is(\"userOidc:viewer:userOidc:viewer:[email protected]\")); } }", "import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import io.quarkus.oidc.TokenIntrospection; import io.quarkus.security.Authenticated; import io.quarkus.security.identity.SecurityIdentity; @Path(\"/service\") @Authenticated public class ProtectedResource { @Inject SecurityIdentity securityIdentity; @Inject TokenIntrospection introspection; @GET @Path(\"test-security-oidc-opaque-token\") public String testSecurityOidcOpaqueToken() { return securityIdentity.getPrincipal().getName() + \":\" + securityIdentity.getRoles().iterator().next() + \":\" + introspection.getString(\"username\") + \":\" + introspection.getString(\"scope\") + \":\" + introspection.getString(\"email\"); } }", "@Retention(RetentionPolicy.RUNTIME) @Target({ ElementType.METHOD }) @TestSecurity(user = \"userOidc\", roles = \"viewer\") @OidcSecurity(introspectionRequired = true, introspection = { @TokenIntrospection(key = \"email\", value = \"[email protected]\") } ) public @interface TestSecurityMetaAnnotation { }", "quarkus.log.category.\"io.quarkus.oidc.runtime.OidcProvider\".level=TRACE quarkus.log.category.\"io.quarkus.oidc.runtime.OidcProvider\".min-level=TRACE", "quarkus.log.category.\"io.quarkus.oidc.runtime.OidcRecorder\".level=TRACE quarkus.log.category.\"io.quarkus.oidc.runtime.OidcRecorder\".min-level=TRACE", "Set client-id quarkus.oidc.client-id=quarkus-app Token audience claim must contain 'quarkus-app' quarkus.oidc.token.audience=USD{quarkus.oidc.client-id}", "package org.acme.quickstart.oidc; import static jakarta.ws.rs.core.HttpHeaders.AUTHORIZATION; import jakarta.inject.Inject; import jakarta.ws.rs.HeaderParam; import jakarta.ws.rs.POST; import jakarta.ws.rs.Path; import io.vertx.core.eventbus.EventBus; @Path(\"order\") public class OrderResource { @Inject EventBus eventBus; @POST public void order(String product, @HeaderParam(AUTHORIZATION) String bearer) { String rawToken = bearer.substring(\"Bearer \".length()); 1 eventBus.publish(\"product-order\", new Product(product, rawToken)); } public static class Product { public String product; public String customerAccessToken; public Product() { } public Product(String product, String customerAccessToken) { this.product = product; this.customerAccessToken = customerAccessToken; } } }", "package org.acme.quickstart.oidc; import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Inject; import io.quarkus.oidc.AccessTokenCredential; import io.quarkus.oidc.Tenant; import io.quarkus.oidc.TenantIdentityProvider; import io.quarkus.security.identity.SecurityIdentity; import io.quarkus.vertx.ConsumeEvent; import io.smallrye.common.annotation.Blocking; @ApplicationScoped public class OrderService { @Tenant(\"tenantId\") @Inject TenantIdentityProvider identityProvider; @Inject TenantIdentityProvider defaultIdentityProvider; 1 @Blocking @ConsumeEvent(\"product-order\") void 
processOrder(OrderResource.Product product) { AccessTokenCredential tokenCredential = new AccessTokenCredential(product.customerAccessToken); SecurityIdentity securityIdentity = identityProvider.authenticate(tokenCredential).await().indefinitely(); 2 } }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/openid_connect_oidc_authentication/security-oidc-bearer-token-authentication
Preface
Preface Thank you for your interest in Red Hat Ansible Automation Platform automation controller. Automation controller helps teams manage complex multitiered deployments by adding control, knowledge, and delegation to Ansible-powered environments. The Automation controller User Guide describes all of the functionality available in automation controller. It assumes moderate familiarity with Ansible, including concepts such as playbooks, variables, and tags. For more information about these and other Ansible concepts, see the Ansible documentation .
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_user_guide/pr01
Chapter 5. Visualizing power monitoring metrics
Chapter 5. Visualizing power monitoring metrics Important Power monitoring is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can visualize power monitoring metrics in the OpenShift Container Platform web console by accessing power monitoring dashboards or by exploring Metrics under the Observe tab. 5.1. Power monitoring dashboards overview There are two types of power monitoring dashboards. Both provide different levels of details around power consumption metrics for a single cluster: Power Monitoring / Overview dashboard With this dashboard, you can observe the following information: An aggregated view of CPU architecture and its power source ( rapl-sysfs , rapl-msr , or estimator ) along with total nodes with this configuration Total energy consumption by a cluster in the last 24 hours (measured in kilowatt-hour) The amount of power consumed by the top 10 namespaces in a cluster in the last 24 hours Detailed node information, such as its CPU architecture and component power source These features allow you to effectively monitor the energy consumption of the cluster without needing to investigate each namespace separately. Warning Ensure that the Components Source column does not display estimator as the power source. Figure 5.1. The Detailed Node Information table with rapl-sysfs as the component power source If Kepler is unable to obtain hardware power consumption metrics, the Components Source column displays estimator as the power source, which is not supported in Technology Preview. If that happens, then the values from the nodes are not accurate. Power Monitoring / Namespace dashboard This dashboard allows you to view metrics by namespace and pod. You can observe the following information: The power consumption metrics, such as consumption in DRAM and PKG The energy consumption metrics in the last hour, such as consumption in DRAM and PKG for core and uncore components This feature allows you to investigate key peaks and easily identify the primary root causes of high consumption. 5.2. Accessing power monitoring dashboards as a cluster administrator You can access power monitoring dashboards from the Administrator perspective of the OpenShift Container Platform web console. Prerequisites You have access to the OpenShift Container Platform web console. You are logged in as a user with the cluster-admin role. You have installed the Power monitoring Operator. You have deployed Kepler in your cluster. You have enabled monitoring for user-defined projects. Procedure In the Administrator perspective of the web console, go to Observe Dashboards . From the Dashboard drop-down list, select the power monitoring dashboard you want to see: Power Monitoring / Overview Power Monitoring / Namespace 5.3. Accessing power monitoring dashboards as a developer You can access power monitoring dashboards from the Developer perspective of the OpenShift Container Platform web console. Prerequisites You have access to the OpenShift Container Platform web console. You have access to the cluster as a developer or as a user. 
You have installed the Power monitoring Operator. You have deployed Kepler in your cluster. You have enabled monitoring for user-defined projects. You have view permissions for the namespace openshift-power-monitoring , the namespace where Kepler is deployed to. Procedure In the Developer perspective of the web console, go to Observe Dashboard . From the Dashboard drop-down list, select the power monitoring dashboard you want to see: Power Monitoring / Overview 5.4. Power monitoring metrics overview The Power monitoring Operator exposes the following metrics, which you can view by using the OpenShift Container Platform web console under the Observe Metrics tab. Warning This list of exposed metrics is not definitive. Metrics might be added or removed in future releases. Table 5.1. Power monitoring Operator metrics Metric name Description kepler_container_joules_total The aggregated package or socket energy consumption of CPU, DRAM, and other host components by a container. kepler_container_core_joules_total The total energy consumption across CPU cores used by a container. If the system has access to RAPL_ metrics, this metric reflects the proportional container energy consumption of the RAPL Power Plan 0 (PP0), which is the energy consumed by all CPU cores in the socket. kepler_container_dram_joules_total The total energy consumption of DRAM by a container. kepler_container_uncore_joules_total The cumulative energy consumption by uncore components used by a container. The number of components might vary depending on the system. The uncore metric is processor model-specific and might not be available on some server CPUs. kepler_container_package_joules_total The cumulative energy consumed by the CPU socket used by a container. It includes all core and uncore components. kepler_container_other_joules_total The cumulative energy consumption of host components, excluding CPU and DRAM, used by a container. Generally, this metric is the energy consumption of ACPI hosts. kepler_container_bpf_cpu_time_us_total The total CPU time used by the container that utilizes the BPF tracing. kepler_container_cpu_cycles_total The total CPU cycles used by the container that utilizes hardware counters. CPU cycles is a metric directly related to CPU frequency. On systems where processors run at a fixed frequency, CPU cycles and total CPU time are roughly equivalent. On systems where processors run at varying frequencies, CPU cycles and total CPU time have different values. kepler_container_cpu_instructions_total The total CPU instructions used by the container that utilizes hardware counters. CPU instructions is a metric that accounts how the CPU is used. kepler_container_cache_miss_total The total cache miss that occurs for a container that uses hardware counters. kepler_container_cgroupfs_cpu_usage_us_total The total CPU time used by a container reading from control group statistics. kepler_container_cgroupfs_memory_usage_bytes_total The total memory in bytes used by a container reading from control group statistics. kepler_container_cgroupfs_system_cpu_usage_us_total The total CPU time in kernel space used by the container reading from control group statistics. kepler_container_cgroupfs_user_cpu_usage_us_total The total CPU time in user space used by a container reading from control group statistics. kepler_container_bpf_net_tx_irq_total The total number of packets transmitted to network cards of a container that uses the BPF tracing. 
kepler_container_bpf_net_rx_irq_total The total number of packets received from network cards of a container that uses the BPF tracing. kepler_container_bpf_block_irq_total The total number of block I/O calls of a container that uses the BPF tracing. kepler_node_info The node metadata, such as the node CPU architecture. kepler_node_core_joules_total The total energy consumption across CPU cores used by all containers running on a node and operating system. kepler_node_uncore_joules_total The cumulative energy consumption by uncore components used by all containers running on the node and operating system. The number of components might vary depending on the system. kepler_node_dram_joules_total The total energy consumption of DRAM by all containers running on the node and operating system. kepler_node_package_joules_total The cumulative energy consumed by the CPU socket used by all containers running on the node and operating system. It includes all core and uncore components. kepler_node_other_host_components_joules_total The cumulative energy consumption of host components, excluding CPU and DRAM, used by all containers running on the node and operating system. Generally, this metric is the energy consumption of ACPI hosts. kepler_node_platform_joules_total The total energy consumption of the host. Generally, this metric is the host energy consumption from Redfish BMC or ACPI. kepler_node_energy_stat Multiple metrics from nodes labeled with container resource utilization control group metrics that are used in the model server. kepler_node_accelerator_intel_qat The utilization of the accelerator Intel QAT on a certain node. If the system contains Intel QATs, Kepler can calculate the utilization of the node's QATs through telemetry. 5.5. Additional resources Enabling monitoring for user-defined projects
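As a quick illustration of how these metrics can be queried once Kepler is running, the following sketch shows a PromQL expression that could be entered under the Observe Metrics tab to approximate the "top 10 namespaces by energy consumption in the last 24 hours" view from the overview dashboard. The grouping label shown here (container_namespace) is an assumption and can differ between Kepler versions, so verify it against the labels exposed for kepler_container_joules_total in your cluster:
# Hypothetical query; confirm the namespace label name on your cluster first
topk(10, sum by (container_namespace) (increase(kepler_container_joules_total[24h])))
Before querying, you can also confirm that the Kepler exporter pods are running, for example with oc get pods -n openshift-power-monitoring (the namespace named in the prerequisites above).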
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/power_monitoring/visualizing-power-monitoring-metrics
Chapter 1. About the Migration Toolkit for Containers
Chapter 1. About the Migration Toolkit for Containers The Migration Toolkit for Containers (MTC) enables you to migrate stateful application workloads between OpenShift Container Platform 4 clusters at the granularity of a namespace. Note If you are migrating from OpenShift Container Platform 3, see About migrating from OpenShift Container Platform 3 to 4 and Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3 . You can migrate applications within the same cluster or between clusters by using state migration. MTC provides a web console and an API, based on Kubernetes custom resources, to help you control the migration and minimize application downtime. The MTC console is installed on the target cluster by default. You can configure the Migration Toolkit for Containers Operator to install the console on a remote cluster . See Advanced migration options for information about the following topics: Automating your migration with migration hooks and the MTC API. Configuring your migration plan to exclude resources, support large-scale migrations, and enable automatic PV resizing for direct volume migration. 1.1. Terminology Table 1.1. MTC terminology Term Definition Source cluster Cluster from which the applications are migrated. Destination cluster [1] Cluster to which the applications are migrated. Replication repository Object storage used for copying images, volumes, and Kubernetes objects during indirect migration or for Kubernetes objects during direct volume migration or direct image migration. The replication repository must be accessible to all clusters. Host cluster Cluster on which the migration-controller pod and the web console are running. The host cluster is usually the destination cluster but this is not required. The host cluster does not require an exposed registry route for direct image migration. Remote cluster A remote cluster is usually the source cluster but this is not required. A remote cluster requires a Secret custom resource that contains the migration-controller service account token. A remote cluster requires an exposed secure registry route for direct image migration. Indirect migration Images, volumes, and Kubernetes objects are copied from the source cluster to the replication repository and then from the replication repository to the destination cluster. Direct volume migration Persistent volumes are copied directly from the source cluster to the destination cluster. Direct image migration Images are copied directly from the source cluster to the destination cluster. Stage migration Data is copied to the destination cluster without stopping the application. Running a stage migration multiple times reduces the duration of the cutover migration. Cutover migration The application is stopped on the source cluster and its resources are migrated to the destination cluster. State migration Application state is migrated by copying specific persistent volume claims to the destination cluster. Rollback migration Rollback migration rolls back a completed migration. 1 Called the target cluster in the MTC web console. 1.2. MTC workflow You can migrate Kubernetes resources, persistent volume data, and internal container images to OpenShift Container Platform 4.9 by using the Migration Toolkit for Containers (MTC) web console or the Kubernetes API. MTC migrates the following resources: A namespace specified in a migration plan.
Namespace-scoped resources: When the MTC migrates a namespace, it migrates all the objects and resources associated with that namespace, such as services or pods. Additionally, if a resource that exists in the namespace but not at the cluster level depends on a resource that exists at the cluster level, the MTC migrates both resources. For example, a security context constraint (SCC) is a resource that exists at the cluster level and a service account (SA) is a resource that exists at the namespace level. If an SA exists in a namespace that the MTC migrates, the MTC automatically locates any SCCs that are linked to the SA and also migrates those SCCs. Similarly, the MTC migrates persistent volumes that are linked to the persistent volume claims of the namespace. Note Cluster-scoped resources might have to be migrated manually, depending on the resource. Custom resources (CRs) and custom resource definitions (CRDs): MTC automatically migrates CRs and CRDs at the namespace level. Migrating an application with the MTC web console involves the following steps: Install the Migration Toolkit for Containers Operator on all clusters. You can install the Migration Toolkit for Containers Operator in a restricted environment with limited or no internet access. The source and target clusters must have network access to each other and to a mirror registry. Configure the replication repository, an intermediate object storage that MTC uses to migrate data. The source and target clusters must have network access to the replication repository during migration. If you are using a proxy server, you must configure it to allow network traffic between the replication repository and the clusters. Add the source cluster to the MTC web console. Add the replication repository to the MTC web console. Create a migration plan, with one of the following data migration options: Copy : MTC copies the data from the source cluster to the replication repository, and from the replication repository to the target cluster. Note If you are using direct image migration or direct volume migration, the images or volumes are copied directly from the source cluster to the target cluster. Move : MTC unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters. Note Although the replication repository does not appear in this diagram, it is required for migration. Run the migration plan, with one of the following options: Stage copies data to the target cluster without stopping the application. A stage migration can be run multiple times so that most of the data is copied to the target before migration. Running one or more stage migrations reduces the duration of the cutover migration. Cutover stops the application on the source cluster and moves the resources to the target cluster. Optional: You can clear the Halt transactions on the source cluster during migration checkbox. 1.3. About data copy methods The Migration Toolkit for Containers (MTC) supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider. 1.3.1. 
File system copy method MTC copies data files from the source cluster to the replication repository, and from there to the target cluster. The file system copy method uses Restic for indirect migration or Rsync for direct volume migration. Table 1.2. File system copy method summary Benefits Limitations Clusters can have different storage classes. Supported for all S3 storage providers. Optional data verification with checksum. Supports direct volume migration, which significantly increases performance. Slower than the snapshot copy method. Optional data verification significantly reduces performance. Note The Restic and Rsync PV migration assumes that the PVs supported are only volumeMode=filesystem . Using volumeMode=Block for file system migration is not supported. 1.3.2. Snapshot copy method MTC copies a snapshot of the source cluster data to the replication repository of a cloud provider. The data is restored on the target cluster. The snapshot copy method can be used with Amazon Web Services, Google Cloud Provider, and Microsoft Azure. Table 1.3. Snapshot copy method summary Benefits Limitations Faster than the file system copy method. Cloud provider must support snapshots. Clusters must be on the same cloud provider. Clusters must be in the same location or region. Clusters must have the same storage class. Storage class must be compatible with snapshots. Does not support direct volume migration. 1.4. Direct volume migration and direct image migration You can use direct image migration (DIM) and direct volume migration (DVM) to migrate images and data directly from the source cluster to the target cluster. If you run DVM with nodes that are in different availability zones, the migration might fail because the migrated pods cannot access the persistent volume claim. DIM and DVM have significant performance benefits because the intermediate steps of backing up files from the source cluster to the replication repository and restoring files from the replication repository to the target cluster are skipped. The data is transferred with Rsync . DIM and DVM have additional prerequisites.
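Because MTC is driven by Kubernetes custom resources, migration progress can also be followed from the command line on the host cluster. The following is only a sketch: it assumes the Operator is installed in the default openshift-migration namespace and that the resource names (MigPlan, MigMigration) match your MTC version, so adjust as needed:
# List migration plans and migrations on the host cluster (namespace is an assumption)
oc get migplan,migmigration -n openshift-migration
# Inspect the status conditions of a specific plan (the plan name is a placeholder)
oc describe migplan <my-migration-plan> -n openshift-migration
These commands only read status; the migration itself is still created and run from the MTC web console or API as described above.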
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/migration_toolkit_for_containers/about-mtc
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/using_jboss_eap_xp_4.0.0/making-open-source-more-inclusive
7.100. kdelibs3
7.100. kdelibs3 7.100.1. RHBA-2012:1244 - kdelibs3 bug fix update Updated kdelibs3 packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The kdelibs3 packages provide libraries for the K Desktop Environment (KDE). Bug Fixes BZ# 681901 Prior to this update, the kdelibs3 libraries caused a conflict for the subversion version control tool. As a consequence, subversion was not correctly built if the kdelibs3 libraries were installed. This update modifies the underlying code to avoid this conflict. Now, subversion builds as expected with kdelibs3. BZ# 734447 kdelibs3 provided its own set of trusted Certificate Authority (CA) certificates. This update makes kdelibs3 use the system set from the ca-certificates package, instead of its own copy. All users of kdelibs3 are advised to upgrade to these updated packages, which fix these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/kdelibs3
4.9. Mounting File Systems
4.9. Mounting File Systems By default, when a file system that supports extended attributes is mounted, the security context for each file is obtained from the security.selinux extended attribute of the file. Files in file systems that do not support extended attributes are assigned a single, default security context from the policy configuration, based on file system type. Use the mount -o context command to override existing extended attributes, or to specify a different, default context for file systems that do not support extended attributes. This is useful if you do not trust a file system to supply the correct attributes, for example, removable media used in multiple systems. The mount -o context command can also be used to support labeling for file systems that do not support extended attributes, such as File Allocation Table (FAT) or NFS volumes. The context specified with the context option is not written to disk: the original contexts are preserved, and are seen when mounting without context if the file system had extended attributes in the first place. For further information about file system labeling, see James Morris's "Filesystem Labeling in SELinux" article: http://www.linuxjournal.com/article/7426 . 4.9.1. Context Mounts To mount a file system with the specified context, overriding existing contexts if they exist, or to specify a different, default context for a file system that does not support extended attributes, as the root user, use the mount -o context= SELinux_user:role:type:level command when mounting the required file system. Context changes are not written to disk. By default, NFS mounts on the client side are labeled with a default context defined by policy for NFS volumes. In common policies, this default context uses the nfs_t type. Without additional mount options, this may prevent sharing NFS volumes using other services, such as the Apache HTTP Server. The following example mounts an NFS volume so that it can be shared using the Apache HTTP Server: Newly-created files and directories on this file system appear to have the SELinux context specified with -o context . However, since these changes are not written to disk, the context specified with this option does not persist between mounts. Therefore, this option must be used with the same context specified during every mount to retain the required context. For information about making context mount persistent, see Section 4.9.5, "Making Context Mounts Persistent" . Type Enforcement is the main permission control used in SELinux targeted policy. For the most part, SELinux users and roles can be ignored, so, when overriding the SELinux context with -o context , use the SELinux system_u user and object_r role, and concentrate on the type. If you are not using the MLS policy or multi-category security, use the s0 level. Note When a file system is mounted with a context option, context changes by users and processes are prohibited. For example, running the chcon command on a file system mounted with a context option results in a Operation not supported error. 4.9.2. Changing the Default Context As mentioned in Section 4.8, "The file_t and default_t Types" , on file systems that support extended attributes, when a file that lacks an SELinux context on disk is accessed, it is treated as if it had a default context as defined by SELinux policy. In common policies, this default context uses the file_t type. If it is desirable to use a different default context, mount the file system with the defcontext option. 
The following example mounts a newly-created file system on /dev/sda2 to the newly-created test/ directory. It assumes that there are no rules in /etc/selinux/targeted/contexts/files/ that define a context for the test/ directory: In this example: the defcontext option defines that system_u:object_r:samba_share_t:s0 is "the default security context for unlabeled files" [5] . when mounted, the root directory ( test/ ) of the file system is treated as if it is labeled with the context specified by defcontext (this label is not stored on disk). This affects the labeling for files created under test/ : new files inherit the samba_share_t type, and these labels are stored on disk. files created under test/ while the file system was mounted with a defcontext option retain their labels. 4.9.3. Mounting an NFS Volume By default, NFS mounts on the client side are labeled with a default context defined by policy for NFS volumes. In common policies, this default context uses the nfs_t type. Depending on policy configuration, services, such as Apache HTTP Server and MariaDB, may not be able to read files labeled with the nfs_t type. This may prevent file systems labeled with this type from being mounted and then read or exported by other services. If you would like to mount an NFS volume and read or export that file system with another service, use the context option when mounting to override the nfs_t type. Use the following context option to mount NFS volumes so that they can be shared using the Apache HTTP Server: Since these changes are not written to disk, the context specified with this option does not persist between mounts. Therefore, this option must be used with the same context specified during every mount to retain the required context. For information about making context mount persistent, see Section 4.9.5, "Making Context Mounts Persistent" . As an alternative to mounting file systems with context options, Booleans can be enabled to allow services access to file systems labeled with the nfs_t type. See Part II, "Managing Confined Services" for instructions on configuring Booleans to allow services access to the nfs_t type. 4.9.4. Multiple NFS Mounts When mounting multiple mounts from the same NFS export, attempting to override the SELinux context of each mount with a different context, results in subsequent mount commands failing. In the following example, the NFS server has a single export, export/ , which has two subdirectories, web/ and database/ . The following commands attempt two mounts from a single NFS export, and try to override the context for each one: The second mount command fails, and the following is logged to /var/log/messages : To mount multiple mounts from a single NFS export, with each mount having a different context, use the -o nosharecache,context options. The following example mounts multiple mounts from a single NFS export, with a different context for each mount (allowing a single service access to each one): In this example, server:/export/web is mounted locally to the /local/web/ directory, with all files being labeled with the httpd_sys_content_t type, allowing Apache HTTP Server access. server:/export/database is mounted locally to /local/database/ , with all files being labeled with the mysqld_db_t type, allowing MariaDB access. These type changes are not written to disk. Important The nosharecache options allows you to mount the same subdirectory of an export multiple times with different contexts, for example, mounting /export/web/ multiple times. 
Do not mount the same subdirectory from an export multiple times with different contexts, as this creates an overlapping mount, where files are accessible under two different contexts. 4.9.5. Making Context Mounts Persistent To make context mounts persistent across remounting and reboots, add entries for the file systems in the /etc/fstab file or an automounter map, and use the required context as a mount option. The following example adds an entry to /etc/fstab for an NFS context mount: [5] Morris, James. "Filesystem Labeling in SELinux". Published 1 October 2004. Accessed 14 October 2008: http://www.linuxjournal.com/article/7426 .
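After mounting with any of the context options shown above, it can be useful to confirm that the expected label is actually in effect. A minimal check, reusing the placeholder mount point from the earlier examples (substitute your own path), is:
~]# ls -dZ /local/mount/point
~]# findmnt -o TARGET,OPTIONS /local/mount/point
ls -dZ prints the security context applied to the mount point itself, and findmnt shows the active mount options, including any context= value, so a mismatch with the intended type (for example, httpd_sys_content_t) is easy to spot.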
[ "~]# mount server:/export /local/mount/point -o \\ context=\"system_u:object_r:httpd_sys_content_t:s0\"", "~]# mount /dev/sda2 /test/ -o defcontext=\"system_u:object_r:samba_share_t:s0\"", "~]# mount server:/export /local/mount/point -o context=\"system_u:object_r:httpd_sys_content_t:s0\"", "~]# mount server:/export/web /local/web -o context=\"system_u:object_r:httpd_sys_content_t:s0\"", "~]# mount server:/export/database /local/database -o context=\"system_u:object_r:mysqld_db_t:s0\"", "kernel: SELinux: mount invalid. Same superblock, different security settings for (dev 0:15, type nfs)", "~]# mount server:/export/web /local/web -o nosharecache,context=\"system_u:object_r:httpd_sys_content_t:s0\"", "~]# mount server:/export/database /local/database -o \\ nosharecache,context=\"system_u:object_r:mysqld_db_t:s0\"", "server:/export /local/mount/ nfs context=\"system_u:object_r:httpd_sys_content_t:s0\" 0 0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sect-Security-Enhanced_Linux-Working_with_SELinux-Mounting_File_Systems
Chapter 3. Additional malware service concepts
Chapter 3. Additional malware service concepts The following additional information might be useful in using malware detection service. 3.1. System scan At release, Malware detection administrators must initiate the Insights for Red Hat Enterprise Linux malware detection service collector scan on demand. Alternatively, administrators can run the collector command as a playbook or by using another automation method. Note The recommended frequency of scanning is up to your security team; however, because the scan can take significant time to run, the Insights for Red Hat Enterprise Linux malware detection service team recommends running the malware detection scan weekly. 3.1.1. Initiating a malware detection scan Perform the following procedure to run a malware detection scan. After the scan is complete, data are reported in the Insights for Red Hat Enterprise Linux malware detection service. The scan time depends on a number of factors, including configuration options, number of running processes, etc. Prerequisites Running the Insights client command requires sudo access on the system. Procedure Run USD sudo insights-client --collector malware-detection . View results at Security > Malware . Note You can configure a cron job to run malware detection scans at scheduled intervals. For more information, refer to Setting up recurring scans for Insights services. 3.1.2. Setting up recurring scans for Insights services To get the most accurate recommendations from Red Hat Insights services such as compliance and malware detection, you might need to manually scan and upload data collection reports to the services on a regular schedule. For more information about scheduling see the following: Setting up recurring scans for Insights services 3.2. Disabling malware signatures There might be certain malware signatures that are not of interest to you. This might be due to an intentional configuration, test scan, or a high-noise situation wherein the malware detection service reports matches that are not applicable to your security priorities. For example, the signatures XFTI_EICAR_AV_Test and XFTI_WICAR_Javascript_Test are used to detect the EICAR Anti Malware Testfile and WICAR Javascript Crypto Miner test malware. They are intentional test signatures but do not represent actual malware threats. Signatures such as these can be disabled so that matches against them are not reported in the Red Hat Hybrid Cloud Console. Once a signature is disabled, the malware detection service removes any existing matches against that signature from the Hybrid Cloud Console and ignores the signature in future scans. If the signature is re-enabled, the malware detection service again looks for the signature in future malware-detection scans and shows resulting matches. Note Disabling a signature does not erase the history of matches for that signature. Prerequisites You are a member of a Hybrid Cloud Console User Access group with the Malware detection administrator role. Only users with this role can disable and re-enable signatures. Procedure to disable a signature Navigate to Security > Malware > Signatures . Find the signature to disable. Click the options icon (...) at the end of the signature row and select Disable signature from malware analysis . Alternate procedure to disable a signature You can also disable the signature from the signature information page. Navigate to Security > Malware > Signatures . Find the signature to disable. Click the signature name. 
On the signature details page, click the Actions dropdown and select Disable signature from malware analysis . Disabling several signatures at the same time You can disable several signatures at the same time by checking the box at the start of each signature row, then clicking the options icon (...) to the filter fields and selecting Disable signatures from malware analysis . Viewing disabled malware signatures All users can view disabled malware signatures. Navigate to Security > Malware > Signatures . View the number of disabled malware signatures in the dashboard at the top of the page. Set filters to show the disabled signatures. Set the primary filter to Signatures included in malware analysis . Set the secondary filter to Disabled signatures . Re-enabling malware signatures Follow the same procedures as before to re-enable previously disabled signatures. 3.3. Interpreting malware detection service results In most cases, running a malware detection scan with YARA results in no signature matches. This means that YARA did not find any matching strings or boolean expressions when comparing a known set of malware signatures to the files included in the scan. The malware detection service will send these results to Red Hat Insights. You can see the details of the system scan and lack of matches in the Insights for Red Hat Enterprise Linux malware detection service UI. In the case that the malware detection scan with YARA does detect a match, it sends the results of that match to Red Hat Insights. You can see details of the match in the malware detection service UI, including the file and date. System scan and signature match history is displayed for the last 14 days, so you can detect patterns and provide information to your security incident response team. For example, if a signature match was found in one scan, but not found in the scan of the same system, that can indicate the presence of malware that is detectable only when a certain process is running. 3.3.1. Acknowledging and managing malware matches You can acknowledge malware signatures at both the system and signature levels. This allows you to remove irrelevant messages and information from your environment and efficiently review the status of malware results. The Status field on the Signatures page allows you to select a status for each system or signature that you review. You can change the status of each signature match as you continue investigating and managing malware matches. This helps your system users to stay informed about the progress of remediations or evaluations of malware matches. You can also decide which matches are irrelevant or which pose low or no threats to your systems. If you have Malware Detection Administrator permissions, you can delete irrelevant matches from your systems. The Total Matches column on the Signatures page includes all matches for a signature on a system. You can use the list of matches to track and review the history of malware matches on individual systems in your environment. Insights retains malware matches indefinitely, unless you delete them. Acknowledging malware matches and setting status also works as a form of historical record-keeping. Note that if you delete a system from the malware service, the match records are discarded. The New Matches column shows the number of new matches for a signature. A bell icon indicates each new match. A new match has a match date of up to 30 days from when the match was detected, and has a Not Reviewed status. 
Matches older than 30 days, or those that have already been reviewed, become part of Total Matches . 3.3.2. Acknowledging malware signature matches Prerequisites To view and filter malware matches, you need a Malware Read-only role. To edit or delete matches, you must have the Malware Detection Administrator role. Procedure Navigate to Security > Malware > Signatures . The list of signatures appears at the bottom of the page. Click on a signature name. The information page for that signature displays. The page shows the list of systems affected by that malware signature. A bell icon indicates new matches for that signature. Use the filters at the top of the list of affected systems to filter by Status . (The default filter is Name .) Click the drop-down menu to the right of the Status filter and select Not Reviewed . Click the drop-down arrow to the name of an affected system. The list of matches displays, with the most recent matches first. Select the checkbox to the match that you want to review. To change the status of a match, select the new status from the Match status drop-down menu. Select from the following options: Not reviewed In review On-hold Benign Malware detection test No action Resolved Optional . Add a note in the Note field to include more information about the match status. The green checkmark indicates that the note has been saved. Additional resources For more information about disabling malware signatures, see Disabling malware signatures . For more information about the User Access settings required to view, edit, and delete matches, see User Access settings in the Red Hat Hybrid Cloud Console . 3.3.3. Deleting a single match Prerequisites To edit or delete matches, you must have the Malware Administrator role. Procedure Navigate to Security > Malware > Signatures . The list of signatures appears at the bottom of the page. Click the drop-down arrow to the signature you want to manage. A list of matches appears below the system, with the most recent match first. Click the options icon (...) at the far right side of the match you want to delete, and then select Delete match . The list of matches refreshes. 3.3.4. Viewing malware matches on systems Prerequisites To view and filter malware matches, you need a Malware Read-only role. To edit or delete matches, you must have the Malware Administrator status. Procedure Only systems that have malware detection enabled appear in the list of affected systems. For more information about how to enable malware detection, see Get started using the Insights for RHEL malware service . Navigate to Security > Malware > Systems . The list of systems displays. If a system has malware matches, the Matched label appears to the system name. Click on a system name. The system details page displays, with the list of matched malware signatures at the bottom. Click the drop-down to a malware signature. The list of matches for the signature on the system displays. Acknowledge the matches in the list. For more information, see Additional malware service concepts . 3.4. Additional configuration options for the malware detection collector The /etc/insights-client/malware-detection-config.yml file includes several configuration options. Configuration options filesystem_scan_only This is essentially an allowlist option whereby you specify which files/directories to scan. ONLY the items specified will be scanned. It can be a single item, or a list of items (adhering to yaml syntax for specifying lists of items). 
If this option is empty, it essentially means scan all files/directories (depending on other options). filesystem_scan_exclude This is essentially a denylist option whereby you specify which files/directories NOT to scan. A number of directories are already listed meaning they will be excluded by default. These include virtual filesystem directories, eg /proc, /sys, /cgroup; directories that might have external mounted filesystems, eg /mnt and /media; and some other directories recommended to not be scanned, eg /dev and /var/log/insights-client (to prevent false positives). You are free to modify the list to add (or subtract) files/directories. Note that if the same item is specified both in filesystem_scan_only and filesystem_scan_exclude, eg /home, then filesystem_scan_exclude will 'win'. That is, /home will not be scanned. Another example, it's possible to filesysem_scan_only a parent directory, eg /var and then filesystem_scan_exclude certain directories within that, eg /var/lib and /var/log/insights-client. Then everything in /var except for /var/lib and /var/log/insights-client will be scanned. filesystem_scan_since Only scan files that have been modified 'since', where since can be an integer representing days ago or 'last' meaning since last filesystem scan. For example, filesystem_scan_since: 1 means only scan files that have been created or modified since 1 day ago (within the last day); filesystem_scan_since: 7 means only scan files that have been created/modified since 7 days ago (within the last week); and filesystem_scan_since: last means only scan files that have been created/modified since the last successful filesystem_scan of the malware-client. exclude_network_filesystem_mountpoints and network_filesystem_types Setting exclude_network_filesystem_mountpoints: true means that the malware detection collector will not scan mountpoints of mounted network filesystems. This is the default setting and is to prevent scanning external filesystems, resulting in unnecessary and increased network traffic and slower scanning. The filesystems it considers to be network filesystems are listed in the network_filesystem_types option. So any filesystem types that are in that list and that are mounted will be excluded from scanning. These mountpoints are essentially added to the list of excluded directories from the filesystem_scan_exclude option. If you set exclude_network_filesystem_mountpoints: false you can still exclude mountpoints with the filesystem_scan_exclude option. network_filesystem_types Define network filesystem types. scan_processes Note Scan_process is disabled by default to prevent an impact on system performance when scanning numerous or large processes. When the status is false, no processes are scanned and the processes_scan options that follow are ignored. + Include running processes in the scan. processes_scan_only This is similar to filesystem_scan_only but applies to processes. Processes may be specified as a single PID, eg 123, or a range of PIDs, eg 1000..2000, or by process name, eg Chrome. For example, the following values: 123, 1000..2000, and Chrome, would mean that PID 123, PIDs from 1000 to 2000 inclusive and PIDs for process names containing the string 'chrome' would ONLY be scanned. processes_scan_exclude This is similar to filesystem_scan_exclude but applies to processes. Like processes_scan_only, processes may be specified as a single PID, a range of PIDs, or by process name. 
If a process appears in both processes_scan_only and processes_scan_exclude, then processes_scan_exclude will 'win' and the process will be excluded. processes_scan_since This is similar to filesystem_scan_since but applies to processes. Only scan processes that have been started 'since', where since can be an integer representing days ago or 'last' meaning since the last successful processes scan of the malware-client. Environment variables All of the options in the /etc/insights-client/malware-detection-config.yml file can also be set using environment variables. Using the environment variable overrides the value of the same option in the configuration file. The environment variable has the same name as the configuration file option, but is uppercase. For example, the configuration file option test_scan is the environment variable TEST_SCAN . For the FILESYSTEM_SCAN_ONLY , FILESYSTEM_SCAN_EXCLUDE , PROCESSES_SCAN_ONLY , PROCESSES_SCAN_EXCLUDE , and NETWORK_FILESYSTEM_TYPES environment variables, use a list of comma-separated values. For example, to scan only the directories /etc , /tmp and /var/lib , use the following environment variable: To specify this on the command line (along with disabling test scan), use the following: Resources For more information about the Insights client, see Client Configuration Guide for Red Hat Insights . 3.5. Enabling notifications and integrations for malware events You can enable the notifications service on Red Hat Hybrid Cloud Console to send notifications whenever the malware service detects a signature match on at least one system scan and generates an alert. Using the notifications service frees you from having to continually check the Red Hat Insights for Red Hat Enterprise Linux dashboard for alerts. For example, you can configure the notifications service to automatically send an email message whenever the malware service detects a possible threat to your systems, or to send an email digest of all the alerts that the malware service generates each day. In addition to sending email messages, you can configure the notifications service to send event data in other ways: Using an authenticated client to query Red Hat Insights APIs for event data Using webhooks to send events to third-party applications that accept inbound requests Integrating notifications with applications such as Splunk to route malware events to the application dashboard Malware service notifications include the following information: name of the affected system how many signature matches were found during the system scan a link to view the details on Red Hat Hybrid Cloud Console Enabling the notifications service requires three main steps: First, an Organization administrator creates a User access group with the Notifications administrator role, and then adds account members to the group. Next, a Notifications administrator sets up behavior groups for events in the notifications service. Behavior groups specify the delivery method for each notification. For example, a behavior group can specify whether email notifications are sent to all users, or just to Organization administrators. Finally, users who receive email notifications from events must set their user preferences so that they receive individual emails for each event. Additional resources For more information about how to set up notifications for malware alerts, see Configuring notifications on the Red Hat Hybrid Cloud Console .
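For reference, the options described in Section 3.4 can be combined in a single /etc/insights-client/malware-detection-config.yml file. The following sketch is illustrative only: the option names come from this document, but the specific directories, values, and comments are assumptions chosen for the example, not recommended settings.
# Illustrative malware-detection-config.yml sketch (example values only)
filesystem_scan_only: []                        # empty means scan everything that is not excluded
filesystem_scan_exclude:
  - /proc
  - /sys
  - /mnt
  - /media
  - /dev
  - /var/log/insights-client
filesystem_scan_since: 7                        # only files created or modified in the last 7 days
exclude_network_filesystem_mountpoints: true    # skip mounted network filesystems (the default)
scan_processes: false                           # process scanning is disabled by default
processes_scan_only: []
processes_scan_exclude: []
processes_scan_since: last                      # only processes started since the last process scan
test_scan: false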
[ "FILESYSTEM_SCAN_ONLY=/etc,/tmp,/var/lib", "sudo FILESYSTEM_SCAN_ONLY=/etc,/tmp,/var/lib TEST_SCAN=false insights-client --collector malware-detection" ]
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_and_reporting_malware_signatures_on_rhel_systems/malware-svc-additional-concepts_malware-svc-getting-started
Chapter 14. Containerizing an Application from Packages
Chapter 14. Containerizing an Application from Packages For multiple reasons, it may be advantageous to distribute an application packaged in an RPM package as a container. Prerequisites Understanding of containers An application packaged as one or more RPM packages Procedure To containerize an application from RPM packages, see Getting Started with Containers - Creating Docker images . Additional Information OpenShift Container Platform - Creating Images Red Hat Enterprise Linux Atomic Host - Getting Started with Containers Product Documentation for Red Hat Enterprise Linux Atomic Host Docker Documentation - Get Started, Part 2: Containers Docker Documentation - Dockerfile reference Red Hat Container Catalog listing - Base Images
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/developer_guide/packaging_packaging-rpm2container
3. File Systems
3. File Systems Note The Storage Administration Guide provides further instructions on how to effectively manage file systems on Red Hat Enterprise Linux 6. Additionally, the Global File System 2 document details specific information on configuring and maintaining Red Hat Global File System 2 for Red Hat Enterprise Linux 6. 3.1. Fourth Extended Filesystem (ext4) Support The fourth extended filesystem (ext4) is based on the third extended filesystem (ext3) and features a number of improvements. These include support for larger file systems and larger files, faster and more efficient allocation of disk space, no limit on the number of subdirectories within a directory, faster file system checking, and more robust journaling. The ext4 file system is selected by default and is highly recommended. 3.2. XFS XFS is a highly scalable, high-performance file system which was originally designed at Silicon Graphics, Inc. It was created to support filesystems up to 16 exabytes (approximately 16 million terabytes), files up to 8 exabytes (approximately 8 million terabytes) and directory structures containing tens of millions of entries. XFS supports metadata journaling, which facilitates quicker crash recovery. XFS file systems can also be defragmented and expanded while mounted and active. 3.3. Block Discard - enhanced support for thinly provisioned LUNs and SSD devices Filesystems in Red Hat Enterprise Linux 6 use the new block discard feature to allow a storage device to be informed when the filesystem detects that portions of a device (also known as blocks) are no longer in active use. While few storage devices feature block discard capabilities, newer solid state drives (SSDs) utilize this feature to optimize internal data layout and invoke proactive wear levelling. Additionally, some high end SCSI devices use block discard information to help implement thinly provisioned LUNs. 3.4. Network File System (NFS) A Network File System (NFS) allows remote hosts to mount file systems over a network and interact with those file systems as though they were mounted locally. This enables system administrators to consolidate resources onto centralized servers on the network. Red Hat Enterprise Linux 6 supports NFSv2, NFSv3, and NFSv4 clients. Mounting a file system via NFS now defaults to NFSv4. Additional improvements have been made to NFS in Red Hat Enterprise Linux 6, providing enhanced support over Internet Protocol version 6 (IPv6).
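As an illustration of the NFS default described above, mounting an export without specifying a version uses NFSv4, while an older protocol version can still be requested explicitly. The server name and export path below are placeholder assumptions, not values from this document:
# Mounts with NFSv4 by default on Red Hat Enterprise Linux 6
mount server.example.com:/export /mnt/data
# Explicitly request NFSv3, for example when the server does not support NFSv4
mount -t nfs -o nfsvers=3 server.example.com:/export /mnt/data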
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_release_notes/filesystems
Chapter 1. Preparing to install on Nutanix
Chapter 1. Preparing to install on Nutanix Before you install an OpenShift Container Platform cluster, be sure that your Nutanix environment meets the following requirements. 1.1. Nutanix version requirements You must install the OpenShift Container Platform cluster to a Nutanix environment that meets the following requirements. Table 1.1. Version requirements for Nutanix virtual environments Component Required version Nutanix AOS 6.5.2.7 or later Prism Central pc.2022.6 or later 1.2. Environment requirements Before you install an OpenShift Container Platform cluster, review the following Nutanix AOS environment requirements. 1.2.1. Required account privileges The installation program requires access to a Nutanix account with the necessary permissions to deploy the cluster and to maintain the daily operation of it. The following options are available to you: You can use a local Prism Central user account with administrative privileges. Using a local account is the quickest way to grant access to an account with the required permissions. If your organization's security policies require that you use a more restrictive set of permissions, use the permissions that are listed in the following table to create a custom Cloud Native role in Prism Central. You can then assign the role to a user account that is a member of a Prism Central authentication directory. Consider the following when managing this user account: When assigning entities to the role, ensure that the user can access only the Prism Element and subnet that are required to deploy the virtual machines. Ensure that the user is a member of the project to which it needs to assign virtual machines. For more information, see the Nutanix documentation about creating a Custom Cloud Native role , assigning a role , and adding a user to a project . Example 1.1. Required permissions for creating a Custom Cloud Native role Nutanix Object When required Required permissions in Nutanix API Description Categories Always Create_Category_Mapping Create_Or_Update_Name_Category Create_Or_Update_Value_Category Delete_Category_Mapping Delete_Name_Category Delete_Value_Category View_Category_Mapping View_Name_Category View_Value_Category Create, read, and delete categories that are assigned to the OpenShift Container Platform machines. Images Always Create_Image Delete_Image View_Image Create, read, and delete the operating system images used for the OpenShift Container Platform machines. Virtual Machines Always Create_Virtual_Machine Delete_Virtual_Machine View_Virtual_Machine Create, read, and delete the OpenShift Container Platform machines. Clusters Always View_Cluster View the Prism Element clusters that host the OpenShift Container Platform machines. Subnets Always View_Subnet View the subnets that host the OpenShift Container Platform machines. Projects If you will associate a project with compute machines, control plane machines, or all machines. View_Project View the projects defined in Prism Central and allow a project to be assigned to the OpenShift Container Platform machines. 1.2.2. Cluster limits Available resources vary between clusters. The number of possible clusters within a Nutanix environment is limited primarily by available storage space and any limitations associated with the resources that the cluster creates, and resources that you require to deploy the cluster, such as IP addresses and networks. 1.2.3. Cluster resources A minimum of 800 GB of storage is required to use a standard cluster. 
When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your Nutanix instance. Although these resources use 856 GB of storage, the bootstrap node is destroyed as part of the installation process. A standard OpenShift Container Platform installation creates the following resources: 1 label Virtual machines: 1 disk image 1 temporary bootstrap node 3 control plane nodes 3 compute machines 1.2.4. Networking requirements You must use either AHV IP Address Management (IPAM) or Dynamic Host Configuration Protocol (DHCP) for the network and ensure that it is configured to provide persistent IP addresses to the cluster machines. Additionally, create the following networking resources before you install the OpenShift Container Platform cluster: IP addresses DNS records Note It is recommended that each OpenShift Container Platform node in the cluster have access to a Network Time Protocol (NTP) server that is discoverable via DHCP. Installation is possible without an NTP server. However, an NTP server prevents errors typically associated with asynchronous server clocks. 1.2.4.1. Required IP Addresses An installer-provisioned installation requires two static virtual IP (VIP) addresses: A VIP address for the API is required. This address is used to access the cluster API. A VIP address for ingress is required. This address is used for cluster ingress traffic. You specify these IP addresses when you install the OpenShift Container Platform cluster. 1.2.4.2. DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the Nutanix instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. If you use your own DNS or DHCP server, you must also create records for each node, including the bootstrap, control plane, and compute nodes. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 1.2. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. 1.3. Configuring the Cloud Credential Operator utility The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). To install a cluster on Nutanix, you must set the CCO to manual mode as part of the installation process. To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). 
Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. Additional resources Preparing to update a cluster with manually maintained credentials
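For example, the DNS records described in Section 1.2.4.2 could be expressed as the following BIND-style zone entries; the cluster name ( mycluster ), base domain ( example.com ), and IP addresses are assumptions for illustration only:
; API VIP - resolves to the load balancer for the control plane machines
api.mycluster.example.com.     IN A 192.168.1.10
; Ingress VIP - wildcard record for the machines running the Ingress router pods
*.apps.mycluster.example.com.  IN A 192.168.1.11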
[ "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command." ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_nutanix/preparing-to-install-on-nutanix
7.7. RHEA-2014:1439 - new package: hyperv-daemons
7.7. RHEA-2014:1439 - new package: hyperv-daemons New hyperv-daemons packages are now available for Red Hat Enterprise Linux 6. The hyperv-daemons packages provide a suite of daemons that are needed when a Linux guest is running on a Windows Host with HyperV. The following daemons are included: - hypervkvpd, the guest Hyper-V Key-Value Pair (KVP) daemon. - hypervvssd, the implementation of HyperV VSS functionality for Linux guest. - hypervfcopyd, the implementation of file copy service functionality for Linux Guest running on HyperV. This enhancement update adds the hyperv-daemons packages to Red Hat Enterprise Linux 6. (BZ# 977631 , BZ# 1107559 ) All users who require hyperv-daemons are advised to install these new packages. After installing the packages, rebooting all guest machines is recommended, otherwise the Microsoft Windows server with Hyper-V will not be able to get information from these guest machines. For more information about inclusion of, and guest installation support for, Microsoft Hyper-V drivers, refer to the Red Hat Enterprise Linux 6.6 Release Notes.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/rhea-2014-1439
Chapter 2. Client development prerequisites
Chapter 2. Client development prerequisites The following prerequisites are required for developing clients to use with Streams for Apache Kafka. You have a Red Hat account. You have a Kafka cluster running in Streams for Apache Kafka. Kafka brokers are configured with listeners for secure client connections. Topics have been created for your cluster. You have an IDE to develop and test your client. JDK 11 or later is installed.
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/developing_kafka_client_applications/client-dev-prereqs-str
Chapter 43. Real-Time Kernel
Chapter 43. Real-Time Kernel New scheduler class: SCHED_DEADLINE This update introduces the SCHED_DEADLINE scheduler class for the real-time kernel as a Technology Preview. The new scheduler enables predictable task scheduling based on application deadlines. SCHED_DEADLINE benefits periodic workloads by reducing application timer manipulation. (BZ#1297061)
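As a hedged illustration of how a task might be placed in the new scheduler class, the chrt utility can set SCHED_DEADLINE attributes, provided the installed util-linux version supports deadline scheduling (otherwise the sched_setattr() system call is used directly). The runtime, deadline, and period values below (in nanoseconds) and the application name are arbitrary examples, not recommendations from this document:
# Run ./my_rt_app with a 5 ms runtime budget, a 10 ms deadline, and a 10 ms period
# The trailing 0 is the priority argument, which must be 0 for SCHED_DEADLINE tasks
chrt --deadline --sched-runtime 5000000 --sched-deadline 10000000 --sched-period 10000000 0 ./my_rt_app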
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.3_release_notes/technology_previews_real-time_kernel
5.5.6. Adding or Deleting a GULM Lock Server Member
5.5.6. Adding or Deleting a GULM Lock Server Member The procedure for adding or deleting a GULM cluster member depends on the type of GULM node: either a node that functions only as a GULM client (a cluster member capable of running applications, but not eligible to function as a GULM lock server) or a node that functions as a GULM lock server. The procedure in this section describes how to add or delete a member that functions as a GULM lock server. To add a member that functions only as a GULM client, refer to Section 5.5.4, "Adding a GULM Client-only Member" ; to delete a member that functions only as a GULM client, refer to Section 5.5.5, "Deleting a GULM Client-only Member" . Important The number of nodes that can be configured as GULM lock servers is limited to either one, three, or five. To add or delete a GULM member that functions as a GULM lock server in an existing cluster that is currently in operation, follow these steps: At one of the running members (running on a node that is not to be deleted), start system-config-cluster (refer to Section 5.2, "Starting the Cluster Configuration Tool " ). At the Cluster Status Tool tab, disable each service listed under Services . Stop the cluster software on each running node by running the following commands at each node in this order: service rgmanager stop , if the cluster is running high-availability services ( rgmanager ) service gfs stop , if you are using Red Hat GFS service clvmd stop , if CLVM has been used to create clustered volumes service lock_gulmd stop service ccsd stop To add a GULM lock server member, at system-config-cluster , in the Cluster Configuration Tool tab, add each node and configure fencing for it as in Section 5.5.1, "Adding a Member to a New Cluster" . Make sure to select GULM Lockserver in the Node Properties dialog box (refer to Figure 5.6, "Adding a Member to a New GULM Cluster" ). To delete a GULM lock server member, at system-config-cluster (running on a node that is not to be deleted), in the Cluster Configuration Tool tab, delete each member as follows: If necessary, click the triangle icon to expand the Cluster Nodes property. Select the cluster node to be deleted. At the bottom of the right frame (labeled Properties ), click the Delete Node button. Clicking the Delete Node button causes a warning dialog box to be displayed requesting confirmation of the deletion ( Figure 5.9, "Confirm Deleting a Member" ). Figure 5.9. Confirm Deleting a Member At that dialog box, click Yes to confirm deletion. Propagate the configuration file to the cluster nodes as follows: Log in to the node where you created the configuration file (the same node used for running system-config-cluster ). Using the scp command, copy the /etc/cluster/cluster.conf file to all nodes in the cluster. Note Propagating the cluster configuration file this way is necessary under these circumstances because the cluster software is not running, and therefore not capable of propagating the configuration. Once a cluster is installed and running, the cluster configuration file is propagated using the Red Hat cluster management GUI Send to Cluster button. For more information about propagating the cluster configuration using the GUI Send to Cluster button, refer to Section 6.3, "Modifying the Cluster Configuration" . 
After you have propagated the cluster configuration to the cluster nodes you can either reboot each node or start the cluster software on each cluster node by running the following commands at each node in this order: service ccsd start service lock_gulmd start service clvmd start , if CLVM has been used to create clustered volumes service gfs start , if you are using Red Hat GFS service rgmanager start , if the node is also functioning as a GULM client and the cluster is running cluster services ( rgmanager ) At system-config-cluster (running on a node that was not deleted), in the Cluster Configuration Tool tab, verify that the configuration is correct. At the Cluster Status Tool tab verify that the nodes and services are running as expected. Note Make sure to configure other parameters that may be affected by changes in this section. Refer to Section 5.1, "Configuration Tasks" .
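As an illustration of the propagation step described above, copying the configuration file from the node where it was created to the other cluster nodes might look like the following; the node names are placeholder assumptions:
# Run from the node holding the updated /etc/cluster/cluster.conf
scp /etc/cluster/cluster.conf root@node2.example.com:/etc/cluster/
scp /etc/cluster/cluster.conf root@node3.example.com:/etc/cluster/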
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s2-add-del-member-running-gulm-lockserver-CA
15.2. Importing a Root Certificate
15.2. Importing a Root Certificate First, change directories into the NSS DB: cd /path/to/nssdb Ensure that your web service is taken offline (stopped, disabled, etc.) while performing these steps and ensure no concurrent access to the NSS DB by other processes (such as a browser). Failure to do so may corrupt the NSS DB or result in improper usage of these certificates. When needing to import a new root certificate, ensure you acquire this certificate in a secure manner as it will be able to sign a number of certificates. We assume you already have it in a file named ca_root.crt . Please substitute the correct name and path to this file as appropriate for your scenario. For more information about the certutil and PKICertImport options used below, see Section 15.1, "About certutil and PKICertImport " . To import the root certificate: Execute the PKICertImport -d . -n "CA Root" -t "CT,C,C" -a -i ca_root.crt -u L command. This command validates and imports the root certificate into your NSS DB. The validation succeeds when no error message is printed and the return code is 0. To check the return code, execute echo USD? immediately after executing the command above. If the validation fails, in most cases a visual error message is printed. The certificate usually fails to validate because it is expired or because it is not a CA certificate. Therefore, make sure your certificate file is correct and up-to-date. Contact the issuer and ensure that all intermediate and root certificates are present on your system.
null
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/importing_root_certificate
Providing feedback on Red Hat build of OpenJDK documentation
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/installing_and_using_red_hat_build_of_openjdk_8_for_rhel/providing-direct-documentation-feedback_openjdk
Chapter 4. Identity and Access Management
Chapter 4. Identity and Access Management Red Hat Ceph Storage provides identity and access management for: Ceph Storage Cluster User Access Ceph Object Gateway User Access Ceph Object Gateway LDAP/AD Authentication Ceph Object Gateway OpenStack Keystone Authentication 4.1. Ceph Storage Cluster User Access To identify users and protect against man-in-the-middle attacks, Ceph provides its cephx authentication system to authenticate users and daemons. For additional details on cephx , see Ceph user management . Important The cephx protocol DOES NOT address data encryption in transport or encryption at rest. Cephx uses shared secret keys for authentication, meaning both the client and the monitor cluster have a copy of the client's secret key. The authentication protocol is such that both parties are able to prove to each other they have a copy of the key without actually revealing it. This provides mutual authentication, which means the cluster is sure the user possesses the secret key, and the user is sure that the cluster has a copy of the secret key. Users are either individuals or system actors such as applications, which use Ceph clients to interact with the Red Hat Ceph Storage cluster daemons. Ceph runs with authentication and authorization enabled by default. Ceph clients may specify a user name and a keyring containing the secret key of the specified user, usually by using the command line. If the user and keyring are not provided as arguments, Ceph will use the client.admin administrative user as the default. If a keyring is not specified, Ceph will look for a keyring by using the keyring setting in the Ceph configuration. Important To harden a Ceph cluster, keyrings SHOULD ONLY have read and write permissions for the current user and root . The keyring containing the client.admin administrative user key must be restricted to the root user. For details on configuring the Red Hat Ceph Storage cluster to use authentication, see the Configuration Guide for Red Hat Ceph Storage 6. More specifically, see Ceph authentication configuration . 4.2. Ceph Object Gateway User Access The Ceph Object Gateway provides a RESTful application programming interface (API) service with its own user management that authenticates and authorizes users to access S3 and Swift APIs containing user data. Authentication consists of: S3 User: An access key and secret for a user of the S3 API. Swift User: An access key and secret for a user of the Swift API. The Swift user is a subuser of an S3 user. Deleting the S3 'parent' user will delete the Swift user. Administrative User: An access key and secret for a user of the administrative API. Administrative users should be created sparingly, as the administrative user will be able to access the Ceph Admin API and execute its functions, such as creating users, and giving them permissions to access buckets or containers and their objects among other things. The Ceph Object Gateway stores all user authentication information in Ceph Storage cluster pools. Additional information may be stored about users including names, email addresses, quotas, and usage. For additional details, see User Management and Creating an Administrative User . 4.3. Ceph Object Gateway LDAP or AD authentication Red Hat Ceph Storage supports Light-weight Directory Access Protocol (LDAP) servers for authenticating Ceph Object Gateway users. When configured to use LDAP or Active Directory (AD), Ceph Object Gateway defers to an LDAP server to authenticate users of the Ceph Object Gateway. 
Ceph Object Gateway controls whether to use LDAP. However, once configured, it is the LDAP server that is responsible for authenticating users. To secure communications between the Ceph Object Gateway and the LDAP server, Red Hat recommends deploying configurations with LDAP Secure or LDAPS. Important When using LDAP, ensure that access to the rgw_ldap_secret = PATH_TO_SECRET_FILE secret file is secure. 4.4. Ceph Object Gateway OpenStack Keystone authentication Red Hat Ceph Storage supports using OpenStack Keystone to authenticate Ceph Object Gateway Swift API users. The Ceph Object Gateway can accept a Keystone token, authenticate the user and create a corresponding Ceph Object Gateway user. When Keystone validates a token, the Ceph Object Gateway considers the user authenticated. Ceph Object Gateway controls whether to use OpenStack Keystone for authentication. However, once configured, it is the OpenStack Keystone service that is responsible for authenticating users. Configuring the Ceph Object Gateway to work with Keystone requires converting the OpenSSL certificates that Keystone uses for creating the requests to the nss db format. Additional Resources See The Ceph Object Gateway and OpenStack Keystone section of the Red Hat Ceph Storage Object Gateway Guide for more information.
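As a non-authoritative sketch of the LDAP integration referenced above, the Ceph Object Gateway LDAP options are typically grouped in the gateway's configuration section. The option names below exist in the Ceph Object Gateway, but the URI, bind DN, search DN, and secret file path are placeholder assumptions, and the exact mechanism for applying them (a configuration file or ceph config set) depends on your deployment:
[client.rgw.gateway-node1]
rgw_s3_auth_use_ldap = true
rgw_ldap_uri = ldaps://ldap.example.com:636
rgw_ldap_binddn = "uid=rgw,cn=users,dc=example,dc=com"
rgw_ldap_searchdn = "cn=users,dc=example,dc=com"
rgw_ldap_dnattr = "uid"
# Keep access to the secret file restricted, as noted above
rgw_ldap_secret = "/etc/ceph/ldap_secret"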
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/data_security_and_hardening_guide/assembly-identity-and-access-management
Installing on bare metal
Installing on bare metal OpenShift Container Platform 4.17 Installing OpenShift Container Platform on bare metal Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_bare_metal/index
Chapter 4. Red Hat OpenShift Cluster Manager
Chapter 4. Red Hat OpenShift Cluster Manager Red Hat OpenShift Cluster Manager is a managed service where you can install, modify, operate, and upgrade your Red Hat OpenShift clusters. This service allows you to work with all of your organization's clusters from a single dashboard. OpenShift Cluster Manager guides you to install OpenShift Container Platform, Red Hat OpenShift Service on AWS (ROSA), and OpenShift Dedicated clusters. It is also responsible for managing both OpenShift Container Platform clusters after self-installation as well as your ROSA and OpenShift Dedicated clusters. You can use OpenShift Cluster Manager to do the following actions: Create new clusters View cluster details and metrics Manage your clusters with tasks such as scaling, changing node labels, networking, authentication Manage access control Monitor clusters Schedule upgrades 4.1. Accessing Red Hat OpenShift Cluster Manager You can access OpenShift Cluster Manager with your configured OpenShift account. Prerequisites You have an account that is part of an OpenShift organization. If you are creating a cluster, your organization has specified quota. Procedure Log in to OpenShift Cluster Manager using your login credentials. 4.2. General actions On the top right of the cluster page, there are some actions that a user can perform on the entire cluster: Open console launches a web console so that the cluster owner can issue commands to the cluster. Actions drop-down menu allows the cluster owner to rename the display name of the cluster, change the amount of load balancers and persistent storage on the cluster, if applicable, manually set the node count, and delete the cluster. Refresh icon forces a refresh of the cluster. 4.3. Cluster tabs Selecting an active, installed cluster shows tabs associated with that cluster. The following tabs display after the cluster's installation completes: Overview Access control Add-ons Networking Insights Advisor Machine pools Support Settings 4.3.1. Overview tab The Overview tab provides information about how the cluster was configured: Cluster ID is the unique identification for the created cluster. This ID can be used when issuing commands to the cluster from the command line. Domain prefix is the prefix that is used throughout the cluster. The default value is the cluster's name. Type shows the OpenShift version that the cluster is using. Control plane type is the architecture type of the cluster. The field only displays if the cluster uses a hosted control plane architecture. Region is the server region. Availability shows which type of availability zone that the cluster uses, either single or multizone. Version is the OpenShift version that is installed on the cluster. If there is an update available, you can update from this field. Created at shows the date and time that the cluster was created. Owner identifies who created the cluster and has owner rights. Delete Protection: <status> shows whether or not the cluster's delete protection is enabled. Total vCPU shows the total available virtual CPU for this cluster. Total memory shows the total available memory for this cluster. Infrastructure AWS account displays the AWS account that is responsible for cluster creation and maintenance. Nodes shows the actual and desired nodes on the cluster. These numbers might not match due to cluster scaling. Network field shows the address and prefixes for network connectivity. OIDC configuration field shows the Open ID Connect configuration for the cluster. 
Resource usage section of the tab displays the resources in use with a graph. Advisor recommendations section gives insight in relation to security, performance, availability, and stability. This section requires the use of remote health functionality. See Using Insights to identify issues with the cluster in the Additional resources section. 4.3.2. Access control tab The Access control tab allows the cluster owner to set up an identity provider, grant elevated permissions, and grant roles to other users. Prerequisites You must be the cluster owner or have the correct permissions to grant roles on the cluster. Procedure Select the Grant role button. Enter the Red Hat account login for the user that you wish to grant a role on the cluster. Select the Grant role button on the dialog box. The dialog box closes, and the selected user shows the "Cluster Editor" access. 4.3.3. Add-ons tab 4.3.4. Insights Advisor tab The Insights Advisor tab uses the Remote Health functionality of the OpenShift Container Platform to identify and mitigate risks to security, performance, availability, and stability. See Using Insights to identify issues with your cluster in the OpenShift Container Platform documentation. 4.3.5. Machine pools tab The Machine pools tab allows the cluster owner to create new machine pools if there is enough available quota, or edit an existing machine pool. Selecting the > Edit option opens the "Edit machine pool" dialog. In this dialog, you can change the node count per availability zone, edit node labels and taints, and view any associated AWS security groups. 4.3.6. Support tab In the Support tab, you can add notification contacts for individuals that should receive cluster notifications. The username or email address that you provide must relate to a user account in the Red Hat organization where the cluster is deployed. Also from this tab, you can open a support case to request technical support for your cluster. 4.3.7. Settings tab The Settings tab provides a few options for the cluster owner: Update strategy allows you to determine if the cluster automatically updates on a certain day of the week at a specified time or if all updates are scheduled manually. Update status shows the current version and if there are any updates available. 4.4. Additional resources For the complete documentation for OpenShift Cluster Manager, see OpenShift Cluster Manager documentation .
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/architecture/ocm-overview-ocp
Updating
Updating Red Hat OpenShift Service Mesh 3.0.0tp1 Updating OpenShift Service Mesh Red Hat OpenShift Documentation Team
[ "kind: Istio spec: updateStrategy: type: InPlace", "oc label namespace <namespace-name> istio.io/rev=<revision-name>", "apiVersion: apps/v1 kind: Deployment spec: template: metadata: labels: istio.io/rev: <revision-name> spec:", "label namespace <namespace-name> istio-injection=enabled", "kind: Istio spec: version: 1.20.2 updateStrategy: type: InPlace", "oc get istio <control-plane-name>", "oc rollout restart <deployment-name>", "kind: Istio spec: version: 1.20.0 updateStrategy: RevisionBased", "kind: Istio spec: version: 1.20.2 updateStrategy: type: RevisionBased", "oc get istiorevisions", "oc rollout restart <deployment>" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_mesh/3.0.0tp1/html-single/updating/index
Chapter 22. Managing self-service rules in IdM using the CLI
Chapter 22. Managing self-service rules in IdM using the CLI Learn about self-service rules in Identity Management (IdM) and how to create and edit self-service access rules in the command-line interface (CLI). 22.1. Self-service access control in IdM Self-service access control rules define which operations an Identity Management (IdM) entity can perform on its IdM Directory Server entry: for example, IdM users have the ability to update their own passwords. This method of control allows an authenticated IdM entity to edit specific attributes within its LDAP entry, but does not allow add or delete operations on the entire entry. Warning Be careful when working with self-service access control rules: configuring access control rules improperly can inadvertently elevate an entity's privileges. 22.2. Creating self-service rules using the CLI Follow this procedure to create self-service access rules in IdM using the command-line interface (CLI). Prerequisites Administrator privileges for managing IdM or the User Administrator role. An active Kerberos ticket. For details, see Using kinit to log in to IdM manually . Procedure To add a self-service rule, use the ipa selfservice-add command and specify the following two options: --permissions sets the read and write permissions the Access Control Instruction (ACI) grants. --attrs sets the complete list of attributes to which this ACI grants permission. For example, to create a self-service rule allowing users to modify their own name details: 22.3. Editing self-service rules using the CLI Follow this procedure to edit self-service access rules in IdM using the command-line interface (CLI). Prerequisites Administrator privileges for managing IdM or the User Administrator role. An active Kerberos ticket. For details, see Using kinit to log in to IdM manually . Procedure Optional: Display existing self-service rules with the ipa selfservice-find command. Optional: Display details for the self-service rule you want to modify with the ipa selfservice-show command. Use the ipa selfservice-mod command to edit a self-service rule. For example: Important Using the ipa selfservice-mod command overwrites the previously defined permissions and attributes, so always include the complete list of existing permissions and attributes along with any new ones you want to define. Verification Use the ipa selfservice-show command to display the self-service rule you edited. 22.4. Deleting self-service rules using the CLI Follow this procedure to delete self-service access rules in IdM using the command-line interface (CLI). Prerequisites Administrator privileges for managing IdM or the User Administrator role. An active Kerberos ticket. For details, see Using kinit to log in to IdM manually . Procedure Use the ipa selfservice-del command to delete a self-service rule. For example: Verification Use the ipa selfservice-find command to display all self-service rules. The rule you just deleted should be missing.
[ "ipa selfservice-add \"Users can manage their own name details\" --permissions=write --attrs=givenname --attrs=displayname --attrs=title --attrs=initials ----------------------------------------------------------- Added selfservice \"Users can manage their own name details\" ----------------------------------------------------------- Self-service name: Users can manage their own name details Permissions: write Attributes: givenname, displayname, title, initials", "ipa selfservice-mod \"Users can manage their own name details\" --attrs=givenname --attrs=displayname --attrs=title --attrs=initials --attrs=surname -------------------------------------------------------------- Modified selfservice \"Users can manage their own name details\" -------------------------------------------------------------- Self-service name: Users can manage their own name details Permissions: write Attributes: givenname, displayname, title, initials", "ipa selfservice-show \"Users can manage their own name details\" -------------------------------------------------------------- Self-service name: Users can manage their own name details Permissions: write Attributes: givenname, displayname, title, initials", "ipa selfservice-del \"Users can manage their own name details\" ----------------------------------------------------------- Deleted selfservice \"Users can manage their own name details\" -----------------------------------------------------------" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/managing-self-service-rules-in-idm-using-the-cli_configuring-and-managing-idm
Chapter 4. Staggered upgrade
Chapter 4. Staggered upgrade As a storage administrator, you can upgrade Red Hat Ceph Storage components in phases rather than all at once. The ceph orch upgrade command enables you to specify options to limit which daemons are upgraded by a single upgrade command. Note If you want to upgrade from a version that does not support staggered upgrades, you must first manually upgrade the Ceph Manager ( ceph-mgr ) daemons. For more information on performing a staggered upgrade from releases, see Performing a staggered upgrade from releases . 4.1. Staggered upgrade options The ceph orch upgrade command supports several options to upgrade cluster components in phases. The staggered upgrade options include: --daemon_types : The --daemon_types option takes a comma-separated list of daemon types and will only upgrade daemons of those types. Valid daemon types for this option include mgr , mon , crash , osd , mds , rgw , rbd-mirror , cephfs-mirror , and nfs . --services : The --services option is mutually exclusive with --daemon-types , only takes services of one type at a time, and will only upgrade daemons belonging to those services. For example, you cannot provide an OSD and RGW service simultaneously. --hosts : You can combine the --hosts option with --daemon_types , --services , or use it on its own. The --hosts option parameter follows the same format as the command line options for orchestrator CLI placement specification. --limit : The --limit option takes an integer greater than zero and provides a numerical limit on the number of daemons cephadm will upgrade. You can combine the --limit option with --daemon_types , --services , or --hosts . For example, if you specify to upgrade daemons of type osd on host01 with a limit set to 3 , cephadm will upgrade up to three OSD daemons on host01. 4.1.1. Performing a staggered upgrade As a storage administrator, you can use the ceph orch upgrade options to limit which daemons are upgraded by a single upgrade command. Cephadm strictly enforces an order for the upgrade of daemons that is still present in staggered upgrade scenarios. The current upgrade order is: Ceph Manager nodes Ceph Monitor nodes Ceph-crash daemons Ceph OSD nodes Ceph Metadata Server (MDS) nodes Ceph Object Gateway (RGW) nodes Ceph RBD-mirror node CephFS-mirror node Ceph NFS nodes Note If you specify parameters that upgrade daemons out of order, the upgrade command blocks and notes which daemons you need to upgrade before you proceed. Example Note There is no required order for restarting the instances. Red Hat recommends restarting the instance pointing to the pool with primary images followed by the instance pointing to the mirrored pool. Prerequisites A cluster running Red Hat Ceph Storage 5.3 or 6.1. Root-level access to all the nodes. At least two Ceph Manager nodes in the storage cluster: one active and one standby. 
Procedure Log into the cephadm shell: Example Ensure all the hosts are online and that the storage cluster is healthy: Example Set the OSD noout , noscrub , and nodeep-scrub flags to prevent OSDs from getting marked out during upgrade and to avoid unnecessary load on the cluster: Example Check service versions and the available target containers: Syntax Example Upgrade the storage cluster: To upgrade specific daemon types on specific hosts: Syntax Example To specify specific services and limit the number of daemons to upgrade: Syntax Example Note In staggered upgrade scenarios, if using a limiting parameter, the monitoring stack daemons, including Prometheus and node-exporter , are refreshed after the upgrade of the Ceph Manager daemons. As a result of the limiting parameter, Ceph Manager upgrades take longer to complete. The versions of monitoring stack daemons might not change between Ceph releases, in which case, they are only redeployed. Note Upgrade commands with limiting parameters validate the options before beginning the upgrade, which can require pulling the new container image. As a result, the upgrade start command might take a while to return when you provide limiting parameters. To see which daemons you still need to upgrade, run the ceph orch upgrade check or ceph versions command: Example To complete the staggered upgrade, verify the upgrade of all remaining services: Syntax Example Verification Verify the new IMAGE_ID and VERSION of the Ceph cluster: Example When the upgrade is complete, unset the noout , noscrub , and nodeep-scrub flags: Example 4.1.2. Performing a staggered upgrade from releases You can perform a staggered upgrade on your storage cluster by providing the necessary arguments. If you want to upgrade from a version that does not support staggered upgrades, you must first manually upgrade the Ceph Manager ( ceph-mgr ) daemons. Once you have upgraded the Ceph Manager daemons, you can pass the limiting parameters to complete the staggered upgrade. Important Verify you have at least two running Ceph Manager daemons before attempting this procedure. Prerequisites A cluster running Red Hat Ceph Storage 5.2 or earlier. At least two Ceph Manager nodes in the storage cluster: one active and one standby. Procedure Log into the Cephadm shell: Example Determine which Ceph Manager is active and which are standby: Example Manually upgrade each standby Ceph Manager daemon: Syntax Example Fail over to the upgraded standby Ceph Manager: Example Check that the standby Ceph Manager is now active: Example Verify that the active Ceph Manager is upgraded to the new version: Syntax Example Repeat steps 2 - 6 to upgrade the remaining Ceph Managers to the new version. Check that all Ceph Managers are upgraded to the new version: Example Once you upgrade all your Ceph Managers, you can specify the limiting parameters and complete the remainder of the staggered upgrade. Additional Resources For more information about performing a staggered upgrade and staggered upgrade options, see Performing a staggered upgrade .
[ "ceph orch upgrade start --image registry.redhat.io/rhceph/rhceph-8-rhel9:latest --hosts host02 Error EINVAL: Cannot start upgrade. Daemons with types earlier in upgrade order than daemons on given host need upgrading. Please first upgrade mon.ceph-host01", "cephadm shell", "ceph -s", "ceph osd set noout ceph osd set noscrub ceph osd set nodeep-scrub", "ceph orch upgrade check IMAGE_NAME", "ceph orch upgrade check registry.redhat.io/rhceph/rhceph-8-rhel9:latest", "ceph orch upgrade start --image IMAGE_NAME --daemon-types DAEMON_TYPE1 , DAEMON_TYPE2 --hosts HOST1 , HOST2", "ceph orch upgrade start --image registry.redhat.io/rhceph/rhceph-8-rhel9:latest --daemon-types mgr,mon --hosts host02,host03", "ceph orch upgrade start --image IMAGE_NAME --services SERVICE1 , SERVICE2 --limit LIMIT_NUMBER", "ceph orch upgrade start --image registry.redhat.io/rhceph/rhceph-8-rhel9:latest --services rgw.example1,rgw1.example2 --limit 2", "ceph orch upgrade check --image registry.redhat.io/rhceph/rhceph-8-rhel9:latest", "ceph orch upgrade start --image IMAGE_NAME", "ceph orch upgrade start --image registry.redhat.io/rhceph/rhceph-8-rhel9:latest", "ceph versions ceph orch ps", "ceph osd unset noout ceph osd unset noscrub ceph osd unset nodeep-scrub", "cephadm shell", "ceph -s cluster: id: 266ee7a8-2a05-11eb-b846-5254002d4916 health: HEALTH_OK services: mon: 2 daemons, quorum host01,host02 (age 92s) mgr: host01.ndtpjh(active, since 16h), standbys: host02.pzgrhz", "ceph orch daemon redeploy mgr.ceph- HOST . MANAGER_ID --image IMAGE_ID", "ceph orch daemon redeploy mgr.ceph-host02.pzgrhz --image registry.redhat.io/rhceph/rhceph-8-rhel9:latest", "ceph mgr fail", "ceph -s cluster: id: 266ee7a8-2a05-11eb-b846-5254002d4916 health: HEALTH_OK services: mon: 2 daemons, quorum host01,host02 (age 1h) mgr: host02.pzgrhz(active, since 25s), standbys: host01.ndtpjh", "ceph tell mgr.ceph- HOST . MANAGER_ID version", "ceph tell mgr.host02.pzgrhz version { \"version\": \"18.2.0-128.el8cp\", \"release\": \"reef\", \"release_type\": \"stable\" }", "ceph mgr versions { \"ceph version 18.2.0-128.el8cp (600e227816517e2da53d85f2fab3cd40a7483372) pacific (stable)\": 2 }" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/upgrade_guide/staggered-upgrade
Chapter 3. Language support for Camel
Chapter 3. Language support for Camel Important The VS Code extensions for Apache Camel are listed as development support. For more information about the scope of development support, see Development Support Scope of Coverage for Red Hat Build of Apache Camel . 3.1. About language support for Apache Camel extension The Visual Studio Code language support extension adds language support for Apache Camel for XML DSL and Java DSL code. This extension provides completion, validation and documentation features for Apache Camel URI elements directly in your Visual Studio Code editor. It works as a client using the Microsoft Language Server Protocol which communicates with the Camel Language Server to provide all functionalities. 3.1.1. Features of language support for Apache Camel extension The important features of the language support extension are listed below: Language service support for Apache Camel URIs. Quick reference documentation when you hover the cursor over a Camel component. Diagnostics for Camel URIs. Navigation for Java and XML languages. Creating a Camel Route specified with Yaml DSL using Camel CLI. Create a Camel Quarkus project Create a Camel on SpringBoot project Specific Camel Catalog Version Specific Runtime provider for the Camel Catalog 3.1.2. Requirements The following points must be considered when using the Apache Camel Language Server: Java 17 is currently required to launch the Apache Camel Language Server. The java.home VS Code option can be used to select a different version of the JDK than the default one installed on the machine. For some features, JBang must be available on the system command line. For XML DSL files: Use an .xml file extension. Specify the Camel namespace, for reference, see http://camel.apache.org/schema/blueprint or http://camel.apache.org/schema/spring . For Java DSL files: Use a .java file extension. Specify the Camel package (usually from an imported package), for example, import org.apache.camel.builder.RouteBuilder . To reference the Camel component, use from or to and a string without a space. The string cannot be a variable. For example, from("timer:timerName") works, but from( "timer:timerName") and from(aVariable) do not work. 3.1.3. Installing Language support for Apache Camel extension You can download the Language support for Apache Camel extension from the VS Code Extension Marketplace and the Open VSX Registry. You can also install the Language Support for Apache Camel extension directly in Microsoft VS Code. Procedure Open the VS Code editor. In the VS Code editor, select View > Extensions . In the search bar, type Camel . Select the Language Support for Apache Camel option from the search results and then click Install. This installs the language support extension in your editor. 3.1.4. Using specific Camel catalog version You can use a specific Camel catalog version. Click File > Preferences > Settings > Apache Camel Tooling > Camel catalog version . For a Red Hat productized version that contains redhat in its version identifier, the Maven Red Hat repository is automatically added. Note The first time a version is used, it can take several seconds or minutes to become available, depending on the time needed to download the dependencies in the background. 3.1.5. Limitations The Kamelet catalog used is the community-supported version only. For the list of supported Kamelets, see Supported Kamelets . Modeline configuration is based on the community version only. Not all traits and modeline parameters are supported. 
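To illustrate the Java DSL requirements described above, a minimal class that the language server can assist with might look like the following sketch; the class name, endpoint parameters, and log message are illustrative assumptions:
import org.apache.camel.builder.RouteBuilder;
public class TimerRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Completion, validation and hover documentation are offered inside the endpoint URI string
        from("timer:timerName?period=5000")
            .log("Triggered by the timer component");
    }
}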
Additional resources Language Support for Apache Camel by Red Hat
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/tooling_guide_for_red_hat_build_of_apache_camel/camel-tooling-guide-language
Preface
Preface Red Hat OpenShift Data Foundation supports deployment on any platform that you provision including bare metal, virtualized, and cloud environments. Both internal and external OpenShift Data Foundation clusters are supported on these environments. See Planning your deployment and Preparing to deploy OpenShift Data Foundation for more information about deployment requirements. To deploy OpenShift Data Foundation, follow the appropriate deployment process based on your requirement: Internal mode Deploy using local storage devices Deploy standalone Multicloud Object Gateway component External mode
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_openshift_data_foundation_on_any_platform/preface-agnostic
Chapter 11. Internationalization
Chapter 11. Internationalization 11.1. Red Hat Enterprise Linux 8 international languages Red Hat Enterprise Linux 8 supports the installation of multiple languages and the changing of languages based on your requirements. East Asian Languages - Japanese, Korean, Simplified Chinese, and Traditional Chinese. European Languages - English, German, Spanish, French, Italian, Portuguese, and Russian. The following table lists the fonts and input methods provided for various major languages. Language Default Font (Font Package) Input Methods English dejavu-sans-fonts French dejavu-sans-fonts German dejavu-sans-fonts Italian dejavu-sans-fonts Russian dejavu-sans-fonts Spanish dejavu-sans-fonts Portuguese dejavu-sans-fonts Simplified Chinese google-noto-sans-cjk-ttc-fonts, google-noto-serif-cjk-ttc-fonts ibus-libpinyin, libpinyin Traditional Chinese google-noto-sans-cjk-ttc-fonts, google-noto-serif-cjk-ttc-fonts ibus-libzhuyin, libzhuyin Japanese google-noto-sans-cjk-ttc-fonts, google-noto-serif-cjk-ttc-fonts ibus-kkc, libkkc Korean google-noto-sans-cjk-ttc-fonts, google-noto-serif-cjk-ttc-fonts ibus-hangul, libhangul 11.2. Notable changes to internationalization in RHEL 8 RHEL 8 introduces the following changes to internationalization compared to RHEL 7: Support for the Unicode 11 computing industry standard has been added. Internationalization is distributed in multiple packages, which allows for smaller footprint installations. For more information, see Using langpacks . A number of glibc locales have been synchronized with Unicode Common Locale Data Repository (CLDR).
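For example, support for one of the listed languages is typically added by installing the corresponding langpack packages; the Japanese packages below are an illustrative assumption, and the same pattern applies to the other languages:
# Install the Japanese langpack meta-package (fonts, input methods, locale data)
yum install langpacks-ja
# Or install only the glibc locale data for a smaller installation footprint
yum install glibc-langpack-ja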
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.4_release_notes/internationalization
Chapter 1. System requirements and supported architectures
Chapter 1. System requirements and supported architectures Red Hat Enterprise Linux 8 delivers a stable, secure, consistent foundation across hybrid cloud deployments with the tools needed to deliver workloads faster with less effort. You can deploy RHEL as a guest on supported hypervisors and Cloud provider environments as well as on physical infrastructure, so your applications can take advantage of innovations in the leading hardware architecture platforms. Review the guidelines provided for system, hardware, security, memory, and RAID before installing. If you want to use your system as a virtualization host, review the necessary hardware requirements for virtualization . Red Hat Enterprise Linux supports the following architectures: AMD and Intel 64-bit architectures The 64-bit ARM architecture IBM Power Systems, Little Endian 64-bit IBM Z architectures 1.1. Supported installation targets An installation target is a storage device that stores Red Hat Enterprise Linux and boots the system. Red Hat Enterprise Linux supports the following installation targets for IBMZ , IBM Power, AMD64, Intel 64, and 64-bit ARM systems: Storage connected by a standard internal interface, such as DASD, SCSI, SATA, or SAS BIOS/firmware RAID devices on the Intel64, AMD64 and arm64 architectures NVDIMM devices in sector mode on the Intel64 and AMD64 architectures, supported by the nd_pmem driver. Storage connected via Fibre Channel Host Bus Adapters, such as DASDs (IBM Z architecture only) and SCSI LUNs, including multipath devices. Some might require vendor-provided drivers. Xen block devices on Intel processors in Xen virtual machines. VirtIO block devices on Intel processors in KVM virtual machines. Red Hat does not support installation to USB drives or SD memory cards. For information about support for third-party virtualization technologies, see the Red Hat Hardware Compatibility List . 1.2. System specifications The Red Hat Enterprise Linux installation program automatically detects and installs your system's hardware, so you should not have to supply any specific system information. However, for certain Red Hat Enterprise Linux installation scenarios, it is recommended that you record system specifications for future reference. These scenarios include: Installing RHEL with a customized partition layout Record: The model numbers, sizes, types, and interfaces of the disks attached to the system. For example, Seagate ST3320613AS 320 GB on SATA0, Western Digital WD7500AAKS 750 GB on SATA1. Installing RHEL as an additional operating system on an existing system Record: Partitions used on the system. This information can include file system types, device node names, file system labels, and sizes, and allows you to identify specific partitions during the partitioning process. If one of the operating systems is a Unix operating system, Red Hat Enterprise Linux may report the device names differently. Additional information can be found by executing the equivalent of the mount command and the blkid command, and in the /etc/fstab file. If multiple operating systems are installed, the Red Hat Enterprise Linux installation program attempts to automatically detect them, and to configure boot loader to boot them. You can manually configure additional operating systems if they are not detected automatically. Installing RHEL from an image on a local disk Record: The disk and directory that holds the image. 
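If you need to record the partition details mentioned above, the following commands are one way to collect them on an existing Linux system; the output depends entirely on your own hardware and layout, so treat this as an illustrative sketch.
# List mounted file systems together with their device nodes and types
mount | grep '^/dev'
# Show labels, UUIDs, and file system types for all block devices
blkid
# Review the persistent mount definitions
cat /etc/fstab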
Installing RHEL from a network location If the network has to be configured manually, that is, DHCP is not used. Record: IP address Netmask Gateway IP address Server IP addresses, if required Contact your network administrator if you need assistance with networking requirements. Installing RHEL on an iSCSI target Record: The location of the iSCSI target. Depending on your network, you may need a CHAP user name and password, and a reverse CHAP user name and password. Installing RHEL if the system is part of a domain Verify that the domain name is supplied by the DHCP server. If it is not, enter the domain name during installation. 1.3. Disk and memory requirements If several operating systems are installed, it is important that you verify that the allocated disk space is separate from the disk space required by Red Hat Enterprise Linux. In some cases, it is important to dedicate specific partitions to Red Hat Enterprise Linux, for example, for AMD64, Intel 64, and 64-bit ARM, at least two partitions ( / and swap ) must be dedicated to RHEL and for IBM Power Systems servers, at least three partitions ( / , swap , and a PReP boot partition) must be dedicated to RHEL. Additionally, you must have a minimum of 10 GiB of available disk space. To install Red Hat Enterprise Linux, you must have a minimum of 10 GiB of space in either unpartitioned disk space or in partitions that can be deleted. For more information, see Partitioning reference . Table 1.1. Minimum RAM requirements Installation type Minimum RAM Local media installation (USB, DVD) 1.5 GiB for aarch64, IBM Z and x86_64 architectures 3 GiB for ppc64le architecture NFS network installation 1.5 GiB for aarch64, IBM Z and x86_64 architectures 3 GiB for ppc64le architecture HTTP, HTTPS or FTP network installation 3 GiB for IBM Z and x86_64 architectures 4 GiB for aarch64 and ppc64le architectures It is possible to complete the installation with less memory than the minimum requirements. The exact requirements depend on your environment and installation path. Test various configurations to determine the minimum required RAM for your environment. Installing Red Hat Enterprise Linux using a Kickstart file has the same minimum RAM requirements as a standard installation. However, additional RAM may be required if your Kickstart file includes commands that require additional memory, or write data to the RAM disk. For more information, see Automatically installing RHEL . 1.4. Graphics display resolution requirements Your system must have the following minimum resolution to ensure a smooth and error-free installation of Red Hat Enterprise Linux. Table 1.2. Display resolution Product version Resolution Red Hat Enterprise Linux 8 Minimum : 800 x 600 Recommended : 1026 x 768 1.5. UEFI Secure Boot and Beta release requirements If you plan to install a Beta release of Red Hat Enterprise Linux, on systems having UEFI Secure Boot enabled, then first disable the UEFI Secure Boot option and then begin the installation. UEFI Secure Boot requires that the operating system kernel is signed with a recognized private key, which the system's firmware verifies using the corresponding public key. For Red Hat Enterprise Linux Beta releases, the kernel is signed with a Red Hat Beta-specific public key, which the system fails to recognize by default. As a result, the system fails to even boot the installation media. 
Additional resources For information about installing RHEL on IBM, see IBM installation documentation Security hardening Composing a customized RHEL system image Red Hat ecosystem catalog RHEL technology capabilities and limits
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/interactively_installing_rhel_over_the_network/system-requirements-and-supported-architectures_rhel-installer
Chapter 3. Enabling automated deployments of JBoss EAP
Chapter 3. Enabling automated deployments of JBoss EAP The JBoss EAP collection provides a comprehensive set of variables and default values that you can manually update to match your setup requirements. These variable settings provide all the information that the JBoss EAP collection requires to complete an automated and customized installation of Red Hat JBoss Enterprise Application Platform (JBoss EAP) on your target hosts. For a full list of variables that the JBoss EAP collection provides, see the redhat.eap roles in Ansible automation hub . The information page for the role lists the names, descriptions, and default values for all the variables that you can define. Note You can define variables in multiple ways. By default, the JBoss EAP collection includes an example playbook.yml file that links to a vars.yml file in the same playbooks folder. For illustrative purposes, the instructions in this section describe how to define variables in the vars.yml file that the collection provides. You can use a different way to define variables if you prefer. You can define variables to automate the following tasks: Install JBoss EAP from archive files that you can choose to download either automatically or manually from the Red Hat Customer Portal . Ensure that a supported JDK version is installed on your target hosts . Ensure that a product user account and group are created on your target hosts . You can also perform the following automation enablement tasks: Enable automated configuration of JBoss EAP by specifying YAML configuration values and defining a variable to enable the YAML configuration feature, as described in Enabling the automated configuration of JBoss EAP subsystems . Enable automated deployment of JBoss EAP applications by adding customized tasks to the playbook, as described in Enabling the automated deployment of JBoss EAP applications on your target hosts . 3.1. Enablement of automated installations of JBoss EAP from archive files By default, the JBoss EAP collection is configured to install Red Hat JBoss Enterprise Application Platform (JBoss EAP) on each target host from product archive files. Depending on your setup requirements, you can enable the JBoss EAP collection to install a base product release, product patch updates, or both simultaneously from archive files. You can choose to download the archive files manually from the Red Hat Customer Portal or enable the JBoss EAP collection to download the archive files automatically. 3.1.1. Enabling the automated installation of a JBoss EAP base release You can enable the JBoss EAP collection to install the base release of a specified JBoss EAP product version from archive files. A base release is the initial release of a specific product version (for example, 7.4.0 is the base release of version 7.4). The JBoss EAP collection requires that local copies of the appropriate archive files are available on your Ansible control node. If copies of the archive files are not already on your system, you can set variables to specify Red Hat service account credentials to permit automatic file downloads from the Red Hat Customer Portal. Alternatively, you can download the archive files manually. Prerequisites You have installed the JBoss EAP collection . If a copy of the JBoss EAP archive file is already on your system, you have copied this archive file to your Ansible control node. In this situation, you must copy the archive file to the same directory as your custom playbook on the Ansible control node. 
If you want the JBoss EAP collection to download archive files automatically from the Red Hat Customer Portal, you have created a Red Hat service account. Note Service accounts enable you to securely and automatically connect and authenticate services or applications without requiring end-user credentials or direct interaction. To create a service account, log in to the Service Accounts page in the Red Hat Hybrid Cloud Console, and click Create service account . If you prefer to download the archive file manually, you have downloaded the appropriate archive file to your Ansible control node. In this situation, you must download the archive file to the same directory as your custom playbook on the Ansible control node. For more information about downloading archive files, see the Red Hat JBoss Enterprise Application Platform Installation Guide . Note If you manually download the archive file, you do not need to extract this file on your Ansible control node. In this situation, the JBoss EAP collection extracts the archive file automatically. Procedure On your Ansible control node, open the vars.yml file. To specify the JBoss EAP version that you want to install, set the eap_version variable to the appropriate base release. For example: Note Ensure that the value you specify for the eap_version variable matches the version of the product archive file that you want to install. For example, to install the archive file for JBoss EAP 7.4, specify a value of 7.4.0 . If you do not specify credentials for automatic file downloads as described in Step 3 , ensure that you have copied the archive file for the specified product version to your Ansible control node. If a copy of the JBoss EAP archive file does not exist on your Ansible control node, the collection contacts the Red Hat Customer Portal by default to download the archive file automatically. To ensure successful contact with the Red Hat Customer Portal, set the rhn_username and rhn_password variables to specify your Red Hat service account credentials. For example: In the preceding example, replace <client_ID> and <client_secret> with the client ID and secret that are associated with your Red Hat service account. Note If a copy of the appropriate archive file already exists on your Ansible control node, the collection does not download this archive file again. If you prefer to download the archive file manually or you have already obtained this file in some other way, you can enforce a fully offline installation. For more information about enforcing offline installations, see Enabling the automated installation of JBoss EAP patch updates . If you changed the name of the downloaded archive file on your Ansible control node, set the eap_archive_filename variable to specify the file that you want to install. For example: In the preceding example, replace <application_server_file> with the appropriate archive file name. Note If you did not change the file name, you do not need to set the eap_archive_filename variable. The JBoss EAP collection uses the value of the eap_version variable to determine the default file name automatically. Save your changes to the vars.yml file. By setting these variables, as appropriate, you enable the JBoss EAP collection to install the base product release automatically on your target hosts when you subsequently run the playbook. 3.1.2. 
Enabling the automated installation of JBoss EAP patch updates If product patch updates are available for the JBoss EAP version that is being installed, you can also enable the JBoss EAP collection to install these patch updates from archive files. Depending on your requirements, you can enable the JBoss EAP collection to install either the latest available patch or a specified patch release. You can use the same steps to enable the automated installation of patch updates regardless of whether you want to install the updates at the same time as the base release or later. The JBoss EAP collection requires that local copies of the appropriate archive files are available on your Ansible control node. If copies of the archive files are not already on your system, you can set variables to specify Red Hat service account credentials to permit automatic file downloads from the Red Hat Customer Portal. Alternatively, you can download the archive files manually. Note Patch updates are cumulative, which means that each patch update automatically includes any earlier patch releases that are available for the same product version. For example, a 7.4.2 patch update would include the 7.4.1 release, a 7.4.3 patch update would include the 7.4.1 and 7.4.2 releases, and so on. Important You cannot use cumulative patch updates to install the base ( X.X. 0) release of a product version. For example, a 7.4.2 patch would include the 7.4.1 release but cannot install the base 7.4.0 release. In this situation, you must ensure that the base release of the appropriate product version (for example, 7.4.0) is also installed either at the same time or previously. Prerequisites You have installed the JBoss EAP collection . If a copy of the archive file for the patch update that you want to install is already on your system, you have copied this archive file to your Ansible control node. In this situation, you must copy the archive file to the same directory as your custom playbook on the Ansible control node. If you want the JBoss EAP collection to download archive files automatically from the Red Hat Customer Portal, you have created a Red Hat service account. Note Service accounts enable you to securely and automatically connect and authenticate services or applications without requiring end-user credentials or direct interaction. To create a service account, log in to the Service Accounts page in the Red Hat Hybrid Cloud Console, and click Create service account . If you prefer to download the archive file manually, you have downloaded the appropriate archive file to your Ansible control node. In this situation, you must download the archive file to the same directory as your custom playbook on the Ansible control node. For more information about downloading archive files, see the Red Hat JBoss Enterprise Application Platform Installation Guide . Note Because patch updates are cumulative, you only need to download the archive file for the patch release that you want to install. You do not need to download any earlier patch updates. If you manually download the archive file, you do not need to extract this file on your Ansible control node. In this situation, the JBoss EAP collection extracts the archive file automatically. Procedure On your Ansible control node, open the vars.yml file. Set the eap_apply_cp variable to True . For example: Note Ensure that the eap_version variable is set to the base release for the appropriate product version (for example, 7.4.0 ). 
The JBoss EAP collection is configured to install the latest patch update by default. The collection contacts the Red Hat Customer Portal to determine the correct patch to install. If you want the collection to install a specified patch release rather than the latest patch update, set the eap_patch_version variable to the patch release that you want to install. For example: Based on the preceding example, the collection installs the cumulative 7.4.2 patch only, even if later patches are also available. When the eap_apply_cp variable is set to True , the JBoss EAP collection contacts the Red Hat Customer Portal by default to check if new patch updates are available. The collection also downloads patch updates, if necessary. To ensure successful contact with the Red Hat Customer Portal, set the rhn_username and rhn_password variables to specify your Red Hat service account credentials. For example: In the preceding example, replace <client_ID> and <client_secret> with the client ID and secret that are associated with your Red Hat service account. Note If a copy of the appropriate archive file already exists on your Ansible control node, the collection does not download this archive file again. If the eap_patch_version variable is set to a specific patch release, the collection downloads the specified patch release only, even if later patches are also available. If you prefer to download the archive file manually or you have already obtained this file in some other way, you can enforce a fully offline installation, as described in Step 5 . If you want to enforce a fully offline installation and prevent the collection from contacting the Red Hat Customer Portal, set the eap_offline_install variable to True . For example: Note The eap_offline_install variable is useful if your Ansible control node does not have internet access or you want the collection to avoid contacting the Red Hat Customer Portal for file downloads. In this situation, you must set the eap_patch_version variable to the patch release you want to install. Ensure that you have copied the archive file for the appropriate patch update to your Ansible control node. In this situation, you must copy the archive file to the same directory as your custom playbook on the Ansible control node. If you set the eap_offline_install variable to True , the collection does not attempt to contact the Red Hat Customer Portal, even if you have also set the rhn_username and rhn_password variables to permit automatic file downloads. Save your changes to the vars.yml file. By setting these variables, as appropriate, you enable the JBoss EAP collection to install the product patch updates automatically on your target hosts when you subsequently run the playbook. 3.2. Ensuring that a JDK is installed on the target hosts JBoss EAP requires that a Java Development Kit (JDK) is already installed as a prerequisite on your target hosts to ensure that JBoss EAP operates successfully. A JDK includes a Java Runtime Environment (JRE) and Java Virtual Machine (JVM), which must be available on any host where you want to run JBoss EAP. For a full list of JDK versions that JBoss EAP supports, see JBoss EAP 7.4 Supported Configurations . By default, the JBoss EAP collection is configured to install the java-11-openjdk-headless package on each target host, based on the default value of the eap_java_package_name variable. 
If you want the JBoss EAP collection to install a different OpenJDK package version, you can modify the behavior of the collection to match your setup requirements. Consider the following guidelines for installing a JDK when you use the JBoss EAP collection: If you want to install a Red Hat build of OpenJDK package other than java-11-openjdk-headless on your target hosts, you can set the eap_java_package_name variable to the appropriate JDK package name. The JBoss EAP collection automatically installs the specified package on each target host when you subsequently run the playbook. If you want to install a different type of JDK that is listed in the JBoss EAP 7.4 Supported Configurations page, you must install the JDK manually on each target host. Alternatively, you can automate this process by using your own playbook. For more information about installing a different type of JDK, see the appropriate user documentation. In this situation, ensure that you set the eap_java_package_name variable to an empty string. For example: If you already have a supported JDK installed on your target hosts, ensure that you set the eap_java_package_name variable to an empty string, as shown in the preceding example. Note Use the following procedure if you want to enable the JBoss EAP collection to install a Red Hat build of OpenJDK package other than java-11-openjdk-headless on your target hosts. Prerequisites You have installed the JBoss EAP collection . Procedure On your Ansible control node, open the vars.yml file. Set the eap_java_package_name variable to the appropriate OpenJDK package name that you want to install. For example: Based on the preceding example, the JBoss EAP collection automatically installs the java-1.8.0-openjdk-headless package on each target host when you run the playbook. Save your changes to the vars.yml file. 3.3. Ensuring that a product user and group are created on the target hosts JBoss EAP requires that a product user account and user group are already created as a prerequisite on your target hosts. By default, the JBoss EAP collection handles this requirement by creating an eap user account and an eap group automatically on each target host. However, if you want the JBoss EAP collection to create a different user account and group, you can modify the behavior of the JBoss EAP collection to match your setup requirements. The product user account is also assigned ownership of the directories that are required to run the JBoss EAP service. Note Use the following procedure if you want to enable the JBoss EAP collection to create a different user account and group rather than the eap default values. Prerequisites You have installed the JBoss EAP collection . Procedure On your Ansible control node, open the vars.yml file. Set the eap_user and eap_group variables to the appropriate product user name and group name that you want to create. For example: Based on the preceding example, the JBoss EAP collection automatically creates a myuser user account and group instead of creating the default eap user account and group. Save your changes to the vars.yml file. 3.4. Enabling the automated configuration of JBoss EAP subsystems You can enable the JBoss EAP collection to configure JBoss EAP subsystems with customized settings that match your setup requirements. In this situation, the JBoss EAP collection uses a YAML configuration feature for JBoss EAP. 
If you want to enable the automated configuration of JBoss EAP, you must specify the appropriate configuration values in YAML format in a Jinja2 template file. Provided that you also set a variable to enable the YAML configuration feature, the JBoss EAP collection automatically creates a YAML configuration file based on the Jinja2 template settings. Prerequisites You have installed the JBoss EAP collection . Procedure Create a Jinja2 template that contains YAML configuration values for the JBoss EAP subsystems: On your Ansible control node, create a Jinja2 template named, for example, jms_configuration.yml.j2 . Add the appropriate configuration values to the template. For example, the following content shows configuration values for the Java Message Service (JMS) queue: Note Because the Jinja2 file is a template, you can use placeholders for the subsystem configuration values, as shown in the preceding example. If you use placeholders in the Jinja2 template, you must specify details of these placeholders in your playbook, as described in Step 2 . Save your changes to the Jinja2 template. For more information about the types of YAML configuration settings that you can specify for the JBoss EAP subsystems, see Update standalone server configuration using YAML files . Update the playbook to include variables for the name of the Jinja2 template and any placeholders that you specified in the template: On your Ansible control node, open your custom playbook. In the vars: section of the playbook, add a variable to specify the name of the Jinja2 template that you created in Step 1 . For example, add a variable named eap_yml_configs with a value of jms_configuration.yml.j2 : If you specified placeholders for the configuration values in the Jinja2 template, update the vars: section of your playbook with the appropriate placeholder details. For example, add variables for the queue.name and queue.entry placeholders that you specified in Step 1 : Save your changes to the custom playbook. Enable the YAML configuration feature. On your Ansible control node, open the vars.yml file. Set the eap_enable_yml_config variable to True . For example: Save your changes to the vars.yml file. Note The eap_enable_yml_config variable is set to False by default. If you want to enable the automated configuration of JBoss EAP subsystems, you must set the eap_enable_yml_config variable to True . By performing all the steps in this procedure, as appropriate, you enable the JBoss EAP collection to create YAML configuration files based on your Jinja2 template settings when you subsequently run the playbook. 3.5. Enabling the automated deployment of JBoss EAP applications on your target hosts You can also automate the deployment of web applications on your target JBoss EAP hosts by adding customized tasks to the playbook. If you want to deploy a new or updated application, the JBoss EAP collection provides a reusable task for this purpose. Note The following procedure assumes that you have created a custom playbook. Prerequisites You have installed the JBoss EAP collection . You are familiar with general Ansible concepts and creating Ansible playbooks. For more information, see the Ansible documentation . Procedure On your Ansible control node, open your custom playbook. In the tasks: section of the playbook, add a task to deploy the appropriate web application. For example: Save your changes to the playbook. Additional resources Files modules Net Tools modules
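Pulling the variables from this chapter together, the following is a hypothetical vars.yml sketch that combines the installation, patching, JDK, user, and YAML-configuration settings discussed above. The version numbers, template name, and placeholder credentials are illustrative only; replace <client_ID> and <client_secret> with your own service account values, and omit any variable whose default already suits your environment.
# Hypothetical consolidated vars.yml (values shown are examples, not requirements)
eap_version: 7.4.0                               # base release to install
eap_apply_cp: True                               # also apply cumulative patches
eap_patch_version: 7.4.2                         # pin a patch release; omit to take the latest
rhn_username: <client_ID>                        # Red Hat service account client ID
rhn_password: <client_secret>                    # Red Hat service account secret
eap_java_package_name: java-11-openjdk-headless  # default OpenJDK package
eap_user: eap                                    # product user account
eap_group: eap                                   # product group
eap_enable_yml_config: True                      # turn on the YAML configuration feature
eap_yml_configs:
  - jms_configuration.yml.j2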
[ "[...] eap_version: 7.4.0", "[...] rhn_username: <client_ID> rhn_password: <client_secret>", "[...] eap_archive_filename: <application_server_file>", "[...] eap_version: 7.4.0 [...] eap_apply_cp: True", "[...] eap_apply_cp: True eap_patch_version: 7.4.2", "[...] rhn_username: <client_ID> rhn_password: <client_secret>", "[...] eap_offline_install: True", "[...] eap_java_package_name: \"\"", "[...] eap_java_package_name: java-1.8.0-openjdk-headless", "[...] eap_user: myuser eap_group: myuser", "jms_configuration.yml.j2: wildfly-configuration: subsystem: messaging-activemq: server: default: jms-queue: {{ queue.name }}: entries: - '{{ queue.entry }}'", "--- - name: \"JBoss EAP installation and configuration\" hosts: all become: true vars: eap_yml_configs: - jms_configuration.yml.j2", "--- [...] vars: queue: name: MyQueue entry: 'java:/jms/queue/MyQueue' eap_yml_configs: - jms_configuration.yml.j2", "[...] eap_enable_yml_config: True", "[...] post_tasks: [...] - name: \"Deploy webapp\" ansible.builtin.include_role: name: eap_utils tasks_from: jboss_cli.yaml vars: jboss_cli_query: \"'deploy --force {{ path_to_warfile }}'\" [...]" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/installing_jboss_eap_by_using_the_red_hat_ansible_certified_content_collection/define_variables
Chapter 70. Expression syntax in test scenarios
Chapter 70. Expression syntax in test scenarios The test scenarios designer supports different expression languages for both rule-based and DMN-based test scenarios. Rule-based test scenarios support the MVFLEX Expression Language (MVEL), while DMN-based test scenarios support the Friendly Enough Expression Language (FEEL). 70.1. Expression syntax in rule-based test scenarios Rule-based test scenarios support the following built-in data types: String Boolean Integer Long Double Float Character Byte Short LocalDate Note For any other data types, use the MVEL expression with the prefix # . Follow the BigDecimal example in the test scenario designer to use the # prefix to set the Java expression: Enter # java.math.BigDecimal.valueOf(10) for the GIVEN column value. Enter # actualValue.intValue() == 10 for the EXPECT column value. You can refer to the actual value of the EXPECT column in the Java expression to execute a condition. The following rule-based test scenario definition expressions are supported by the test scenarios designer: Table 70.1. Description of expressions syntax Operator Description = Specifies equal to a value. This is the default for all columns and is the only operator supported by the GIVEN column. !, !=, <> Specifies inequality of a value. This operator can be combined with other operators. <, >, <=, >= Specifies a comparison: less than, greater than, less than or equal to, and greater than or equal to. # This operator is used to set the Java expression value to a property header cell which can be executed as a Java method. [value1, value2, value3] Specifies a list of values. If one or more values are valid, the scenario definition is evaluated as true. expression1; expression2; expression3 Specifies a list of expressions. If all expressions are valid, the scenario definition is evaluated as true. Note When evaluating a rule-based test scenario, an empty cell is skipped from the evaluation. To define an empty string, use = , [] , or ; and to define a null value, use null . Table 70.2. Example expressions Expression Description -1 The actual value is equal to -1. < 0 The actual value is less than 0. ! > 0 The actual value is not greater than 0. [-1, 0, 1] The actual value is equal to either -1 or 0 or 1. <> [1, -1] The actual value is neither equal to 1 nor -1. ! 100; 0 The actual value is not equal to 100 but is equal to 0. != < 0; <> > 1 The actual value is neither less than 0 nor greater than 1. <> <= 0; >= 1 The actual value is neither less than 0 nor equal to 0 but is greater than or equal to 1. You can refer to the supported commands and syntax in the Scenario Cheatsheet tab on the right of the rule-based test scenarios designer. 70.2. Expression syntax in DMN-based test scenarios The following data types are supported by the DMN-based test scenarios in the test scenarios designer: Table 70.3. Data types supported by DMN-based scenarios Supported data types Description numbers & strings Strings must be delimited by quotation marks, for example, "John Doe" , "Brno" or "" . boolean values true , false , and null . dates and time For example, date("2019-05-13") or time("14:10:00+02:00") . functions Supports built-in math functions, for example, avg , max . contexts For example, {x : 5, y : 3} . ranges and lists For example, [1 .. 10] or [2, 3, 4, 5] . Note When evaluating a DMN-based test scenario, an empty cell is skipped from the evaluation. To define an empty string in DMN-based test scenarios, use " " and to define a null value, use null . 
You can refer to the supported commands and syntax in the Scenario Cheatsheet tab on the right of the DMN-based test scenarios designer.
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/test-designer-expressions-syntax-intro-ref
API Documentation
API Documentation Red Hat build of Keycloak 24.0 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/api_documentation/index
Part I. Configuration tasks
Part I. Configuration tasks
null
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/maintaining_red_hat_hyperconverged_infrastructure_for_virtualization/configuration_tasks
Chapter 6. Run Red Hat JBoss Data Grid JAR Files with Maven
Chapter 6. Run Red Hat JBoss Data Grid JAR Files with Maven 6.1. Run JBoss Data Grid (Remote Client-Server Mode) Use the following instructions to run Red Hat JBoss Data Grid JAR files with Maven in Remote Client-Server mode. Hot Rod Client with Querying Add the following dependencies to the pom.xml file: Add infinispan-remote dependency: For instances where a Remote Cache Store is in use also add the infinispan-embedded dependency as shown below: For instances where JSR-107 is in use, ensure that the cache-api packages are available at runtime. Having these packages available can be accomplished by any of the following methods: Option 1: If JBoss EAP is in use, then add the JBoss Data Grid modules to this instance as described in Section 4.2, "Deploy JBoss Data Grid in JBoss EAP (Remote Client-Server Mode)" . Add the javax.cache.api module to the application's jboss-deployment-structure.xml . An example is shown below: Option 2: Download the jboss-datagrid-USD{jdg.version}-library file from the customer portal. Extract the downloaded archive. Embed the jboss-datagrid-USD{jdg.version}-library/lib/cache-api-USD{jcache.version}.jar file into the desired application. Option 3: If the JBoss Data Grid Maven repository is available then add an explicit dependency to the pom.xml of the project as seen below: Warning The Infinispan query API directly exposes the Hibernate Search and the Lucene APIs and cannot be embedded within the infinispan-embedded-query.jar file. Do not include other versions of Hibernate Search and Lucene in the same deployment as infinispan-embedded-query . This action will cause classpath conflicts and result in unexpected behavior.
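With the infinispan-remote dependency above on the classpath, a Hot Rod client can connect to a running JBoss Data Grid server. The following is a minimal sketch only; the host, port, and cache contents are illustrative assumptions, and the exact client API can differ between JBoss Data Grid versions.
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class HotRodExample {
    public static void main(String[] args) {
        // Point the client at a JBoss Data Grid server running in Remote Client-Server mode
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.addServer().host("127.0.0.1").port(11222);

        RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build());
        try {
            // Obtain the default cache and perform a simple put/get round trip
            RemoteCache<String, String> cache = cacheManager.getCache();
            cache.put("key", "value");
            System.out.println(cache.get("key"));
        } finally {
            cacheManager.stop();
        }
    }
}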
[ "<dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-remote</artifactId> <version>USD{infinispan.version}</version> </dependency>", "<dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-embedded</artifactId> <version>USD{infinispan.version}</version> </dependency>", "<jboss-deployment-structure xmlns=\"urn:jboss:deployment-structure:1.2\"> <deployment> <dependencies> <module name=\"javax.cache.api\" slot=\"jdg-6.6\" services=\"export\"/> </dependencies> </deployment> </jboss-deployment-structure>", "<dependency> <groupId>javax.cache</groupId> <artifactId>cache-api</artifactId> <version>USD{jcache.version}</version> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/chap-run_red_hat_jboss_data_grid_jar_files_with_maven
15.2.7. Verifying
15.2.7. Verifying Verifying a package compares information about files installed from a package with the same information from the original package. Among other things, verifying compares the size, MD5 sum, permissions, type, owner, and group of each file. The command rpm -V verifies a package. You can use any of the Package Verify Options listed for querying to specify the packages you wish to verify. A simple use of verifying is rpm -V foo , which verifies that all the files in the foo package are as they were when they were originally installed. For example: To verify a package containing a particular file: To verify ALL installed packages: To verify an installed package against an RPM package file: This command can be useful if you suspect that your RPM databases are corrupt. If everything verified properly, there is no output. If there are any discrepancies, they are displayed. The format of the output is a string of eight characters (a c denotes a configuration file) and then the file name. Each of the eight characters denotes the result of a comparison of one attribute of the file to the value of that attribute recorded in the RPM database. A single period ( . ) means the test passed. The following characters denote failure of certain tests: 5 - MD5 checksum S - file size L - symbolic link T - file modification time D - device U - user G - group M - mode (includes permissions and file type) ? - unreadable file If you see any output, use your best judgment to determine if you should remove or reinstall the package, or fix the problem in another way.
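To make the failure codes concrete, the following is a hypothetical verification run; the package name, file path, and reported discrepancies are illustrative only and will differ on your system.
rpm -V httpd
S.5....T c /etc/httpd/conf/httpd.conf
Here the S , 5 , and T characters indicate that the size, MD5 checksum, and modification time of the file differ from the values recorded in the RPM database, while the c marker shows it is a configuration file, so local edits are often the expected explanation.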
[ "-Vf /usr/bin/vim", "-Va", "-Vp foo-1.0-1.i386.rpm" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Using_RPM-Verifying
Chapter 2. The LVM Logical Volume Manager
Chapter 2. The LVM Logical Volume Manager This chapter provides a summary of the features of the LVM logical volume manager that are new for the initial and subsequent releases of Red Hat Enterprise Linux 6. Following that, this chapter provides a high-level overview of the components of the Logical Volume Manager (LVM). 2.1. New and Changed Features This section lists new and changed features of the LVM logical volume manager that are included with the initial and subsequent releases of Red Hat Enterprise Linux 6. 2.1.1. New and Changed Features for Red Hat Enterprise Linux 6.0 Red Hat Enterprise Linux 6.0 includes the following documentation and feature updates and changes. You can define how a mirrored logical volume behaves in the event of a device failure with the mirror_image_fault_policy and mirror_log_fault_policy parameters in the activation section of the lvm.conf file. When this parameter is set to remove , the system attempts to remove the faulty device and run without it. When this parameter is set to allocate , the system attempts to remove the faulty device and tries to allocate space on a new device to be a replacement for the failed device; this policy acts like the remove policy if no suitable device and space can be allocated for the replacement. For information on the LVM mirror failure policies, see Section 5.4.3.1, "Mirrored Logical Volume Failure Policy" . For the Red Hat Enterprise Linux 6 release, the Linux I/O stack has been enhanced to process vendor-provided I/O limit information. This allows storage management tools, including LVM, to optimize data placement and access. This support can be disabled by changing the default values of data_alignment_detection and data_alignment_offset_detection in the lvm.conf file, although disabling this support is not recommended. For information on data alignment in LVM as well as information on changing the default values of data_alignment_detection and data_alignment_offset_detection , see the inline documentation for the /etc/lvm/lvm.conf file, which is also documented in Appendix B, The LVM Configuration Files . For general information on support for the I/O Stack and I/O limits in Red Hat Enterprise Linux 6, see the Storage Administration Guide . In Red Hat Enterprise Linux 6, the Device Mapper provides direct support for udev integration. This synchronizes the Device Mapper with all udev processing related to Device Mapper devices, including LVM devices. For information on Device Mapper support for the udev device manager, see Section A.3, "Device Mapper Support for the udev Device Manager" . For the Red Hat Enterprise Linux 6 release, you can use the lvconvert --repair command to repair a mirror after disk failure. This brings the mirror back into a consistent state. For information on the lvconvert --repair command, see Section 5.4.3.3, "Repairing a Mirrored Logical Device" . As of the Red Hat Enterprise Linux 6 release, you can use the --merge option of the lvconvert command to merge a snapshot into its origin volume. For information on merging snapshots, see Section 5.4.8, "Merging Snapshot Volumes" . As of the Red Hat Enterprise Linux 6 release, you can use the --splitmirrors argument of the lvconvert command to split off a redundant image of a mirrored logical volume to form a new logical volume. For information on using this option, see Section 5.4.3.2, "Splitting Off a Redundant Image of a Mirrored Logical Volume" . 
You can now create a mirror log for a mirrored logical device that is itself mirrored by using the --mirrorlog mirrored argument of the lvcreate command when creating a mirrored logical device. For information on using this option, see Section 5.4.3, "Creating Mirrored Volumes" . 2.1.2. New and Changed Features for Red Hat Enterprise Linux 6.1 Red Hat Enterprise Linux 6.1 includes the following documentation and feature updates and changes. The Red Hat Enterprise Linux 6.1 release supports the creation of snapshot logical volumes of mirrored logical volumes. You create a snapshot of a mirrored volume just as you would create a snapshot of a linear or striped logical volume. For information on creating snapshot volumes, see Section 5.4.5, "Creating Snapshot Volumes" . When extending an LVM volume, you can now use the --alloc cling option of the lvextend command to specify the cling allocation policy. This policy will choose space on the same physical volumes as the last segment of the existing logical volume. If there is insufficient space on the physical volumes and a list of tags is defined in the lvm.conf file, LVM will check whether any of the tags are attached to the physical volumes and seek to match those physical volume tags between existing extents and new extents. For information on extending LVM mirrored volumes with the --alloc cling option of the lvextend command, see Section 5.4.14.3, "Extending a Logical Volume with the cling Allocation Policy" . You can now specify multiple --addtag and --deltag arguments within a single pvchange , vgchange , or lvchange command. For information on adding and removing object tags, see Section D.1, "Adding and Removing Object Tags" . The list of allowed characters in LVM object tags has been extended, and tags can contain the "/", "=", "!", ":", "#", and "&" characters. For information on LVM object tags, see Appendix D, LVM Object Tags . You can now combine RAID0 (striping) and RAID1 (mirroring) in a single logical volume. Creating a logical volume while simultaneously specifying the number of mirrors ( --mirrors X ) and the number of stripes ( --stripes Y ) results in a mirror device whose constituent devices are striped. For information on creating mirrored logical volumes, see Section 5.4.3, "Creating Mirrored Volumes" . As of the Red Hat Enterprise Linux 6.1 release, if you need to create a consistent backup of data on a clustered logical volume you can activate the volume exclusively and then create the snapshot. For information on activating logical volumes exclusively on one node, see Section 5.7, "Activating Logical Volumes on Individual Nodes in a Cluster" . 2.1.3. New and Changed Features for Red Hat Enterprise Linux 6.2 Red Hat Enterprise Linux 6.2 includes the following documentation and feature updates and changes. The Red Hat Enterprise Linux 6.2 release supports the issue_discards parameter in the lvm.conf configuration file. When this parameter is set, LVM will issue discards to a logical volume's underlying physical volumes when the logical volume is no longer using the space on the physical volumes. For information on this parameter, see the inline documentation for the /etc/lvm/lvm.conf file, which is also documented in Appendix B, The LVM Configuration Files . 2.1.4. New and Changed Features for Red Hat Enterprise Linux 6.3 Red Hat Enterprise Linux 6.3 includes the following documentation and feature updates and changes. 
As of the Red Hat Enterprise Linux 6.3 release, LVM supports RAID4/5/6 and a new implementation of mirroring. For information on RAID logical volumes, see Section 5.4.16, "RAID Logical Volumes" . When you are creating a new mirror that does not need to be revived, you can specify the --nosync argument to indicate that an initial synchronization from the first device is not required. For information on creating mirrored volumes, see Section 5.4.3, "Creating Mirrored Volumes" . This manual now documents the snapshot autoextend feature. For information on creating snapshot volumes, see Section 5.4.5, "Creating Snapshot Volumes" . 2.1.5. New and Changed Features for Red Hat Enterprise Linux 6.4 Red Hat Enterprise Linux 6.4 includes the following documentation and feature updates and changes. Logical volumes can now be thinly provisioned. This allows you to create logical volumes that are larger than the available extents. Using thin provisioning, you can manage a storage pool of free space, known as a thin pool, to be allocated to an arbitrary number of devices when needed by applications. You can then create devices that can be bound to the thin pool for later allocation when an application actually writes to the logical volume. The thin pool can be expanded dynamically when needed for cost-effective allocation of storage space. For general information on thinly-provisioned logical volumes, see Section 3.3.5, "Thinly-Provisioned Logical Volumes (Thin Volumes)" . For information on creating thin volumes, see Section 5.4.4, "Creating Thinly-Provisioned Logical Volumes" . The Red Hat Enterprise Linux release 6.4 version of LVM provides support for thinly-provisioned snapshot volumes. Thin snapshot volumes allow many virtual devices to be stored on the same data volume. This simplifies administration and allows for the sharing of data between snapshot volumes. For general information on thinly-provisioned snapshot volumes, see Section 3.3.7, "Thinly-Provisioned Snapshot Volumes" . For information on creating thin snapshot volumes, see Section 5.4.6, "Creating Thinly-Provisioned Snapshot Volumes" . This document includes a new section detailing LVM allocation policy, Section 5.3.2, "LVM Allocation" . LVM now provides support for raid10 logical volumes. For information on RAID logical volumes, see Section 5.4.16, "RAID Logical Volumes" . The LVM metadata daemon, lvmetad , is supported in Red Hat Enterprise Linux release 6.4. Enabling this daemon reduces the amount of scanning on systems with many block devices. The lvmetad daemon is not currently supported across the nodes of a cluster, and requires that the locking type be local file-based locking. For information on the metadata daemon, see Section 4.6, "The Metadata Daemon (lvmetad)" . In addition, small technical corrections and clarifications have been made throughout the document. 2.1.6. New and Changed Features for Red Hat Enterprise Linux 6.5 Red Hat Enterprise Linux 6.5 includes the following documentation and feature updates and changes. You can control I/O operations on a RAID1 logical volume with the --writemostly and --writebehind parameters of the lvchange command. For information on these parameters, see Section 5.4.16.11, "Controlling I/O Operations on a RAID1 Logical Volume" . The lvchange command now supports a --refresh parameter that allows you to restore a transiently failed device without having to reactivate the device. This feature is described in Section 5.4.16.8.1, "The allocate RAID Fault Policy" . 
LVM provides scrubbing support for RAID logical volumes. For information on this feature, see Section 5.4.16.10, "Scrubbing a RAID Logical Volume" . The fields that the lvs command supports have been updated. For information on the lvs command, see Table 5.4, "lvs Display Fields" . The lvchange command supports the new --maxrecoveryrate and --minrecoveryrate parameters, which allow you to control the rate at which sync operations are performed. For information on these parameters, see Section 5.4.16.10, "Scrubbing a RAID Logical Volume" . You can control the rate at which a RAID logical volume is initialized by implementing recovery throttling. You control the rate at which sync operations are performed by setting the minimum and maximum I/O rate for those operations with the --minrecoveryrate and --maxrecoveryrate options of the lvcreate command, as described in Section 5.4.16.1, "Creating a RAID Logical Volume" . You can now create a thinly-provisioned snapshot of a non-thinly-provisioned logical volume. For information on creating these volumes, known as external volumes, see Section 3.3.7, "Thinly-Provisioned Snapshot Volumes" . In addition, small technical corrections and clarifications have been made throughout the document. 2.1.7. New and Changed Features for Red Hat Enterprise Linux 6.6 Red Hat Enterprise Linux 6.6 includes the following documentation and feature updates and changes. The documentation for thinly-provisioned volumes and thinly-provisioned snapshots has been clarified. Additional information about LVM thin provisioning is now provided in the lvmthin (7) man page. For general information on thinly-provisioned logical volumes, see Section 3.3.5, "Thinly-Provisioned Logical Volumes (Thin Volumes)" . For information on thinly-provisioned snapshot volumes, see Section 3.3.7, "Thinly-Provisioned Snapshot Volumes" . This manual now documents the lvm dumpconfig command, in Section B.2, "The lvmconfig Command" . Note that as of the Red Hat Enterprise Linux 6.8 release, this command was renamed lvmconfig , although the old format continues to work. This manual now documents LVM profiles, in Section B.3, "LVM Profiles" . This manual now documents the lvm command in Section 4.7, "Displaying LVM Information with the lvm Command" . In the Red Hat Enterprise Linux 6.6 release, you can control activation of thin pool snapshots with the -k and -K options of the lvcreate and lvchange command, as documented in Section 5.4.17, "Controlling Logical Volume Activation" . This manual documents the --force argument of the vgimport command. This allows you to import volume groups that are missing physical volumes and subsequently run the vgreduce --removemissing command. For information on the vgimport command, see Section 5.3.15, "Moving a Volume Group to Another System" . In addition, small technical corrections and clarifications have been made throughout the document. 2.1.8. New and Changed Features for Red Hat Enterprise Linux 6.7 Red Hat Enterprise Linux 6.7 includes the following documentation and feature updates and changes. As of Red Hat Enterprise Linux release 6.7, many LVM processing commands accept the -S or --select option to define selection criteria for those commands. LVM selection criteria are documented in the new appendix Appendix C, LVM Selection Criteria . This document provides basic procedures for creating cache logical volumes in Section 5.4.7, "Creating LVM Cache Logical Volumes" . 
The troubleshooting chapter of this document includes a new section, Section 7.8, "Duplicate PV Warnings for Multipathed Devices" . 2.1.9. New and Changed Features for Red Hat Enterprise Linux 6.8 Red Hat Enterprise Linux 6.8 includes the following documentation and feature updates and changes. When defining selection criteria for LVM commands, you can now specify time values as selection criteria for fields with a field type of time . For information on specifying time values as selection criteria, see Section C.3.1, "Specifying Time Values" .
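As a hedged illustration of the thin provisioning support introduced in Red Hat Enterprise Linux 6.4 and described above, the following sketch creates a thin pool and an over-provisioned thin volume; the volume group, names, and sizes are illustrative assumptions and must be adapted to your own storage.
# Create a 10 GiB thin pool named mythinpool in volume group vg0
lvcreate --size 10G --thinpool mythinpool vg0
# Create a 20 GiB thin volume backed by that pool; its virtual size may exceed the pool's physical size
lvcreate --virtualsize 20G --thin vg0/mythinpool --name mythinvol
Space is only consumed from the pool as data is actually written to mythinvol, which is what allows the pool to be extended later as demand grows.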
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/LVM_overview
Chapter 7. How to use dedicated worker nodes for Red Hat OpenShift Data Foundation
Chapter 7. How to use dedicated worker nodes for Red Hat OpenShift Data Foundation Any Red Hat OpenShift Container Platform subscription requires an OpenShift Data Foundation subscription. However, you can save on the OpenShift Container Platform subscription costs if you are using infrastructure nodes to schedule OpenShift Data Foundation resources. It is important to maintain consistency across environments with or without Machine API support. Because of this, it is highly recommended in all cases to have a special category of nodes labeled as either worker or infra, or with both roles. See the Section 7.3, "Manual creation of infrastructure nodes" section for more information. 7.1. Anatomy of an Infrastructure node Infrastructure nodes for use with OpenShift Data Foundation have a few attributes. The infra node-role label is required to ensure the node does not consume RHOCP entitlements. The infra node-role label is responsible for ensuring only OpenShift Data Foundation entitlements are necessary for the nodes running OpenShift Data Foundation. Labeled with node-role.kubernetes.io/infra Adding an OpenShift Data Foundation taint with a NoSchedule effect is also required so that the infra node will only schedule OpenShift Data Foundation resources. Tainted with node.ocs.openshift.io/storage="true" The label identifies the RHOCP node as an infra node so that RHOCP subscription cost is not applied. The taint prevents non-OpenShift Data Foundation resources from being scheduled on the tainted nodes. Note Adding a storage taint on nodes might require toleration handling for other daemonset pods, such as the openshift-dns daemonset. For information about how to manage the tolerations, see Knowledgebase article: https://access.redhat.com/solutions/6592171 . Example of the taint and labels required on an infrastructure node that will be used to run OpenShift Data Foundation services: 7.2. Machine sets for creating Infrastructure nodes If the Machine API is supported in the environment, then labels should be added to the templates for the Machine Sets that will provision the infrastructure nodes. Avoid the anti-pattern of adding labels manually to nodes created by the machine API. Doing so is analogous to adding labels to pods created by a deployment. In both cases, when the pod/node fails, the replacement pod/node will not have the appropriate labels. Note In EC2 environments, you will need three machine sets, each configured to provision infrastructure nodes in a distinct availability zone (such as us-east-2a, us-east-2b, us-east-2c). Currently, OpenShift Data Foundation does not support deploying in more than three availability zones. The following Machine Set template example creates nodes with the appropriate taint and labels required for infrastructure nodes. This will be used to run OpenShift Data Foundation services. Important If you add a taint to the infrastructure nodes, you also need to add tolerations to the taint for other workloads, for example, the fluentd pods. For more information, see the Red Hat Knowledgebase solution Infrastructure Nodes in OpenShift 4 . 7.3. Manual creation of infrastructure nodes Only when the Machine API is not supported in the environment should labels be directly applied to nodes. Manual creation requires that at least 3 RHOCP worker nodes are available to schedule OpenShift Data Foundation services, and that these nodes have sufficient CPU and memory resources. 
To avoid the RHOCP subscription cost, the following is required: Adding a NoSchedule OpenShift Data Foundation taint is also required so that the infra node will only schedule OpenShift Data Foundation resources and repel any other non-OpenShift Data Foundation workloads. Warning Do not remove the node-role node-role.kubernetes.io/worker="" The removal of the node-role.kubernetes.io/worker="" can cause issues unless changes are made both to the OpenShift scheduler and to MachineConfig resources. If already removed, it should be added again to each infra node. Adding node-role node-role.kubernetes.io/infra="" and OpenShift Data Foundation taint is sufficient to conform to entitlement exemption requirements.
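To illustrate the toleration handling mentioned in the note and the Important callout above, the following is a minimal sketch of a toleration that a non-OpenShift Data Foundation workload (for example, a logging or DNS daemonset) could carry so that it can still run on the tainted nodes; the key, value, and effect mirror the taint shown in the examples, while the workload it is attached to is an assumption.
# Illustrative pod spec fragment tolerating the OpenShift Data Foundation storage taint
tolerations:
- key: node.ocs.openshift.io/storage
  operator: Equal
  value: "true"
  effect: NoSchedule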
[ "spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/worker: \"\" node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"", "template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: kb-s25vf machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: kb-s25vf-infra-us-west-2a spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"", "label node <node> node-role.kubernetes.io/infra=\"\" label node <node> cluster.ocs.openshift.io/openshift-storage=\"\"", "adm taint node <node> node.ocs.openshift.io/storage=\"true\":NoSchedule" ]
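The note above mentions that other daemonset pods, such as openshift-dns, may need toleration handling once the storage taint is applied. The following snippet is only an illustrative sketch of what such a toleration could look like; the surrounding pod or daemonset spec is assumed rather than taken from this document, while the taint key, value, and effect match the ones shown above:

spec:
  tolerations:
  # Tolerate the OpenShift Data Foundation storage taint applied to infra nodes
  - key: node.ocs.openshift.io/storage
    operator: Equal
    value: "true"
    effect: NoSchedule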
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/managing_and_allocating_storage_resources/how-to-use-dedicated-worker-nodes-for-openshift-data-foundation_rhodf
17.2.2.2. Access Control
17.2.2.2. Access Control Option fields also allow administrators to explicitly allow or deny hosts in a single rule by adding the allow or deny directive as the final option. For instance, the following two rules allow SSH connections from client-1.example.com, but deny connections from client-2.example.com: Because the option field permits access control on a per-rule basis, administrators can consolidate all access rules into a single file: either hosts.allow or hosts.deny. Some administrators consider this an easier way of organizing access rules.
[ "sshd : client-1.example.com : allow sshd : client-2.example.com : deny" ]
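As a sketch of the consolidation described above, a single /etc/hosts.allow file could carry both permissive and restrictive decisions for several services; the hostnames and the in.telnetd rule below are illustrative only and are not taken from the original example:

# All access decisions kept in one file, using allow and deny directives
sshd : client-1.example.com : allow
sshd : client-2.example.com : deny
in.telnetd : ALL : deny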
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s3-tcpwrappers-access-rules-access
Building applications
Building applications OpenShift Container Platform 4.16 Creating and managing applications on OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "oc new-project <project_name> --description=\"<description>\" --display-name=\"<display_name>\"", "oc new-project hello-openshift --description=\"This is an example project\" --display-name=\"Hello OpenShift\"", "oc get projects", "oc project <project_name>", "apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: projectAccess: availableClusterRoles: - admin - edit - view", "oc project <project_name> 1", "oc status", "oc delete project <project_name> 1", "oc new-project <project> --as=<user> --as-group=system:authenticated --as-group=system:authenticated:oauth", "oc adm create-bootstrap-project-template -o yaml > template.yaml", "oc create -f template.yaml -n openshift-config", "oc edit project.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>", "oc describe clusterrolebinding.rbac self-provisioners", "Name: self-provisioners Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate=true Role: Kind: ClusterRole Name: self-provisioner Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated:oauth", "oc patch clusterrolebinding.rbac self-provisioners -p '{\"subjects\": null}'", "oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth", "oc edit clusterrolebinding.rbac self-provisioners", "apiVersion: authorization.openshift.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"false\"", "oc patch clusterrolebinding.rbac self-provisioners -p '{ \"metadata\": { \"annotations\": { \"rbac.authorization.kubernetes.io/autoupdate\": \"false\" } } }'", "oc new-project test", "Error from server (Forbidden): You may not request a new project via this API.", "You may not request a new project via this API.", "oc edit project.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestMessage: <message_string>", "apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestMessage: To request a project, contact your system administrator at [email protected].", "oc create -f <filename>", "oc create -f <filename> -n <project>", "kind: \"ImageStream\" apiVersion: \"image.openshift.io/v1\" metadata: name: \"ruby\" creationTimestamp: null spec: tags: - name: \"2.6\" annotations: description: \"Build and run Ruby 2.6 applications\" iconClass: \"icon-ruby\" tags: \"builder,ruby\" 1 supports: \"ruby:2.6,ruby\" version: \"2.6\"", "oc process -f <filename> -l name=otherLabel", "oc process --parameters -f <filename>", "oc process --parameters -n <project> <template_name>", "oc process --parameters -n openshift rails-postgresql-example", "NAME DESCRIPTION GENERATOR VALUE SOURCE_REPOSITORY_URL The URL of the repository with your application source code https://github.com/sclorg/rails-ex.git SOURCE_REPOSITORY_REF Set this to a branch name, tag or other ref of your repository if you are not using the default branch CONTEXT_DIR Set this to the relative path to your project if it is not in the root of your repository APPLICATION_DOMAIN The exposed hostname that will route to the Rails service rails-postgresql-example.openshiftapps.com GITHUB_WEBHOOK_SECRET A secret string used to configure the GitHub webhook expression [a-zA-Z0-9]{40} SECRET_KEY_BASE Your secret key for verifying the integrity of signed cookies expression [a-z0-9]{127} APPLICATION_USER The application user that is used within the sample 
application to authorize access on pages openshift APPLICATION_PASSWORD The application password that is used within the sample application to authorize access on pages secret DATABASE_SERVICE_NAME Database service name postgresql POSTGRESQL_USER database username expression user[A-Z0-9]{3} POSTGRESQL_PASSWORD database password expression [a-zA-Z0-9]{8} POSTGRESQL_DATABASE database name root POSTGRESQL_MAX_CONNECTIONS database max connections 10 POSTGRESQL_SHARED_BUFFERS database shared buffers 12MB", "oc process -f <filename>", "oc process <template_name>", "oc process -f <filename> | oc create -f -", "oc process <template> | oc create -f -", "oc process -f my-rails-postgresql -p POSTGRESQL_USER=bob -p POSTGRESQL_DATABASE=mydatabase", "oc process -f my-rails-postgresql -p POSTGRESQL_USER=bob -p POSTGRESQL_DATABASE=mydatabase | oc create -f -", "cat postgres.env POSTGRESQL_USER=bob POSTGRESQL_DATABASE=mydatabase", "oc process -f my-rails-postgresql --param-file=postgres.env", "sed s/bob/alice/ postgres.env | oc process -f my-rails-postgresql --param-file=-", "oc edit template <template>", "oc get templates -n openshift", "apiVersion: template.openshift.io/v1 kind: Template metadata: name: redis-template annotations: description: \"Description\" iconClass: \"icon-redis\" tags: \"database,nosql\" objects: - apiVersion: v1 kind: Pod metadata: name: redis-master spec: containers: - env: - name: REDIS_PASSWORD value: USD{REDIS_PASSWORD} image: dockerfile/redis name: master ports: - containerPort: 6379 protocol: TCP parameters: - description: Password used for Redis authentication from: '[A-Z0-9]{8}' generate: expression name: REDIS_PASSWORD labels: redis: master", "kind: Template apiVersion: template.openshift.io/v1 metadata: name: cakephp-mysql-example 1 annotations: openshift.io/display-name: \"CakePHP MySQL Example (Ephemeral)\" 2 description: >- An example CakePHP application with a MySQL database. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/cakephp-ex/blob/master/README.md. WARNING: Any data stored will be lost upon pod destruction. Only use this template for testing.\" 3 openshift.io/long-description: >- This template defines resources needed to develop a CakePHP application, including a build configuration, application DeploymentConfig, and database DeploymentConfig. The database is stored in non-persistent storage, so this configuration should be used for experimental purposes only. 
4 tags: \"quickstart,php,cakephp\" 5 iconClass: icon-php 6 openshift.io/provider-display-name: \"Red Hat, Inc.\" 7 openshift.io/documentation-url: \"https://github.com/sclorg/cakephp-ex\" 8 openshift.io/support-url: \"https://access.redhat.com\" 9 message: \"Your admin credentials are USD{ADMIN_USERNAME}:USD{ADMIN_PASSWORD}\" 10", "kind: \"Template\" apiVersion: \"v1\" labels: template: \"cakephp-mysql-example\" 1 app: \"USD{NAME}\" 2", "parameters: - name: USERNAME description: \"The user name for Joe\" value: joe", "parameters: - name: PASSWORD description: \"The random user password\" generate: expression from: \"[a-zA-Z0-9]{12}\"", "parameters: - name: singlequoted_example generate: expression from: '[\\A]{10}' - name: doublequoted_example generate: expression from: \"[\\\\A]{10}\"", "{ \"parameters\": [ { \"name\": \"json_example\", \"generate\": \"expression\", \"from\": \"[\\\\A]{10}\" } ] }", "kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: cakephp-mysql-example annotations: description: Defines how to build the application spec: source: type: Git git: uri: \"USD{SOURCE_REPOSITORY_URL}\" 1 ref: \"USD{SOURCE_REPOSITORY_REF}\" contextDir: \"USD{CONTEXT_DIR}\" - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: replicas: \"USD{{REPLICA_COUNT}}\" 2 parameters: - name: SOURCE_REPOSITORY_URL 3 displayName: Source Repository URL 4 description: The URL of the repository with your application source code 5 value: https://github.com/sclorg/cakephp-ex.git 6 required: true 7 - name: GITHUB_WEBHOOK_SECRET description: A secret string used to configure the GitHub webhook generate: expression 8 from: \"[a-zA-Z0-9]{40}\" 9 - name: REPLICA_COUNT description: Number of replicas to run value: \"2\" required: true message: \"... 
The GitHub webhook secret is USD{GITHUB_WEBHOOK_SECRET} ...\" 10", "kind: \"Template\" apiVersion: \"v1\" metadata: name: my-template objects: - kind: \"Service\" 1 apiVersion: \"v1\" metadata: name: \"cakephp-mysql-example\" annotations: description: \"Exposes and load balances the application pods\" spec: ports: - name: \"web\" port: 8080 targetPort: 8080 selector: name: \"cakephp-mysql-example\"", "kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: ConfigMap apiVersion: v1 metadata: name: my-template-config annotations: template.openshift.io/expose-username: \"{.data['my\\\\.username']}\" data: my.username: foo - kind: Secret apiVersion: v1 metadata: name: my-template-config-secret annotations: template.openshift.io/base64-expose-password: \"{.data['password']}\" stringData: password: <password> - kind: Service apiVersion: v1 metadata: name: my-template-service annotations: template.openshift.io/expose-service_ip_port: \"{.spec.clusterIP}:{.spec.ports[?(.name==\\\"web\\\")].port}\" spec: ports: - name: \"web\" port: 8080 - kind: Route apiVersion: route.openshift.io/v1 metadata: name: my-template-route annotations: template.openshift.io/expose-uri: \"http://{.spec.host}{.spec.path}\" spec: path: mypath", "{ \"credentials\": { \"username\": \"foo\", \"password\": \"YmFy\", \"service_ip_port\": \"172.30.12.34:8080\", \"uri\": \"http://route-test.router.default.svc.cluster.local/mypath\" } }", "\"template.alpha.openshift.io/wait-for-ready\": \"true\"", "kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: annotations: # wait-for-ready used on BuildConfig ensures that template instantiation # will fail immediately if build fails template.alpha.openshift.io/wait-for-ready: \"true\" spec: - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: annotations: template.alpha.openshift.io/wait-for-ready: \"true\" spec: - kind: Service apiVersion: v1 metadata: name: spec:", "oc get -o yaml all > <yaml_filename>", "oc get csv", "oc policy add-role-to-user edit <user> -n <target_project>", "oc new-app /<path to source code>", "oc new-app https://github.com/sclorg/cakephp-ex", "oc new-app https://github.com/youruser/yourprivaterepo --source-secret=yoursecret", "oc new-app https://github.com/sclorg/s2i-ruby-container.git --context-dir=2.0/test/puma-test-app", "oc new-app https://github.com/openshift/ruby-hello-world.git#beta4", "oc new-app /home/user/code/myapp --strategy=docker", "oc new-app myproject/my-ruby~https://github.com/openshift/ruby-hello-world.git", "oc new-app openshift/ruby-20-centos7:latest~/home/user/code/my-ruby-app", "oc new-app mysql", "oc new-app myregistry:5000/example/myimage", "oc new-app my-stream:v1", "oc create -f examples/sample-app/application-template-stibuild.json", "oc new-app ruby-helloworld-sample", "oc new-app -f examples/sample-app/application-template-stibuild.json", "oc new-app ruby-helloworld-sample -p ADMIN_USERNAME=admin -p ADMIN_PASSWORD=mypassword", "ADMIN_USERNAME=admin ADMIN_PASSWORD=mypassword", "oc new-app ruby-helloworld-sample --param-file=helloworld.params", "oc new-app openshift/postgresql-92-centos7 -e POSTGRESQL_USER=user -e POSTGRESQL_DATABASE=db -e POSTGRESQL_PASSWORD=password", "POSTGRESQL_USER=user POSTGRESQL_DATABASE=db POSTGRESQL_PASSWORD=password", "oc new-app openshift/postgresql-92-centos7 --env-file=postgresql.env", "cat postgresql.env | oc new-app 
openshift/postgresql-92-centos7 --env-file=-", "oc new-app openshift/ruby-23-centos7 --build-env HTTP_PROXY=http://myproxy.net:1337/ --build-env GEM_HOME=~/.gem", "HTTP_PROXY=http://myproxy.net:1337/ GEM_HOME=~/.gem", "oc new-app openshift/ruby-23-centos7 --build-env-file=ruby.env", "cat ruby.env | oc new-app openshift/ruby-23-centos7 --build-env-file=-", "oc new-app https://github.com/openshift/ruby-hello-world -l name=hello-world", "oc new-app https://github.com/openshift/ruby-hello-world -o yaml > myapp.yaml", "vi myapp.yaml", "oc create -f myapp.yaml", "oc new-app https://github.com/openshift/ruby-hello-world --name=myapp", "oc new-app https://github.com/openshift/ruby-hello-world -n myproject", "oc new-app https://github.com/openshift/ruby-hello-world mysql", "oc new-app ruby+mysql", "oc new-app ruby~https://github.com/openshift/ruby-hello-world mysql --group=ruby+mysql", "oc new-app --search php", "oc new-app --image=registry.redhat.io/ubi8/httpd-24:latest --import-mode=Legacy --name=test", "oc new-app --image=registry.redhat.io/ubi8/httpd-24:latest --import-mode=PreserveOriginal --name=test", "sudo yum install -y postgresql postgresql-server postgresql-devel", "sudo postgresql-setup initdb", "sudo systemctl start postgresql.service", "sudo -u postgres createuser -s rails", "gem install rails", "Successfully installed rails-4.3.0 1 gem installed", "rails new rails-app --database=postgresql", "cd rails-app", "gem 'pg'", "bundle install", "default: &default adapter: postgresql encoding: unicode pool: 5 host: localhost username: rails password: <password>", "rake db:create", "rails generate controller welcome index", "root 'welcome#index'", "rails server", "<% user = ENV.key?(\"POSTGRESQL_ADMIN_PASSWORD\") ? \"root\" : ENV[\"POSTGRESQL_USER\"] %> <% password = ENV.key?(\"POSTGRESQL_ADMIN_PASSWORD\") ? 
ENV[\"POSTGRESQL_ADMIN_PASSWORD\"] : ENV[\"POSTGRESQL_PASSWORD\"] %> <% db_service = ENV.fetch(\"DATABASE_SERVICE_NAME\",\"\").upcase %> default: &default adapter: postgresql encoding: unicode # For details on connection pooling, see rails configuration guide # http://guides.rubyonrails.org/configuring.html#database-pooling pool: <%= ENV[\"POSTGRESQL_MAX_CONNECTIONS\"] || 5 %> username: <%= user %> password: <%= password %> host: <%= ENV[\"#{db_service}_SERVICE_HOST\"] %> port: <%= ENV[\"#{db_service}_SERVICE_PORT\"] %> database: <%= ENV[\"POSTGRESQL_DATABASE\"] %>", "ls -1", "app bin config config.ru db Gemfile Gemfile.lock lib log public Rakefile README.rdoc test tmp vendor", "git init", "git add .", "git commit -m \"initial commit\"", "git remote add origin [email protected]:<namespace/repository-name>.git", "git push", "oc new-project rails-app --description=\"My Rails application\" --display-name=\"Rails Application\"", "oc new-app postgresql -e POSTGRESQL_DATABASE=db_name -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password", "-e POSTGRESQL_ADMIN_PASSWORD=admin_pw", "oc get pods --watch", "oc new-app path/to/source/code --name=rails-app -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password -e POSTGRESQL_DATABASE=db_name -e DATABASE_SERVICE_NAME=postgresql", "oc get dc rails-app -o json", "env\": [ { \"name\": \"POSTGRESQL_USER\", \"value\": \"username\" }, { \"name\": \"POSTGRESQL_PASSWORD\", \"value\": \"password\" }, { \"name\": \"POSTGRESQL_DATABASE\", \"value\": \"db_name\" }, { \"name\": \"DATABASE_SERVICE_NAME\", \"value\": \"postgresql\" } ],", "oc logs -f build/rails-app-1", "oc get pods", "oc rsh <frontend_pod_id>", "RAILS_ENV=production bundle exec rake db:migrate", "oc expose service rails-app --hostname=www.example.com", "curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-amd64 -o /usr/local/bin/helm", "curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-s390x -o /usr/local/bin/helm", "curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-ppc64le -o /usr/local/bin/helm", "chmod +x /usr/local/bin/helm", "helm version", "version.BuildInfo{Version:\"v3.0\", GitCommit:\"b31719aab7963acf4887a1c1e6d5e53378e34d93\", GitTreeState:\"clean\", GoVersion:\"go1.13.4\"}", "curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-darwin-amd64 -o /usr/local/bin/helm", "chmod +x /usr/local/bin/helm", "helm version", "version.BuildInfo{Version:\"v3.0\", GitCommit:\"b31719aab7963acf4887a1c1e6d5e53378e34d93\", GitTreeState:\"clean\", GoVersion:\"go1.13.4\"}", "oc new-project vault", "helm repo add openshift-helm-charts https://charts.openshift.io/", "\"openshift-helm-charts\" has been added to your repositories", "helm repo update", "helm install example-vault openshift-helm-charts/hashicorp-vault", "NAME: example-vault LAST DEPLOYED: Fri Mar 11 12:02:12 2022 NAMESPACE: vault STATUS: deployed REVISION: 1 NOTES: Thank you for installing HashiCorp Vault!", "helm list", "NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION example-vault vault 1 2022-03-11 12:02:12.296226673 +0530 IST deployed vault-0.19.0 1.9.2", "oc new-project nodejs-ex-k", "git clone https://github.com/redhat-developer/redhat-helm-charts", "cd redhat-helm-charts/alpha/nodejs-ex-k/", "apiVersion: v2 1 name: nodejs-ex-k 2 description: A Helm chart for OpenShift 3 icon: https://static.redhat.com/libs/redhat/brand-assets/latest/corp/logo.svg 4 version: 0.2.1 5", "helm lint", "[INFO] Chart.yaml: 
icon is recommended 1 chart(s) linted, 0 chart(s) failed", "cd ..", "helm install nodejs-chart nodejs-ex-k", "helm list", "NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION nodejs-chart nodejs-ex-k 1 2019-12-05 15:06:51.379134163 -0500 EST deployed nodejs-0.1.0 1.16.0", "apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <name> spec: # optional name that might be used by console # name: <chart-display-name> connectionConfig: url: <helm-chart-repository-url>", "cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: azure-sample-repo spec: name: azure-sample-repo connectionConfig: url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs EOF", "apiVersion: helm.openshift.io/v1beta1 kind: ProjectHelmChartRepository metadata: name: <name> spec: url: https://my.chart-repo.org/stable # optional name that might be used by console name: <chart-repo-display-name> # optional and only needed for UI purposes description: <My private chart repo> # required: chart repository URL connectionConfig: url: <helm-chart-repository-url>", "cat <<EOF | oc apply --namespace my-namespace -f - apiVersion: helm.openshift.io/v1beta1 kind: ProjectHelmChartRepository metadata: name: azure-sample-repo spec: name: azure-sample-repo connectionConfig: url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs EOF", "projecthelmchartrepository.helm.openshift.io/azure-sample-repo created", "oc get projecthelmchartrepositories --namespace my-namespace", "NAME AGE azure-sample-repo 1m", "oc create configmap helm-ca-cert --from-file=ca-bundle.crt=/path/to/certs/ca.crt -n openshift-config", "oc create secret tls helm-tls-configs --cert=/path/to/certs/client.crt --key=/path/to/certs/client.key -n openshift-config", "cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <helm-repository> spec: name: <helm-repository> connectionConfig: url: <URL for the Helm repository> tlsConfig: name: helm-tls-configs ca: name: helm-ca-cert EOF", "cat <<EOF | kubectl apply -f - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer rules: - apiGroups: [\"\"] resources: [\"configmaps\"] resourceNames: [\"helm-ca-cert\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"secrets\"] resourceNames: [\"helm-tls-configs\"] verbs: [\"get\"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: 'system:authenticated' roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: helm-chartrepos-tls-conf-viewer EOF", "cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: azure-sample-repo spec: connectionConfig: url:https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs disabled: true EOF", "spec: connectionConfig: url: <url-of-the-repositoru-to-be-disabled> disabled: true", "apiVersion: apps/v1 kind: ReplicaSet metadata: name: frontend-1 labels: tier: frontend spec: replicas: 3 selector: 1 matchLabels: 2 tier: frontend matchExpressions: 3 - {key: tier, operator: In, values: [frontend]} template: metadata: labels: tier: frontend spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always", "apiVersion: v1 kind: 
ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always", "apiVersion: apps/v1 kind: Deployment metadata: name: hello-openshift spec: replicas: 1 selector: matchLabels: app: hello-openshift template: metadata: labels: app: hello-openshift spec: containers: - name: hello-openshift image: openshift/hello-openshift:latest ports: - containerPort: 80", "apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: frontend spec: replicas: 5 selector: name: frontend template: { ... } triggers: - type: ConfigChange 1 - imageChangeParams: automatic: true containerNames: - helloworld from: kind: ImageStreamTag name: hello-openshift:latest type: ImageChange 2 strategy: type: Rolling 3", "oc rollout pause deployments/<name>", "oc rollout latest dc/<name>", "oc rollout history dc/<name>", "oc rollout history dc/<name> --revision=1", "oc describe dc <name>", "oc rollout retry dc/<name>", "oc rollout undo dc/<name>", "oc set triggers dc/<name> --auto", "kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: template: spec: containers: - name: <container_name> image: 'image' command: - '<command>' args: - '<argument_1>' - '<argument_2>' - '<argument_3>'", "kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: template: spec: containers: - name: example-spring-boot image: 'image' command: - java args: - '-jar' - /opt/app-root/springboots2idemo.jar", "oc logs -f dc/<name>", "oc logs --version=1 dc/<name>", "kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: triggers: - type: \"ConfigChange\"", "kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: triggers: - type: \"ImageChange\" imageChangeParams: automatic: true 1 from: kind: \"ImageStreamTag\" name: \"origin-ruby-sample:latest\" namespace: \"myproject\" containerNames: - \"helloworld\"", "oc set triggers dc/<dc_name> --from-image=<project>/<image>:<tag> -c <container_name>", "kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift spec: type: \"Recreate\" resources: limits: cpu: \"100m\" 1 memory: \"256Mi\" 2 ephemeral-storage: \"1Gi\" 3", "kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift spec: type: \"Recreate\" resources: requests: 1 cpu: \"100m\" memory: \"256Mi\" ephemeral-storage: \"1Gi\"", "oc scale dc frontend --replicas=3", "apiVersion: v1 kind: Pod metadata: name: my-pod spec: nodeSelector: disktype: ssd", "oc edit dc/<deployment_config>", "apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: example-dc spec: securityContext: {} serviceAccount: <service_account> serviceAccountName: <service_account>", "kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: strategy: type: Rolling rollingParams: updatePeriodSeconds: 1 1 intervalSeconds: 1 2 timeoutSeconds: 120 3 maxSurge: \"20%\" 4 maxUnavailable: \"10%\" 5 pre: {} 6 post: {}", "oc new-app quay.io/openshifttest/deployment-example:latest", "oc expose svc/deployment-example", "oc scale dc/deployment-example --replicas=3", "oc tag deployment-example:v2 deployment-example:latest", "oc describe dc deployment-example", "kind: Deployment apiVersion: apps/v1 metadata: name: hello-openshift spec: strategy: type: Recreate recreateParams: 1 pre: {} 2 
mid: {} post: {}", "kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: strategy: type: Custom customParams: image: organization/strategy command: [ \"command\", \"arg1\" ] environment: - name: ENV_1 value: VALUE_1", "kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example-dc spec: strategy: type: Rolling customParams: command: - /bin/sh - -c - | set -e openshift-deploy --until=50% echo Halfway there openshift-deploy echo Complete", "Started deployment #2 --> Scaling up custom-deployment-2 from 0 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-2 up to 1 --> Reached 50% (currently 50%) Halfway there --> Scaling up custom-deployment-2 from 1 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-1 down to 1 Scaling custom-deployment-2 up to 2 Scaling custom-deployment-1 down to 0 --> Success Complete", "pre: failurePolicy: Abort execNewPod: {} 1", "kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: template: metadata: labels: name: frontend spec: containers: - name: helloworld image: openshift/origin-ruby-sample replicas: 5 selector: name: frontend strategy: type: Rolling rollingParams: pre: failurePolicy: Abort execNewPod: containerName: helloworld 1 command: [ \"/usr/bin/command\", \"arg1\", \"arg2\" ] 2 env: 3 - name: CUSTOM_VAR1 value: custom_value1 volumes: - data 4", "oc set deployment-hook dc/frontend --pre -c helloworld -e CUSTOM_VAR1=custom_value1 --volumes data --failure-policy=abort -- /usr/bin/command arg1 arg2", "oc new-app openshift/deployment-example:v1 --name=example-blue", "oc new-app openshift/deployment-example:v2 --name=example-green", "oc expose svc/example-blue --name=bluegreen-example", "oc patch route/bluegreen-example -p '{\"spec\":{\"to\":{\"name\":\"example-green\"}}}'", "oc new-app openshift/deployment-example --name=ab-example-a", "oc new-app openshift/deployment-example:v2 --name=ab-example-b", "oc expose svc/ab-example-a", "oc edit route <route_name>", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-alternate-service annotations: haproxy.router.openshift.io/balance: roundrobin spec: host: ab-example.my-project.my-domain to: kind: Service name: ab-example-a weight: 10 alternateBackends: - kind: Service name: ab-example-b weight: 15", "oc set route-backends ROUTENAME [--zero|--equal] [--adjust] SERVICE=WEIGHT[%] [...] 
[options]", "oc set route-backends ab-example ab-example-a=198 ab-example-b=2", "oc set route-backends ab-example", "NAME KIND TO WEIGHT routes/ab-example Service ab-example-a 198 (99%) routes/ab-example Service ab-example-b 2 (1%)", "oc annotate routes/<route-name> haproxy.router.openshift.io/balance=roundrobin", "oc set route-backends ab-example --adjust ab-example-a=200 ab-example-b=10", "oc set route-backends ab-example --adjust ab-example-b=5%", "oc set route-backends ab-example --adjust ab-example-b=+15%", "oc set route-backends ab-example --equal", "oc new-app openshift/deployment-example --name=ab-example-a --as-deployment-config=true --labels=ab-example=true --env=SUBTITLE\\=shardA", "oc delete svc/ab-example-a", "oc expose deployment ab-example-a --name=ab-example --selector=ab-example\\=true", "oc expose service ab-example", "oc new-app openshift/deployment-example:v2 --name=ab-example-b --labels=ab-example=true SUBTITLE=\"shard B\" COLOR=\"red\" --as-deployment-config=true", "oc delete svc/ab-example-b", "oc scale dc/ab-example-a --replicas=0", "oc scale dc/ab-example-a --replicas=1; oc scale dc/ab-example-b --replicas=0", "oc edit dc/ab-example-a", "oc edit dc/ab-example-b", "apiVersion: v1 kind: ResourceQuota metadata: name: core-object-counts spec: hard: configmaps: \"10\" 1 persistentvolumeclaims: \"4\" 2 replicationcontrollers: \"20\" 3 secrets: \"10\" 4 services: \"10\" 5 services.loadbalancers: \"2\" 6", "apiVersion: v1 kind: ResourceQuota metadata: name: openshift-object-counts spec: hard: openshift.io/imagestreams: \"10\" 1", "apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources spec: hard: pods: \"4\" 1 requests.cpu: \"1\" 2 requests.memory: 1Gi 3 limits.cpu: \"2\" 4 limits.memory: 2Gi 5", "apiVersion: v1 kind: ResourceQuota metadata: name: besteffort spec: hard: pods: \"1\" 1 scopes: - BestEffort 2", "apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-long-running spec: hard: pods: \"4\" 1 limits.cpu: \"4\" 2 limits.memory: \"2Gi\" 3 scopes: - NotTerminating 4", "apiVersion: v1 kind: ResourceQuota metadata: name: compute-resources-time-bound spec: hard: pods: \"2\" 1 limits.cpu: \"1\" 2 limits.memory: \"1Gi\" 3 scopes: - Terminating 4", "apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption spec: hard: persistentvolumeclaims: \"10\" 1 requests.storage: \"50Gi\" 2 gold.storageclass.storage.k8s.io/requests.storage: \"10Gi\" 3 silver.storageclass.storage.k8s.io/requests.storage: \"20Gi\" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: \"5\" 5 bronze.storageclass.storage.k8s.io/requests.storage: \"0\" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: \"0\" 7 requests.ephemeral-storage: 2Gi 8 limits.ephemeral-storage: 4Gi 9", "oc create -f <file> [-n <project_name>]", "oc create -f core-object-counts.yaml -n demoproject", "oc create quota <name> --hard=count/<resource>.<group>=<quota>,count/<resource>.<group>=<quota> 1", "oc create quota test --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4", "resourcequota \"test\" created", "oc describe quota test", "Name: test Namespace: quota Resource Used Hard -------- ---- ---- count/deployments.extensions 0 2 count/pods 0 3 count/replicasets.extensions 0 4 count/secrets 0 4", "oc describe node ip-172-31-27-209.us-west-2.compute.internal | egrep 'Capacity|Allocatable|gpu'", "openshift.com/gpu-accelerator=true Capacity: nvidia.com/gpu: 2 Allocatable: nvidia.com/gpu: 2 nvidia.com/gpu 0 0", "apiVersion: v1 kind: 
ResourceQuota metadata: name: gpu-quota namespace: nvidia spec: hard: requests.nvidia.com/gpu: 1", "oc create -f gpu-quota.yaml", "resourcequota/gpu-quota created", "oc describe quota gpu-quota -n nvidia", "Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 0 1", "apiVersion: v1 kind: Pod metadata: generateName: gpu-pod- namespace: nvidia spec: restartPolicy: OnFailure containers: - name: rhel7-gpu-pod image: rhel7 env: - name: NVIDIA_VISIBLE_DEVICES value: all - name: NVIDIA_DRIVER_CAPABILITIES value: \"compute,utility\" - name: NVIDIA_REQUIRE_CUDA value: \"cuda>=5.0\" command: [\"sleep\"] args: [\"infinity\"] resources: limits: nvidia.com/gpu: 1", "oc create -f gpu-pod.yaml", "oc get pods", "NAME READY STATUS RESTARTS AGE gpu-pod-s46h7 1/1 Running 0 1m", "oc describe quota gpu-quota -n nvidia", "Name: gpu-quota Namespace: nvidia Resource Used Hard -------- ---- ---- requests.nvidia.com/gpu 1 1", "oc create -f gpu-pod.yaml", "Error from server (Forbidden): error when creating \"gpu-pod.yaml\": pods \"gpu-pod-f7z2w\" is forbidden: exceeded quota: gpu-quota, requested: requests.nvidia.com/gpu=1, used: requests.nvidia.com/gpu=1, limited: requests.nvidia.com/gpu=1", "oc get quota -n demoproject", "NAME AGE REQUEST LIMIT besteffort 4s pods: 1/2 compute-resources-time-bound 10m pods: 0/2 limits.cpu: 0/1, limits.memory: 0/1Gi core-object-counts 109s configmaps: 2/10, persistentvolumeclaims: 1/4, replicationcontrollers: 1/20, secrets: 9/10, services: 2/10", "oc describe quota core-object-counts -n demoproject", "Name: core-object-counts Namespace: demoproject Resource Used Hard -------- ---- ---- configmaps 3 10 persistentvolumeclaims 0 4 replicationcontrollers 3 20 secrets 9 10 services 2 10", "oc adm create-bootstrap-project-template -o yaml > template.yaml", "- apiVersion: v1 kind: ResourceQuota metadata: name: storage-consumption namespace: USD{PROJECT_NAME} spec: hard: persistentvolumeclaims: \"10\" 1 requests.storage: \"50Gi\" 2 gold.storageclass.storage.k8s.io/requests.storage: \"10Gi\" 3 silver.storageclass.storage.k8s.io/requests.storage: \"20Gi\" 4 silver.storageclass.storage.k8s.io/persistentvolumeclaims: \"5\" 5 bronze.storageclass.storage.k8s.io/requests.storage: \"0\" 6 bronze.storageclass.storage.k8s.io/persistentvolumeclaims: \"0\" 7", "oc create -f template.yaml -n openshift-config", "oc get templates -n openshift-config", "oc edit template <project_request_template> -n openshift-config", "oc edit project.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: project-request", "oc new-project <project_name>", "oc get resourcequotas", "oc describe resourcequotas <resource_quota_name>", "oc create clusterquota for-user --project-annotation-selector openshift.io/requester=<user_name> --hard pods=10 --hard secrets=20", "apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: name: for-user spec: quota: 1 hard: pods: \"10\" secrets: \"20\" selector: annotations: 2 openshift.io/requester: <user_name> labels: null 3 status: namespaces: 4 - namespace: ns-one status: hard: pods: \"10\" secrets: \"20\" used: pods: \"1\" secrets: \"9\" total: 5 hard: pods: \"10\" secrets: \"20\" used: pods: \"1\" secrets: \"9\"", "oc create clusterresourcequota for-name \\ 1 --project-label-selector=name=frontend \\ 2 --hard=pods=10 --hard=secrets=20", "apiVersion: quota.openshift.io/v1 kind: ClusterResourceQuota metadata: creationTimestamp: null name: for-name spec: quota: 
hard: pods: \"10\" secrets: \"20\" selector: annotations: null labels: matchLabels: name: frontend", "oc describe AppliedClusterResourceQuota", "Name: for-user Namespace: <none> Created: 19 hours ago Labels: <none> Annotations: <none> Label Selector: <null> AnnotationSelector: map[openshift.io/requester:<user-name>] Resource Used Hard -------- ---- ---- pods 1 10 secrets 9 20", "kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: my-namespace data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2", "apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4", "apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2", "apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never", "SPECIAL_LEVEL_KEY=very log_level=INFO", "apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm", "apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"echo USD(SPECIAL_LEVEL_KEY) USD(SPECIAL_TYPE_KEY)\" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never", "very charm", "apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm", "apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/special.how\" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never", "very", "apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/path/to/special-key\" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never", "very", "apiVersion: v1 kind: Pod 
metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: registry.k8s.io/goproxy:0.1 2 readinessProbe: 3 exec: 4 command: 5 - cat - /tmp/healthy", "apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 httpGet: 4 scheme: HTTPS 5 path: /healthz port: 8080 6 httpHeaders: - name: X-Custom-Header value: Awesome startupProbe: 7 httpGet: 8 path: /healthz port: 8080 9 failureThreshold: 30 10 periodSeconds: 10 11", "apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 exec: 4 command: 5 - /bin/bash - '-c' - timeout 60 /opt/eap/bin/livenessProbe.sh periodSeconds: 10 6 successThreshold: 1 7 failureThreshold: 3 8", "kind: Deployment apiVersion: apps/v1 metadata: labels: test: health-check name: my-application spec: template: spec: containers: - resources: {} readinessProbe: 1 tcpSocket: port: 8080 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3 terminationMessagePath: /dev/termination-log name: ruby-ex livenessProbe: 2 tcpSocket: port: 8080 initialDelaySeconds: 15 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3", "apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: my-container 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 tcpSocket: 4 port: 8080 5 initialDelaySeconds: 15 6 periodSeconds: 20 7 timeoutSeconds: 10 8 readinessProbe: 9 httpGet: 10 host: my-host 11 scheme: HTTPS 12 path: /healthz port: 8080 13 startupProbe: 14 exec: 15 command: 16 - cat - /tmp/healthy failureThreshold: 30 17 periodSeconds: 20 18 timeoutSeconds: 10 19", "oc create -f <file-name>.yaml", "oc describe pod my-application", "Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 9s default-scheduler Successfully assigned openshift-logging/liveness-exec to ip-10-0-143-40.ec2.internal Normal Pulling 2s kubelet, ip-10-0-143-40.ec2.internal pulling image \"registry.k8s.io/liveness\" Normal Pulled 1s kubelet, ip-10-0-143-40.ec2.internal Successfully pulled image \"registry.k8s.io/liveness\" Normal Created 1s kubelet, ip-10-0-143-40.ec2.internal Created container Normal Started 1s kubelet, ip-10-0-143-40.ec2.internal Started container", "oc describe pod pod1", ". 
Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled <unknown> Successfully assigned aaa/liveness-http to ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Normal AddedInterface 47s multus Add eth0 [10.129.2.11/23] Normal Pulled 46s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"registry.k8s.io/liveness\" in 773.406244ms Normal Pulled 28s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"registry.k8s.io/liveness\" in 233.328564ms Normal Created 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Created container liveness Normal Started 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Started container liveness Warning Unhealthy 10s (x6 over 34s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Liveness probe failed: HTTP probe failed with statuscode: 500 Normal Killing 10s (x2 over 28s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Container liveness failed liveness probe, will be restarted Normal Pulling 10s (x3 over 47s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Pulling image \"registry.k8s.io/liveness\" Normal Pulled 10s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"registry.k8s.io/liveness\" in 244.116568ms", "oc adm prune <object_type> <options>", "oc adm prune groups --sync-config=path/to/sync/config [<options>]", "oc adm prune groups --sync-config=ldap-sync-config.yaml", "oc adm prune groups --sync-config=ldap-sync-config.yaml --confirm", "oc adm prune deployments [<options>]", "oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m", "oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m --confirm", "oc adm prune builds [<options>]", "oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m", "oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m --confirm", "spec: schedule: 0 0 * * * 1 suspend: false 2 keepTagRevisions: 3 3 keepYoungerThanDuration: 60m 4 keepYoungerThan: 3600000000000 5 resources: {} 6 affinity: {} 7 nodeSelector: {} 8 tolerations: [] 9 successfulJobsHistoryLimit: 3 10 failedJobsHistoryLimit: 3 11 status: observedGeneration: 2 12 conditions: 13 - type: Available status: \"True\" lastTransitionTime: 2019-10-09T03:13:45 reason: Ready message: \"Periodic image pruner has been created.\" - type: Scheduled status: \"True\" lastTransitionTime: 2019-10-09T03:13:45 reason: Scheduled message: \"Image pruner job has been scheduled.\" - type: Failed staus: \"False\" lastTransitionTime: 2019-10-09T03:13:45 reason: Succeeded message: \"Most recent image pruning job succeeded.\"", "oc create -f <filename>.yaml", "kind: List apiVersion: v1 items: - apiVersion: v1 kind: ServiceAccount metadata: name: pruner namespace: openshift-image-registry - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: openshift-image-registry-pruner roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:image-pruner subjects: - kind: ServiceAccount name: pruner namespace: openshift-image-registry - apiVersion: batch/v1 kind: CronJob metadata: name: image-pruner namespace: openshift-image-registry spec: schedule: \"0 0 * * *\" concurrencyPolicy: Forbid successfulJobsHistoryLimit: 1 failedJobsHistoryLimit: 3 jobTemplate: spec: template: spec: restartPolicy: OnFailure containers: - image: \"quay.io/openshift/origin-cli:4.1\" resources: requests: cpu: 1 
memory: 1Gi terminationMessagePolicy: FallbackToLogsOnError command: - oc args: - adm - prune - images - --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt - --keep-tag-revisions=5 - --keep-younger-than=96h - --confirm=true name: image-pruner serviceAccountName: pruner", "oc adm prune images [<options>]", "oc rollout restart deployment/image-registry -n openshift-image-registry", "oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m", "oc adm prune images --prune-over-size-limit", "oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm", "oc adm prune images --prune-over-size-limit --confirm", "oc get is -n <namespace> -o go-template='{{range USDisi, USDis := .items}}{{range USDti, USDtag := USDis.status.tags}}' '{{range USDii, USDitem := USDtag.items}}{{if eq USDitem.image \"sha256:<hash>\"}}{{USDis.metadata.name}}:{{USDtag.tag}} at position {{USDii}} out of {{len USDtag.items}}\\n' '{{end}}{{end}}{{end}}{{end}}'", "myapp:v2 at position 4 out of 5 myapp:v2.1 at position 2 out of 2 myapp:v2.1-may-2016 at position 0 out of 1", "error: error communicating with registry: Get https://172.30.30.30:5000/healthz: http: server gave HTTP response to HTTPS client", "error: error communicating with registry: Get http://172.30.30.30:5000/healthz: malformed HTTP response \"\\x15\\x03\\x01\\x00\\x02\\x02\" error: error communicating with registry: [Get https://172.30.30.30:5000/healthz: x509: certificate signed by unknown authority, Get http://172.30.30.30:5000/healthz: malformed HTTP response \"\\x15\\x03\\x01\\x00\\x02\\x02\"]", "error: error communicating with registry: Get https://172.30.30.30:5000/: x509: certificate signed by unknown authority", "oc patch configs.imageregistry.operator.openshift.io/cluster -p '{\"spec\":{\"readOnly\":true}}' --type=merge", "service_account=USD(oc get -n openshift-image-registry -o jsonpath='{.spec.template.spec.serviceAccountName}' deploy/image-registry)", "oc adm policy add-cluster-role-to-user system:image-pruner -z USD{service_account} -n openshift-image-registry", "oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c '/usr/bin/dockerregistry -prune=check'", "oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c 'REGISTRY_LOG_LEVEL=info /usr/bin/dockerregistry -prune=check'", "time=\"2017-06-22T11:50:25.066156047Z\" level=info msg=\"start prune (dry-run mode)\" distribution_version=\"v2.4.1+unknown\" kubernetes_version=v1.6.1+USDFormat:%hUSD openshift_version=unknown time=\"2017-06-22T11:50:25.092257421Z\" level=info msg=\"Would delete blob: sha256:00043a2a5e384f6b59ab17e2c3d3a3d0a7de01b2cabeb606243e468acc663fa5\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:25.092395621Z\" level=info msg=\"Would delete blob: sha256:0022d49612807cb348cabc562c072ef34d756adfe0100a61952cbcb87ee6578a\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:25.092492183Z\" level=info msg=\"Would delete blob: sha256:0029dd4228961086707e53b881e25eba0564fa80033fbbb2e27847a28d16a37c\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:26.673946639Z\" level=info msg=\"Would delete blob: sha256:ff7664dfc213d6cc60fd5c5f5bb00a7bf4a687e18e1df12d349a1d07b2cf7663\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:26.674024531Z\" level=info msg=\"Would delete blob: 
sha256:ff7a933178ccd931f4b5f40f9f19a65be5eeeec207e4fad2a5bafd28afbef57e\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:26.674675469Z\" level=info msg=\"Would delete blob: sha256:ff9b8956794b426cc80bb49a604a0b24a1553aae96b930c6919a6675db3d5e06\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 Would delete 13374 blobs Would free up 2.835 GiB of disk space Use -prune=delete to actually delete the data", "oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c '/usr/bin/dockerregistry -prune=delete'", "Deleted 13374 blobs Freed up 2.835 GiB of disk space", "oc patch configs.imageregistry.operator.openshift.io/cluster -p '{\"spec\":{\"readOnly\":false}}' --type=merge", "oc idle <service>", "oc idle --resource-names-file <filename>", "oc scale --replicas=1 dc <dc_name>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/building_applications/index
Chapter 3. Managing partitions using the web console
Chapter 3. Managing partitions using the web console Learn how to manage file systems on RHEL 8 using the web console. 3.1. Displaying partitions formatted with file systems in the web console The Storage section in the web console displays all available file systems in the Filesystems table. Besides the list of partitions formatted with file systems, you can also use this page to create new storage. Prerequisites The cockpit-storaged package is installed on your system. You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console. Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console. Click the Storage tab. In the Storage table, you can see all available partitions formatted with file systems, their ID, types, locations, sizes, and how much space is available on each partition. You can also use the drop-down menu in the top-right corner to create new local or networked storage. 3.2. Creating partitions in the web console To create a new partition: Use an existing partition table Create a partition Prerequisites The cockpit-storaged package is installed on your system. The web console must be installed and accessible. For details, see Installing the web console. An unformatted volume connected to the system is visible in the Storage table of the Storage tab. Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console. Click the Storage tab. In the Storage table, click the device that you want to partition to open the page with options for that device. On the device page, click the menu button, ... , and select Create partition table. In the Initialize disk dialog box, select the following: Partitioning: Compatible with all systems and devices (MBR) Compatible with modern system and hard disks > 2TB (GPT) No partitioning Overwrite: Select the Overwrite existing data with zeros checkbox if you want the RHEL web console to rewrite the whole disk with zeros. This option is slower because the program has to go through the whole disk, but it is more secure. Use this option if the disk includes any data and you need to overwrite it. If you do not select the Overwrite existing data with zeros checkbox, the RHEL web console rewrites only the disk header. This increases the speed of formatting. Click Initialize. Click the menu button, ... , next to the partition table you created. It is named Free space by default. Click Create partition. In the Create partition dialog box, enter a Name for the file system. Add a Mount point. In the Type drop-down menu, select a file system: XFS file system supports large logical volumes, switching physical drives online without outage, and growing an existing file system. Leave this file system selected if you do not have a different strong preference. ext4 file system supports: Logical volumes Switching physical drives online without outage Growing a file system Shrinking a file system An additional option is to enable encryption of the partition with LUKS (Linux Unified Key Setup), which allows you to encrypt the volume with a passphrase. Enter the Size of the volume you want to create. Select the Overwrite existing data with zeros checkbox if you want the RHEL web console to rewrite the whole disk with zeros. This option is slower because the program has to go through the whole disk, but it is more secure.
Use this option if the disk includes any data and you need to overwrite it. If you do not select the Overwrite existing data with zeros checkbox, the RHEL web console rewrites only the disk header. This increases the speed of formatting. If you want to encrypt the volume, select the type of encryption in the Encryption drop-down menu. If you do not want to encrypt the volume, select No encryption. In the At boot drop-down menu, select when you want to mount the volume. In the Mount options section: Select the Mount read only checkbox if you want to mount the volume as a read-only logical volume. Select the Custom mount options checkbox and add the mount options if you want to change the default mount options. Create the partition: If you want to create and mount the partition, click the Create and mount button. If you want to only create the partition, click the Create only button. Formatting can take several minutes depending on the volume size and which formatting options are selected. Verification To verify that the partition has been successfully added, switch to the Storage tab, check the Storage table, and verify that the new partition is listed. 3.3. Deleting partitions in the web console You can remove partitions in the web console interface. Prerequisites The cockpit-storaged package is installed on your system. You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console. Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console. Click the Storage tab. Click the device from which you want to delete a partition. On the device page, in the GPT partitions section, click the menu button, ... , next to the partition you want to delete. From the drop-down menu, select Delete. The RHEL web console terminates all processes that are currently using the partition and unmounts the partition before deleting it. Verification To verify that the partition has been successfully removed, switch to the Storage tab and check the Storage table.
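The web console drives the underlying storage tooling for these create and delete operations. Purely as an illustrative sketch of roughly equivalent command-line steps (the device /dev/vdb, the 10 GiB size, the partition name data, and the mount point are placeholders, not values from this procedure), a GPT partition with an XFS file system could be created, verified, and later removed like this:

# Create a new GPT partition table (this destroys existing data on the disk)
parted --script /dev/vdb mklabel gpt
# Create a partition named "data" spanning roughly the first 10 GiB
parted --script /dev/vdb mkpart data xfs 1MiB 10GiB
# Format the new partition with XFS and mount it
mkfs.xfs /dev/vdb1
mkdir -p /mnt/data
mount /dev/vdb1 /mnt/data
# Verify that the partition and file system are listed
lsblk -f /dev/vdb
# Later, unmount and delete the partition
umount /mnt/data
parted --script /dev/vdb rm 1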
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_file_systems/managing-partitions-using-the-web-console_managing-file-systems
Chapter 3. Installing a user-provisioned bare metal cluster with network customizations
Chapter 3. Installing a user-provisioned bare metal cluster with network customizations In OpenShift Container Platform 4.16, you can install a cluster on bare metal infrastructure that you provision with customized network configuration options. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. When you customize OpenShift Container Platform networking, you must set most of the network configuration parameters during installation. You can modify only kubeProxy network configuration parameters in a running cluster. 3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. 3.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. Additional resources See Installing a user-provisioned bare metal cluster on a restricted network for more information about performing a restricted network installation on bare metal infrastructure that you provision. 3.3. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 3.3.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 3.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Note As an exception, you can run zero compute machines in a bare metal cluster that consists of three control plane machines only. 
This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. Running one compute machine is not supported. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 3.3.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 3.2. Minimum resource requirements Machine Operating System CPU [1] RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One CPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = CPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 3.3.3. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 
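For example, after cluster nodes have joined, you can list pending CSRs and approve them with the OpenShift CLI. The following commands are a minimal sketch of that workflow, not the full documented procedure; review each request before approving it in a production cluster:
# List certificate signing requests and check for entries in the Pending state
oc get csr
# Approve an individual request by name
oc adm certificate approve <csr_name>
# Or approve all current requests in one pass
oc get csr -o name | xargs oc adm certificate approve
For the complete, supported procedure, see the Approving the certificate signing requests for your machines link in the additional resources that follow.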
Additional resources See Configuring a three-node cluster for details about deploying three-node clusters in bare metal environments. See Approving the certificate signing requests for your machines for more information about approving cluster certificate signing requests after installation. 3.3.4. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 3.3.4.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 3.3.4.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 3.3. 
Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 3.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 3.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 3.3.5. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 3.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. 
These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 3.3.5.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 3.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. 
The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 3.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. Validating DNS resolution for user-provisioned infrastructure 3.3.6. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. 
Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 3.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 3.8. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 3.3.6.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 
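Before you place a configuration such as the following example into service, you can spot-check that the health check target used for the API back ends actually responds from the load balancer host. This is an informal check rather than part of the documented procedure; it assumes that unauthenticated access to the health endpoints is allowed, which is the default, and that master0.ocp4.example.com is one of your control plane machines:
# Query the readiness endpoint that the API server health check probes; a ready API server returns "ok"
curl -k https://master0.ocp4.example.com:6443/readyz
A failing or hanging response here usually points to a problem on the back-end machine itself rather than in the load balancer configuration.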
Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 3.3. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 3.4. Creating a manifest object that includes a customized br-ex bridge As an alternative to using the configure-ovs.sh shell script to set a br-ex bridge on a bare-metal platform, you can create a MachineConfig object that includes an NMState configuration file. The NMState configuration file creates a customized br-ex bridge network configuration on each node in your cluster. 
Consider the following use cases for creating a manifest object that includes a customized br-ex bridge: You want to make postinstallation changes to the bridge, such as changing the Open vSwitch (OVS) or OVN-Kubernetes br-ex bridge network. The configure-ovs.sh shell script does not support making postinstallation changes to the bridge. You want to deploy the bridge on a different interface than the interface available on a host or server IP address. You want to make advanced configurations to the bridge that are not possible with the configure-ovs.sh shell script. Using the script for these configurations might result in the bridge failing to connect multiple network interfaces and facilitating data forwarding between the interfaces. Note If you require an environment with a single network interface controller (NIC) and default network settings, use the configure-ovs.sh shell script. After you install Red Hat Enterprise Linux CoreOS (RHCOS) and the system reboots, the Machine Config Operator injects Ignition configuration files into each node in your cluster, so that each node received the br-ex bridge network configuration. To prevent configuration conflicts, the configure-ovs.sh shell script receives a signal to not configure the br-ex bridge. Prerequisites Optional: You have installed the nmstate API so that you can validate the NMState configuration. Procedure Create a NMState configuration file that has decoded base64 information for your customized br-ex bridge network: Example of an NMState configuration for a customized br-ex bridge network interfaces: - name: enp2s0 1 type: ethernet 2 state: up 3 ipv4: enabled: false 4 ipv6: enabled: false - name: br-ex type: ovs-bridge state: up ipv4: enabled: false dhcp: false ipv6: enabled: false dhcp: false bridge: port: - name: enp2s0 5 - name: br-ex - name: br-ex type: ovs-interface state: up copy-mac-from: enp2s0 ipv4: enabled: true dhcp: true ipv6: enabled: false dhcp: false # ... 1 Name of the interface. 2 The type of ethernet. 3 The requested state for the interface after creation. 4 Disables IPv4 and IPv6 in this example. 5 The node NIC to which the bridge attaches. Use the cat command to base64-encode the contents of the NMState configuration: USD cat <nmstate_configuration>.yaml | base64 1 1 Replace <nmstate_configuration> with the name of your NMState resource YAML file. Create a MachineConfig manifest file and define a customized br-ex bridge network configuration analogous to the following example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 10-br-ex-worker 2 spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<base64_encoded_nmstate_configuration> 3 mode: 0644 overwrite: true path: /etc/nmstate/openshift/cluster.yml # ... 1 For each node in your cluster, specify the hostname path to your node and the base-64 encoded Ignition configuration file data for the machine type. If you have a single global configuration specified in an /etc/nmstate/openshift/cluster.yml configuration file that you want to apply to all nodes in your cluster, you do not need to specify the hostname path for each node. The worker role is the default role for nodes in your cluster. The .yaml extension does not work when specifying the hostname path for each node or all nodes in the MachineConfig manifest file. 2 The name of the policy. 3 Writes the encoded base64 information to the specified path. 
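As a convenience, you can script the encoding and substitution steps. The following shell sketch is illustrative only: the file names br-ex-bridge.yaml (the NMState configuration) and 10-br-ex-worker.yaml (a copy of the MachineConfig manifest that still contains the <base64_encoded_nmstate_configuration> placeholder) are assumptions, not files produced by the installation program. Note that GNU coreutils base64 wraps its output at 76 columns by default, so the -w0 option is used to keep the encoded data on a single line, which is required when it is embedded in the data URL:
# Encode the NMState configuration without line wrapping
encoded=$(base64 -w0 br-ex-bridge.yaml)
# Substitute the encoded string into the MachineConfig manifest
sed "s|<base64_encoded_nmstate_configuration>|${encoded}|" 10-br-ex-worker.yaml > 10-br-ex-worker.generated.yaml
You can then add the generated manifest to your installation manifests, or apply it to a running cluster, depending on your workflow.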
3.4.1. Optional: Scaling each machine set to compute nodes To apply a customized br-ex bridge configuration to all compute nodes in your OpenShift Container Platform cluster, you must edit your MachineConfig custom resource (CR) and modify its roles. Additionally, you must create a BareMetalHost CR that defines information for your bare-metal machine, such as hostname, credentials, and so on. After you configure these resources, you must scale machine sets, so that the machine sets can apply the resource configuration to each compute node and reboot the nodes. Prerequisites You created a MachineConfig manifest object that includes a customized br-ex bridge configuration. Procedure Edit the MachineConfig CR by entering the following command: USD oc edit mc <machineconfig_custom_resource_name> Add each compute node configuration to the CR, so that the CR can manage roles for each defined compute node in your cluster. Create a Secret object named extraworker-secret that has a minimal static IP configuration. Apply the extraworker-secret secret to each node in your cluster by entering the following command. This step provides each compute node access to the Ignition config file. USD oc apply -f ./extraworker-secret.yaml Create a BareMetalHost resource and specify the network secret in the preprovisioningNetworkDataName parameter: Example BareMetalHost resource with an attached network secret apiVersion: metal3.io/v1alpha1 kind: BareMetalHost spec: # ... preprovisioningNetworkDataName: ostest-extraworker-0-network-config-secret # ... To manage the BareMetalHost object within the openshift-machine-api namespace of your cluster, change to the namespace by entering the following command: USD oc project openshift-machine-api Get the machine sets: USD oc get machinesets Scale each machine set by entering the following command. You must run this command for each machine set. USD oc scale machineset <machineset_name> --replicas=<n> 1 1 Where <machineset_name> is the name of the machine set and <n> is the number of compute nodes. 3.5. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. 
Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See the Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Set up the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. Additional resources Requirements for a cluster with user-provisioned infrastructure Installing RHCOS and starting the OpenShift Container Platform bootstrap process Setting the cluster node hostnames through DHCP Advanced RHCOS installation configuration Networking requirements for user-provisioned infrastructure User-provisioned DNS requirements Validating DNS resolution for user-provisioned infrastructure Load balancing requirements for user-provisioned infrastructure 3.6.
Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 
No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. Additional resources User-provisioned DNS requirements Load balancing requirements for user-provisioned infrastructure 3.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. 
Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. Additional resources Verifying node health 3.8. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 3.9. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . 
To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 3.10. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for bare metal 3.10.1. Sample install-config.yaml file for bare metal You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. 
apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not enabled in your BIOS settings, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether in the BIOS or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. 
If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for your platform. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 15 The pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Additional resources See Load balancing requirements for user-provisioned infrastructure for more information on the API and application ingress load balancing requirements. 3.11. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information, see "Installation configuration parameters". Note Set the networking.machineNetwork to match the Classless Inter-Domain Routing (CIDR) where the preferred subnet is located. Important The CIDR range 172.17.0.0/16 is reserved by libVirt . You cannot use any other CIDR range that overlaps with the 172.17.0.0/16 CIDR range for networks in your cluster. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration. During phase 2, you cannot override the values that you specified in phase 1 in the install-config.yaml file. However, you can customize the network plugin during phase 2. 3.12. 
Specifying advanced network configuration You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following example: Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. Remove the Kubernetes manifest files that define the control plane machines and compute MachineSets : USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the MachineSet files to create compute machines by using the machine API, but you must update references to them to match your environment. 3.13. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 3.13.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 3.9. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. 
For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 3.10. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. OpenShift SDN is no longer available as an installation choice for new clusters. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 3.11. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. 
Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 3.12. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 3.13. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd97::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is fd97::/64 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 3.14. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 3.15. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. 
For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 3.16. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Table 3.17. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Table 3.18. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full Important Using OVNKubernetes can lead to a stack exhaustion problem on IBM Power(R). kubeProxyConfig object configuration (OpenShiftSDN container network interface only) The values for the kubeProxyConfig object are defined in the following table: Table 3.19. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 3.14. 
Creating the Ignition config files Because you must manually start the cluster machines, you must generate the Ignition config files that the cluster needs to make its machines. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Obtain the Ignition config files: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important If you created an install-config.yaml file, specify the directory that contains it. Otherwise, specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. The following files are generated in the directory:
.
├── auth
│   ├── kubeadmin-password
│   └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
3.15. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on bare metal infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on the machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. To install RHCOS on the machines, follow either the steps to use an ISO image or network PXE booting. Note The compute node deployment steps included in this installation document are RHCOS-specific. If you choose instead to deploy RHEL-based compute nodes, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Only RHEL 8 compute machines are supported. You can configure RHCOS during ISO and PXE installations by using the following methods: Kernel arguments: You can use kernel arguments to provide installation-specific information.
For example, you can specify the locations of the RHCOS installation files that you uploaded to your HTTP server and the location of the Ignition config file for the type of node you are installing. For a PXE installation, you can use the APPEND parameter to pass the arguments to the kernel of the live installer. For an ISO installation, you can interrupt the live installation boot process to add the kernel arguments. In both installation cases, you can use special coreos.inst.* arguments to direct the live installer, as well as standard installation boot arguments for turning standard kernel services on or off. Ignition configs: OpenShift Container Platform Ignition config files ( *.ign ) are specific to the type of node you are installing. You pass the location of a bootstrap, control plane, or compute node Ignition config file during the RHCOS installation so that it takes effect on first boot. In special cases, you can create a separate, limited Ignition config to pass to the live system. That Ignition config could do a certain set of tasks, such as reporting success to a provisioning system after completing installation. This special Ignition config is consumed by the coreos-installer to be applied on first boot of the installed system. Do not provide the standard control plane and compute node Ignition configs to the live ISO directly. coreos-installer : You can boot the live ISO installer to a shell prompt, which allows you to prepare the permanent system in a variety of ways before first boot. In particular, you can run the coreos-installer command to identify various artifacts to include, work with disk partitions, and set up networking. In some cases, you can configure features on the live system and copy them to the installed system. Whether to use an ISO or PXE install depends on your situation. A PXE install requires an available DHCP service and more preparation, but can make the installation process more automated. An ISO install is a more manual process and can be inconvenient if you are setting up more than a few machines. 3.15.1. Installing RHCOS by using an ISO image You can use an ISO image to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Obtain the SHA512 digest for each of your Ignition config files. For example, you can use the following on a system running Linux to get the SHA512 digest for your bootstrap.ign Ignition config file: USD sha512sum <installation_directory>/bootstrap.ign The digests are provided to the coreos-installer in a later step to validate the authenticity of the Ignition config files on the cluster nodes. Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. 
The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. Although it is possible to obtain the RHCOS images that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS images are from the output of openshift-install command: USD openshift-install coreos print-stream-json | grep '\.iso[^.]' Example output "location": "<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso", "location": "<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso", "location": "<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso", "location": "<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live.x86_64.iso", Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Use only ISO images for this procedure. RHCOS qcow2 images are not supported for this installation type. ISO file names resemble the following example: rhcos-<version>-live.<architecture>.iso Use the ISO to start the RHCOS installation. Use one of the following installation options: Burn the ISO image to a disk and boot it directly. Use ISO redirection by using a lights-out management (LOM) interface. Boot the RHCOS ISO image without specifying any options or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note It is possible to interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you should use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . 
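As an illustrative sketch only (the certificate file name and trust store commands are assumptions, not part of this procedure), adding an internal CA certificate to the trust store of the live system before you run coreos-installer might look similar to the following:
USD sudo cp ca.pem /etc/pki/ca-trust/source/anchors/
USD sudo update-ca-trust
After the trust store is updated, coreos-installer should be able to fetch the Ignition config file from the HTTPS server without certificate validation errors.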
The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, you must reboot the system. During the system reboot, it applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the other machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install OpenShift Container Platform. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install_config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 3.15.2. Installing RHCOS by using PXE or iPXE booting You can use PXE or iPXE booting to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have configured suitable PXE or iPXE infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. 
The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. Although it is possible to obtain the RHCOS kernel , initramfs and rootfs files that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS files are from the output of openshift-install command: USD openshift-install coreos print-stream-json | grep -Eo '"https.*(kernel-|initramfs.|rootfs.)\w+(\.img)?"' Example output "<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64" "<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.16-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le" "<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x" "<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img" "<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img" "<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-kernel-x86_64" "<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img" "<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img" Important The RHCOS artifacts might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel , initramfs , and rootfs artifacts described below for this procedure. RHCOS QCOW2 images are not supported for this installation type. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel : rhcos-<version>-live-kernel-<architecture> initramfs : rhcos-<version>-live-initramfs.<architecture>.img rootfs : rhcos-<version>-live-rootfs.<architecture>.img Upload the rootfs , kernel , and initramfs files to your HTTP server. Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Configure the network boot infrastructure so that the machines boot from their local disks after RHCOS is installed on them. Configure PXE or iPXE installation for the RHCOS images and begin the installation. 
Modify one of the following example menu entries for your environment and verify that the image and Ignition files are properly accessible: For PXE ( x86_64 ): 1 1 Specify the location of the live kernel file that you uploaded to your HTTP server. The URL must be HTTP, TFTP, or FTP; HTTPS and NFS are not supported. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the initramfs file, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. You can also add more kernel arguments to the APPEND line to configure networking or other boot options. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. For iPXE ( x86_64 + aarch64 ): 1 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your HTTP server. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the kernel line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. Note To network boot the CoreOS kernel on aarch64 architecture, you need to use a version of iPXE build with the IMAGE_GZIP option enabled. See IMAGE_GZIP option in iPXE . For PXE (with UEFI and Grub as second stage) on aarch64 : 1 Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP server. The kernel parameter value is the location of the kernel file on your TFTP server. The coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file on your HTTP Server. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your TFTP server. 
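To tie the preceding callouts together, the following is a minimal, illustrative PXE menu entry for x86_64 ; the HTTP server address, file names, and target installation device are placeholders that you must adapt to your environment:
DEFAULT pxeboot
TIMEOUT 20
PROMPT 0
LABEL pxeboot
    KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture>
    APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign
In this sketch, coreos.inst.install_dev identifies the disk that RHCOS is installed to, and coreos.inst.ignition_url points to the bootstrap Ignition config file; substitute master.ign or worker.ign for the other node types.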
Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, the system reboots. During reboot, the system applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install the cluster. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install_config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 3.15.3. Advanced RHCOS installation configuration A key benefit for manually provisioning the Red Hat Enterprise Linux CoreOS (RHCOS) nodes for OpenShift Container Platform is to be able to do configuration that is not available through default OpenShift Container Platform installation methods. This section describes some of the configurations that you can do using techniques that include: Passing kernel arguments to the live installer Running coreos-installer manually from the live system Customizing a live ISO or PXE boot image The advanced configuration topics for manual Red Hat Enterprise Linux CoreOS (RHCOS) installations detailed in this section relate to disk partitioning, networking, and using Ignition configs in different ways. 3.15.3.1. Using advanced networking options for PXE and ISO installations Networking for OpenShift Container Platform nodes uses DHCP by default to gather all necessary configuration settings. To set up static IP addresses or configure special settings, such as bonding, you can do one of the following: Pass special kernel parameters when you boot the live installer. Use a machine config to copy networking files to the installed system. Configure networking from a live installer shell prompt, then copy those settings to the installed system so that they take effect when the installed system first boots. To configure a PXE or iPXE installation, use one of the following options: See the "Advanced RHCOS installation reference" tables. Use a machine config to copy networking files to the installed system. To configure an ISO installation, use the following procedure. Procedure Boot the ISO installer. From the live system shell prompt, configure networking for the live system using available RHEL tools, such as nmcli or nmtui . 
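For example, assuming an interface named eno1 and addresses chosen purely for illustration, a static IPv4 configuration with nmcli might look like the following; adjust the interface name, addresses, gateway, and DNS server to match your environment:
USD sudo nmcli connection add con-name static-eno1 ifname eno1 type ethernet \
    ipv4.method manual ipv4.addresses 192.168.1.20/24 \
    ipv4.gateway 192.168.1.1 ipv4.dns 192.168.1.1
USD sudo nmcli connection up static-eno1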
Run the coreos-installer command to install the system, adding the --copy-network option to copy networking configuration. For example: USD sudo coreos-installer install --copy-network \ --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number> Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections . In particular, it does not copy the system hostname. Reboot into the installed system. Additional resources See Getting started with nmcli and Getting started with nmtui in the RHEL 8 documentation for more information about the nmcli and nmtui tools. 3.15.3.2. Disk partitioning Disk partitions are created on OpenShift Container Platform cluster nodes during the Red Hat Enterprise Linux CoreOS (RHCOS) installation. Each RHCOS node of a particular architecture uses the same partition layout, unless you override the default partitioning configuration. During the RHCOS installation, the size of the root file system is increased to use any remaining available space on the target device. Important The use of a custom partition scheme on your node might result in OpenShift Container Platform not monitoring or alerting on some node partitions. If you override the default partitioning, see Understanding OpenShift File System Monitoring (eviction conditions) for more information about how OpenShift Container Platform monitors your host file systems. OpenShift Container Platform monitors the following two filesystem identifiers: nodefs , which is the filesystem that contains /var/lib/kubelet imagefs , which is the filesystem that contains /var/lib/containers For the default partition scheme, nodefs and imagefs monitor the same root filesystem, / . To override the default partitioning when installing RHCOS on an OpenShift Container Platform cluster node, you must create separate partitions. Consider a situation where you want to add a separate storage partition for your containers and container images. For example, by mounting /var/lib/containers in a separate partition, the kubelet separately monitors /var/lib/containers as the imagefs directory and the root file system as the nodefs directory. Important If you have resized your disk size to host a larger file system, consider creating a separate /var/lib/containers partition. Consider resizing a disk that has an xfs format to reduce CPU time issues caused by a high number of allocation groups. 3.15.3.2.1. Creating a separate /var partition In general, you should use the default disk partitioning that is created during the RHCOS installation. However, there are cases where you might want to create a separate partition for a directory that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var directory or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Important For disk sizes larger than 100GB, and especially larger than 1TB, create a separate /var partition. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. 
With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. The use of a separate partition for the /var directory or a subdirectory of /var also prevents data growth in the partitioned directory from filling up the root file system. The following procedure sets up a separate /var partition by adding a machine config manifest that is wrapped into the Ignition config file for a node type during the preparation phase of an installation. Procedure On your installation host, change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD openshift-install create manifests --dir <installation_directory> Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.16.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum offset value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no offset value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for compute nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Create the Ignition config files: USD openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory: The files in the <installation_directory>/manifest and <installation_directory>/openshift directories are wrapped into the Ignition config files, including the file that contains the 98-var-partition custom MachineConfig object. steps You can apply the custom disk partitioning by referencing the Ignition config files during the RHCOS installations. 3.15.3.2.2. Retaining existing partitions For an ISO installation, you can add options to the coreos-installer command that cause the installer to maintain one or more existing partitions. For a PXE installation, you can add coreos.inst.* options to the APPEND parameter to preserve partitions. 
Saved partitions might be data partitions from an existing OpenShift Container Platform system. You can identify the disk partitions you want to keep either by partition label or by number. Note If you save existing partitions, and those partitions do not leave enough space for RHCOS, the installation will fail without damaging the saved partitions. Retaining existing partitions during an ISO installation This example preserves any partition in which the partition label begins with data ( data* ): # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number> The following example illustrates running the coreos-installer in a way that preserves the sixth (6) partition on the disk: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partindex 6 /dev/disk/by-id/scsi-<serial_number> This example preserves partitions 5 and higher: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number> In the examples where partition saving is used, coreos-installer leaves the specified partitions untouched and installs RHCOS in the remaining space on the device. Retaining existing partitions during a PXE installation This APPEND option preserves any partition in which the partition label begins with 'data' ('data*'): coreos.inst.save_partlabel=data* This APPEND option preserves partitions 5 and higher: coreos.inst.save_partindex=5- This APPEND option preserves partition 6: coreos.inst.save_partindex=6 3.15.3.3. Identifying Ignition configs When doing an RHCOS manual installation, there are two types of Ignition configs that you can provide, with different reasons for providing each one: Permanent install Ignition config : Every manual RHCOS installation needs to pass one of the Ignition config files generated by openshift-installer , such as bootstrap.ign , master.ign and worker.ign , to carry out the installation. Important It is not recommended to modify these Ignition config files directly. You can update the manifest files that are wrapped into the Ignition config files, as outlined in examples in the preceding sections. For PXE installations, you pass the Ignition configs on the APPEND line using the coreos.inst.ignition_url= option. For ISO installations, after the ISO boots to the shell prompt, you identify the Ignition config on the coreos-installer command line with the --ignition-url= option. In both cases, only HTTP and HTTPS protocols are supported. Live install Ignition config : This type can be created by using the coreos-installer customize subcommand and its various options. With this method, the Ignition config passes to the live install medium, runs immediately upon booting, and performs setup tasks before or after the RHCOS system installs to disk. This method should only be used for performing tasks that must be done once and not applied again later, such as with advanced partitioning that cannot be done using a machine config. For PXE or ISO boots, you can create the Ignition config and APPEND the ignition.config.url= option to identify the location of the Ignition config. You also need to append ignition.firstboot ignition.platform.id=metal or the ignition.config.url option will be ignored. 3.15.3.4. Default console configuration Red Hat Enterprise Linux CoreOS (RHCOS) nodes installed from an OpenShift Container Platform 4.16 boot image use a default console that is meant to accommodate most virtualized and bare metal setups.
Different cloud and virtualization platforms may use different default settings depending on the chosen architecture. Bare metal installations use the kernel default settings, which typically means the graphical console is the primary console and the serial console is disabled. The default consoles may not match your specific hardware configuration or you might have specific needs that require you to adjust the default console. For example: You want to access the emergency shell on the console for debugging purposes. Your cloud platform does not provide interactive access to the graphical console, but provides a serial console. You want to enable multiple consoles. Console configuration is inherited from the boot image. This means that new nodes in existing clusters are unaffected by changes to the default console. You can configure the console for bare metal installations in the following ways: Using coreos-installer manually on the command line. Using the coreos-installer iso customize or coreos-installer pxe customize subcommands with the --dest-console option to create a custom image that automates the process. Note For advanced customization, perform console configuration using the coreos-installer iso or coreos-installer pxe subcommands, and not kernel arguments. 3.15.3.5. Enabling the serial console for PXE and ISO installations By default, the Red Hat Enterprise Linux CoreOS (RHCOS) serial console is disabled and all output is written to the graphical console. You can enable the serial console for an ISO installation and reconfigure the bootloader so that output is sent to both the serial console and the graphical console. Procedure Boot the ISO installer. Run the coreos-installer command to install the system, adding the --console option once to specify the graphical console, and a second time to specify the serial console: USD coreos-installer install \ --console=tty0 \ 1 --console=ttyS0,<options> \ 2 --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number> 1 The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console. 2 The desired primary console. In this case, the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8 . If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see Linux kernel serial console documentation. Reboot into the installed system. Note A similar outcome can be obtained by using the coreos-installer install --append-karg option, and specifying the console with console= . However, this will only set the console for the kernel and not the bootloader. To configure a PXE installation, make sure the coreos.inst.install_dev kernel command line option is omitted, and use the shell prompt to run coreos-installer manually using the above ISO installation procedure. 3.15.3.6. Customizing a live RHCOS ISO or PXE install You can use the live ISO image or PXE environment to install RHCOS by injecting an Ignition config file directly into the image. This creates a customized image that you can use to provision your system. For an ISO image, the mechanism to do this is the coreos-installer iso customize subcommand, which modifies the .iso file with your configuration. Similarly, the mechanism for a PXE environment is the coreos-installer pxe customize subcommand, which creates a new initramfs file that includes your customizations.
The customize subcommand is a general purpose tool that can embed other types of customizations as well. The following tasks are examples of some of the more common customizations: Inject custom CA certificates for when corporate security policy requires their use. Configure network settings without the need for kernel arguments. Embed arbitrary preinstall and post-install scripts or binaries. 3.15.3.7. Customizing a live RHCOS ISO image You can customize a live RHCOS ISO image directly with the coreos-installer iso customize subcommand. When you boot the ISO image, the customizations are applied automatically. You can use this feature to configure the ISO image to automatically install RHCOS. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and the Ignition config file, and then run the following command to inject the Ignition config directly into the ISO image: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --dest-ignition bootstrap.ign \ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> 2 1 The Ignition config file that is generated from the openshift-installer installation program. 2 When you specify this option, the ISO image automatically runs an installation. Otherwise, the image remains configured for installation, but does not install automatically unless you specify the coreos.inst.install_dev kernel argument. Optional: To remove the ISO image customizations and return the image to its pristine state, run: USD coreos-installer iso reset rhcos-<version>-live.x86_64.iso You can now re-customize the live ISO image or use it in its pristine state. Applying your customizations affects every subsequent boot of RHCOS. 3.15.3.7.1. Modifying a live install ISO image to enable the serial console On clusters installed with OpenShift Container Platform 4.12 and above, the serial console is disabled by default and all output is written to the graphical console. You can enable the serial console with the following procedure. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image to enable the serial console to receive output: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --dest-ignition <path> \ 1 --dest-console tty0 \ 2 --dest-console ttyS0,<options> \ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> 4 1 The location of the Ignition config to install. 2 The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console. 3 The desired primary console. In this case, the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8 . If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see the Linux kernel serial console documentation. 4 The specified disk to install to. If you omit this option, the ISO image automatically runs the installation program which will fail unless you also specify the coreos.inst.install_dev kernel argument. Note The --dest-console option affects the installed system and not the live ISO system. To modify the console for a live ISO system, use the --live-karg-append option and specify the console with console= . 
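As an illustrative sketch of that note (the console values are examples only, not part of this procedure), setting the console for the live ISO environment itself might look like the following:
USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \
    --live-karg-append console=ttyS0,115200n8
This changes only the kernel arguments used while the live ISO runs; the installed system continues to use the consoles selected with the --dest-console options shown above.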
Your customizations are applied and affect every subsequent boot of the ISO image. Optional: To remove the ISO image customizations and return the image to its original state, run the following command: USD coreos-installer iso reset rhcos-<version>-live.x86_64.iso You can now recustomize the live ISO image or use it in its original state. 3.15.3.7.2. Modifying a live install ISO image to use a custom certificate authority You can provide certificate authority (CA) certificates to Ignition with the --ignition-ca flag of the customize subcommand. You can use the CA certificates during both the installation boot and when provisioning the installed system. Note Custom CA certificates affect how Ignition fetches remote resources but they do not affect the certificates installed onto the system. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image for use with a custom CA: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso --ignition-ca cert.pem Important The coreos.inst.ignition_url kernel parameter does not work with the --ignition-ca flag. You must use the --dest-ignition flag to create a customized image for each cluster. Applying your custom CA certificate affects every subsequent boot of RHCOS. 3.15.3.7.3. Modifying a live install ISO image with customized network settings You can embed a NetworkManager keyfile into the live ISO image and pass it through to the installed system with the --network-keyfile flag of the customize subcommand. Warning When creating a connection profile, you must use a .nmconnection filename extension in the filename of the connection profile. If you do not use a .nmconnection filename extension, the cluster will apply the connection profile to the live environment, but it will not apply the configuration when the cluster first boots up the nodes, resulting in a setup that does not work. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Create a connection profile for a bonded interface. For example, create the bond0.nmconnection file in your local directory with the following content: [connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em1.nmconnection file in your local directory with the following content: [connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em2.nmconnection file in your local directory with the following content: [connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image with your configured networking: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --network-keyfile bond0.nmconnection \ --network-keyfile bond0-proxy-em1.nmconnection \ --network-keyfile bond0-proxy-em2.nmconnection Network settings are applied to the live system and are carried over to the destination system. 3.15.3.7.4. 
Customizing a live install ISO image for an iSCSI boot device You can set the iSCSI target and initiator values for automatic mounting, booting, and configuration using a customized version of the live RHCOS image. Prerequisites You have an iSCSI target you want to install RHCOS on. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image with the following information: USD coreos-installer iso customize \ --pre-install mount-iscsi.sh \ 1 --post-install unmount-iscsi.sh \ 2 --dest-device /dev/disk/by-path/<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \ 3 --dest-ignition config.ign \ 4 --dest-karg-append rd.iscsi.initiator=<initiator_iqn> \ 5 --dest-karg-append netroot=<target_iqn> \ 6 -o custom.iso rhcos-<version>-live.x86_64.iso 1 The script that gets run before installation. It should contain the iscsiadm commands for mounting the iSCSI target and any commands enabling multipathing. 2 The script that gets run after installation. It should contain the command iscsiadm --mode node --logout=all . 3 The location of the destination system. You must provide the IP address of the target portal, the associated port number, the target iSCSI node in IQN format, and the iSCSI logical unit number (LUN). 4 The Ignition configuration for the destination system. 5 The iSCSI initiator, or client, name in IQN format. The initiator forms a session to connect to the iSCSI target. 6 The iSCSI target, or server, name in IQN format. For more information about the iSCSI options supported by dracut , see the dracut.cmdline manual page . 3.15.3.7.5. Customizing a live install ISO image for an iSCSI boot device with iBFT You can set the iSCSI target and initiator values for automatic mounting, booting, and configuration using a customized version of the live RHCOS image. Prerequisites You have an iSCSI target you want to install RHCOS on. Optional: you have multipathed your iSCSI target. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image with the following information: USD coreos-installer iso customize \ --pre-install mount-iscsi.sh \ 1 --post-install unmount-iscsi.sh \ 2 --dest-device /dev/mapper/mpatha \ 3 --dest-ignition config.ign \ 4 --dest-karg-append rd.iscsi.firmware=1 \ 5 --dest-karg-append rd.multipath=default \ 6 -o custom.iso rhcos-<version>-live.x86_64.iso 1 The script that gets run before installation. It should contain the iscsiadm commands for mounting the iSCSI target and any commands enabling multipathing. 2 The script that gets run after installation. It should contain the command iscsiadm --mode node --logout=all . 3 The path to the device. If you are using multipath, use the multipath device, /dev/mapper/mpatha . If there are multiple multipath devices connected, or to be explicit, you can use the World Wide Name (WWN) symlink available in /dev/disk/by-path . 4 The Ignition configuration for the destination system. 5 The iSCSI parameter is read from the BIOS firmware. 6 Optional: include this parameter if you are enabling multipathing. For more information about the iSCSI options supported by dracut , see the dracut.cmdline manual page . 3.15.3.8.
Customizing a live RHCOS PXE environment You can customize a live RHCOS PXE environment directly with the coreos-installer pxe customize subcommand. When you boot the PXE environment, the customizations are applied automatically. You can use this feature to configure the PXE environment to automatically install RHCOS. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and the Ignition config file, and then run the following command to create a new initramfs file that contains the customizations from your Ignition config: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --dest-ignition bootstrap.ign \ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> \ 2 -o rhcos-<version>-custom-initramfs.x86_64.img 3 1 The Ignition config file that is generated from openshift-installer . 2 When you specify this option, the PXE environment automatically runs an install. Otherwise, the image remains configured for installing, but does not do so automatically unless you specify the coreos.inst.install_dev kernel argument. 3 Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Applying your customizations affects every subsequent boot of RHCOS. 3.15.3.8.1. Modifying a live install PXE environment to enable the serial console On clusters installed with OpenShift Container Platform 4.12 and above, the serial console is disabled by default and all output is written to the graphical console. You can enable the serial console with the following procedure. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and the Ignition config file, and then run the following command to create a new customized initramfs file that enables the serial console to receive output: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --dest-ignition <path> \ 1 --dest-console tty0 \ 2 --dest-console ttyS0,<options> \ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> \ 4 -o rhcos-<version>-custom-initramfs.x86_64.img 5 1 The location of the Ignition config to install. 2 The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console. 3 The desired primary console. In this case, the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8 . If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see the Linux kernel serial console documentation. 4 The specified disk to install to. If you omit this option, the PXE environment automatically runs the installer which will fail unless you also specify the coreos.inst.install_dev kernel argument. 5 Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Your customizations are applied and affect every subsequent boot of the PXE environment. 3.15.3.8.2. Modifying a live install PXE environment to use a custom certificate authority You can provide certificate authority (CA) certificates to Ignition with the --ignition-ca flag of the customize subcommand. 
You can use the CA certificates during both the installation boot and when provisioning the installed system. Note Custom CA certificates affect how Ignition fetches remote resources but they do not affect the certificates installed onto the system. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and run the following command to create a new customized initramfs file for use with a custom CA: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --ignition-ca cert.pem \ -o rhcos-<version>-custom-initramfs.x86_64.img Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Important The coreos.inst.ignition_url kernel parameter does not work with the --ignition-ca flag. You must use the --dest-ignition flag to create a customized image for each cluster. Applying your custom CA certificate affects every subsequent boot of RHCOS. 3.15.3.8.3. Modifying a live install PXE environment with customized network settings You can embed a NetworkManager keyfile into the live PXE environment and pass it through to the installed system with the --network-keyfile flag of the customize subcommand. Warning When creating a connection profile, you must use a .nmconnection filename extension in the filename of the connection profile. If you do not use a .nmconnection filename extension, the cluster will apply the connection profile to the live environment, but it will not apply the configuration when the cluster first boots up the nodes, resulting in a setup that does not work. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Create a connection profile for a bonded interface. For example, create the bond0.nmconnection file in your local directory with the following content: [connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em1.nmconnection file in your local directory with the following content: [connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em2.nmconnection file in your local directory with the following content: [connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and run the following command to create a new customized initramfs file that contains your configured networking: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --network-keyfile bond0.nmconnection \ --network-keyfile bond0-proxy-em1.nmconnection \ --network-keyfile bond0-proxy-em2.nmconnection \ -o rhcos-<version>-custom-initramfs.x86_64.img Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Network settings are applied to the live system and are carried over to the destination system. 3.15.3.8.4. 
Customizing a live install PXE environment for an iSCSI boot device You can set the iSCSI target and initiator values for automatic mounting, booting and configuration using a customized version of the live RHCOS image. Prerequisites You have an iSCSI target you want to install RHCOS on. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and run the following command to create a new customized initramfs file with the following information: USD coreos-installer pxe customize \ --pre-install mount-iscsi.sh \ 1 --post-install unmount-iscsi.sh \ 2 --dest-device /dev/disk/by-path/<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \ 3 --dest-ignition config.ign \ 4 --dest-karg-append rd.iscsi.initiator=<initiator_iqn> \ 5 --dest-karg-append netroot=<target_iqn> \ 6 -o custom.img rhcos-<version>-live-initramfs.x86_64.img 1 The script that gets run before installation. It should contain the iscsiadm commands for mounting the iSCSI target and any commands enabling multipathing. 2 The script that gets run after installation. It should contain the command iscsiadm --mode node --logout=all . 3 The location of the destination system. You must provide the IP address of the target portal, the associated port number, the target iSCSI node in IQN format, and the iSCSI logical unit number (LUN). 4 The Ignition configuration for the destination system. 5 The iSCSI initiator, or client, name in IQN format. The initiator forms a session to connect to the iSCSI target. 6 The the iSCSI target, or server, name in IQN format. For more information about the iSCSI options supported by dracut , see the dracut.cmdline manual page . 3.15.3.8.5. Customizing a live install PXE environment for an iSCSI boot device with iBFT You can set the iSCSI target and initiator values for automatic mounting, booting and configuration using a customized version of the live RHCOS image. Prerequisites You have an iSCSI target you want to install RHCOS on. Optional: you have multipathed your iSCSI target. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and run the following command to create a new customized initramfs file with the following information: USD coreos-installer pxe customize \ --pre-install mount-iscsi.sh \ 1 --post-install unmount-iscsi.sh \ 2 --dest-device /dev/mapper/mpatha \ 3 --dest-ignition config.ign \ 4 --dest-karg-append rd.iscsi.firmware=1 \ 5 --dest-karg-append rd.multipath=default \ 6 -o custom.img rhcos-<version>-live-initramfs.x86_64.img 1 The script that gets run before installation. It should contain the iscsiadm commands for mounting the iSCSI target. 2 The script that gets run after installation. It should contain the command iscsiadm --mode node --logout=all . 3 The path to the device. If you are using multipath, the multipath device, /dev/mapper/mpatha , If there are multiple multipath devices connected, or to be explicit, you can use the World Wide Name (WWN) symlink available in /dev/disk/by-path . 4 The Ignition configuration for the destination system. 5 The iSCSI parameter is read from the BIOS firmware. 6 Optional: include this parameter if you are enabling multipathing. For more information about the iSCSI options supported by dracut , see the dracut.cmdline manual page . 3.15.3.9. 
Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 3.15.3.9.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. 
If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=<name>[:<network_interfaces>][:options] <name> is the bonding device name ( bond0 ), <network_interfaces> represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Bonding multiple SR-IOV network interfaces to a dual port NIC interface Important Support for Day 1 operations associated with enabling NIC partitioning for SR-IOV devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Optional: You can bond multiple SR-IOV network interfaces to a dual port NIC interface by using the bond= option. 
On each node, you must perform the following tasks: Create the SR-IOV virtual functions (VFs) following the guidance in Managing SR-IOV devices . Follow the procedure in the "Attaching SR-IOV networking devices to virtual machines" section. Create the bond, attach the desired VFs to the bond and set the bond link state up following the guidance in Configuring network bonding . Follow any of the described procedures to create the bond. The following examples illustrate the syntax you must use: The syntax for configuring a bonded interface is bond=<name>[:<network_interfaces>][:options] . <name> is the bonding device name ( bond0 ), <network_interfaces> represents the virtual functions (VFs) by their known name in the kernel and shown in the output of the ip link command( eno1f0 , eno2f0 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Using network teaming Optional: You can use a network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). Note Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp 3.15.3.9.2. coreos-installer options for ISO and PXE installations You can install RHCOS by running coreos-installer install <options> <device> at the command prompt, after booting into the RHCOS live environment from an ISO image. The following table shows the subcommands, options, and arguments you can pass to the coreos-installer command. Table 3.20. coreos-installer subcommands, command-line options, and arguments coreos-installer install subcommand Subcommand Description USD coreos-installer install <options> <device> Embed an Ignition config in an ISO image. coreos-installer install subcommand options Option Description -u , --image-url <url> Specify the image URL manually. -f , --image-file <path> Specify a local image file manually. Used for debugging. -i, --ignition-file <path> Embed an Ignition config from a file. -I , --ignition-url <URL> Embed an Ignition config from a URL. --ignition-hash <digest> Digest type-value of the Ignition config. -p , --platform <name> Override the Ignition platform ID for the installed system. --console <spec> Set the kernel and bootloader console for the installed system. For more information about the format of <spec> , see the Linux kernel serial console documentation. --append-karg <arg>... Append a default kernel argument to the installed system. --delete-karg <arg>... Delete a default kernel argument from the installed system. -n , --copy-network Copy the network configuration from the install environment. 
Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections . In particular, it does not copy the system hostname. --network-dir <path> For use with -n . Default is /etc/NetworkManager/system-connections/ . --save-partlabel <lx>.. Save partitions with this label glob. --save-partindex <id>... Save partitions with this number or range. --insecure Skip RHCOS image signature verification. --insecure-ignition Allow Ignition URL without HTTPS or hash. --architecture <name> Target CPU architecture. Valid values are x86_64 and aarch64 . --preserve-on-error Do not clear partition table on error. -h , --help Print help information. coreos-installer install subcommand argument Argument Description <device> The destination device. coreos-installer ISO subcommands Subcommand Description USD coreos-installer iso customize <options> <ISO_image> Customize a RHCOS live ISO image. coreos-installer iso reset <options> <ISO_image> Restore a RHCOS live ISO image to default settings. coreos-installer iso ignition remove <options> <ISO_image> Remove the embedded Ignition config from an ISO image. coreos-installer ISO customize subcommand options Option Description --dest-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the destination system. --dest-console <spec> Specify the kernel and bootloader console for the destination system. --dest-device <path> Install and overwrite the specified destination device. --dest-karg-append <arg> Add a kernel argument to each boot of the destination system. --dest-karg-delete <arg> Delete a kernel argument from each boot of the destination system. --network-keyfile <path> Configure networking by using the specified NetworkManager keyfile for live and destination systems. --ignition-ca <path> Specify an additional TLS certificate authority to be trusted by Ignition. --pre-install <path> Run the specified script before installation. --post-install <path> Run the specified script after installation. --installer-config <path> Apply the specified installer configuration file. --live-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the live environment. --live-karg-append <arg> Add a kernel argument to each boot of the live environment. --live-karg-delete <arg> Delete a kernel argument from each boot of the live environment. --live-karg-replace <k=o=n> Replace a kernel argument in each boot of the live environment, in the form key=old=new . -f , --force Overwrite an existing Ignition config. -o , --output <path> Write the ISO to a new output file. -h , --help Print help information. coreos-installer PXE subcommands Subcommand Description Note that not all of these options are accepted by all subcommands. coreos-installer pxe customize <options> <path> Customize a RHCOS live PXE boot config. coreos-installer pxe ignition wrap <options> Wrap an Ignition config in an image. coreos-installer pxe ignition unwrap <options> <image_name> Show the wrapped Ignition config in an image. coreos-installer PXE customize subcommand options Option Description Note that not all of these options are accepted by all subcommands. --dest-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the destination system. --dest-console <spec> Specify the kernel and bootloader console for the destination system. --dest-device <path> Install and overwrite the specified destination device. 
--network-keyfile <path> Configure networking by using the specified NetworkManager keyfile for live and destination systems. --ignition-ca <path> Specify an additional TLS certificate authority to be trusted by Ignition. --pre-install <path> Run the specified script before installation. post-install <path> Run the specified script after installation. --installer-config <path> Apply the specified installer configuration file. --live-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the live environment. -o, --output <path> Write the initramfs to a new output file. Note This option is required for PXE environments. -h , --help Print help information. 3.15.3.9.3. coreos.inst boot options for ISO or PXE installations You can automatically invoke coreos-installer options at boot time by passing coreos.inst boot arguments to the RHCOS live installer. These are provided in addition to the standard boot arguments. For ISO installations, the coreos.inst options can be added by interrupting the automatic boot at the bootloader menu. You can interrupt the automatic boot by pressing TAB while the RHEL CoreOS (Live) menu option is highlighted. For PXE or iPXE installations, the coreos.inst options must be added to the APPEND line before the RHCOS live installer is booted. The following table shows the RHCOS live installer coreos.inst boot options for ISO and PXE installations. Table 3.21. coreos.inst boot options Argument Description coreos.inst.install_dev Required. The block device on the system to install to. It is recommended to use the full path, such as /dev/sda , although sda is allowed. coreos.inst.ignition_url Optional: The URL of the Ignition config to embed into the installed system. If no URL is specified, no Ignition config is embedded. Only HTTP and HTTPS protocols are supported. coreos.inst.save_partlabel Optional: Comma-separated labels of partitions to preserve during the install. Glob-style wildcards are permitted. The specified partitions do not need to exist. coreos.inst.save_partindex Optional: Comma-separated indexes of partitions to preserve during the install. Ranges m-n are permitted, and either m or n can be omitted. The specified partitions do not need to exist. coreos.inst.insecure Optional: Permits the OS image that is specified by coreos.inst.image_url to be unsigned. coreos.inst.image_url Optional: Download and install the specified RHCOS image. This argument should not be used in production environments and is intended for debugging purposes only. While this argument can be used to install a version of RHCOS that does not match the live media, it is recommended that you instead use the media that matches the version you want to install. If you are using coreos.inst.image_url , you must also use coreos.inst.insecure . This is because the bare-metal media are not GPG-signed for OpenShift Container Platform. Only HTTP and HTTPS protocols are supported. coreos.inst.skip_reboot Optional: The system will not reboot after installing. After the install finishes, you will receive a prompt that allows you to inspect what is happening during installation. This argument should not be used in production environments and is intended for debugging purposes only. coreos.inst.platform_id Optional: The Ignition platform ID of the platform the RHCOS image is being installed on. Default is metal . This option determines whether or not to request an Ignition config from the cloud provider, such as VMware. 
For example: coreos.inst.platform_id=vmware . ignition.config.url Optional: The URL of the Ignition config for the live boot. For example, this can be used to customize how coreos-installer is invoked, or to run code before or after the installation. This is different from coreos.inst.ignition_url , which is the Ignition config for the installed system. 3.15.4. Enabling multipathing with kernel arguments on RHCOS RHCOS supports multipathing on the primary disk, allowing stronger resilience to hardware failure to achieve higher host availability. You can enable multipathing at installation time for nodes that were provisioned in OpenShift Container Platform 4.8 or later. While postinstallation support is available by activating multipathing via the machine config, enabling multipathing during installation is recommended. In setups where any I/O to non-optimized paths results in I/O system errors, you must enable multipathing at installation time. Important On IBM Z(R) and IBM(R) LinuxONE, you can enable multipathing only if you configured your cluster for it during installation. For more information, see "Installing RHCOS and starting the OpenShift Container Platform bootstrap process" in Installing a cluster with z/VM on IBM Z(R) and IBM(R) LinuxONE . The following procedure enables multipath at installation time and appends kernel arguments to the coreos-installer install command so that the installed system itself will use multipath beginning from the first boot. Note OpenShift Container Platform does not support enabling multipathing as a day-2 activity on nodes that have been upgraded from 4.6 or earlier. Prerequisites You have created the Ignition config files for your cluster. You have reviewed Installing RHCOS and starting the OpenShift Container Platform bootstrap process . Procedure To enable multipath and start the multipathd daemon, run the following command on the installation host: USD mpathconf --enable && systemctl start multipathd.service Optional: If booting the PXE or ISO, you can instead enable multipath by adding rd.multipath=default from the kernel command line. Append the kernel arguments by invoking the coreos-installer program: If there is only one multipath device connected to the machine, it should be available at path /dev/mapper/mpatha . For example: USD coreos-installer install /dev/mapper/mpatha \ 1 --ignition-url=http://host/worker.ign \ --append-karg rd.multipath=default \ --append-karg root=/dev/disk/by-label/dm-mpath-root \ --append-karg rw 1 Indicates the path of the single multipathed device. If there are multiple multipath devices connected to the machine, or to be more explicit, instead of using /dev/mapper/mpatha , it is recommended to use the World Wide Name (WWN) symlink available in /dev/disk/by-id . For example: USD coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \ 1 --ignition-url=http://host/worker.ign \ --append-karg rd.multipath=default \ --append-karg root=/dev/disk/by-label/dm-mpath-root \ --append-karg rw 1 Indicates the WWN ID of the target multipathed device. For example, 0xx194e957fcedb4841 . This symlink can also be used as the coreos.inst.install_dev kernel argument when using special coreos.inst.* arguments to direct the live installer. For more information, see "Installing RHCOS and starting the OpenShift Container Platform bootstrap process". Reboot into the installed system. 
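Note In addition to checking the kernel arguments as described in the next step, you can confirm that the multipath topology is active on the node after it reboots. The following check is a minimal sketch only; it assumes that the oc client is already configured for the cluster and that you substitute your own node name for <node_name>:
$ oc debug node/<node_name> -- chroot /host multipath -ll
If the command prints a multipath topology for the primary disk, multipathing is active; if it returns no output, review the kernel arguments as described in the next step.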
Check that the kernel arguments worked by going to one of the worker nodes and listing the kernel command line arguments (in /proc/cmdline on the host): USD oc debug node/ip-10-0-141-105.ec2.internal Example output Starting pod/ip-10-0-141-105ec2internal-debug ... To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline ... rd.multipath=default root=/dev/disk/by-label/dm-mpath-root ... sh-4.2# exit You should see the added kernel arguments. 3.15.4.1. Enabling multipathing on secondary disks RHCOS also supports multipathing on a secondary disk. Instead of kernel arguments, you use Ignition to enable multipathing for the secondary disk at installation time. Prerequisites You have read the section Disk partitioning . You have read Enabling multipathing with kernel arguments on RHCOS . You have installed the Butane utility. Procedure Create a Butane config with information similar to the following: Example multipath-config.bu variant: openshift version: 4.16.0 systemd: units: - name: mpath-configure.service enabled: true contents: | [Unit] Description=Configure Multipath on Secondary Disk ConditionFirstBoot=true ConditionPathExists=!/etc/multipath.conf Before=multipathd.service 1 DefaultDependencies=no [Service] Type=oneshot ExecStart=/usr/sbin/mpathconf --enable 2 [Install] WantedBy=multi-user.target - name: mpath-var-lib-container.service enabled: true contents: | [Unit] Description=Set Up Multipath On /var/lib/containers ConditionFirstBoot=true 3 Requires=dev-mapper-mpatha.device After=dev-mapper-mpatha.device After=ostree-remount.service Before=kubelet.service DefaultDependencies=no [Service] 4 Type=oneshot ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha ExecStart=/usr/bin/mkdir -p /var/lib/containers [Install] WantedBy=multi-user.target - name: var-lib-containers.mount enabled: true contents: | [Unit] Description=Mount /var/lib/containers After=mpath-var-lib-containers.service Before=kubelet.service 5 [Mount] 6 What=/dev/disk/by-label/dm-mpath-containers Where=/var/lib/containers Type=xfs [Install] WantedBy=multi-user.target 1 The configuration must be set before launching the multipath daemon. 2 Starts the mpathconf utility. 3 This field must be set to the value true . 4 Creates the filesystem and directory /var/lib/containers . 5 The device must be mounted before starting any nodes. 6 Mounts the device to the /var/lib/containers mount point. This location cannot be a symlink. Create the Ignition configuration by running the following command: USD butane --pretty --strict multipath-config.bu > multipath-config.ign Continue with the rest of the first boot RHCOS installation process. Important Do not add the rd.multipath or root kernel arguments on the command-line during installation unless the primary disk is also multipathed. 3.15.5. Installing RHCOS manually on an iSCSI boot device You can manually install RHCOS on an iSCSI target. Prerequisites You are in the RHCOS live environment. You have an iSCSI target that you want to install RHCOS on. Procedure Mount the iSCSI target from the live environment by running the following command: USD iscsiadm \ --mode discovery \ --type sendtargets --portal <IP_address> \ 1 --login 1 The IP address of the target portal. 
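Optional: Confirm that a session to the target is established before you start the installation. The following command lists the active iSCSI sessions; it is shown here as a general check rather than a required step:
$ iscsiadm --mode session
If the target you discovered is not listed, repeat the discovery and login step above before continuing with the installation in the next step.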
Install RHCOS onto the iSCSI target by running the following command and using the necessary kernel arguments, for example: USD coreos-installer install \ /dev/disk/by-path/ip-<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \ 1 --append-karg rd.iscsi.initiator=<initiator_iqn> \ 2 --append-karg netroot=<target_iqn> \ 3 --console ttyS0,115200n8 --ignition-file <path_to_file> 1 The location you are installing to. You must provide the IP address of the target portal, the associated port number, the target iSCSI node in IQN format, and the iSCSI logical unit number (LUN). 2 The iSCSI initiator, or client, name in IQN format. The initiator forms a session to connect to the iSCSI target. 3 The iSCSI target, or server, name in IQN format. For more information about the iSCSI options supported by dracut , see the dracut.cmdline manual page . Unmount the iSCSI disk with the following command: USD iscsiadm --mode node --logoutall=all This procedure can also be performed using the coreos-installer iso customize or coreos-installer pxe customize subcommands. 3.15.6. Installing RHCOS on an iSCSI boot device using iBFT On a completely diskless machine, the iSCSI target and initiator values can be passed through iBFT. iSCSI multipathing is also supported. Prerequisites You are in the RHCOS live environment. You have an iSCSI target you want to install RHCOS on. Optional: you have multipathed your iSCSI target. Procedure Mount the iSCSI target from the live environment by running the following command: USD iscsiadm \ --mode discovery \ --type sendtargets --portal <IP_address> \ 1 --login 1 The IP address of the target portal. Optional: enable multipathing and start the daemon with the following command: USD mpathconf --enable && systemctl start multipathd.service Install RHCOS onto the iSCSI target by running the following command and using the necessary kernel arguments, for example: USD coreos-installer install \ /dev/mapper/mpatha \ 1 --append-karg rd.iscsi.firmware=1 \ 2 --append-karg rd.multipath=default \ 3 --console ttyS0 \ --ignition-file <path_to_file> 1 The path of a single multipathed device. If there are multiple multipath devices connected, or to be explicit, you can use the World Wide Name (WWN) symlink available in /dev/disk/by-path . 2 The iSCSI parameter is read from the BIOS firmware. 3 Optional: include this parameter if you are enabling multipathing. For more information about the iSCSI options supported by dracut , see the dracut.cmdline manual page . Unmount the iSCSI disk: USD iscsiadm --mode node --logout=all This procedure can also be performed using the coreos-installer iso customize or coreos-installer pxe customize subcommands. 3.16. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated.
Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.29.4 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. Additional resources See Monitoring installation progress for more information about monitoring the installation logs and retrieving diagnostic data if installation issues arise. 3.17. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.18. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. 
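Note To list only the requests that still need attention, you can filter the output for the Pending condition. This is shown only as a convenience; the approval commands in the next step operate on the pending CSRs directly:
$ oc get csr | grep -w Pending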
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 3.19. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. 
Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m Configure the Operators that are not available. Additional resources See Gathering logs from a failed installation for details about gathering data in the event of a failed OpenShift Container Platform installation. See Troubleshooting Operator issues for steps to check Operator pod health across the cluster and gather Operator logs for diagnosis. 3.19.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 3.19.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 3.19.3. Configuring block registry storage for bare metal To allow the image registry to use block storage types during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes, or block persistent volumes, are supported but not recommended for use with the image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. 
If you choose to use a block storage volume with the image registry, you must use a filesystem persistent volume claim (PVC). Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only one ( 1 ) replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. 3.20. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. 
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. 3.21. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 3.22. steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage .
[ "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s", "interfaces: - name: enp2s0 1 type: ethernet 2 state: up 3 ipv4: enabled: false 4 ipv6: enabled: false - name: br-ex type: ovs-bridge state: up ipv4: enabled: false dhcp: false ipv6: enabled: false dhcp: false bridge: port: - name: enp2s0 5 - name: br-ex - name: br-ex type: ovs-interface state: up copy-mac-from: enp2s0 ipv4: enabled: true dhcp: true ipv6: enabled: false dhcp: false", "cat <nmstate_configuration>.yaml | base64 1", "apiVersion: 
machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 10-br-ex-worker 2 spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<base64_encoded_nmstate_configuration> 3 mode: 0644 overwrite: true path: /etc/nmstate/openshift/cluster.yml", "oc edit mc <machineconfig_custom_resource_name>", "oc apply -f ./extraworker-secret.yaml", "apiVersion: metal3.io/v1alpha1 kind: BareMetalHost spec: preprovisioningNetworkDataName: ostest-extraworker-0-network-config-secret", "oc project openshift-machine-api", "oc get machinesets", "oc scale machineset <machineset_name> --replicas=<n> 1", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16", "./openshift-install create manifests --dir <installation_directory> 1", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full", "rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "sha512sum <installation_directory>/bootstrap.ign", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep '\\.iso[^.]'", "\"location\": \"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'", "\"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.16-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda 
coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3", "kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot", "menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>", "openshift-install create manifests --dir <installation_directory>", "variant: openshift version: 4.16.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number>", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 6 /dev/disk/by-id/scsi-<serial_number>", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number>", "coreos.inst.save_partlabel=data*", "coreos.inst.save_partindex=5-", "coreos.inst.save_partindex=6", "coreos-installer install --console=tty0 \\ 1 --console=ttyS0,<options> \\ 2 --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>", "coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> 2", "coreos-installer iso reset rhcos-<version>-live.x86_64.iso", "coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> 4", "coreos-installer iso reset rhcos-<version>-live.x86_64.iso", "coreos-installer iso customize rhcos-<version>-live.x86_64.iso --ignition-ca cert.pem", "[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto", "[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond", "[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond", "coreos-installer iso customize rhcos-<version>-live.x86_64.iso --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile 
bond0-proxy-em2.nmconnection", "coreos-installer iso customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/disk/by-path/<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.initiator=<initiator_iqn> \\ 5 --dest-karg-append netroot=<target_iqn> \\ 6 -o custom.iso rhcos-<version>-live.x86_64.iso", "coreos-installer iso customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/mapper/mpatha \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.firmware=1 \\ 5 --dest-karg-append rd.multipath=default \\ 6 -o custom.iso rhcos-<version>-live.x86_64.iso", "coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 2 -o rhcos-<version>-custom-initramfs.x86_64.img 3", "coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 4 -o rhcos-<version>-custom-initramfs.x86_64.img 5", "coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --ignition-ca cert.pem -o rhcos-<version>-custom-initramfs.x86_64.img", "[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto", "[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond", "[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond", "coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile bond0-proxy-em2.nmconnection -o rhcos-<version>-custom-initramfs.x86_64.img", "coreos-installer pxe customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/disk/by-path/<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.initiator=<initiator_iqn> \\ 5 --dest-karg-append netroot=<target_iqn> \\ 6 -o custom.img rhcos-<version>-live-initramfs.x86_64.img", "coreos-installer pxe customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/mapper/mpatha \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.firmware=1 \\ 5 --dest-karg-append rd.multipath=default \\ 6 -o custom.img rhcos-<version>-live-initramfs.x86_64.img", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup 
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "team=team0:em1,em2 ip=team0:dhcp", "mpathconf --enable && systemctl start multipathd.service", "coreos-installer install /dev/mapper/mpatha \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw", "coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw", "oc debug node/ip-10-0-141-105.ec2.internal", "Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline rd.multipath=default root=/dev/disk/by-label/dm-mpath-root sh-4.2# exit", "variant: openshift version: 4.16.0 systemd: units: - name: mpath-configure.service enabled: true contents: | [Unit] Description=Configure Multipath on Secondary Disk ConditionFirstBoot=true ConditionPathExists=!/etc/multipath.conf Before=multipathd.service 1 DefaultDependencies=no [Service] Type=oneshot ExecStart=/usr/sbin/mpathconf --enable 2 [Install] WantedBy=multi-user.target - name: mpath-var-lib-container.service enabled: true contents: | [Unit] Description=Set Up Multipath On /var/lib/containers ConditionFirstBoot=true 3 Requires=dev-mapper-mpatha.device After=dev-mapper-mpatha.device After=ostree-remount.service Before=kubelet.service DefaultDependencies=no [Service] 4 Type=oneshot ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha ExecStart=/usr/bin/mkdir -p /var/lib/containers [Install] WantedBy=multi-user.target - name: var-lib-containers.mount enabled: true contents: | [Unit] Description=Mount /var/lib/containers After=mpath-var-lib-containers.service Before=kubelet.service 5 [Mount] 6 What=/dev/disk/by-label/dm-mpath-containers Where=/var/lib/containers Type=xfs [Install] WantedBy=multi-user.target", "butane --pretty --strict multipath-config.bu > multipath-config.ign", "iscsiadm --mode discovery --type sendtargets --portal <IP_address> \\ 1 --login", "coreos-installer install /dev/disk/by-path/ip-<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \\ 1 --append-karg rd.iscsi.initiator=<initiator_iqn> \\ 2 --append.karg netroot=<target_iqn> \\ 3 --console ttyS0,115200n8 --ignition-file <path_to_file>", "iscsiadm --mode node --logoutall=all", "iscsiadm --mode discovery --type sendtargets --portal <IP_address> \\ 1 --login", "mpathconf --enable && systemctl start multipathd.service", "coreos-installer install /dev/mapper/mpatha \\ 1 --append-karg rd.iscsi.firmware=1 \\ 2 --append-karg rd.multipath=default \\ 3 --console ttyS0 --ignition-file <path_to_file>", "iscsiadm --mode node --logout=all", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.29.4 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4", "oc get csr", "NAME AGE REQUESTOR 
CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m 
kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_bare_metal/installing-bare-metal-network-customizations
8.2. Memory Tuning on Virtual Machines
8.2. Memory Tuning on Virtual Machines 8.2.1. Memory Monitoring Tools Memory usage can be monitored in virtual machines using tools used in bare metal environments. Tools useful for monitoring memory usage and diagnosing memory-related problems include: top vmstat numastat /proc/ Note For details on using these performance tools, see the Red Hat Enterprise Linux 7 Performance Tuning Guide and the man pages for these commands. 8.2.2. Memory Tuning with virsh The optional <memtune> element in the guest XML configuration allows administrators to configure guest virtual machine memory settings manually. If <memtune> is omitted, the VM uses memory based on how it was allocated and assigned during the VM creation. Display or set memory parameters in the <memtune> element in a virtual machine with the virsh memtune command, replacing values according to your environment: Optional parameters include: hard_limit The maximum memory the virtual machine can use, in kibibytes (blocks of 1024 bytes). Warning Setting this limit too low can result in the virtual machine being killed by the kernel. soft_limit The memory limit to enforce during memory contention, in kibibytes (blocks of 1024 bytes). swap_hard_limit The maximum memory plus swap the virtual machine can use, in kibibytes (blocks of 1024 bytes). The swap_hard_limit value must be more than the hard_limit value. min_guarantee The guaranteed minimum memory allocation for the virtual machine, in kibibytes (blocks of 1024 bytes). Note See # virsh help memtune for more information on using the virsh memtune command. The optional <memoryBacking> element may contain several elements that influence how virtual memory pages are backed by host pages. Setting locked prevents the host from swapping out memory pages belonging to the guest. Add the following to the guest XML to lock the virtual memory pages in the host's memory: Important When setting locked , a hard_limit must be set in the <memtune> element to the maximum memory configured for the guest, plus any memory consumed by the process itself. Setting nosharepages prevents the host from merging the same memory used among guests. To instruct the hypervisor to disable share pages for a guest, add the following to the guest's XML: 8.2.3. Huge Pages and Transparent Huge Pages AMD64 and Intel 64 CPUs usually address memory in 4kB pages, but they are capable of using larger 2MB or 1GB pages known as huge pages . KVM guests can be deployed with huge page memory support in order to improve performance by increasing CPU cache hits against the Transaction Lookaside Buffer (TLB). A kernel feature enabled by default in Red Hat Enterprise Linux 7, huge pages can significantly increase performance, particularly for large memory and memory-intensive workloads. Red Hat Enterprise Linux 7 is able to manage large amounts of memory more effectively by increasing the page size through the use of huge pages. To increase the effectiveness and convenience of managing huge pages, Red Hat Enterprise Linux 7 uses Transparent Huge Pages (THP) by default. For more information on huge pages and THP, see the Performance Tuning Guide . Red Hat Enterprise Linux 7 systems support 2MB and 1GB huge pages, which can be allocated at boot or at runtime. See Section 8.2.3.3, "Enabling 1 GB huge pages for guests at boot or runtime" for instructions on enabling multiple huge page sizes. 8.2.3.1. 
Configuring Transparent Huge Pages Transparent huge pages (THP) are an abstraction layer that automates most aspects of creating, managing, and using huge pages. By default, they automatically optimize system settings for performance. Note Using KSM can reduce the occurrence of transparent huge pages, so it is recommended to disable KSM before enabling THP. For more information, see Section 8.3.4, "Deactivating KSM" . Transparent huge pages are enabled by default. To check the current status, run: To enable transparent huge pages to be used by default, run: This will set /sys/kernel/mm/transparent_hugepage/enabled to always . To disable transparent huge pages: Transparent Huge Page support does not prevent the use of static huge pages. However, when static huge pages are not used, KVM will use transparent huge pages instead of the regular 4kB page size. 8.2.3.2. Configuring Static Huge Pages In some cases, greater control of huge pages is preferable. To use static huge pages on guests, add the following to the guest XML configuration using virsh edit : This instructs the host to allocate memory to the guest using huge pages, instead of using the default page size. View the current huge pages value by running the following command: Procedure 8.1. Setting huge pages The following example procedure shows the commands to set huge pages. View the current huge pages value: Huge pages are set in increments of 2MB. To set the number of huge pages to 25000, use the following command: Note To make the setting persistent, add the following lines to the /etc/sysctl.conf file on the guest machine, with X being the intended number of huge pages: Afterwards, add transparent_hugepage=never to the kernel boot parameters by appending it to the end of the /kernel line in the /etc/grub2.cfg file on the guest. Mount the huge pages: Add the following lines to the memoryBacking section in the virtual machine's XML configuration: Restart libvirtd : Start the VM: Restart the VM if it is already running: Verify the changes in /proc/meminfo : Huge pages can benefit not only the host but also guests, however, their total huge pages value must be less than what is available in the host. 8.2.3.3. Enabling 1 GB huge pages for guests at boot or runtime Red Hat Enterprise Linux 7 systems support 2MB and 1GB huge pages, which can be allocated at boot or at runtime. Procedure 8.2. Allocating 1GB huge pages at boot time To allocate different sizes of huge pages at boot time, use the following command, specifying the number of huge pages. This example allocates four 1GB huge pages and 1024 2MB huge pages: Change this command line to specify a different number of huge pages to be allocated at boot. Note The two steps must also be completed the first time you allocate 1GB huge pages at boot time. Mount the 2MB and 1GB huge pages on the host: Add the following lines to the memoryBacking section in the virtual machine's XML configuration: Restart libvirtd to enable the use of 1GB huge pages on guests: Procedure 8.3. Allocating 1GB huge pages at runtime 1GB huge pages can also be allocated at runtime. Runtime allocation allows the system administrator to choose which NUMA node to allocate those pages from. However, runtime page allocation can be more prone to allocation failure than boot time allocation due to memory fragmentation. 
To allocate different sizes of huge pages at runtime, use the following command, replacing values for the number of huge pages, the NUMA node to allocate them from, and the huge page size: This example command allocates four 1GB huge pages from node1 and 1024 2MB huge pages from node3 . These huge page settings can be changed at any time with the above command, depending on the amount of free memory on the host system. Note The two steps must also be completed the first time you allocate 1GB huge pages at runtime. Mount the 2MB and 1GB huge pages on the host: Add the following lines to the memoryBacking section in the virtual machine's XML configuration: Restart libvirtd to enable the use of 1GB huge pages on guests:
[ "virsh memtune virtual_machine --parameter size", "<memoryBacking> <locked/> </memoryBacking>", "<memoryBacking> <nosharepages/> </memoryBacking>", "cat /sys/kernel/mm/transparent_hugepage/enabled", "echo always > /sys/kernel/mm/transparent_hugepage/enabled", "echo never > /sys/kernel/mm/transparent_hugepage/enabled", "<memoryBacking> <hugepages/> </memoryBacking>", "cat /proc/sys/vm/nr_hugepages", "cat /proc/meminfo | grep Huge AnonHugePages: 2048 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB", "echo 25000 > /proc/sys/vm/nr_hugepages", "echo 'vm.nr_hugepages = X' >> /etc/sysctl.conf sysctl -p", "mount -t hugetlbfs hugetlbfs /dev/hugepages", "<hugepages> <page size='1' unit='GiB'/> </hugepages>", "systemctl restart libvirtd", "virsh start virtual_machine", "virsh reset virtual_machine", "cat /proc/meminfo | grep Huge AnonHugePages: 0 kB HugePages_Total: 25000 HugePages_Free: 23425 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB", "'default_hugepagesz=1G hugepagesz=1G hugepages=4 hugepagesz=2M hugepages=1024'", "mkdir /dev/hugepages1G mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G mkdir /dev/hugepages2M mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M", "<hugepages> <page size='1' unit='GiB'/> </hugepages>", "systemctl restart libvirtd", "echo 4 > /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages echo 1024 > /sys/devices/system/node/node3/hugepages/hugepages-2048kB/nr_hugepages", "mkdir /dev/hugepages1G mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G mkdir /dev/hugepages2M mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M", "<hugepages> <page size='1' unit='GiB'/> </hugepages>", "systemctl restart libvirtd" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_tuning_and_optimization_guide/sect-virtualization_tuning_optimization_guide-memory-tuning
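The memtune parameters described in section 8.2.2 can also be set directly from the command line. The following is a minimal sketch, not taken from the guide above: the guest name rhel7-guest is a placeholder and the limit values (in kibibytes) are illustrative examples, not recommendations. The swap hard limit in the sketch is kept above the hard limit, as the section requires.
# Illustrative only: guest name and values are placeholders; values are in KiB
virsh memtune rhel7-guest --hard-limit 4194304 --soft-limit 3145728 --swap-hard-limit 5242880
# Display the currently applied memory tuning parameters for the guest
virsh memtune rhel7-guest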
Chapter 26. Virtualization
Chapter 26. Virtualization Coolkey does not load on Windows 7 guests Loading the Coolkey module on Windows 7 guest virtual machines currently fails, which prevents smart card redirection from working properly on these guests. (BZ# 1331471 ) Disabling vCPUs on Hyper-V guests fails Currently, it is not possible to disable CPUs on guest virtual machines running on Microsoft Hyper-V, including Microsoft Azure cloud, due to the lack of support from the host side. However, it is possible to reduce the number of online CPUs by booting guests with the nr_cpus=XX parameter passed on the kernel command line, where XX is the number of online CPUs required. For more information, see https://access.redhat.com/solutions/2790331 . (BZ#1396336) Hot plugging hard disks as a batch on the VMware ESXi hypervisor does not work reliably When hot plugging multiple hard disks at the same time to a Red Hat Enterprise Linux 6 guest virtual machine running on the VMware ESXi hypervisor, the host currently does not inform the guest about all of the added disks, and some of the disks thus cannot be used. To work around this problem, hot plug one hard disk at a time in the described scenario. (BZ#1224673) Guests cannot access floppy disks larger than 1.44 MB Guest virtual machines are currently unable to access floppy drive images larger than 1.44 MB if they are inserted while the guest is running. To work around the problem, insert the floppy drive image prior to booting the guest. (BZ# 1209362 ) Hyper-V guest integration services stop working after they are disabled and re-enabled Currently, Red Hat Enterprise Linux 6 guest virtual machines running on the Microsoft Hyper-V hypervisor do not automatically restart the hyperv-daemons suite after Hyper-V guest integration services, such as data exchange and backup, are disabled and then re-enabled. As a consequence, these integration services stop working after they are disabled and re-enabled in the Hyper-V Manager interface. To work around this problem, restart the hypervkvpd , hypervvssd , and hypervfcopyd services in the guest after re-enabling the integration services from Hyper-V Manager, or do not change the status of the integration services while the guest is running. (BZ#1121888) Booting virtual machines with the fsgsbase and smep flags on older host CPUs fails The fsgsbase and smep CPU flags are not properly emulated on certain older CPU models, such as the early Intel Xeon E processors. As a consequence, using fsgsbase or smep when booting a guest virtual machine on a host with such a CPU causes the boot to fail. To work around this problem, do not use fsgsbase and smep if the CPU does not support them. (BZ# 1371765 ) Guests with recent Windows systems in some cases fail to boot if hv_relaxed is used Attempting to boot KVM guests with the following operating systems currently fails with an error code: 0x0000001E message if the value of the -cpu option is SandyBridge or Opteron_G4 and the hv_relaxed option is used. 64-bit Windows 8 or later 64-bit Windows Server 2012 or later To work around this problem, do not use hv_relaxed . 
(BZ# 1063124 ) Limited CPU support for Windows 10 and Windows Server 2016 guests On a Red Hat Enterprise Linux 6 host, Windows 10 and Windows Server 2016 guests can only be created when using the following CPU models: the Intel Xeon E series the Intel Xeon E7 family Intel Xeon v2, v3, and v4 Opteron G2, G3, G4, G5, and G6 For these CPU models, also make sure to set the CPU model of the guest to match the CPU model detected by running the virsh capabilities command on the host. Using the application default or hypervisor default prevents the guests from booting properly. To be able to use Windows 10 guests on Legacy Intel Core 2 processors (also known as Penryn) or Intel Xeon 55xx and 75xx processor families (also known as Nehalem), add the following flag to the Domain XML file, with either Penryn or Nehalem as MODELNAME: Other CPU models are not supported, and both Windows 10 guests and Windows Server 2016 guests created on them are likely to become unresponsive during the boot process. (BZ# 1346153 ) Network connectivity not restored when vnic is enabled If the netdev(tap) link is set to off and the vnic(virtio-net/e1000) link is set to on, network connectivity does not resume. However, if the vnic(virtio-net/e1000) link is set to off and the netdev(tap) link is set to on, network connectivity resumes. To resolve the issue, consistently use the same device to control the link. If the netdev(tap) link was set to off, using it to turn the link back on will work correctly. (BZ#1198956) KVM guests fail to properly read physical DVD/CD-ROM media Several problems may occur when using physical DVD/CD-ROMs with KVM guest virtual machines. To work around this problem, you can create ISO files from the physical media and use them with the virtual machines. It is recommended that you do not use physical DVD/CD-ROMs. For more information, see https://access.redhat.com/solutions/2543131 . (BZ#1360581)
[ "<cpu mode='custom' match='exact'> <model>MODELNAME</model> <feature name='erms' policy='require'/> </cpu>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.9_release_notes/known_issues_virtualization
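For the DVD/CD-ROM issue described above, the suggested workaround of using ISO images instead of physical media could look roughly like the following sketch. The device path, image path, and guest name are assumptions for illustration, not values from the release note.
# Illustrative sketch: copy the physical disc into an ISO image, then attach the image to the guest
dd if=/dev/cdrom of=/var/lib/libvirt/images/media.iso bs=2M
virsh attach-disk rhel6-guest /var/lib/libvirt/images/media.iso hdc --type cdrom --mode readonly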
Chapter 1. Installation methods
Chapter 1. Installation methods You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. See Installation process for more information about installer-provisioned and user-provisioned installation processes. 1.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on Azure Stack Hub infrastructure that is provisioned by the OpenShift Container Platform installation program, by using the following method: Installing a cluster : You can install OpenShift Container Platform on Azure Stack Hub infrastructure that is provisioned by the OpenShift Container Platform installation program. 1.2. Installing a cluster on user-provisioned infrastructure You can install a cluster on Azure Stack Hub infrastructure that you provision, by using the following method: Installing a cluster on Azure Stack Hub using ARM templates : You can install OpenShift Container Platform on Azure Stack Hub by using infrastructure that you provide. You can use the provided Azure Resource Manager (ARM) templates to assist with an installation. 1.3. Additional resources Configuring an Azure Stack Hub account
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_azure_stack_hub/preparing-to-install-on-azure-stack-hub
Chapter 4. In-place Upgrades
Chapter 4. In-place Upgrades An in-place upgrade provides a way of upgrading a system to a new major release of Red Hat Enterprise Linux by replacing the existing operating system. For a list of currently supported upgrade paths, see Supported in-place upgrade paths for Red Hat Enterprise Linux . In-place upgrade from RHEL 6 to RHEL 7 To perform an in-place upgrade from RHEL 6 to RHEL 7, use the Preupgrade Assistant , a utility that checks the system for upgrade issues before running the actual upgrade, and that also provides additional scripts for the Red Hat Upgrade Tool . When you have solved all the problems reported by the Preupgrade Assistant , use the Red Hat Upgrade Tool to upgrade the system. For details regarding procedures and supported scenarios, see the Upgrading from RHEL 6 to RHEL 7 guide. Note that the Preupgrade Assistant and the Red Hat Upgrade Tool are available in the RHEL 6 Extras repository . If you are using CentOS Linux 6 or Oracle Linux 6, you can convert your operating system to RHEL 6 using the convert2rhel utility prior to upgrading to RHEL 7. For instructions, see How to convert from CentOS Linux or Oracle Linux to RHEL . In-place upgrade from RHEL 7 to RHEL 8 To perform an in-place upgrade from RHEL 7 to RHEL 8, use the Leapp utility. For instructions, see the Upgrading from RHEL 7 to RHEL 8 document. Major differences between RHEL 7 and RHEL 8 are listed in Considerations in adopting RHEL 8 . Note that the Leapp utility is available in the RHEL 7 Extras repository . If you are using CentOS Linux 7 or Oracle Linux 7, you can convert your operating system to RHEL 7 using the convert2rhel utility prior to upgrading to RHEL 8. For instructions, see How to convert from CentOS Linux or Oracle Linux to RHEL .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/new_features_general_updates
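As a hedged sketch of what the RHEL 7 to RHEL 8 path mentioned above looks like in practice with Leapp, the commands below outline the typical flow. The repository and package names are assumptions that depend on your subscription setup; the linked upgrade guide remains the authoritative procedure.
# Assumed repository and package names; consult the Upgrading from RHEL 7 to RHEL 8 guide for the exact steps
subscription-manager repos --enable rhel-7-server-extras-rpms
yum install leapp
# Generate the pre-upgrade report (written under /var/log/leapp/) and resolve any reported inhibitors
leapp preupgrade
# Perform the in-place upgrade; the process completes across a reboot
leapp upgrade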
Managing and monitoring security updates
Managing and monitoring security updates Red Hat Enterprise Linux 8 Update RHEL 8 system security to prevent attackers from exploiting known flaws Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_and_monitoring_security_updates/index
Chapter 4. Upgrading Red Hat JBoss Web Server using this Service Pack
Chapter 4. Upgrading Red Hat JBoss Web Server using this Service Pack To install this service pack: Go to the Software Downloads page for Red Hat JBoss Web Server 6.0 . Note You require a Red Hat subscription to access the Software Downloads page. Download the Red Hat JBoss Web Server 6.0 Service Pack 3 archive file that is appropriate to your platform. Extract the archive file to the Red Hat JBoss Web Server installation directory. If you have installed Red Hat JBoss Web Server from RPM packages on Red Hat Enterprise Linux, you can use the following yum command to upgrade to the latest service pack:
[ "yum upgrade" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_6.0_service_pack_3_release_notes/upgrading_red_hat_jboss_web_server_using_this_service_pack
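For the archive-based upgrade described above, the extraction step could look like the following sketch. The archive file name and installation directory are illustrative placeholders, not the actual names from the Software Downloads page.
# Illustrative only: extract the service pack archive over the existing installation directory
unzip jws-6.0.3-application-server.zip -d /opt/jws-6.0/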
Support
Support Red Hat Advanced Cluster Security for Kubernetes 4.6 Getting support for Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift Documentation Team
[ "export ROX_PASSWORD= <rox_password> && export ROX_CENTRAL_ADDRESS= <address>:<port_number> 1", "roxctl -e \"USDROX_CENTRAL_ADDRESS\" -p \"USDROX_PASSWORD\" central debug download-diagnostics", "export ROX_API_TOKEN= <api_token>", "roxctl -e \"USDROX_CENTRAL_ADDRESS\" central debug download-diagnostics" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html-single/support/index
Performance Tuning Guide
Performance Tuning Guide Red Hat Enterprise Linux 7 Monitoring and optimizing subsystem throughput in RHEL 7 Edited by Marek Suchanek Red Hat Customer Content Services [email protected] Milan Navratil Red Hat Customer Content Services Laura Bailey Red Hat Customer Content Services Charlie Boyle Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/index
Chapter 6. Ceph Object Storage Daemon (OSD) configuration
Chapter 6. Ceph Object Storage Daemon (OSD) configuration As a storage administrator, you can configure the Ceph Object Storage Daemon (OSD) to be redundant and optimized based on the intended workload. Prerequisites Installation of the Red Hat Ceph Storage software. 6.1. Ceph OSD configuration All Ceph clusters have a configuration, which defines: Cluster identity Authentication settings Ceph daemon membership in the cluster Network configuration Host names and addresses Paths to keyrings Paths to OSD log files Other runtime options A deployment tool, such as cephadm , will typically create an initial Ceph configuration file for you. However, you can create one yourself if you prefer to bootstrap a cluster without using a deployment tool. For your convenience, each daemon has a series of default values. Many are set by the ceph/src/common/config_opts.h script. You can override these settings with a Ceph configuration file or at runtime by using the monitor tell command or connecting directly to a daemon socket on a Ceph node. Important Red Hat does not recommend changing the default paths, as it makes it more difficult to troubleshoot Ceph later. Additional Resources For more information about cephadm and the Ceph orchestrator, see the Red Hat Ceph Storage Operations Guide . 6.2. Scrubbing the OSD In addition to making multiple copies of objects, Ceph ensures data integrity by scrubbing placement groups. Ceph scrubbing is analogous to the fsck command on the object storage layer. For each placement group, Ceph generates a catalog of all objects and compares each primary object and its replicas to ensure that no objects are missing or mismatched. Light scrubbing (daily) checks the object size and attributes. Deep scrubbing (weekly) reads the data and uses checksums to ensure data integrity. Scrubbing is important for maintaining data integrity, but it can reduce performance. Adjust the following settings to increase or decrease scrubbing operations. Additional resources See Ceph scrubbing options in the appendix of the Red Hat Ceph Storage Configuration Guide for more details. 6.3. Backfilling an OSD When you add Ceph OSDs to a cluster or remove them from the cluster, the CRUSH algorithm rebalances the cluster by moving placement groups to or from Ceph OSDs to restore the balance. The process of migrating placement groups and the objects they contain can reduce the cluster operational performance considerably. To maintain operational performance, Ceph performs this migration with the 'backfill' process, which allows Ceph to set backfill operations to a lower priority than requests to read or write data. 6.4. OSD recovery When the cluster starts or when a Ceph OSD terminates unexpectedly and restarts, the OSD begins peering with other Ceph OSDs before a write operation can occur. If a Ceph OSD crashes and comes back online, usually it will be out of sync with other Ceph OSDs containing more recent versions of objects in the placement groups. When this happens, the Ceph OSD goes into recovery mode and seeks to get the latest copy of the data and bring its map back up to date. Depending upon how long the Ceph OSD was down, the OSD's objects and placement groups may be significantly out of date. Also, if a failure domain went down, for example, a rack, more than one Ceph OSD might come back online at the same time. This can make the recovery process time consuming and resource intensive. 
To maintain operational performance, Ceph performs recovery with limitations on the number of recovery requests, threads, and object chunk sizes, which allows Ceph to perform well in a degraded state. Additional resources See all the Red Hat Ceph Storage Ceph OSD configuration options in OSD object daemon storage configuration options for specific option descriptions and usage.
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/configuration_guide/ceph-object-storage-daemon-configuration
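To make the scrubbing, backfill, and recovery tuning described above more concrete, the following sketch shows how such options can be adjusted at runtime with ceph config set. The values are illustrative examples only, not recommendations; see the referenced appendix for the full option list and defaults before changing anything.
# Illustrative values only -- verify each option in the configuration appendix before use
ceph config set osd osd_max_scrubs 1
ceph config set osd osd_scrub_begin_hour 22
ceph config set osd osd_scrub_end_hour 6
ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 3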
Chapter 2. Installing .NET image streams
Chapter 2. Installing .NET image streams To install .NET image streams, use image stream definitions from s2i-dotnetcore with the OpenShift Client ( oc ) binary. Image streams can be installed from Linux, Mac, and Windows. You can define .NET image streams in the global openshift namespace or locally in a project namespace. Sufficient permissions are required to update the openshift namespace definitions. Procedure Install (or update) the image streams:
[ "oc apply [-n namespace ] -f https://raw.githubusercontent.com/redhat-developer/s2i-dotnetcore/main/dotnet_imagestreams.json" ]
https://docs.redhat.com/en/documentation/net/8.0/html/getting_started_with_.net_on_openshift_container_platform/installing-image-streams-using-oc_getting-started-with-dotnet-on-openshift
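A quick, optional way to confirm the result of the oc apply command above is to list the image streams it defines. The namespace in this sketch assumes the global openshift namespace was used, which is an assumption rather than part of the original procedure.
# Assumes the image streams were installed in the global "openshift" namespace
oc get imagestreams -n openshift | grep dotnet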
Chapter 1. Introduction
Chapter 1. Introduction Hammer is a command-line tool provided with Red Hat Satellite 6. You can use Hammer to configure and manage a Red Hat Satellite Server by using either CLI commands or shell script automation. The following cheat sheet provides a condensed overview of essential Hammer commands. For more information about Hammer, see the Red Hat Hammer CLI Guide .
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/hammer_cheat_sheet/introduction
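As a small taste of the commands the cheat sheet condenses, two representative Hammer invocations are sketched below. The organization name is a placeholder for illustration only.
# Illustrative Hammer invocations; the organization name is a placeholder
hammer organization list
hammer host list --organization "Default Organization"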
Chapter 87. Mock
Chapter 87. Mock Only producer is supported Testing of distributed and asynchronous processing is notoriously difficult. The Mock , Test and Dataset endpoints work great with the Camel Testing Framework to simplify your unit and integration testing using Enterprise Integration Patterns and Camel's large range of Components together with the powerful Bean Integration. The Mock component provides a powerful declarative testing mechanism, which is similar to jMock in that it allows declarative expectations to be created on any Mock endpoint before a test begins. Then the test is run, which typically fires messages to one or more endpoints, and finally the expectations can be asserted in a test case to ensure the system worked as expected. This allows you to test various things like: The correct number of messages are received on each endpoint, The correct payloads are received, in the right order, Messages arrive on an endpoint in order, using some Expression to create an order testing function, Messages arrive match some kind of Predicate such as that specific headers have certain values, or that messages match some predicate, such as by evaluating an XPath or XQuery Expression. Note There is also the Test endpoint which is a Mock endpoint, but which uses a second endpoint to provide the list of expected message bodies and automatically sets up the Mock endpoint assertions. In other words, it's a Mock endpoint that automatically sets up its assertions from some sample messages in a File or database , for example. Note Mock endpoints keep received Exchanges in memory indefinitely. Remember that Mock is designed for testing. When you add Mock endpoints to a route, each Exchange sent to the endpoint will be stored (to allow for later validation) in memory until explicitly reset or the JVM is restarted. If you are sending high volume and/or large messages, this may cause excessive memory use. If your goal is to test deployable routes inline, consider using NotifyBuilder or AdviceWith in your tests instead of adding Mock endpoints to routes directly. There are two new options retainFirst, and retainLast that can be used to limit the number of messages the Mock endpoints keep in memory. 87.1. Dependencies When using mock with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mock-starter</artifactId> </dependency> 87.2. URI format Where someName can be any string that uniquely identifies the endpoint. 87.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 87.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 87.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. 
You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 87.4. Component Options The Mock component supports 4 options, which are listed below. Name Description Default Type lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean log (producer) To turn on logging when the mock receives an incoming message. This will log only one time at INFO level for the incoming message. For more detailed logging then set the logger to DEBUG level for the org.apache.camel.component.mock.MockEndpoint class. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean exchangeFormatter (advanced) Autowired Sets a custom ExchangeFormatter to convert the Exchange to a String suitable for logging. If not specified, we default to DefaultExchangeFormatter. ExchangeFormatter 87.5. Endpoint Options The Mock endpoint is configured using URI syntax: with the following path and query parameters: 87.5.1. Path Parameters (1 parameters) Name Description Default Type name (producer) Required Name of mock endpoint. String 87.5.2. Query Parameters (12 parameters) Name Description Default Type assertPeriod (producer) Sets a grace period after which the mock endpoint will re-assert to ensure the preliminary assertion is still valid. This is used for example to assert that exactly a number of messages arrives. For example if expectedMessageCount(int) was set to 5, then the assertion is satisfied when 5 or more message arrives. To ensure that exactly 5 messages arrives, then you would need to wait a little period to ensure no further message arrives. This is what you can use this method for. By default this period is disabled. long expectedCount (producer) Specifies the expected number of message exchanges that should be received by this endpoint. Beware: If you want to expect that 0 messages, then take extra care, as 0 matches when the tests starts, so you need to set a assert period time to let the test run for a while to make sure there are still no messages arrived; for that use setAssertPeriod(long). An alternative is to use NotifyBuilder, and use the notifier to know when Camel is done routing some messages, before you call the assertIsSatisfied() method on the mocks. This allows you to not use a fixed assert period, to speedup testing times. 
If you want to assert that exactly n'th message arrives to this mock endpoint, then see also the setAssertPeriod(long) method for further details. -1 int failFast (producer) Sets whether assertIsSatisfied() should fail fast at the first detected failed expectation while it may otherwise wait for all expected messages to arrive before performing expectations verifications. Is by default true. Set to false to use behavior as in Camel 2.x. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean log (producer) To turn on logging when the mock receives an incoming message. This will log only one time at INFO level for the incoming message. For more detailed logging then set the logger to DEBUG level for the org.apache.camel.component.mock.MockEndpoint class. false boolean reportGroup (producer) A number that is used to turn on throughput logging based on groups of the size. int resultMinimumWaitTime (producer) Sets the minimum expected amount of time (in millis) the assertIsSatisfied() will wait on a latch until it is satisfied. long resultWaitTime (producer) Sets the maximum amount of time (in millis) the assertIsSatisfied() will wait on a latch until it is satisfied. long retainFirst (producer) Specifies to only retain the first n'th number of received Exchanges. This is used when testing with big data, to reduce memory consumption by not storing copies of every Exchange this mock endpoint receives. Important: When using this limitation, then the getReceivedCounter() will still return the actual number of received Exchanges. For example if we have received 5000 Exchanges, and have configured to only retain the first 10 Exchanges, then the getReceivedCounter() will still return 5000 but there is only the first 10 Exchanges in the getExchanges() and getReceivedExchanges() methods. When using this method, then some of the other expectation methods is not supported, for example the expectedBodiesReceived(Object... ) sets a expectation on the first number of bodies received. You can configure both setRetainFirst(int) and setRetainLast(int) methods, to limit both the first and last received. -1 int retainLast (producer) Specifies to only retain the last n'th number of received Exchanges. This is used when testing with big data, to reduce memory consumption by not storing copies of every Exchange this mock endpoint receives. Important: When using this limitation, then the getReceivedCounter() will still return the actual number of received Exchanges. For example if we have received 5000 Exchanges, and have configured to only retain the last 20 Exchanges, then the getReceivedCounter() will still return 5000 but there is only the last 20 Exchanges in the getExchanges() and getReceivedExchanges() methods. When using this method, then some of the other expectation methods is not supported, for example the expectedBodiesReceived(Object... ) sets a expectation on the first number of bodies received. 
You can configure both setRetainFirst(int) and setRetainLast(int) methods, to limit both the first and last received. -1 int sleepForEmptyTest (producer) Allows a sleep to be specified to wait to check that this endpoint really is empty when expectedMessageCount(int) is called with zero. long copyOnExchange (producer (advanced)) Sets whether to make a deep copy of the incoming Exchange when received at this mock endpoint. Is by default true. true boolean 87.6. Simple Example Here's a simple example of Mock endpoint in use. First, the endpoint is resolved on the context. Then we set an expectation, and then, after the test has run, we assert that our expectations have been met: MockEndpoint resultEndpoint = context.getEndpoint("mock:foo", MockEndpoint.class); // set expectations resultEndpoint.expectedMessageCount(2); // send some messages // now lets assert that the mock:foo endpoint received 2 messages resultEndpoint.assertIsSatisfied(); You typically always call the method to test that the expectations were met after running a test. Camel will by default wait 10 seconds when the assertIsSatisfied() is invoked. This can be configured by setting the setResultWaitTime(millis) method. 87.7. Using assertPeriod When the assertion is satisfied then Camel will stop waiting and continue from the assertIsSatisfied method. That means if a new message arrives on the mock endpoint, just a bit later, that arrival will not affect the outcome of the assertion. Suppose you do want to test that no new messages arrives after a period thereafter, then you can do that by setting the setAssertPeriod method, for example: MockEndpoint resultEndpoint = context.getEndpoint("mock:foo", MockEndpoint.class); resultEndpoint.setAssertPeriod(5000); resultEndpoint.expectedMessageCount(2); // send some messages // now lets assert that the mock:foo endpoint received 2 messages resultEndpoint.assertIsSatisfied(); 87.8. Setting expectations You can see from the Javadoc of MockEndpoint the various helper methods you can use to set expectations. The main methods are as follows: Method Description expectedMessageCount(int) To define the expected message count on the endpoint. expectedMinimumMessageCount(int) To define the minimum number of expected messages on the endpoint. expectedBodiesReceived(... ) To define the expected bodies that should be received (in order). expectedHeaderReceived(... ) To define the expected header that should be received expectsAscending(Expression) To add an expectation that messages are received in order, using the given Expression to compare messages. expectsDescending(Expression) To add an expectation that messages are received in order, using the given Expression to compare messages. expectsNoDuplicates(Expression) To add an expectation that no duplicate messages are received; using an Expression to calculate a unique identifier for each message. This could be something like the JMSMessageID if using JMS, or some unique reference number within the message. Here's another example: resultEndpoint.expectedBodiesReceived("firstMessageBody", "secondMessageBody", "thirdMessageBody"); 87.9. Adding expectations to specific messages In addition, you can use the message(int messageIndex) method to add assertions about a specific message that is received. 
For example, to add expectations of the headers or body of the first message (using zero-based indexing like java.util.List ), you can use the following code: resultEndpoint.message(0).header("foo").isEqualTo("bar"); There are some examples of the Mock endpoint in use in the camel-core processor tests . 87.10. Mocking existing endpoints Camel now allows you to automatically mock existing endpoints in your Camel routes. Note How it works The endpoints are still in action. What happens differently is that a Mock endpoint is injected and receives the message first and then delegates the message to the target endpoint. You can view this as a kind of intercept and delegate or endpoint listener. Suppose you have the given route below: Route @Override protected RouteBuilder createRouteBuilder() throws Exception { return new RouteBuilder() { @Override public void configure() throws Exception { from("direct:start").routeId("start") .to("direct:foo").to("log:foo").to("mock:result"); from("direct:foo").routeId("foo") .transform(constant("Bye World")); } }; } You can then use the adviceWith feature in Camel to mock all the endpoints in a given route from your unit test, as shown below: adviceWith mocking all endpoints @Test public void testAdvisedMockEndpoints() throws Exception { // advice the start route using the inlined AdviceWith lambda style route builder // which has extended capabilities than the regular route builder AdviceWith.adviceWith(context, "start", a -> // mock all endpoints a.mockEndpoints()); getMockEndpoint("mock:direct:start").expectedBodiesReceived("Hello World"); getMockEndpoint("mock:direct:foo").expectedBodiesReceived("Hello World"); getMockEndpoint("mock:log:foo").expectedBodiesReceived("Bye World"); getMockEndpoint("mock:result").expectedBodiesReceived("Bye World"); template.sendBody("direct:start", "Hello World"); assertMockEndpointsSatisfied(); // additional test to ensure correct endpoints in registry assertNotNull(context.hasEndpoint("direct:start")); assertNotNull(context.hasEndpoint("direct:foo")); assertNotNull(context.hasEndpoint("log:foo")); assertNotNull(context.hasEndpoint("mock:result")); // all the endpoints was mocked assertNotNull(context.hasEndpoint("mock:direct:start")); assertNotNull(context.hasEndpoint("mock:direct:foo")); assertNotNull(context.hasEndpoint("mock:log:foo")); } Notice that the mock endpoints is given the URI mock:<endpoint> , for example mock:direct:foo . Camel logs at INFO level the endpoints being mocked: Note Mocked endpoints are without parameters Endpoints which are mocked will have their parameters stripped off. For example the endpoint log:foo?showAll=true will be mocked to the following endpoint mock:log:foo . Notice the parameters have been removed. Its also possible to only mock certain endpoints using a pattern. 
For example to mock all log endpoints you do as shown: adviceWith mocking only log endpoints using a pattern @Test public void testAdvisedMockEndpointsWithPattern() throws Exception { // advice the start route using the inlined AdviceWith lambda style route builder // which has extended capabilities than the regular route builder AdviceWith.adviceWith(context, "start", a -> // mock only log endpoints a.mockEndpoints("log*")); // now we can refer to log:foo as a mock and set our expectations getMockEndpoint("mock:log:foo").expectedBodiesReceived("Bye World"); getMockEndpoint("mock:result").expectedBodiesReceived("Bye World"); template.sendBody("direct:start", "Hello World"); assertMockEndpointsSatisfied(); // additional test to ensure correct endpoints in registry assertNotNull(context.hasEndpoint("direct:start")); assertNotNull(context.hasEndpoint("direct:foo")); assertNotNull(context.hasEndpoint("log:foo")); assertNotNull(context.hasEndpoint("mock:result")); // only the log:foo endpoint was mocked assertNotNull(context.hasEndpoint("mock:log:foo")); assertNull(context.hasEndpoint("mock:direct:start")); assertNull(context.hasEndpoint("mock:direct:foo")); } The pattern supported can be a wildcard or a regular expression. See more details about this at Intercept as its the same matching function used by Camel. Note Mind that mocking endpoints causes the messages to be copied when they arrive on the mock. That means Camel will use more memory. This may not be suitable when you send in a lot of messages. 87.11. Mocking existing endpoints using the camel-test component Instead of using the adviceWith to instruct Camel to mock endpoints, you can easily enable this behavior when using the camel-test Test Kit. The same route can be tested as follows. Notice that we return "*" from the isMockEndpoints method, which tells Camel to mock all endpoints. If you only want to mock all log endpoints you can return "log*" instead. isMockEndpoints using camel-test kit public class IsMockEndpointsJUnit4Test extends CamelTestSupport { @Override public String isMockEndpoints() { // override this method and return the pattern for which endpoints to mock. // use * to indicate all return "*"; } @Test public void testMockAllEndpoints() throws Exception { // notice we have automatic mocked all endpoints and the name of the endpoints is "mock:uri" getMockEndpoint("mock:direct:start").expectedBodiesReceived("Hello World"); getMockEndpoint("mock:direct:foo").expectedBodiesReceived("Hello World"); getMockEndpoint("mock:log:foo").expectedBodiesReceived("Bye World"); getMockEndpoint("mock:result").expectedBodiesReceived("Bye World"); template.sendBody("direct:start", "Hello World"); assertMockEndpointsSatisfied(); // additional test to ensure correct endpoints in registry assertNotNull(context.hasEndpoint("direct:start")); assertNotNull(context.hasEndpoint("direct:foo")); assertNotNull(context.hasEndpoint("log:foo")); assertNotNull(context.hasEndpoint("mock:result")); // all the endpoints was mocked assertNotNull(context.hasEndpoint("mock:direct:start")); assertNotNull(context.hasEndpoint("mock:direct:foo")); assertNotNull(context.hasEndpoint("mock:log:foo")); } @Override protected RouteBuilder createRouteBuilder() throws Exception { return new RouteBuilder() { @Override public void configure() throws Exception { from("direct:start").to("direct:foo").to("log:foo").to("mock:result"); from("direct:foo").transform(constant("Bye World")); } }; } } 87.12. 
Mocking existing endpoints with XML DSL If you do not use the camel-test component for unit testing (as shown above) you can use a different approach when using XML files for routes. The solution is to create a new XML file used by the unit test and then include the intended XML file which has the route you want to test. Suppose we have the route in the camel-route.xml file: camel-route.xml <!-- this camel route is in the camel-route.xml file --> <camelContext xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:start"/> <to uri="direct:foo"/> <to uri="log:foo"/> <to uri="mock:result"/> </route> <route> <from uri="direct:foo"/> <transform> <constant>Bye World</constant> </transform> </route> </camelContext> Then we create a new XML file as follows, where we include the camel-route.xml file and define a spring bean with the class org.apache.camel.impl.InterceptSendToMockEndpointStrategy which tells Camel to mock all endpoints: test-camel-route.xml <!-- the Camel route is defined in another XML file --> <import resource="camel-route.xml"/> <!-- bean which enables mocking all endpoints --> <bean id="mockAllEndpoints" class="org.apache.camel.component.mock.InterceptSendToMockEndpointStrategy"/> Then in your unit test you load the new XML file ( test-camel-route.xml ) instead of camel-route.xml . To only mock all Log endpoints you can define the pattern in the constructor for the bean: <bean id="mockAllEndpoints" class="org.apache.camel.impl.InterceptSendToMockEndpointStrategy"> <constructor-arg index="0" value="log*"/> </bean> 87.13. Mocking endpoints and skip sending to original endpoint Sometimes you want to easily mock and skip sending to a certain endpoints. So the message is detoured and send to the mock endpoint only. You can now use the mockEndpointsAndSkip method using AdviceWith. The example below will skip sending to the two endpoints "direct:foo" , and "direct:bar" . adviceWith mock and skip sending to endpoints @Test public void testAdvisedMockEndpointsWithSkip() throws Exception { // advice the first route using the inlined AdviceWith route builder // which has extended capabilities than the regular route builder AdviceWith.adviceWith(context.getRouteDefinitions().get(0), context, new AdviceWithRouteBuilder() { @Override public void configure() throws Exception { // mock sending to direct:foo and direct:bar and skip send to it mockEndpointsAndSkip("direct:foo", "direct:bar"); } }); getMockEndpoint("mock:result").expectedBodiesReceived("Hello World"); getMockEndpoint("mock:direct:foo").expectedMessageCount(1); getMockEndpoint("mock:direct:bar").expectedMessageCount(1); template.sendBody("direct:start", "Hello World"); assertMockEndpointsSatisfied(); // the message was not send to the direct:foo route and thus not sent to // the seda endpoint SedaEndpoint seda = context.getEndpoint("seda:foo", SedaEndpoint.class); assertEquals(0, seda.getCurrentQueueSize()); } The same example using the Test Kit isMockEndpointsAndSkip using camel-test kit public class IsMockEndpointsAndSkipJUnit4Test extends CamelTestSupport { @Override public String isMockEndpointsAndSkip() { // override this method and return the pattern for which endpoints to mock, // and skip sending to the original endpoint. 
return "direct:foo"; } @Test public void testMockEndpointAndSkip() throws Exception { // notice we have automatic mocked the direct:foo endpoints and the name of the endpoints is "mock:uri" getMockEndpoint("mock:result").expectedBodiesReceived("Hello World"); getMockEndpoint("mock:direct:foo").expectedMessageCount(1); template.sendBody("direct:start", "Hello World"); assertMockEndpointsSatisfied(); // the message was not send to the direct:foo route and thus not sent to the seda endpoint SedaEndpoint seda = context.getEndpoint("seda:foo", SedaEndpoint.class); assertEquals(0, seda.getCurrentQueueSize()); } @Override protected RouteBuilder createRouteBuilder() throws Exception { return new RouteBuilder() { @Override public void configure() throws Exception { from("direct:start").to("direct:foo").to("mock:result"); from("direct:foo").transform(constant("Bye World")).to("seda:foo"); } }; } } 87.14. Limiting the number of messages to keep The Mock endpoints will by default keep a copy of every Exchange that it received. So if you test with a lot of messages, then it will consume memory. We have introduced two options retainFirst and retainLast that can be used to keep only the first and/or last N Exchanges. For example in the code below, we only want to retain a copy of the first 5 and last 5 Exchanges the mock receives. MockEndpoint mock = getMockEndpoint("mock:data"); mock.setRetainFirst(5); mock.setRetainLast(5); mock.expectedMessageCount(2000); mock.assertIsSatisfied(); Using this has some limitations. The getExchanges() and getReceivedExchanges() methods on the MockEndpoint will return only the retained copies of the Exchanges. So in the example above, the list will contain 10 Exchanges; the first five, and the last five. The retainFirst and retainLast options also have limitations on which expectation methods you can use. For example the expectedXXX methods that work on message bodies, headers, etc. will only operate on the retained messages. In the example above they can test only the expectations on the 10 retained messages. 87.15. Testing with arrival times The Mock endpoint stores the arrival time of the message as a property on the Exchange: Date time = exchange.getProperty(Exchange.RECEIVED_TIMESTAMP, Date.class); You can use this information to know when the message arrived on the mock. But it also provides the foundation to know the time interval between the previous and the next message arriving on the mock. You can use this to set expectations using the arrives DSL on the Mock endpoint. For example, to say that the first message should arrive between 0-2 seconds before the next, you can do: mock.message(0).arrives().noLaterThan(2).seconds().beforeNext(); You can also define this so that the 2nd message (0 index based) should arrive no later than 0-2 seconds after the previous: mock.message(1).arrives().noLaterThan(2).seconds().afterPrevious(); You can also use between to set a lower bound. For example suppose that it should be between 1-4 seconds: mock.message(1).arrives().between(1, 4).seconds().afterPrevious(); You can also set the expectation on all messages, for example to say that the gap between them should be at most 1 second: mock.allMessages().arrives().noLaterThan(1).seconds().beforeNext(); Note Time units In the example above we use seconds as the time unit, but Camel offers milliseconds and minutes as well. 87.16. Spring Boot Auto-Configuration The component supports 5 options, which are listed below.
Name Description Default Type camel.component.mock.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.mock.enabled Whether to enable auto configuration of the mock component. This is enabled by default. Boolean camel.component.mock.exchange-formatter Sets a custom ExchangeFormatter to convert the Exchange to a String suitable for logging. If not specified, we default to DefaultExchangeFormatter. The option is a org.apache.camel.spi.ExchangeFormatter type. ExchangeFormatter camel.component.mock.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.mock.log To turn on logging when the mock receives an incoming message. This will log only one time at INFO level for the incoming message. For more detailed logging then set the logger to DEBUG level for the org.apache.camel.component.mock.MockEndpoint class. false Boolean
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mock-starter</artifactId> </dependency>", "mock:someName[?options]", "mock:name", "MockEndpoint resultEndpoint = context.getEndpoint(\"mock:foo\", MockEndpoint.class); // set expectations resultEndpoint.expectedMessageCount(2); // send some messages // now lets assert that the mock:foo endpoint received 2 messages resultEndpoint.assertIsSatisfied();", "MockEndpoint resultEndpoint = context.getEndpoint(\"mock:foo\", MockEndpoint.class); resultEndpoint.setAssertPeriod(5000); resultEndpoint.expectedMessageCount(2); // send some messages // now lets assert that the mock:foo endpoint received 2 messages resultEndpoint.assertIsSatisfied();", "resultEndpoint.expectedBodiesReceived(\"firstMessageBody\", \"secondMessageBody\", \"thirdMessageBody\");", "resultEndpoint.message(0).header(\"foo\").isEqualTo(\"bar\");", "@Override protected RouteBuilder createRouteBuilder() throws Exception { return new RouteBuilder() { @Override public void configure() throws Exception { from(\"direct:start\").routeId(\"start\") .to(\"direct:foo\").to(\"log:foo\").to(\"mock:result\"); from(\"direct:foo\").routeId(\"foo\") .transform(constant(\"Bye World\")); } }; }", "@Test public void testAdvisedMockEndpoints() throws Exception { // advice the start route using the inlined AdviceWith lambda style route builder // which has extended capabilities than the regular route builder AdviceWith.adviceWith(context, \"start\", a -> // mock all endpoints a.mockEndpoints()); getMockEndpoint(\"mock:direct:start\").expectedBodiesReceived(\"Hello World\"); getMockEndpoint(\"mock:direct:foo\").expectedBodiesReceived(\"Hello World\"); getMockEndpoint(\"mock:log:foo\").expectedBodiesReceived(\"Bye World\"); getMockEndpoint(\"mock:result\").expectedBodiesReceived(\"Bye World\"); template.sendBody(\"direct:start\", \"Hello World\"); assertMockEndpointsSatisfied(); // additional test to ensure correct endpoints in registry assertNotNull(context.hasEndpoint(\"direct:start\")); assertNotNull(context.hasEndpoint(\"direct:foo\")); assertNotNull(context.hasEndpoint(\"log:foo\")); assertNotNull(context.hasEndpoint(\"mock:result\")); // all the endpoints was mocked assertNotNull(context.hasEndpoint(\"mock:direct:start\")); assertNotNull(context.hasEndpoint(\"mock:direct:foo\")); assertNotNull(context.hasEndpoint(\"mock:log:foo\")); }", "INFO Adviced endpoint [direct://foo] with mock endpoint [mock:direct:foo]", "@Test public void testAdvisedMockEndpointsWithPattern() throws Exception { // advice the start route using the inlined AdviceWith lambda style route builder // which has extended capabilities than the regular route builder AdviceWith.adviceWith(context, \"start\", a -> // mock only log endpoints a.mockEndpoints(\"log*\")); // now we can refer to log:foo as a mock and set our expectations getMockEndpoint(\"mock:log:foo\").expectedBodiesReceived(\"Bye World\"); getMockEndpoint(\"mock:result\").expectedBodiesReceived(\"Bye World\"); template.sendBody(\"direct:start\", \"Hello World\"); assertMockEndpointsSatisfied(); // additional test to ensure correct endpoints in registry assertNotNull(context.hasEndpoint(\"direct:start\")); assertNotNull(context.hasEndpoint(\"direct:foo\")); assertNotNull(context.hasEndpoint(\"log:foo\")); assertNotNull(context.hasEndpoint(\"mock:result\")); // only the log:foo endpoint was mocked assertNotNull(context.hasEndpoint(\"mock:log:foo\")); assertNull(context.hasEndpoint(\"mock:direct:start\")); 
assertNull(context.hasEndpoint(\"mock:direct:foo\")); }", "public class IsMockEndpointsJUnit4Test extends CamelTestSupport { @Override public String isMockEndpoints() { // override this method and return the pattern for which endpoints to mock. // use * to indicate all return \"*\"; } @Test public void testMockAllEndpoints() throws Exception { // notice we have automatic mocked all endpoints and the name of the endpoints is \"mock:uri\" getMockEndpoint(\"mock:direct:start\").expectedBodiesReceived(\"Hello World\"); getMockEndpoint(\"mock:direct:foo\").expectedBodiesReceived(\"Hello World\"); getMockEndpoint(\"mock:log:foo\").expectedBodiesReceived(\"Bye World\"); getMockEndpoint(\"mock:result\").expectedBodiesReceived(\"Bye World\"); template.sendBody(\"direct:start\", \"Hello World\"); assertMockEndpointsSatisfied(); // additional test to ensure correct endpoints in registry assertNotNull(context.hasEndpoint(\"direct:start\")); assertNotNull(context.hasEndpoint(\"direct:foo\")); assertNotNull(context.hasEndpoint(\"log:foo\")); assertNotNull(context.hasEndpoint(\"mock:result\")); // all the endpoints was mocked assertNotNull(context.hasEndpoint(\"mock:direct:start\")); assertNotNull(context.hasEndpoint(\"mock:direct:foo\")); assertNotNull(context.hasEndpoint(\"mock:log:foo\")); } @Override protected RouteBuilder createRouteBuilder() throws Exception { return new RouteBuilder() { @Override public void configure() throws Exception { from(\"direct:start\").to(\"direct:foo\").to(\"log:foo\").to(\"mock:result\"); from(\"direct:foo\").transform(constant(\"Bye World\")); } }; } }", "<!-- this camel route is in the camel-route.xml file --> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <to uri=\"direct:foo\"/> <to uri=\"log:foo\"/> <to uri=\"mock:result\"/> </route> <route> <from uri=\"direct:foo\"/> <transform> <constant>Bye World</constant> </transform> </route> </camelContext>", "<!-- the Camel route is defined in another XML file --> <import resource=\"camel-route.xml\"/> <!-- bean which enables mocking all endpoints --> <bean id=\"mockAllEndpoints\" class=\"org.apache.camel.component.mock.InterceptSendToMockEndpointStrategy\"/>", "<bean id=\"mockAllEndpoints\" class=\"org.apache.camel.impl.InterceptSendToMockEndpointStrategy\"> <constructor-arg index=\"0\" value=\"log*\"/> </bean>", "@Test public void testAdvisedMockEndpointsWithSkip() throws Exception { // advice the first route using the inlined AdviceWith route builder // which has extended capabilities than the regular route builder AdviceWith.adviceWith(context.getRouteDefinitions().get(0), context, new AdviceWithRouteBuilder() { @Override public void configure() throws Exception { // mock sending to direct:foo and direct:bar and skip send to it mockEndpointsAndSkip(\"direct:foo\", \"direct:bar\"); } }); getMockEndpoint(\"mock:result\").expectedBodiesReceived(\"Hello World\"); getMockEndpoint(\"mock:direct:foo\").expectedMessageCount(1); getMockEndpoint(\"mock:direct:bar\").expectedMessageCount(1); template.sendBody(\"direct:start\", \"Hello World\"); assertMockEndpointsSatisfied(); // the message was not send to the direct:foo route and thus not sent to // the seda endpoint SedaEndpoint seda = context.getEndpoint(\"seda:foo\", SedaEndpoint.class); assertEquals(0, seda.getCurrentQueueSize()); }", "public class IsMockEndpointsAndSkipJUnit4Test extends CamelTestSupport { @Override public String isMockEndpointsAndSkip() { // override this method and return the pattern for which endpoints 
to mock, // and skip sending to the original endpoint. return \"direct:foo\"; } @Test public void testMockEndpointAndSkip() throws Exception { // notice we have automatic mocked the direct:foo endpoints and the name of the endpoints is \"mock:uri\" getMockEndpoint(\"mock:result\").expectedBodiesReceived(\"Hello World\"); getMockEndpoint(\"mock:direct:foo\").expectedMessageCount(1); template.sendBody(\"direct:start\", \"Hello World\"); assertMockEndpointsSatisfied(); // the message was not send to the direct:foo route and thus not sent to the seda endpoint SedaEndpoint seda = context.getEndpoint(\"seda:foo\", SedaEndpoint.class); assertEquals(0, seda.getCurrentQueueSize()); } @Override protected RouteBuilder createRouteBuilder() throws Exception { return new RouteBuilder() { @Override public void configure() throws Exception { from(\"direct:start\").to(\"direct:foo\").to(\"mock:result\"); from(\"direct:foo\").transform(constant(\"Bye World\")).to(\"seda:foo\"); } }; } }", "MockEndpoint mock = getMockEndpoint(\"mock:data\"); mock.setRetainFirst(5); mock.setRetainLast(5); mock.expectedMessageCount(2000); mock.assertIsSatisfied();", "Date time = exchange.getProperty(Exchange.RECEIVED_TIMESTAMP, Date.class);", "mock.message(0).arrives().noLaterThan(2).seconds().beforeNext();", "mock.message(1).arrives().noLaterThan(2).seconds().afterPrevious();", "mock.message(1).arrives().between(1, 4).seconds().afterPrevious();", "mock.allMessages().arrives().noLaterThan(1).seconds().beforeNext();" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-mock-component-starter
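A note on the NotifyBuilder alternative mentioned in the expectedCount description above: the chapter references it but does not show it. The following is a minimal sketch, not part of the original chapter, assuming a camel-test-junit5 style test class (CamelTestSupport provides context, template and getMockEndpoint) and an illustrative direct:start/mock:result route; adjust the URIs and assertions to your own routes.

import java.util.concurrent.TimeUnit;
import org.apache.camel.builder.NotifyBuilder;
import org.apache.camel.component.mock.MockEndpoint;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

@Test
public void testWithNotifyBuilder() throws Exception {
    // signal when Camel has finished routing two exchanges,
    // instead of waiting on a fixed assert period
    NotifyBuilder notify = new NotifyBuilder(context).whenDone(2).create();

    MockEndpoint mock = getMockEndpoint("mock:result");
    mock.expectedMessageCount(2);

    template.sendBody("direct:start", "first");
    template.sendBody("direct:start", "second");

    // returns true as soon as routing completes, or false after the timeout
    boolean done = notify.matches(10, TimeUnit.SECONDS);
    assertTrue(done);
    mock.assertIsSatisfied();
}

Because the notifier reports when routing is done, the assertion can run immediately afterwards, which is usually faster than padding tests with fixed assert periods.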
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/release_notes/making-open-source-more-inclusive
Chapter 23. Configuring screen rotation
Chapter 23. Configuring screen rotation 23.1. Configuring screen rotation for a single user This procedure sets screen rotation for the current user. Procedure Go to the system menu , which is accessible from the top-right screen corner, and click the Settings icon. In the Settings Devices section, choose Displays . Configure the rotation using the Orientation field. Confirm your choice by clicking Apply . If you are satisfied with the new setup preview, click on Keep changes . The setting persists to your next login. Additional resources For information about rotating the screen for all users on a system, see Configuring screen rotation for all users . 23.2. Configuring screen rotation for all users This procedure sets a default screen rotation for all users on a system and is suitable for mass deployment of homogenized display configuration. Procedure Prepare the preferred setup for a single user as in Configuring the screen rotation for a single user . Copy the transform section of the ~/.config/monitors.xml configuration file, which configures the screen rotation. An example portrait orientation: <?xml version="1.0" encoding="UTF-8"?> <transform> <rotation>left</rotation> <flipped>no</flipped> </transform> Paste the content in the /etc/xdg/monitors.xml file that stores system-wide configuration. Save the changes. The new setup takes effect for all users the next time they log in to the system. Additional resources Configuring screen rotation for a single user 23.3. Configuring screen rotation for multiple monitors In a multi-monitor setup, you can configure individual monitors with different screen rotation so that you can adjust monitor layout to your display needs. Procedure In the Settings application, go to Displays . Identify the monitor that you want to rotate from the visual representation of your connected monitors. Select the monitor whose orientation you want to configure. Select orientation: Landscape: Default orientation. Portrait Right: Rotates the screen by 90 degrees to the right. Portrait Left: Rotates the screen by 90 degrees to the left. Landscape (flipped): Rotates the screen by 180 degrees upside down. Click Apply to display a preview. If you are satisfied with the preview, click Keep Changes . Alternatively, go back to the original orientation by clicking Revert Changes .
[ "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <transform> <rotation>left</rotation> <flipped>no</flipped> </transform>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_the_desktop_environment_in_rhel_8/configuring-screen-rotation_using-the-desktop-environment-in-rhel-8
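A possible shortcut for the copy step in section 23.2, not taken from the chapter above: because the Settings dialog already writes a complete per-user layout, one pragmatic way to seed the system-wide default is to start from that file and then trim it down to the documented transform section if preferred. The commands below are a sketch and assume root privileges.

# Review the per-user configuration produced by the Settings dialog
cat ~/.config/monitors.xml

# Use it as the starting point for the system-wide default
sudo install -m 0644 ~/.config/monitors.xml /etc/xdg/monitors.xml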
20.16.9.5. Generic Ethernet connection
20.16.9.5. Generic Ethernet connection Provides a means for the administrator to execute an arbitrary script to connect the guest virtual machine's network to the LAN. The guest virtual machine will have a tun device created with a name of vnetN , which can also be overridden with the target element. After creating the tun device a shell script will be run which is expected to do whatever host physical machine network integration is required. By default this script is called /etc/qemu-ifup but can be overridden (refer to Section 20.16.9.11, "Overriding the target element" ). The generic Ethernet connection parameters are defined in the following part of the domain XML: ... <devices> <interface type='ethernet'/> ... <interface type='ethernet'> <target dev='vnet7'/> <script path='/etc/qemu-ifup-mynet'/> </interface> </devices> ... Figure 20.40. Devices - network interfaces- generic Ethernet connection
[ "<devices> <interface type='ethernet'/> <interface type='ethernet'> <target dev='vnet7'/> <script path='/etc/qemu-ifup-mynet'/> </interface> </devices>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sub-section-libvirt-dom-xml-devices-network-interfaces-generic-ethernet-connection
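For illustration only (not part of the original section): a typical /etc/qemu-ifup-mynet script attaches the newly created tap device, which QEMU passes as the first argument, to an existing host bridge. The bridge name br0 is an assumption; substitute the bridge configured on the host.

#!/bin/bash
# Hypothetical /etc/qemu-ifup-mynet for the XML above.
# QEMU passes the tap device name (for example vnet7) as $1.
BRIDGE=br0

/sbin/ip link set "$1" up
/usr/sbin/brctl addif "$BRIDGE" "$1"

Remember to make the script executable (chmod +x) so that QEMU can run it when the guest interface is created.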
8.228. subscription-manager
8.228. subscription-manager 8.228.1. RHBA-2014:1384 - subscription-manager bug fix and enhancement update Updated subscription-manager packages that fix numerous bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The subscription-manager packages provide programs and libraries to allow users to manage subscriptions and yum repositories from the Red Hat entitlement platform. Bug Fixes BZ# 1096734 When python-rhsm made a request to the entitlement server for data in a JSON format, but got the response in a different format, such as HTML, the following attribute error was presented to the user: With this python-rhsm fix applied, the error message presented to the user is more accurate and informative concerning the problem: BZ# 1128658 Previously, the Subscription Manager tool made calls to Red Hat Network even when the system was not registered. As a consequence, some yum commands contacted Red Hat Network unexpectedly. With this update, Subscription Manager only makes contact if the server is registered for managing subscriptions, and Red Hat Network is contacted only when needed. BZ# 1118755 This update fixes typographical errors in the subscription-manager(8) manual page. BZ# 1062353 Previously, the rhn-migrate-classic-to-rhsm tool provided a confusing prompt, and it was unclear if the "Red Hat account:" prompt required the user the account number or the login. With this update, the user is prompted for Red Hat login. BZ# 1070388 Prior to this update, the Subscription Manager tool did not accept valid passwords that contained special characters for accounts on the Customer Portal. As a consequence, registration of some accounts failed. With this update, all valid passwords are accepted, and registration is no longer blocked for passwords with special characters. BZ# 1129480 Previously, the Subscription Manager tool inspected the environments URL when an activation key was provided. Consequently, Subscription Manager failed to provide authentication to environments. With this update, environments are not inspected when an activation key is given, and the activation key sequence is properly executed. BZ# 1107810 Previously, the help message for the "subscription-manager identity --force" command contained ambiguous information. With this update, the help message provides accurate information on how the "--force" option should be used. BZ# 1131213 Prior to this update, the "--serveurl" option was ignored. Consequently, a URL could not be migrated from a server without being specified in the rhsm.conf file. A patch has been applied to recognize the "--serveurl" option during migrations, and the user can now specify a server on the command line as expected. BZ# 1112326 This update fixes a typographical error in a path included in the rhsm.log file, which caused an error when loading facts from a file. BZ# 1126724 Previously, the help text contained hard-coded mention of port 443. As a consequence, help incorrectly displayed 443 port regardless of what was configured as the port. This update removes the 443 value, and appropriate values for both the host name and port are now displayed. BZ# 1135621 Previously, the GUI of the Subscription Manager tool showed both the default product certificates and the installed product certificates. As a consequence, duplicate certificates were displayed. The underlying source code has been modified to prefer the installed certificates over the default product certificates. 
As a result, duplicates are no longer shown in the GUI of Subscription Manager. BZ# 1122772 Previously, running the "yum repolist" command did not inform the user in the output if the system was not yet registered. With this update, the user is properly notified if the system is not registered after running "yum repolist". Enhancements BZ# 1035115 This update adds support for updating the installed product ID certificates to later versions. BZ# 1132071 With this update, the rhsm-debug tool collects more directories. A new directory that contains the default product certificates has been added, and rhsm-debug now collects the /etc/pki/product-default/ directory to help support personnel identify subscription problems. BZ# 1031755 With this update, subscription-manager and subscription-manager-plugin honor the http_proxy and https_proxy environment variables. BZ# 1115499 With this update, the user can enable one repository and disable another on the same command line, which reduces the number of necessary steps to perform the commands and makes disabling and enabling repositories more convenient. Users of subscription-manager are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
[ "AttributeError: 'exceptions.ValueError' object has no attribute 'msg' error.", "Network error. Please check the connection details, or see /var/log/rhsm/rhsm.log for more information." ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/subscription-manager
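As an illustration of the BZ#1115499 enhancement described above (a sketch; the repository IDs below are placeholders):

# Enable one repository and disable another in a single invocation
subscription-manager repos --enable=rhel-6-server-optional-rpms --disable=rhel-6-server-supplementary-rpms

# Confirm the resulting repository state
subscription-manager repos --list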
Chapter 83. ExternalConfigurationEnvVarSource schema reference
Chapter 83. ExternalConfigurationEnvVarSource schema reference Used in: ExternalConfigurationEnv Property Description configMapKeyRef Reference to a key in a ConfigMap. For more information, see the external documentation for core/v1 configmapkeyselector . ConfigMapKeySelector secretKeyRef Reference to a key in a Secret. For more information, see the external documentation for core/v1 secretkeyselector . SecretKeySelector
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-ExternalConfigurationEnvVarSource-reference
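A hedged example of where this type appears in practice: each ExternalConfigurationEnv entry uses an ExternalConfigurationEnvVarSource as its valueFrom, typically inside the externalConfiguration block of a KafkaConnect resource. The resource names, keys, and environment variable names below are placeholders.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ...other KafkaConnect configuration...
  externalConfiguration:
    env:
      - name: MY_CONFIG_VALUE
        valueFrom:
          configMapKeyRef:      # reference to a key in a ConfigMap
            name: my-config-map
            key: my.key
      - name: MY_SECRET_VALUE
        valueFrom:
          secretKeyRef:         # reference to a key in a Secret
            name: my-secret
            key: my.password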
Chapter 13. PodMetrics [metrics.k8s.io/v1beta1]
Chapter 13. PodMetrics [metrics.k8s.io/v1beta1] Description PodMetrics sets resource usage metrics of a pod. Type object Required timestamp window containers 13.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources containers array Metrics for all containers are collected within the same time window. containers[] object ContainerMetrics sets resource usage metrics of a container. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata timestamp Time The following fields define the time interval from which metrics were collected: [Timestamp-Window, Timestamp]. window Duration 13.1.1. .containers Description Metrics for all containers are collected within the same time window. Type array 13.1.2. .containers[] Description ContainerMetrics sets resource usage metrics of a container. Type object Required name usage Property Type Description name string Container name corresponding to the one from pod.spec.containers. usage object (Quantity) The memory usage is the memory working set. 13.2. API endpoints The following API endpoints are available: /apis/metrics.k8s.io/v1beta1/pods GET : list objects of kind PodMetrics /apis/metrics.k8s.io/v1beta1/namespaces/{namespace}/pods GET : list objects of kind PodMetrics /apis/metrics.k8s.io/v1beta1/namespaces/{namespace}/pods/{name} GET : read the specified PodMetrics 13.2.1. /apis/metrics.k8s.io/v1beta1/pods HTTP method GET Description list objects of kind PodMetrics Table 13.1. HTTP responses HTTP code Response body 200 - OK PodMetricsList schema 13.2.2. /apis/metrics.k8s.io/v1beta1/namespaces/{namespace}/pods HTTP method GET Description list objects of kind PodMetrics Table 13.2. HTTP responses HTTP code Response body 200 - OK PodMetricsList schema 13.2.3. /apis/metrics.k8s.io/v1beta1/namespaces/{namespace}/pods/{name} Table 13.3. Global path parameters Parameter Type Description name string name of the PodMetrics HTTP method GET Description read the specified PodMetrics Table 13.4. HTTP responses HTTP code Response body 200 - OK PodMetrics schema
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/monitoring_apis/podmetrics-metrics-k8s-io-v1beta1
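A hedged example of calling the endpoints listed in section 13.2 directly with the oc client; the namespace and pod names are placeholders.

# List PodMetrics for all pods in a namespace
oc get --raw /apis/metrics.k8s.io/v1beta1/namespaces/<namespace>/pods

# Read the PodMetrics for a single pod
oc get --raw /apis/metrics.k8s.io/v1beta1/namespaces/<namespace>/pods/<pod-name>

# The same API backs the higher-level resource-usage command
oc adm top pods -n <namespace>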
Chapter 118. KafkaMirrorMakerProducerSpec schema reference
Chapter 118. KafkaMirrorMakerProducerSpec schema reference Used in: KafkaMirrorMakerSpec Full list of KafkaMirrorMakerProducerSpec schema properties Configures a MirrorMaker producer. 118.1. abortOnSendFailure Use the producer.abortOnSendFailure property to configure how to handle message send failure from the producer. By default, if an error occurs when sending a message from Kafka MirrorMaker to a Kafka cluster: The Kafka MirrorMaker container is terminated in OpenShift. The container is then recreated. If the abortOnSendFailure option is set to false , message sending errors are ignored. 118.2. config Use the producer.config properties to configure Kafka options for the producer as keys. The values can be one of the following JSON types: String Number Boolean Exceptions You can specify and configure the options listed in the Apache Kafka configuration documentation for producers . However, AMQ Streams takes care of configuring and managing options related to the following, which cannot be changed: Kafka cluster bootstrap address Security (encryption, authentication, and authorization) Interceptors Properties with the following prefixes cannot be set: bootstrap.servers interceptor.classes sasl. security. ssl. If the config property contains an option that cannot be changed, it is disregarded, and a warning message is logged to the Cluster Operator log file. All other supported options are forwarded to MirrorMaker, including the following exceptions to the options configured by AMQ Streams: Any ssl configuration for supported TLS versions and cipher suites Important The Cluster Operator does not validate keys or values in the config object provided. If an invalid configuration is provided, the MirrorMaker cluster might not start or might become unstable. In this case, fix the configuration so that the Cluster Operator can roll out the new configuration to all MirrorMaker nodes. 118.3. KafkaMirrorMakerProducerSpec schema properties Property Description bootstrapServers A list of host:port pairs for establishing the initial connection to the Kafka cluster. string abortOnSendFailure Flag to set the MirrorMaker to exit on a failed send. Default value is true . boolean authentication Authentication configuration for connecting to the cluster. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, scram-sha-256, scram-sha-512, plain, oauth]. KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha256 , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth config The MirrorMaker producer config. Properties with the following prefixes cannot be set: ssl., bootstrap.servers, sasl., security., interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). map tls TLS configuration for connecting MirrorMaker to the cluster. ClientTls
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-kafkamirrormakerproducerspec-reference
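A hedged sketch of how the producer section described above typically looks inside a KafkaMirrorMaker resource; the bootstrap address and config keys are illustrative, and other required fields (consumer, include, replicas, and so on) are omitted.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  # ...consumer and other required settings omitted...
  producer:
    bootstrapServers: target-cluster-kafka-bootstrap:9092
    abortOnSendFailure: false
    config:
      # standard Apache Kafka producer options; keys with the forbidden
      # prefixes listed above (bootstrap.servers, ssl., sasl., security.,
      # interceptor.classes) are disregarded by the Cluster Operator
      compression.type: gzip
      batch.size: 8192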
Chapter 5. Planning your CA services
Chapter 5. Planning your CA services Identity Management (IdM) in Red Hat Enterprise Linux provides different types of certificate authority (CA) configurations. The following sections describe different scenarios and provide advice to help you determine which configuration is best for your use case. CA subject DN The Certificate Authority (CA) subject distinguished name (DN) is the name of the CA. It must be globally unique in the Identity Management (IdM) CA infrastructure and cannot be changed after the installation. In case you need the IdM CA to be externally signed, you might need to consult the administrator of the external CA about the form your IdM CA Subject DN should take. 5.1. CA Services available in an IdM server You can install an Identity Management (IdM) server with an integrated IdM certificate authority (CA) or without a CA. Table 5.1. Comparing IdM with integrated CA and without a CA Integrated CA Without a CA Overview: IdM uses its own public key infrastructure (PKI) service with a CA signing certificate to create and sign the certificates in the IdM domain. If the root CA is the integrated CA, IdM uses a self-signed CA certificate. If the root CA is an external CA, the integrated IdM CA is subordinate to the external CA. The CA certificate used by IdM is signed by the external CA, but all certificates for the IdM domain are issued by the integrated Certificate System instance. Integrated CA is also able to issue certificates for users, hosts, or services. The external CA can be a corporate CA or a third-party CA. IdM does not set up its own CA, but uses signed host certificates from an external CA. Installing a server without a CA requires you to request the following certificates from a third-party authority: An LDAP server certificate An Apache server certificate A PKINIT certificate Full CA certificate chain of the CA that issued the LDAP and Apache server certificates Limitations: If the integrated CA is subordinate to an external CA, the certificates issued within the IdM domain are potentially subject to restrictions set by the external CA for various certificate attributes, such as: The validity period. Constraints on what subject names can appear on certificates issued by the IDM CA or its subordinates.. Constraints on whether the IDM CA can itself, issue subordinate CA certificates, or how "deep" the chain of subordinate certificates can go. Managing certificates outside of IdM causes many additional activities, such as : Creating, uploading, and renewing certificates is a manual process. The certmonger service does not track the IPA certificates (LDAP server, Apache server, and PKINIT certificates) and does not notify you when the certificates are about to expire. The administrators must manually set up notifications for externally issued certificates, or set tracking requests for those certificates if they want certmonger to track them. Works best for: Environments that allow you to create and use your own certificate infrastructure. Very rare cases when restrictions within the infrastructure do not allow you to install certificate services integrated with the server. Note Switching from the self-signed CA to an externally-signed CA, or the other way around, as well as changing which external CA issues the IdM CA certificate, is possible even after the installation. It is also possible to configure an integrated CA even after an installation without a CA. For more details, see Installing an IdM server: With integrated DNS, without a CA . 
Additional resources Understanding the certificates used internally by IdM 5.2. Guidelines for distribution of CA services The following steps provide guidelines for the distribution of your certificate authority (CA) services. Procedure Install the CA services on more than one server in the topology. Replicas configured without a CA forward all certificate operation requests to the CA servers in your topology. Warning If you lose all servers with a CA, you lose all the CA configuration without any chance of recovery. In this case you must configure a new CA and issue and install new certificates. Maintain a sufficient number of CA servers to handle the CA requests in your deployment. See the following table for further recommendations on appropriate number of CA servers: Table 5.2. Guidelines for setting up appropriate number of CA servers Description of the deployment Suggested number of CA servers A deployment with a very large number of certificates issued Three or four CA servers A deployment with bandwidth or availability problems between multiple regions One CA server per region, with a minimum of three servers total for the deployment All other deployments Two CA servers Important Four CA servers in the topology are usually enough if the number of concurrent certificate requests is not high. The replication processes between more than four CA servers can increase processor usage and lead to performance degradation. 5.3. Random serial numbers in IdM As of RHEL 9.1, Identity Management (IdM) includes dogtagpki 11.2.0 , which allows you to use Random Serial Numbers version 3 (RSNv3). The ansible-freeipa ipaserver role includes the ipaserver_random_serial_numbers variable with the RHEL 9.3 update. With RSNv3 enabled, IdM generates fully random serial numbers for certificates and requests in PKI without range management. RSNv3 also prevents collisions in case you reinstall IdM. The size of each certificate serial number is up to 40-digit decimal values as RSNv3 uses a 128-bit random value for the serial number. This makes the number effectively random. Note Previously, the Dogtag upstream project used range-based serial numbers in order to ensure uniqueness across multiple clones. However, based on this experience, the Dogtag team determined that range-based serial numbers would not fit well into cloud environments with short-lived certificates. RSNv3 is supported only for new IdM CA installations. By default, you install the first IdM CA when you install the primary IdM server by using the ipa-server-install command. However, if you originally installed your IdM environment without a CA, you can add the CA service later by using the ipa-ca-install command. To enable RSNv3, use the ipa-server-install or ipa-ca-install command with the --random-serial-numbers option. If enabled, it is required to use RSNv3 on all public-key infrastructure (PKI) services in the deployment, including the CA and Key Recovery Authority (KRA). A check is performed when KRA is installed to automatically enable RSNv3 if it is enabled on the underlying CA. Additional resources Random Serial Numbers v3 (RSNv3)
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/planning_identity_management/planning-your-ca-services_planning-identity-management
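As a concrete illustration of the RSNv3 option named in section 5.3 (a sketch; the other required installer options, such as realm, domain, and passwords, are omitted here):

# New primary IdM server with an integrated CA and random serial numbers enabled
ipa-server-install --random-serial-numbers

# Adding the CA service later to a deployment originally installed without a CA
ipa-ca-install --random-serial-numbers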
Chapter 6. Customizing the web console in OpenShift Container Platform
Chapter 6. Customizing the web console in OpenShift Container Platform You can customize the OpenShift Container Platform web console to set a custom logo, product name, links, notifications, and command line downloads. This is especially helpful if you need to tailor the web console to meet specific corporate or government requirements. 6.1. Adding a custom logo and product name You can create custom branding by adding a custom logo or custom product name. You can set both or one without the other, as these settings are independent of each other. Prerequisites You must have administrator privileges. Create a file of the logo that you want to use. The logo can be a file in any common image format, including GIF, JPG, PNG, or SVG, and is constrained to a max-height of 60px . Procedure Import your logo file into a config map in the openshift-config namespace: USD oc create configmap console-custom-logo --from-file /path/to/console-custom-logo.png -n openshift-config Tip You can alternatively apply the following YAML to create the config map: apiVersion: v1 kind: ConfigMap metadata: name: console-custom-logo namespace: openshift-config data: console-custom-logo.png: <base64-encoded_logo> ... 1 1 Provide a valid base64-encoded logo. Edit the web console's Operator configuration to include customLogoFile and customProductName : USD oc edit consoles.operator.openshift.io cluster apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: customLogoFile: key: console-custom-logo.png name: console-custom-logo customProductName: My Console Once the Operator configuration is updated, it will sync the custom logo config map into the console namespace, mount it to the console pod, and redeploy. Check for success. If there are any issues, the console cluster Operator will report a Degraded status, and the console Operator configuration will also report a CustomLogoDegraded status, but with reasons like KeyOrFilenameInvalid or NoImageProvided . To check the clusteroperator , run: USD oc get clusteroperator console -o yaml To check the console Operator configuration, run: USD oc get consoles.operator.openshift.io -o yaml 6.2. Creating custom links in the web console Prerequisites You must have administrator privileges. Procedure From Administration Custom Resource Definitions , click on ConsoleLink . Select Instances tab Click Create Console Link and edit the file: apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: example spec: href: 'https://www.example.com' location: HelpMenu 1 text: Link 1 1 Valid location settings are HelpMenu , UserMenu , ApplicationMenu , and NamespaceDashboard . 
To make the custom link appear in all namespaces, follow this example: apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: namespaced-dashboard-link-for-all-namespaces spec: href: 'https://www.example.com' location: NamespaceDashboard text: This appears in all namespaces To make the custom link appear in only some namespaces, follow this example: apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: namespaced-dashboard-for-some-namespaces spec: href: 'https://www.example.com' location: NamespaceDashboard # This text will appear in a box called "Launcher" under "namespace" or "project" in the web console text: Custom Link Text namespaceDashboard: namespaces: # for these specific namespaces - my-namespace - your-namespace - other-namespace To make the custom link appear in the application menu, follow this example: apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: application-menu-link-1 spec: href: 'https://www.example.com' location: ApplicationMenu text: Link 1 applicationMenu: section: My New Section # image that is 24x24 in size imageURL: https://via.placeholder.com/24 Click Save to apply your changes. 6.3. Customizing console routes For console and downloads routes, custom routes functionality uses the ingress config route configuration API. If the console custom route is set up in both the ingress config and console-operator config, then the new ingress config custom route configuration takes precedent. The route configuration with the console-operator config is deprecated. 6.3.1. Customizing the console route You can customize the console route by setting the custom hostname and TLS certificate in the spec.componentRoutes field of the cluster Ingress configuration. Prerequisites You have logged in to the cluster as a user with administrative privileges. You have created a secret in the openshift-config namespace containing the TLS certificate and key. This is required if the domain for the custom hostname suffix does not match the cluster domain suffix. The secret is optional if the suffix matches. Tip You can create a TLS secret by using the oc create secret tls command. Procedure Edit the cluster Ingress configuration: USD oc edit ingress.config.openshift.io cluster Set the custom hostname and optionally the serving certificate and key: apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: console namespace: openshift-console hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2 1 The custom hostname. 2 Reference to a secret in the openshift-config namespace that contains a TLS certificate ( tls.crt ) and key ( tls.key ). This is required if the domain for the custom hostname suffix does not match the cluster domain suffix. The secret is optional if the suffix matches. Save the file to apply the changes. 6.3.2. Customizing the download route You can customize the download route by setting the custom hostname and TLS certificate in the spec.componentRoutes field of the cluster Ingress configuration. Prerequisites You have logged in to the cluster as a user with administrative privileges. You have created a secret in the openshift-config namespace containing the TLS certificate and key. This is required if the domain for the custom hostname suffix does not match the cluster domain suffix. The secret is optional if the suffix matches. Tip You can create a TLS secret by using the oc create secret tls command. 
Procedure Edit the cluster Ingress configuration: USD oc edit ingress.config.openshift.io cluster Set the custom hostname and optionally the serving certificate and key: apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: downloads namespace: openshift-console hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2 1 The custom hostname. 2 Reference to a secret in the openshift-config namespace that contains a TLS certificate ( tls.crt ) and key ( tls.key ). This is required if the domain for the custom hostname suffix does not match the cluster domain suffix. The secret is optional if the suffix matches. Save the file to apply the changes. 6.4. Customizing the login page Create Terms of Service information with custom login pages. Custom login pages can also be helpful if you use a third-party login provider, such as GitHub or Google, to show users a branded page that they trust and expect before being redirected to the authentication provider. You can also render custom error pages during the authentication process. Note Customizing the error template is limited to identity providers (IDPs) that use redirects, such as request header and OIDC-based IDPs. It does not have an effect on IDPs that use direct password authentication, such as LDAP and htpasswd. Prerequisites You must have administrator privileges. Procedure Run the following commands to create templates you can modify: USD oc adm create-login-template > login.html USD oc adm create-provider-selection-template > providers.html USD oc adm create-error-template > errors.html Create the secrets: USD oc create secret generic login-template --from-file=login.html -n openshift-config USD oc create secret generic providers-template --from-file=providers.html -n openshift-config USD oc create secret generic error-template --from-file=errors.html -n openshift-config Run: USD oc edit oauths cluster Update the specification: spec: templates: error: name: error-template login: name: login-template providerSelection: name: providers-template Run oc explain oauths.spec.templates to understand the options. 6.5. Defining a template for an external log link If you are connected to a service that helps you browse your logs, but you need to generate URLs in a particular way, then you can define a template for your link. Prerequisites You must have administrator privileges. Procedure From Administration Custom Resource Definitions , click on ConsoleExternalLogLink . Select Instances tab Click Create Console External Log Link and edit the file: apiVersion: console.openshift.io/v1 kind: ConsoleExternalLogLink metadata: name: example spec: hrefTemplate: >- https://example.com/logs?resourceName=USD{resourceName}&containerName=USD{containerName}&resourceNamespace=USD{resourceNamespace}&podLabels=USD{podLabels} text: Example Logs 6.6. Creating custom notification banners Prerequisites You must have administrator privileges. Procedure From Administration Custom Resource Definitions , click on ConsoleNotification . Select Instances tab Click Create Console Notification and edit the file: apiVersion: console.openshift.io/v1 kind: ConsoleNotification metadata: name: example spec: text: This is an example notification message with an optional link. location: BannerTop 1 link: href: 'https://www.example.com' text: Optional link text color: '#fff' backgroundColor: '#0088ce' 1 Valid location settings are BannerTop , BannerBottom , and BannerTopBottom . Click Create to apply your changes. 
6.7. Customizing CLI downloads You can configure links for downloading the CLI with custom link text and URLs, which can point directly to file packages or to an external page that provides the packages. Prerequisites You must have administrator privileges. Procedure Navigate to Administration Custom Resource Definitions . Select ConsoleCLIDownload from the list of Custom Resource Definitions (CRDs). Click the YAML tab, and then make your edits: apiVersion: console.openshift.io/v1 kind: ConsoleCLIDownload metadata: name: example-cli-download-links-for-foo spec: description: | This is an example of download links for foo displayName: example-foo links: - href: 'https://www.example.com/public/foo.tar' text: foo for linux - href: 'https://www.example.com/public/foo.mac.zip' text: foo for mac - href: 'https://www.example.com/public/foo.win.zip' text: foo for windows Click the Save button. 6.8. Adding YAML examples to Kubernetes resources You can dynamically add YAML examples to any Kubernetes resource at any time. Prerequisites You must have cluster administrator privileges. Procedure From Administration Custom Resource Definitions , click on ConsoleYAMLSample . Click YAML and edit the file: apiVersion: console.openshift.io/v1 kind: ConsoleYAMLSample metadata: name: example spec: targetResource: apiVersion: batch/v1 kind: Job title: Example Job description: An example Job YAML sample yaml: | apiVersion: batch/v1 kind: Job metadata: name: countdown spec: template: metadata: name: countdown spec: containers: - name: counter image: centos:7 command: - "bin/bash" - "-c" - "for i in 9 8 7 6 5 4 3 2 1 ; do echo $i ; done" restartPolicy: Never Use spec.snippet to indicate that the YAML sample is not the full YAML resource definition, but a fragment that can be inserted into the existing YAML document at the user's cursor. Click Save .
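As a rough sketch of the spec.snippet usage described above (assuming snippet takes a boolean value, as its name and the description suggest; the resource name example-snippet and the YAML fragment are invented for illustration and are not part of the product documentation), a snippet-style sample might look like:
apiVersion: console.openshift.io/v1
kind: ConsoleYAMLSample
metadata:
  name: example-snippet
spec:
  targetResource:
    apiVersion: batch/v1
    kind: Job
  title: Resource limits fragment
  description: A partial YAML fragment inserted at the cursor instead of replacing the whole document
  snippet: true
  yaml: |
    resources:
      limits:
        cpu: 500m
        memory: 128Mi
With snippet set to true, the console offers the yaml value as an insertable fragment at the user's cursor rather than as a full replacement for the editor contents.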
[ "oc create configmap console-custom-logo --from-file /path/to/console-custom-logo.png -n openshift-config", "apiVersion: v1 kind: ConfigMap metadata: name: console-custom-logo namespace: openshift-config data: console-custom-logo.png: <base64-encoded_logo> ... 1", "oc edit consoles.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: customLogoFile: key: console-custom-logo.png name: console-custom-logo customProductName: My Console", "oc get clusteroperator console -o yaml", "oc get consoles.operator.openshift.io -o yaml", "apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: example spec: href: 'https://www.example.com' location: HelpMenu 1 text: Link 1", "apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: namespaced-dashboard-link-for-all-namespaces spec: href: 'https://www.example.com' location: NamespaceDashboard text: This appears in all namespaces", "apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: namespaced-dashboard-for-some-namespaces spec: href: 'https://www.example.com' location: NamespaceDashboard # This text will appear in a box called \"Launcher\" under \"namespace\" or \"project\" in the web console text: Custom Link Text namespaceDashboard: namespaces: # for these specific namespaces - my-namespace - your-namespace - other-namespace", "apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: application-menu-link-1 spec: href: 'https://www.example.com' location: ApplicationMenu text: Link 1 applicationMenu: section: My New Section # image that is 24x24 in size imageURL: https://via.placeholder.com/24", "oc edit ingress.config.openshift.io cluster", "apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: console namespace: openshift-console hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2", "oc edit ingress.config.openshift.io cluster", "apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: downloads namespace: openshift-console hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2", "oc adm create-login-template > login.html", "oc adm create-provider-selection-template > providers.html", "oc adm create-error-template > errors.html", "oc create secret generic login-template --from-file=login.html -n openshift-config", "oc create secret generic providers-template --from-file=providers.html -n openshift-config", "oc create secret generic error-template --from-file=errors.html -n openshift-config", "oc edit oauths cluster", "spec: templates: error: name: error-template login: name: login-template providerSelection: name: providers-template", "apiVersion: console.openshift.io/v1 kind: ConsoleExternalLogLink metadata: name: example spec: hrefTemplate: >- https://example.com/logs?resourceName=${resourceName}&containerName=${containerName}&resourceNamespace=${resourceNamespace}&podLabels=${podLabels} text: Example Logs", "apiVersion: console.openshift.io/v1 kind: ConsoleNotification metadata: name: example spec: text: This is an example notification message with an optional link. 
location: BannerTop 1 link: href: 'https://www.example.com' text: Optional link text color: '#fff' backgroundColor: '#0088ce'", "apiVersion: console.openshift.io/v1 kind: ConsoleCLIDownload metadata: name: example-cli-download-links-for-foo spec: description: | This is an example of download links for foo displayName: example-foo links: - href: 'https://www.example.com/public/foo.tar' text: foo for linux - href: 'https://www.example.com/public/foo.mac.zip' text: foo for mac - href: 'https://www.example.com/public/foo.win.zip' text: foo for windows", "apiVersion: console.openshift.io/v1 kind: ConsoleYAMLSample metadata: name: example spec: targetResource: apiVersion: batch/v1 kind: Job title: Example Job description: An example Job YAML sample yaml: | apiVersion: batch/v1 kind: Job metadata: name: countdown spec: template: metadata: name: countdown spec: containers: - name: counter image: centos:7 command: - \"bin/bash\" - \"-c\" - \"for i in 9 8 7 6 5 4 3 2 1 ; do echo $i ; done\" restartPolicy: Never" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/web_console/customizing-web-console
Part IV. Device Drivers
Part IV. Device Drivers This part provides a comprehensive listing of all device drivers that are new or have been updated in Red Hat Enterprise Linux 7.4.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/part-red_hat_enterprise_linux-7.4_release_notes-device_drivers