Chapter 9. Enforcing Puppet configuration on hosts
Chapter 9. Enforcing Puppet configuration on hosts You can enforce configuration from Satellite either manually on demand (run once) or automatically in configurable intervals. 9.1. Running Puppet once using SSH Assign the proper job template to the Run Puppet Once feature to run Puppet on hosts. Procedure In the Satellite web UI, navigate to Administer > Remote Execution Features . Select the puppet_run_host remote execution feature. Assign the Run Puppet Once - SSH Default job template. Run Puppet on hosts by running a job and selecting category Puppet and template Run Puppet Once - SSH Default . Alternatively, click Run Puppet Once in the Schedule Remote Job drop down menu on the host details page. 9.2. Understanding intervals of automatic enforcement Satellite considers hosts to be out of sync if the last Puppet report is older than the combined values of outofsync_interval and puppet_interval set in minutes. By default, the Puppet agent on your hosts runs every 30 minutes, the puppet_interval is set to 35 minutes and the global outofsync_interval is set to 30 minutes. The effective time after which hosts are considered out of sync is the sum of outofsync_interval and puppet_interval . For example, setting the global outofsync_interval to 30 and the puppet_interval to 60 results in a total of 90 minutes after which the host status changes to out of sync . 9.3. Setting the Puppet agent run interval on a host Set the interval when the Puppet agent runs and sends reports to Satellite. Procedure Connect to your host using SSH. Add the Puppet agent run interval to /etc/puppetlabs/puppet/puppet.conf , for example runinterval = 1h . 9.4. Setting the global out-of-sync interval Procedure In the Satellite web UI, navigate to Administer > Settings . On the General tab, edit Out of sync interval . Set a duration, in minutes, after which hosts are considered to be out of sync. You can also override this interval on host groups or individual hosts by adding the outofsync_interval parameter. 9.5. Setting the Puppet out-of-sync interval Procedure In the Satellite web UI, navigate to Administer > Settings , and click the Config Management tab. In the Puppet interval field, set the value to the duration, in minutes, after which hosts reporting using Puppet are considered to be out of sync. 9.6. Overriding out-of-sync interval for a host group Procedure In the Satellite web UI, navigate to Configure > Host Groups . Select a host group. On the Parameters tab, click Add Parameter . In the Name field, enter outofsync_interval . From the Type dropdown menu, select integer . In the Value field, enter the new interval in minutes. Click Submit . 9.7. Overriding out-of-sync interval for an individual host Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click Edit for a selected host. On the Parameters tab, click Add Parameter . In the Name field, enter outofsync_interval . From the Type dropdown menu, select integer . In the Value field, enter the new interval in minutes. Click Submit .
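For example, a minimal agent configuration implementing the run interval from Section 9.3 could look like the excerpt below. This is a sketch only: the [agent] section placement and the 1h value are illustrative, and the interval you choose should stay below the combined outofsync_interval and puppet_interval so that hosts reporting on schedule are never flagged as out of sync.

# /etc/puppetlabs/puppet/puppet.conf (illustrative excerpt)
[agent]
# Run the Puppet agent and send a report to Satellite every hour
runinterval = 1h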
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_configurations_by_using_puppet_integration/enforcing-puppet-configuration-on-hosts_managing-configurations-puppet
Chapter 1. Preparing to install on Azure Stack Hub
Chapter 1. Preparing to install on Azure Stack Hub 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users. You have installed Azure Stack Hub version 2008 or later. 1.2. Requirements for installing OpenShift Container Platform on Azure Stack Hub Before installing OpenShift Container Platform on Microsoft Azure Stack Hub, you must configure an Azure account. See Configuring an Azure Stack Hub account for details about account configuration, account limits, DNS zone configuration, required roles, and creating service principals. 1.3. Choosing a method to install OpenShift Container Platform on Azure Stack Hub You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. See Installation process for more information about installer-provisioned and user-provisioned installation processes. 1.3.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on Azure Stack Hub infrastructure that is provisioned by the OpenShift Container Platform installation program, by using the following method: Installing a cluster on Azure Stack Hub with an installer-provisioned infrastructure: You can install OpenShift Container Platform on Azure Stack Hub infrastructure that is provisioned by the OpenShift Container Platform installation program. 1.3.2. Installing a cluster on user-provisioned infrastructure You can install a cluster on Azure Stack Hub infrastructure that you provision, by using the following method: Installing a cluster on Azure Stack Hub using ARM templates: You can install OpenShift Container Platform on Azure Stack Hub by using infrastructure that you provide. You can use the provided Azure Resource Manager (ARM) templates to assist with an installation. 1.4. Next steps Configuring an Azure Stack Hub account
null
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_azure_stack_hub/preparing-to-install-on-azure-stack-hub
6.14.2. Multicast Configuration
6.14.2. Multicast Configuration If you do not specify a multicast address in the cluster configuration file, the Red Hat High Availability Add-On software creates one based on the cluster ID. It generates the lower 16 bits of the address and appends them to the upper portion of the address according to whether the IP protocol is IPv4 or IPv6: For IPv4 - The address formed is 239.192. plus the lower 16 bits generated by Red Hat High Availability Add-On software. For IPv6 - The address formed is FF15:: plus the lower 16 bits generated by Red Hat High Availability Add-On software. Note The cluster ID is a unique identifier that cman generates for each cluster. To view the cluster ID, run the cman_tool status command on a cluster node. You can manually specify a multicast address in the cluster configuration file with the following command: Note that this command resets all other properties that you can set with the --setmulticast option to their default values, as described in Section 6.1.5, "Commands that Overwrite Settings" . If you specify a multicast address, you should use the 239.192.x.x series (or FF15:: for IPv6) that cman uses. Otherwise, using a multicast address outside that range may cause unpredictable results. For example, using 224.0.0.x (which is "All hosts on the network") may not be routed correctly, or even routed at all by some hardware. If you specify or modify a multicast address, you must restart the cluster for this to take effect. For information on starting and stopping a cluster with the ccs command, see Section 7.2, "Starting and Stopping a Cluster" . Note If you specify a multicast address, make sure that you check the configuration of routers that cluster packets pass through. Some routers may take a long time to learn addresses, seriously impacting cluster performance. To remove a multicast address from a configuration file, use the --setmulticast option of the ccs but do not specify a multicast address:
[ "ccs -h host --setmulticast multicastaddress", "ccs -h host --setmulticast" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s2-networkconfig-ccs-ca
Chapter 2. Skupper Hello World
Chapter 2. Skupper Hello World A minimal HTTP application deployed across Kubernetes clusters using Skupper This example is part of a suite of examples showing the different ways you can use Skupper to connect services across cloud providers, data centers, and edge sites. Overview This example is a very simple multi-service HTTP application deployed across Kubernetes clusters using Skupper. It contains two services: A backend service that exposes an /api/hello endpoint. It returns greetings of the form Hi, <your-name>. I am <my-name> (<pod-name>) . A frontend service that sends greetings to the backend and fetches new greetings in response. With Skupper, you can place the backend in one cluster and the frontend in another and maintain connectivity between the two services without exposing the backend to the public internet. Prerequisites The kubectl command-line tool, version 1.15 or later ( installation guide ) Access to at least one Kubernetes cluster, from any provider you choose Procedure Clone the repo for this example. Install the Skupper command-line tool Set up your clusters Deploy the frontend and backend Create your sites Link your sites Expose the backend Access the frontend Clone the repo for this example. Navigate to the appropriate GitHub repository from https://skupper.io/examples/index.html and clone the repository. Install the Skupper command-line tool This example uses the Skupper command-line tool to deploy Skupper. You need to install the skupper command only once for each development environment. See the Installation for details about installing the CLI. For configured systems, use the following command: Set up your clusters Skupper is designed for use with multiple Kubernetes clusters. The skupper and kubectl commands use your kubeconfig and current context to select the cluster and namespace where they operate. Your kubeconfig is stored in a file in your home directory. The skupper and kubectl commands use the KUBECONFIG environment variable to locate it. A single kubeconfig supports only one active context per user. Since you will be using multiple contexts at once in this exercise, you need to create distinct kubeconfigs. For each namespace, open a new terminal window. In each terminal, set the KUBECONFIG environment variable to a different path and log in to your cluster. Then create the namespace you wish to use and set the namespace on your current context. Note The login procedure varies by provider. See the documentation for yours: Amazon Elastic Kubernetes Service (EKS) Azure Kubernetes Service (AKS) Google Kubernetes Engine (GKE) IBM Kubernetes Service OpenShift West: East: Deploy the frontend and backend This example runs the frontend and the backend in separate Kubernetes namespaces, on different clusters. Use kubectl create deployment to deploy the frontend in West and the backend in East. West: East: Create your sites A Skupper site is a location where components of your application are running. Sites are linked together to form a network for your application. In Kubernetes, a site is associated with a namespace. For each namespace, use skupper init to create a site. This deploys the Skupper router and controller. Then use skupper status to see the outcome. West: Sample output: East: Sample output: As you move through the steps below, you can use skupper status at any time to check your progress. Link your sites A Skupper link is a channel for communication between two sites. Links serve as a transport for application connections and requests. 
Creating a link requires use of two skupper commands in conjunction: skupper token create and skupper link create. The skupper token create command generates a secret token that signifies permission to create a link. The token also carries the link details. Then, in a remote site, the skupper link create command uses the token to create a link to the site that generated it. Note The link token is truly a secret. Anyone who has the token can link to your site. Make sure that only those you trust have access to it. First, use skupper token create in West to generate the token. Then, use skupper link create in East to link the sites. West: Sample output: East: Sample output: If your terminal sessions are on different machines, you may need to use scp or a similar tool to transfer the token securely. By default, tokens expire after a single use or 15 minutes after creation. Expose the backend We now have our sites linked to form a Skupper network, but no services are exposed on it. Skupper uses the skupper expose command to select a service from one site for exposure in all the linked sites. Use skupper expose to expose the backend service in East to the frontend in West. East: Sample output: Access the frontend In order to use and test the application, we need external access to the frontend. Use kubectl port-forward to make the frontend available at localhost:8080. West: You can now access the web interface by navigating to http://localhost:8080 in your browser.
[ "sudo dnf install skupper-cli", "export KUBECONFIG=~/.kube/config-west Enter your provider-specific login command create namespace west config set-context --current --namespace west", "export KUBECONFIG=~/.kube/config-east Enter your provider-specific login command create namespace east config set-context --current --namespace east", "create deployment frontend --image quay.io/skupper/hello-world-frontend", "create deployment backend --image quay.io/skupper/hello-world-backend --replicas 3", "skupper init skupper status", "skupper init Waiting for LoadBalancer IP or hostname Waiting for status Skupper is now installed in namespace 'west'. Use 'skupper status' to get more information. skupper status Skupper is enabled for namespace \"west\". It is not connected to any other sites. It has no exposed services.", "skupper init skupper status", "skupper init Waiting for LoadBalancer IP or hostname Waiting for status Skupper is now installed in namespace 'east'. Use 'skupper status' to get more information. skupper status Skupper is enabled for namespace \"east\". It is not connected to any other sites. It has no exposed services.", "skupper token create ~/secret.token", "skupper token create ~/secret.token Token written to ~/secret.token", "skupper link create ~/secret.token", "skupper link create ~/secret.token Site configured to link to https://10.105.193.154:8081/ed9c37f6-d78a-11ec-a8c7-04421a4c5042 (name=link1) Check the status of the link using 'skupper link status'.", "skupper expose deployment/backend --port 8080", "skupper expose deployment/backend --port 8080 deployment backend exposed as backend", "port-forward deployment/frontend 8080:8080" ]
https://docs.redhat.com/en/documentation/red_hat_service_interconnect/1.8/html/examples/skupper_hello_world
Chapter 4. Developer Preview features
Chapter 4. Developer Preview features Important This section describes Developer Preview features in Red Hat OpenShift AI 2.18. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to functionality in advance of possible inclusion in a Red Hat product offering. Customers can use these features to test functionality and provide feedback during the development process. Developer Preview features might not have any documentation, are subject to change or removal at any time, and have received limited testing. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA. For more information about the support scope of Red Hat Developer Preview features, see Developer Preview Support Scope . Support for AppWrapper in Kueue AppWrapper support in Kueue is available as a Developer Preview feature. The experimental API enables the use of AppWrapper-based workloads with the distributed workloads feature.
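As a rough illustration of what an AppWrapper-based workload can look like, the sketch below wraps a standard Kubernetes Job and targets a Kueue local queue. The API version (workload.codeflare.dev/v1beta2), the components/template layout, the kueue.x-k8s.io/queue-name label, and the queue and image names are assumptions drawn from the upstream AppWrapper and Kueue projects rather than from OpenShift AI documentation; verify them against your cluster before relying on them.

# Hypothetical sketch: an AppWrapper submitting a batch Job through Kueue.
apiVersion: workload.codeflare.dev/v1beta2   # assumed API version
kind: AppWrapper
metadata:
  name: example-appwrapper
  labels:
    kueue.x-k8s.io/queue-name: user-queue    # assumed LocalQueue name
spec:
  components:
  - template:
      apiVersion: batch/v1
      kind: Job
      metadata:
        name: example-job
      spec:
        template:
          spec:
            restartPolicy: Never
            containers:
            - name: worker
              image: registry.access.redhat.com/ubi9/ubi-minimal:latest  # placeholder image
              command: ["sleep", "60"]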
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/release_notes/developer-preview-features_relnotes
Virtualization
Virtualization OpenShift Container Platform 4.10 OpenShift Virtualization installation, usage, and release notes Red Hat OpenShift Documentation Team
[ "E0222 17:52:54.088950 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: unable to parse requirement: values[0][csi.storage.k8s.io/managed-by]: Invalid value: \"external-provisioner-<node_FQDN>\": must be no more than 63 characters 1", "oc patch csidriver kubevirt.io.hostpath-provisioner --type merge --patch '{\"spec\": {\"storageCapacity\": false}}'", "oc annotate --overwrite -n openshift-cnv hyperconverged kubevirt-hyperconverged kubevirt.kubevirt.io/jsonpatch='[ { \"op\": \"add\", \"path\": \"/spec/configuration/cpuModel\", \"value\": \"<cpu_model>\" 1 } ]'", "apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <provisioner_class> spec: claimPropertySets: - accessModes: - ReadWriteOnce volumeMode: Filesystem cloneStrategy: copy 1 status: provisioner: <provisioner> storageClass: <provisioner_class>", "oc annotate --overwrite -n openshift-cnv hco kubevirt-hyperconverged 'networkaddonsconfigs.kubevirt.io/jsonpatch=[ { \"op\": \"replace\" \"path\": \"/spec/kubeMacPool\" \"value\": null } ]'", "Memory overhead per infrastructure node ~ 150 MiB", "Memory overhead per worker node ~ 360 MiB", "Memory overhead per virtual machine ~ (1.002 * requested memory) + 146 MiB + 8 MiB * (number of vCPUs) \\ 1 + 16 MiB * (number of graphics devices) 2", "CPU overhead for infrastructure nodes ~ 4 cores", "CPU overhead for worker nodes ~ 2 cores + CPU overhead per virtual machine", "Aggregated storage overhead per node ~ 10 GiB", "Product of (Maximum number of nodes that can drain in parallel) and (Highest total VM memory request allocations across nodes)", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v4.10.10 channel: \"stable\" config: 1", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: 1 workloads: nodePlacement:", "apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent pathConfig: path: \"</path/to/backing/directory>\" useNamingPrefix: false workload: 1", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v4.10.10 channel: \"stable\" config: nodeSelector: example.io/example-infra-key: example-infra-value", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v4.10.10 channel: \"stable\" config: tolerations: - key: \"key\" operator: \"Equal\" value: \"virtualization\" effect: \"NoSchedule\"", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: nodeSelector: example.io/example-infra-key: example-infra-value workloads: nodePlacement: nodeSelector: example.io/example-workloads-key: example-workloads-value", "apiVersion: hco.kubevirt.io/v1beta1 kind: 
HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: example.io/example-infra-key operator: In values: - example-infra-value workloads: nodePlacement: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: example.io/example-workloads-key operator: In values: - example-workloads-value preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 preference: matchExpressions: - key: example.io/num-cpus operator: Gt values: - 8", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: workloads: nodePlacement: tolerations: - key: \"key\" operator: \"Equal\" value: \"virtualization\" effect: \"NoSchedule\"", "apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent pathConfig: path: \"</path/to/backing/directory>\" useNamingPrefix: false workload: nodeSelector: example.io/example-workloads-key: example-workloads-value", "apiVersion: v1 kind: Namespace metadata: name: openshift-cnv --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kubevirt-hyperconverged-group namespace: openshift-cnv spec: targetNamespaces: - openshift-cnv --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v4.10.10 channel: \"stable\" 1", "oc apply -f <file name>.yaml", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec:", "oc apply -f <file_name>.yaml", "watch oc get csv -n openshift-cnv", "NAME DISPLAY VERSION REPLACES PHASE kubevirt-hyperconverged-operator.v4.10.10 OpenShift Virtualization 4.10.10 Succeeded", "oc get ConsoleCLIDownload virtctl-clidownloads-kubevirt-hyperconverged -o yaml", "tar -xvf <virtctl-version-distribution.arch>.tar.gz", "chmod +x <virtctl-file-name>", "echo USDPATH", "C:\\> path", "echo USDPATH", "yum install kubevirt-virtctl", "subscription-manager repos --enable <repository>", "oc delete apiservices v1alpha3.subresources.kubevirt.io -n openshift-cnv", "oc delete HyperConverged kubevirt-hyperconverged -n openshift-cnv", "oc delete subscription kubevirt-hyperconverged -n openshift-cnv", "CSV_NAME=USD(oc get csv -n openshift-cnv -o=jsonpath=\"{.items[0].metadata.name}\")", "oc delete csv USD{CSV_NAME} -n openshift-cnv", "clusterserviceversion.operators.coreos.com \"kubevirt-hyperconverged-operator.v4.10.10\" deleted", "oc edit hco -n openshift-cnv kubevirt-hyperconverged", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: workloadUpdateStrategy: workloadUpdateMethods: 1 - LiveMigrate 2 - Evict 3 batchEvictionSize: 10 4 batchEvictionInterval: \"1m0s\" 5", "oc get csv -n openshift-cnv", "VERSION REPLACES PHASE 4.9.0 kubevirt-hyperconverged-operator.v4.8.2 Installing 4.9.0 kubevirt-hyperconverged-operator.v4.9.0 Replacing", "oc get hco -n openshift-cnv kubevirt-hyperconverged -o=jsonpath='{range .status.conditions[*]}{.type}{\"\\t\"}{.status}{\"\\t\"}{.message}{\"\\n\"}{end}'", "ReconcileComplete True Reconcile completed successfully 
Available True Reconcile completed successfully Progressing False Reconcile completed successfully Degraded False Reconcile completed successfully Upgradeable True Reconcile completed successfully", "oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces", "oc get scc kubevirt-controller -o yaml", "oc get clusterrole kubevirt-controller -o yaml", "virtctl help", "virtctl image-upload -h", "virtctl options", "virtctl guestfs -n <namespace> <pvc_name> 1", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: app: <vm_name> 1 name: <vm_name> spec: dataVolumeTemplates: - apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <vm_name> spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: requests: storage: 30Gi running: false template: metadata: labels: kubevirt.io/domain: <vm_name> spec: domain: cpu: cores: 1 sockets: 2 threads: 1 devices: disks: - disk: bus: virtio name: rootdisk - disk: bus: virtio name: cloudinitdisk interfaces: - masquerade: {} name: default rng: {} features: smm: enabled: true firmware: bootloader: efi: {} resources: requests: memory: 8Gi evictionStrategy: LiveMigrate networks: - name: default pod: {} volumes: - dataVolume: name: <vm_name> name: rootdisk - cloudInitNoCloud: userData: |- #cloud-config user: cloud-user password: '<password>' 2 chpasswd: { expire: False } name: cloudinitdisk", "oc create -f <vm_manifest_file>.yaml", "virtctl start <vm_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: RunStrategy: Always 1 template:", "oc edit <object_type> <object_ID>", "oc apply <object_type> <object_ID>", "oc edit vm example", "disks: - bootOrder: 1 1 disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk - cdrom: bus: virtio name: cd-drive-1 interfaces: - boot Order: 2 2 macAddress: '02:96:c4:00:00' masquerade: {} name: default", "oc delete vm <vm_name>", "oc get vmis -A", "oc delete vmi <vmi_name>", "remmina --connect /path/to/console.rdp", "virtctl expose vm <fedora-vm> --port=22 --name=fedora-vm-ssh --type=NodePort 1", "oc get svc", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE fedora-vm-ssh NodePort 127.0.0.1 <none> 22:32551/TCP 6s", "ssh username@<node_IP_address> -p 32551", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: namespace: ssh-ns 1 name: vm-ssh spec: running: false template: metadata: labels: kubevirt.io/vm: vm-ssh special: vm-ssh 2 spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk interfaces: - masquerade: {} 3 name: testmasquerade 4 rng: {} machine: type: \"\" resources: requests: memory: 1024M networks: - name: testmasquerade pod: {} volumes: - name: containerdisk containerDisk: image: kubevirt/fedora-cloud-container-disk-demo - name: cloudinitdisk cloudInitNoCloud: userData: | #cloud-config user: fedora password: fedora chpasswd: {expire: False}", "oc create -f <path_for_the_VM_YAML_file>", "virtctl start vm-ssh", "apiVersion: v1 kind: Service metadata: name: svc-ssh 1 namespace: ssh-ns 2 spec: ports: - targetPort: 22 3 protocol: TCP port: 27017 selector: special: vm-ssh 4 type: NodePort", "oc create -f <path_for_the_service_YAML_file>", "oc get vmi", "NAME AGE PHASE IP NODENAME vm-ssh 6s Running 10.244.196.152 node01", "oc get svc", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE svc-ssh NodePort 10.106.236.208 <none> 27017:30093/TCP 22s", "oc get node <node_name> -o wide", "NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP node01 Ready worker 6d22h 
v1.23.0 192.168.55.101 <none>", "ssh [email protected] -p 30093", "virtctl console <VMI>", "virtctl vnc <VMI>", "virtctl vnc <VMI> -v 4", "oc login -u <user> https://<cluster.example.com>:8443", "oc describe vmi <windows-vmi-name>", "spec: networks: - name: default pod: {} - multus: networkName: cnv-bridge name: bridge-net status: interfaces: - interfaceName: eth0 ipAddress: 198.51.100.0/24 ipAddresses: 198.51.100.0/24 mac: a0:36:9f:0f:b1:70 name: default - interfaceName: eth1 ipAddress: 192.0.2.0/24 ipAddresses: 192.0.2.0/24 2001:db8::/32 mac: 00:17:a4:77:77:25 name: bridge-net", "%WINDIR%\\System32\\Sysprep\\sysprep.exe /generalize /shutdown /oobe /mode:vm", "oc adm cordon <node_name>", "oc adm drain <node_name> --force=true", "oc delete node <node_name>", "oc get vmis -A", "yum install -y qemu-guest-agent", "systemctl enable --now qemu-guest-agent", "spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk", "oc edit vm <vm-name>", "spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk", "spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk", "oc edit vm <vm-name>", "spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: with-limits spec: running: false template: spec: domain: resources: requests: memory: 128Mi limits: memory: 256Mi 1", "metadata: name: example-vm-node-selector apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: nodeSelector: example-key-1: example-value-1 example-key-2: example-value-2", "metadata: name: example-vm-pod-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchExpressions: - key: example-key-1 operator: In values: - example-value-1 topologyKey: kubernetes.io/hostname podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: example-key-2 operator: In values: - example-value-2 topologyKey: kubernetes.io/hostname", "metadata: name: example-vm-node-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 nodeSelectorTerms: - matchExpressions: - key: example.io/example-key operator: In values: - example-value-1 - example-value-2 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 preference: matchExpressions: - key: example-node-label-key operator: In values: - example-node-label-value", "metadata: name: example-vm-tolerations apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: tolerations: - key: \"key\" operator: \"Equal\" value: \"virtualization\" effect: \"NoSchedule\"", "oc edit hco -n openshift-cnv kubevirt-hyperconverged", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: certConfig: ca: duration: 48h0m0s renewBefore: 24h0m0s 1 server: duration: 24h0m0s 2 renewBefore: 12h0m0s 3", 
"certConfig: ca: duration: 4h0m0s renewBefore: 1h0m0s server: duration: 4h0m0s renewBefore: 4h0m0s", "error: hyperconvergeds.hco.kubevirt.io \"kubevirt-hyperconverged\" could not be patched: admission webhook \"validate-hco.kubevirt.io\" denied the request: spec.certConfig: ca.duration is smaller than server.duration", "kubevirt_vm: namespace: name: cpu_cores: memory: disks: - name: volume: containerDisk: image: disk: bus:", "kubevirt_vm: namespace: default name: vm1 cpu_cores: 1 memory: 64Mi disks: - name: containerdisk volume: containerDisk: image: kubevirt/cirros-container-disk-demo:latest disk: bus: virtio", "kubevirt_vm: namespace: default name: vm1 state: running 1 cpu_cores: 1", "ansible-playbook create-vm.yaml", "(...) TASK [Create my first VM] ************************************************************************ changed: [localhost] PLAY RECAP ******************************************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0", "ansible-playbook create-vm.yaml", "--- - name: Ansible Playbook 1 hosts: localhost connection: local tasks: - name: Create my first VM kubevirt_vm: namespace: default name: vm1 cpu_cores: 1 memory: 64Mi disks: - name: containerdisk volume: containerDisk: image: kubevirt/cirros-container-disk-demo:latest disk: bus: virtio", "apiversion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: special: vm-secureboot name: vm-secureboot spec: template: metadata: labels: special: vm-secureboot spec: domain: devices: disks: - disk: bus: virtio name: containerdisk features: acpi: {} smm: enabled: true 1 firmware: bootloader: efi: secureBoot: true 2", "oc create -f <file_name>.yaml", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: pxe-net-conf spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"pxe-net-conf\", \"plugins\": [ { \"type\": \"cnv-bridge\", \"bridge\": \"br1\", \"vlan\": 1 1 }, { \"type\": \"cnv-tuning\" 2 } ] }'", "oc create -f pxe-net-conf.yaml", "interfaces: - masquerade: {} name: default - bridge: {} name: pxe-net macAddress: de:00:00:00:00:de bootOrder: 1", "devices: disks: - disk: bus: virtio name: containerdisk bootOrder: 2", "networks: - name: default pod: {} - name: pxe-net multus: networkName: pxe-net-conf", "oc create -f vmi-pxe-boot.yaml", "virtualmachineinstance.kubevirt.io \"vmi-pxe-boot\" created", "oc get vmi vmi-pxe-boot -o yaml | grep -i phase phase: Running", "virtctl vnc vmi-pxe-boot", "virtctl console vmi-pxe-boot", "ip addr", "3. 
eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff", "kind: VirtualMachine spec: domain: resources: requests: memory: \"4Gi\" 1 memory: hugepages: pageSize: \"1Gi\" 2", "oc apply -f <virtual_machine>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: features: - name: apic 1 policy: require 2", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: Conroe 1", "apiVersion: kubevirt/v1alpha3 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: host-model 1", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3", "oc create -f 100-worker-kernel-arg-iommu.yaml", "oc get MachineConfig", "lspci -nnv | grep -i nvidia", "02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)", "variant: openshift version: 4.10.0 metadata: name: 100-worker-vfiopci labels: machineconfiguration.openshift.io/role: worker 1 storage: files: - path: /etc/modprobe.d/vfio.conf mode: 0644 overwrite: true contents: inline: | options vfio-pci ids=10de:1eb8 2 - path: /etc/modules-load.d/vfio-pci.conf 3 mode: 0644 overwrite: true contents: inline: vfio-pci", "butane 100-worker-vfiopci.bu -o 100-worker-vfiopci.yaml", "oc apply -f 100-worker-vfiopci.yaml", "oc get MachineConfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 00-worker d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 100-worker-iommu 3.2.0 30s 100-worker-vfiopci-configuration 3.2.0 30s", "lspci -nnk -d 10de:", "04:00.0 3D controller [0302]: NVIDIA Corporation GP102GL [Tesla P40] [10de:1eb8] (rev a1) Subsystem: NVIDIA Corporation Device [10de:1eb8] Kernel driver in use: vfio-pci Kernel modules: nouveau", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: 1 pciHostDevices: 2 - pciDeviceSelector: \"10DE:1DB6\" 3 resourceName: \"nvidia.com/GV100GL_Tesla_V100\" 4 - pciDeviceSelector: \"10DE:1EB8\" resourceName: \"nvidia.com/TU104GL_Tesla_T4\" - pciDeviceSelector: \"8086:6F54\" resourceName: \"intel.com/qat\" externalResourceProvider: true 5", "oc describe node <node_name>", "Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250", "oc edit hyperconverged kubevirt-hyperconverged -n 
openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: pciHostDevices: - pciDeviceSelector: \"10DE:1DB6\" resourceName: \"nvidia.com/GV100GL_Tesla_V100\" - pciDeviceSelector: \"10DE:1EB8\" resourceName: \"nvidia.com/TU104GL_Tesla_T4\"", "oc describe node <node_name>", "Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250", "apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: hostDevices: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: hostdevices1", "lspci -nnk | grep NVIDIA", "02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)", "spec: mediatedDevicesConfiguration: mediatedDevicesTypes: 1 - <device_type> nodeMediatedDeviceTypes: 2 - mediatedDevicesTypes: 3 - <device_type> nodeSelector: 4 <node_selector_key>: <node_selector_value>", "permittedHostDevices: mediatedDevices: - mdevNameSelector: GRID T4-2Q 1 resourceName: nvidia.com/GRID_T4-2Q 2", "oc get USDNODE -o json | jq '.status.allocatable | with_entries(select(.key | startswith(\"nvidia.com/\"))) | with_entries(select(.value != \"0\"))'", "mediatedDevicesConfiguration: mediatedDevicesTypes: - nvidia-222 - nvidia-228 - nvidia-105 - nvidia-108", "nvidia-105 nvidia-108 nvidia-217 nvidia-299", "mediatedDevicesConfiguration: mediatedDevicesTypes: - nvidia-22 - nvidia-223 - nvidia-224", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3", "oc create -f 100-worker-kernel-arg-iommu.yaml", "oc get MachineConfig", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: mediatedDevicesConfiguration: <.> mediatedDevicesTypes: <.> - nvidia-231 nodeMediatedDeviceTypes: <.> - mediatedDevicesTypes: <.> - nvidia-233 nodeSelector: kubernetes.io/hostname: node-11.redhat.com permittedHostDevices: <.> mediatedDevices: - mdevNameSelector: GRID T4-2Q resourceName: nvidia.com/GRID_T4-2Q - mdevNameSelector: GRID T4-8Q resourceName: nvidia.com/GRID_T4-8Q", "oc describe node <node_name>", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: mediatedDevicesConfiguration: mediatedDevicesTypes: 1 - nvidia-231 permittedHostDevices: mediatedDevices: 2 - mdevNameSelector: GRID T4-2Q resourceName: nvidia.com/GRID_T4-2Q", "apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: gpus: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: gpu1 2 - deviceName: nvidia.com/GRID_T4-1Q name: gpu2", "lspci -nnk | grep <device_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: 
vm2-rhel84-watchdog name: <vm-name> spec: running: false template: metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog spec: domain: devices: watchdog: name: <watchdog> i6300esb: action: \"poweroff\" 1", "oc apply -f <file_name>.yaml", "lspci | grep watchdog -i", "echo c > /proc/sysrq-trigger", "pkill -9 watchdog", "yum install watchdog", "#watchdog-device = /dev/watchdog", "systemctl enable --now watchdog.service", "oc label --overwrite DataSource rhel8 -n openshift-virtualization-os-images cdi.kubevirt.io/dataImportCron=true", "oc patch hco kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/enableCommonBootImageImport\", \"value\": false}]'", "oc patch hco kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/enableCommonBootImageImport\", \"value\": true}]'", "oc edit -n openshift-cnv HyperConverged", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: name: centos7-image-cron annotations: cdi.kubevirt.io/storage.bind.immediate.requested: \"true\" 1 spec: schedule: \"0 */12 * * *\" 2 template: spec: source: registry: 3 url: docker://quay.io/containerdisks/centos:7-2009 storage: resources: requests: storage: 10Gi managedDataSource: centos7 4 retentionPolicy: \"None\" 5", "apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: metadata: annotations: descheduler.alpha.kubernetes.io/evict: \"true\"", "apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 profiles: - DevPreviewLongLifecycle", "oc get ns", "oc create configmap <configmap-name> --from-file=</path/to/file/ca.pem>", "apiVersion: v1 kind: ConfigMap metadata: name: tls-certs data: ca.pem: | -----BEGIN CERTIFICATE----- ... 
<base64 encoded cert> -----END CERTIFICATE-----", "apiVersion: v1 kind: Secret metadata: name: endpoint-secret 1 labels: app: containerized-data-importer type: Opaque data: accessKeyId: \"\" 2 secretKey: \"\" 3", "oc apply -f endpoint-secret.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume name: vm-fedora-datavolume 1 spec: dataVolumeTemplates: - metadata: creationTimestamp: null name: fedora-dv 2 spec: storage: resources: requests: storage: 10Gi storageClassName: local source: http: 3 url: \"https://mirror.arizona.edu/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.2.x86_64.qcow2\" 4 secretRef: endpoint-secret 5 certConfigMap: \"\" 6 status: {} running: true template: metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume spec: domain: devices: disks: - disk: bus: virtio name: datavolumedisk1 machine: type: \"\" resources: requests: memory: 1.5Gi terminationGracePeriodSeconds: 180 volumes: - dataVolume: name: fedora-dv name: datavolumedisk1 status: {}", "oc create -f vm-fedora-datavolume.yaml", "oc get pods", "oc describe dv fedora-dv 1", "virtctl console vm-fedora-datavolume", "dd if=/dev/zero of=<loop10> bs=100M count=20", "losetup </dev/loop10>d3 <loop10> 1 2", "kind: PersistentVolume apiVersion: v1 metadata: name: <local-block-pv10> annotations: spec: local: path: </dev/loop10> 1 capacity: storage: <2Gi> volumeMode: Block 2 storageClassName: local 3 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <node01> 4", "oc create -f <local-block-pv10.yaml> 1", "apiVersion: v1 kind: Secret metadata: name: endpoint-secret 1 labels: app: containerized-data-importer type: Opaque data: accessKeyId: \"\" 2 secretKey: \"\" 3", "oc apply -f endpoint-secret.yaml", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: import-pv-datavolume 1 spec: storageClassName: local 2 source: http: url: \"https://mirror.arizona.edu/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.2.x86_64.qcow2\" 3 secretRef: endpoint-secret 4 storage: volumeMode: Block 5 resources: requests: storage: 10Gi", "oc create -f import-pv-datavolume.yaml", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: <datavolume-cloner> 1 rules: - apiGroups: [\"cdi.kubevirt.io\"] resources: [\"datavolumes/source\"] verbs: [\"*\"]", "oc create -f <datavolume-cloner.yaml> 1", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <allow-clone-to-user> 1 namespace: <Source namespace> 2 subjects: - kind: ServiceAccount name: default namespace: <Destination namespace> 3 roleRef: kind: ClusterRole name: datavolume-cloner 4 apiGroup: rbac.authorization.k8s.io", "oc create -f <datavolume-cloner.yaml> 1", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <cloner-datavolume> 1 spec: source: pvc: namespace: \"<source-namespace>\" 2 name: \"<my-favorite-vm-disk>\" 3 pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 4", "oc create -f <cloner-datavolume>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-dv-clone name: vm-dv-clone 1 spec: running: false template: metadata: labels: kubevirt.io/vm: vm-dv-clone spec: domain: devices: disks: - disk: bus: virtio name: root-disk resources: requests: memory: 64M volumes: - dataVolume: name: 
favorite-clone name: root-disk dataVolumeTemplates: - metadata: name: favorite-clone spec: storage: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi source: pvc: namespace: \"source-namespace\" name: \"my-favorite-vm-disk\"", "oc create -f <vm-clone-datavolumetemplate>.yaml", "dd if=/dev/zero of=<loop10> bs=100M count=20", "losetup </dev/loop10>d3 <loop10> 1 2", "kind: PersistentVolume apiVersion: v1 metadata: name: <local-block-pv10> annotations: spec: local: path: </dev/loop10> 1 capacity: storage: <2Gi> volumeMode: Block 2 storageClassName: local 3 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <node01> 4", "oc create -f <local-block-pv10.yaml> 1", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <cloner-datavolume> 1 spec: source: pvc: namespace: \"<source-namespace>\" 2 name: \"<my-favorite-vm-disk>\" 3 pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 4 volumeMode: Block 5", "oc create -f <cloner-datavolume>.yaml", "kind: VirtualMachine spec: domain: devices: interfaces: - name: default masquerade: {} 1 ports: 2 - port: 80 networks: - name: default pod: {}", "oc create -f <vm-name>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm-ipv6 interfaces: - name: default masquerade: {} 1 ports: - port: 80 2 networks: - name: default pod: {} volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: dhcp4: true addresses: [ fd10:0:2::2/120 ] 3 gateway6: fd10:0:2::1 4", "oc create -f example-vm-ipv6.yaml", "oc get vmi <vmi-name> -o jsonpath=\"{.status.interfaces[*].ipAddresses}\"", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-ephemeral namespace: example-namespace spec: running: false template: metadata: labels: special: key 1", "apiVersion: v1 kind: Service metadata: name: vmservice 1 namespace: example-namespace 2 spec: externalTrafficPolicy: Cluster 3 ports: - nodePort: 30000 4 port: 27017 protocol: TCP targetPort: 22 5 selector: special: key 6 type: NodePort 7", "oc create -f <service_name>.yaml", "oc get service -n example-namespace", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE vmservice ClusterIP 172.30.3.149 <none> 27017/TCP 2m", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE vmservice NodePort 172.30.232.73 <none> 27017:30000/TCP 5m", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE vmservice LoadBalancer 172.30.27.5 172.29.10.235,172.29.10.235 27017:31829/TCP 5s", "ssh [email protected] -p 27017", "ssh fedora@USDNODE_IP -p 30000", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: desiredState: interfaces: - name: br1 2 description: Linux bridge with eth1 as a port 3 type: linux-bridge 4 state: up 5 ipv4: enabled: false 6 bridge: options: stp: enabled: false 7 port: - name: eth1 8", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: <bridge-network> 1 annotations: k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/<bridge-interface> 2 spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"<bridge-network>\", 3 \"type\": \"cnv-bridge\", 4 \"bridge\": \"<bridge-interface>\", 5 \"macspoofchk\": true, 6 \"vlan\": 1 7 }'", "oc create -f <network-attachment-definition.yaml> 1", "oc get network-attachment-definition <bridge-network>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: <example-vm> spec: template: spec: domain: 
devices: interfaces: - masquerade: {} name: <default> - bridge: {} name: <bridge-net> 1 networks: - name: <default> pod: {} - name: <bridge-net> 2 multus: networkName: <network-namespace>/<a-bridge-network> 3", "oc apply -f <example-vm.yaml>", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" 4 priority: <priority> 5 mtu: <mtu> 6 numVfs: <num> 7 nicSelector: 8 vendor: \"<vendor_code>\" 9 deviceID: \"<device_id>\" 10 pfNames: [\"<pf_name>\", ...] 11 rootDevices: [\"<pci_bus_id>\", \"...\"] 12 deviceType: vfio-pci 13 isRdma: false 14", "oc create -f <name>-sriov-node-network.yaml", "oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 vlan: <vlan> 5 spoofChk: \"<spoof_check>\" 6 linkState: <link_state> 7 maxTxRate: <max_tx_rate> 8 minTxRate: <min_rx_rate> 9 vlanQoS: <vlan_qos> 10 trust: \"<trust_vf>\" 11 capabilities: <capabilities> 12", "oc create -f <name>-sriov-network.yaml", "oc get net-attach-def -n <namespace>", "kind: VirtualMachine spec: domain: devices: interfaces: - name: <default> 1 masquerade: {} 2 - name: <nic1> 3 sriov: {} networks: - name: <default> 4 pod: {} - name: <nic1> 5 multus: networkName: <sriov-network> 6", "oc apply -f <vm-sriov.yaml> 1", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-istio name: vm-istio spec: runStrategy: Always template: metadata: labels: kubevirt.io/vm: vm-istio app: vm-istio 1 annotations: sidecar.istio.io/inject: \"true\" 2 spec: domain: devices: interfaces: - name: default masquerade: {} 3 disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk resources: requests: memory: 1024M networks: - name: default pod: {} terminationGracePeriodSeconds: 180 volumes: - containerDisk: image: registry:5000/kubevirt/fedora-cloud-container-disk-demo:devel name: containerdisk", "oc apply -f <vm_name>.yaml 1", "apiVersion: v1 kind: Service metadata: name: vm-istio spec: selector: app: vm-istio 1 ports: - port: 8080 name: http protocol: TCP", "oc create -f <service_name>.yaml 1", "kind: VirtualMachine spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 dhcp4: true 2", "kind: VirtualMachine spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 addresses: - 10.10.10.14/24 2", "oc describe vmi <vmi_name>", "Interfaces: Interface Name: eth0 Ip Address: 10.244.0.37/24 Ip Addresses: 10.244.0.37/24 fe80::858:aff:fef4:25/64 Mac: 0a:58:0a:f4:00:25 Name: default Interface Name: v2 Ip Address: 1.1.1.7/24 Ip Addresses: 1.1.1.7/24 fe80::f4d9:70ff:fe13:9089/64 Mac: f6:d9:70:13:90:89 Interface Name: v1 Ip Address: 1.1.1.1/24 Ip Addresses: 1.1.1.1/24 1.1.1.2/24 1.1.1.4/24 2001:de7:0:f101::1/64 2001:db8:0:f101::1/64 fe80::1420:84ff:fe10:17aa/64 Mac: 16:20:84:10:17:aa", "oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io=ignore", "oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io-", "apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: 
IfNotPresent storagePools: 1 - name: any_name path: \"/var/myvolumes\" 2 workload: nodeSelector: kubernetes.io/os: linux", "oc create -f hpp_cr.yaml", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hostpath-csi 1 provisioner: kubevirt.io.hostpath-provisioner reclaimPolicy: Delete 2 volumeBindingMode: WaitForFirstConsumer 3 parameters: storagePool: my-storage-pool 4", "oc create -f storageclass_csi.yaml", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hostpath-provisioner provisioner: kubevirt.io/hostpath-provisioner reclaimPolicy: Delete 1 volumeBindingMode: WaitForFirstConsumer 2", "oc create -f storageclass.yaml", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: iso-pvc spec: volumeMode: Block 1 storageClassName: my-storage-class accessModes: - ReadWriteOnce resources: requests: storage: 5Gi", "apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent storagePools: 1 - name: my-storage-pool path: \"/var/myvolumes\" 2 pvcTemplate: volumeMode: Block 3 storageClassName: my-storage-class 4 accessModes: - ReadWriteOnce resources: requests: storage: 5Gi 5 workload: nodeSelector: kubernetes.io/os: linux", "oc create -f hpp_pvc_template_pool.yaml", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <datavolume> 1 spec: source: pvc: 2 namespace: \"<source_namespace>\" 3 name: \"<my_vm_disk>\" 4 storage: 5 resources: requests: storage: 2Gi 6 storageClassName: <storage_class> 7", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <datavolume> 1 spec: source: pvc: 2 namespace: \"<source_namespace>\" 3 name: \"<my_vm_disk>\" 4 pvc: 5 accessModes: 6 - ReadWriteMany resources: requests: storage: 2Gi 7 volumeMode: Block 8 storageClassName: <storage_class> 9", "oc edit -n openshift-cnv storageprofile <storage_class>", "apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <unknown_provisioner_class> spec: {} status: provisioner: <unknown_provisioner> storageClass: <unknown_provisioner_class>", "apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <unknown_provisioner_class> spec: claimPropertySets: - accessModes: - ReadWriteOnce 1 volumeMode: Filesystem 2 status: provisioner: <unknown_provisioner> storageClass: <unknown_provisioner_class>", "apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <provisioner_class> spec: claimPropertySets: - accessModes: - ReadWriteOnce 1 volumeMode: Filesystem 2 cloneStrategy: csi-clone 3 status: provisioner: <provisioner> storageClass: <provisioner_class>", "oc edit hco -n openshift-cnv kubevirt-hyperconverged", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: resourceRequirements: storageWorkloads: limits: cpu: \"500m\" memory: \"2Gi\" requests: cpu: \"250m\" memory: \"1Gi\"", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: dv-ann annotations: v1.multus-cni.io/default-network: bridge-network 1 spec: source: http: url: \"example.exampleurl.com\" pvc: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: preallocated-datavolume spec: source: 1 pvc: preallocation: true 2", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <upload-datavolume> 1 spec: source: upload: {} pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 2", "oc create -f 
<upload-datavolume>.yaml", "virtctl image-upload dv <datavolume_name> \\ 1 --size=<datavolume_size> \\ 2 --image-path=</path/to/image> \\ 3", "oc get dvs", "dd if=/dev/zero of=<loop10> bs=100M count=20", "losetup </dev/loop10>d3 <loop10> 1 2", "kind: PersistentVolume apiVersion: v1 metadata: name: <local-block-pv10> annotations: spec: local: path: </dev/loop10> 1 capacity: storage: <2Gi> volumeMode: Block 2 storageClassName: local 3 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <node01> 4", "oc create -f <local-block-pv10.yaml> 1", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <upload-datavolume> 1 spec: source: upload: {} pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 2", "oc create -f <upload-datavolume>.yaml", "virtctl image-upload dv <datavolume_name> \\ 1 --size=<datavolume_size> \\ 2 --image-path=</path/to/image> \\ 3", "oc get dvs", "yum install -y qemu-guest-agent", "systemctl enable --now qemu-guest-agent", "apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: name: my-vmsnapshot 1 spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm 2", "oc create -f <my-vmsnapshot>.yaml", "oc wait my-vm my-vmsnapshot --for condition=Ready", "oc describe vmsnapshot <my-vmsnapshot>", "apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: creationTimestamp: \"2020-09-30T14:41:51Z\" finalizers: - snapshot.kubevirt.io/vmsnapshot-protection generation: 5 name: mysnap namespace: default resourceVersion: \"3897\" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinesnapshots/my-vmsnapshot uid: 28eedf08-5d6a-42c1-969c-2eda58e2a78d spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm status: conditions: - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:42:03Z\" reason: Operation complete status: \"False\" 1 type: Progressing - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:42:03Z\" reason: Operation complete status: \"True\" 2 type: Ready creationTime: \"2020-09-30T14:42:03Z\" readyToUse: true 3 sourceUID: 355897f3-73a0-4ec4-83d3-3c2df9486f4f virtualMachineSnapshotContentName: vmsnapshot-content-28eedf08-5d6a-42c1-969c-2eda58e2a78d 4", "apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: name: my-vmrestore 1 spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm 2 virtualMachineSnapshotName: my-vmsnapshot 3", "oc create -f <my-vmrestore>.yaml", "oc get vmrestore <my-vmrestore>", "apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: creationTimestamp: \"2020-09-30T14:46:27Z\" generation: 5 name: my-vmrestore namespace: default ownerReferences: - apiVersion: kubevirt.io/v1 blockOwnerDeletion: true controller: true kind: VirtualMachine name: my-vm uid: 355897f3-73a0-4ec4-83d3-3c2df9486f4f resourceVersion: \"5512\" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinerestores/my-vmrestore uid: 71c679a8-136e-46b0-b9b5-f57175a6a041 spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm virtualMachineSnapshotName: my-vmsnapshot status: complete: true 1 conditions: - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:46:28Z\" reason: Operation complete status: \"False\" 2 type: Progressing - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:46:28Z\" reason: Operation complete status: 
\"True\" 3 type: Ready deletedDataVolumes: - test-dv1 restoreTime: \"2020-09-30T14:46:28Z\" restores: - dataVolumeName: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 persistentVolumeClaim: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 volumeName: datavolumedisk1 volumeSnapshotName: vmsnapshot-28eedf08-5d6a-42c1-969c-2eda58e2a78d-volume-datavolumedisk1", "oc delete vmsnapshot <my-vmsnapshot>", "oc get vmsnapshot", "kind: PersistentVolume apiVersion: v1 metadata: name: <destination-pv> 1 annotations: spec: accessModes: - ReadWriteOnce capacity: storage: 10Gi 2 local: path: /mnt/local-storage/local/disk1 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - node01 4 persistentVolumeReclaimPolicy: Delete storageClassName: local volumeMode: Filesystem", "oc get pv <destination-pv> -o yaml", "spec: nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname 1 operator: In values: - node01 2", "oc label pv <destination-pv> node=node01", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <clone-datavolume> 1 spec: source: pvc: name: \"<source-vm-disk>\" 2 namespace: \"<source-namespace>\" 3 pvc: accessModes: - ReadWriteOnce selector: matchLabels: node: node01 4 resources: requests: storage: <10Gi> 5", "oc apply -f <clone-datavolume.yaml>", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: blank-image-datavolume spec: source: blank: {} pvc: # Optional: Set the storage class or omit to accept the default # storageClassName: \"hostpath\" accessModes: - ReadWriteOnce resources: requests: storage: 500Mi", "oc create -f <blank-image-datavolume>.yaml", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <cloner-datavolume> 1 spec: source: pvc: namespace: \"<source-namespace>\" 2 name: \"<my-favorite-vm-disk>\" 3 storage: 4 resources: requests: storage: <2Gi> 5", "oc create -f <cloner-datavolume>.yaml", "virtctl addvolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC> [--persist] [--serial=<label-name>]", "virtctl removevolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC>", "cat > Dockerfile << EOF FROM registry.access.redhat.com/ubi8/ubi:latest AS builder ADD --chown=107:107 <vm_image>.qcow2 /disk/ 1 RUN chmod 0440 /disk/* FROM scratch COPY --from=builder /disk/* /disk/ EOF", "podman build -t <registry>/<container_disk_name>:latest .", "podman push <registry>/<container_disk_name>:latest", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: storageImport: insecureRegistries: 1 - \"private-registry-example-1:5000\" - \"private-registry-example-2:5000\"", "oc edit hco -n openshift-cnv kubevirt-hyperconverged", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: scratchSpaceStorageClass: \"<storage_class>\" 1", "oc get pv <pv_name> -o yaml | grep 'persistentVolumeReclaimPolicy'", "oc patch pv <pv_name> -p '{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}'", "oc describe pvc <pvc_name> | grep 'Mounted By:'", "oc delete pvc <pvc_name>", "oc get pv <pv_name> -o yaml > <file_name>.yaml", "oc delete pv <pv_name>", "rm -rf <path_to_share_storage>", "oc create -f <new_pv_name>.yaml", "oc edit pvc <pvc_name>", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: vm-disk-expand spec: accessModes: - ReadWriteMany resources: requests: 
storage: 3Gi 1", "oc get dvs", "oc delete dv <datavolume_name>", "oc create namespace <mycustomnamespace>", "oc get templates -n openshift", "oc edit hco -n openshift-cnv kubevirt-hyperconverged", "oc get templates -n customnamespace", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: commonTemplatesNamespace: customnamespace 1", "oc edit hco -n openshift-cnv kubevirt-hyperconverged", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: commonTemplatesNamespace: customnamespace 1", "oc delete templates -n customnamespace <template_name>", "oc get templates -n customnamespace", "oc edit hco -n openshift-cnv kubevirt-hyperconverged", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: 1 bandwidthPerMigration: 64Mi completionTimeoutPerGiB: 800 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150", "apiVersion: kubevirt.io/v1 kind: VirtualMachineInstanceMigration metadata: name: migration-job spec: vmiName: vmi-fedora", "oc create -f vmi-migrate.yaml", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: my-secondary-network 1 namespace: openshift-cnv spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"migration-bridge\", \"type\": \"macvlan\", \"master\": \"eth1\", 2 \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", 3 \"range\": \"10.200.5.0/24\" 4 } }'", "edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: liveMigrationConfig: completionTimeoutPerGiB: 800 network: my-secondary-network 1 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150", "get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'", "oc describe vmi vmi-fedora", "Status: Conditions: Last Probe Time: <nil> Last Transition Time: <nil> Status: True Type: LiveMigratable Migration Method: LiveMigration Migration State: Completed: true End Timestamp: 2018-12-24T06:19:42Z Migration UID: d78c8962-0743-11e9-a540-fa163e0c69f1 Source Node: node2.example.com Start Timestamp: 2018-12-24T06:19:35Z Target Node: node1.example.com Target Node Address: 10.9.0.18:43891 Target Node Domain Detected: true", "oc delete vmim migration-job", "oc edit vm <custom-vm> -n <my-namespace>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: custom-vm spec: template: spec: evictionStrategy: LiveMigrate", "virtctl restart <custom-vm> -n <my-namespace>", "oc adm cordon <node1>", "oc adm drain <node1> --delete-emptydir-data --ignore-daemonsets=true --force", "apiVersion: nodemaintenance.kubevirt.io/v1beta1 kind: NodeMaintenance metadata: name: maintenance-example 1 spec: nodeName: node-1.example.com 2 reason: \"Node maintenance\" 3", "oc apply -f nodemaintenance-cr.yaml", "oc describe node <node-name>", "Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeNotSchedulable 61m kubelet Node node-1.example.com status is now: NodeNotSchedulable", "oc get NodeMaintenance -o yaml", "apiVersion: v1 items: - apiVersion: nodemaintenance.kubevirt.io/v1beta1 kind: NodeMaintenance metadata: spec: nodeName: node-1.example.com reason: Node maintenance status: evictionPods: 3 1 pendingPods: - pod-example-workload-0 - httpd - httpd-manual phase: Running lastError: \"Last failure 
message\" 2 totalpods: 5", "oc adm uncordon <node1>", "oc delete -f nodemaintenance-cr.yaml", "nodemaintenance.nodemaintenance.kubevirt.io \"maintenance-example\" deleted", "\"486\" Conroe athlon core2duo coreduo kvm32 kvm64 n270 pentium pentium2 pentium3 pentiumpro phenom qemu32 qemu64", "apic clflush cmov cx16 cx8 de fpu fxsr lahf_lm lm mca mce mmx msr mtrr nx pae pat pge pni pse pse36 sep sse sse2 sse4.1 ssse3 syscall tsc", "aes apic avx avx2 bmi1 bmi2 clflush cmov cx16 cx8 de erms fma fpu fsgsbase fxsr hle invpcid lahf_lm lm mca mce mmx movbe msr mtrr nx pae pat pcid pclmuldq pge pni popcnt pse pse36 rdtscp rtm sep smep sse sse2 sse4.1 sse4.2 ssse3 syscall tsc tsc-deadline x2apic xsave", "aes avx avx2 bmi1 bmi2 erms fma fsgsbase hle invpcid movbe pcid pclmuldq popcnt rdtscp rtm sse4.2 tsc-deadline x2apic xsave", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: obsoleteCPUs: cpuModels: 1 - \"<obsolete_cpu_1>\" - \"<obsolete_cpu_2>\" minCPUModel: \"<minimum_cpu_model>\" 2", "oc annotate node <node_name> node-labeller.kubevirt.io/skip-node=true 1", "oc get nns", "oc get nns node01 -o yaml", "apiVersion: nmstate.io/v1 kind: NodeNetworkState metadata: name: node01 1 status: currentState: 2 dns-resolver: interfaces: route-rules: routes: lastSuccessfulUpdateTime: \"2020-01-31T12:14:00Z\" 3", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" 3 maxUnavailable: 3 4 desiredState: interfaces: - name: br1 description: Linux bridge with eth1 as a port 5 type: linux-bridge state: up ipv4: dhcp: true enabled: true auto-dns: false bridge: options: stp: enabled: false port: - name: eth1 dns-resolver: 6 config: search: - example.com - example.org server: - 8.8.8.8", "oc apply -f br1-eth1-policy.yaml 1", "oc get nncp", "oc get nncp <policy> -o yaml", "oc get nnce", "oc get nnce <node>.<policy> -o yaml", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: <br1-eth1-policy> 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" 3 desiredState: interfaces: - name: br1 type: linux-bridge state: absent 4 - name: eth1 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9", "oc apply -f <br1-eth1-policy.yaml> 1", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: br1 4 description: Linux bridge with eth1 as a port 5 type: linux-bridge 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 bridge: options: stp: enabled: false 10 port: - name: eth1 11", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: vlan-eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1.102 4 description: VLAN using eth1 5 type: vlan 6 state: up 7 vlan: base-iface: eth1 8 id: 102 9", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: bond0-eth1-eth2-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: bond0 4 description: Bond with ports eth1 and eth2 5 type: bond 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 link-aggregation: mode: active-backup 10 options: miimon: '140' 11 port: 12 - eth1 - eth2 mtu: 1450 13", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: eth1-policy 1 spec: 
nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1 4 description: Configuring eth1 on node01 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9", "# interfaces: - name: bond10 description: Bonding eth2 and eth3 for Linux bridge type: bond state: up link-aggregation: port: - eth2 - eth3 - name: br1 description: Linux bridge on bond type: linux-bridge state: up bridge: port: - name: bond10 #", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-copy-ipv4-policy 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" capture: eth1-nic: interfaces.name==\"eth1\" 3 eth1-routes: routes.running.next-hop-interface==\"eth1\" br1-routes: capture.eth1-routes | routes.running.next-hop-interface := \"br1\" desiredState: interfaces: - name: br1 description: Linux bridge with eth1 as a port type: linux-bridge 4 state: up ipv4: \"{{ capture.eth1-nic.interfaces.0.ipv4 }}\" 5 bridge: options: stp: enabled: false port: - name: eth1 6 routes: config: \"{{ capture.br1-routes.routes.running }}\"", "interfaces: - name: eth1 description: static IP on eth1 type: ethernet state: up ipv4: dhcp: false address: - ip: 192.168.122.250 1 prefix-length: 24 enabled: true", "interfaces: - name: eth1 description: No IP on eth1 type: ethernet state: up ipv4: enabled: false", "interfaces: - name: eth1 description: DHCP on eth1 type: ethernet state: up ipv4: dhcp: true enabled: true", "interfaces: - name: eth1 description: DHCP without gateway or DNS on eth1 type: ethernet state: up ipv4: dhcp: true auto-gateway: false auto-dns: false enabled: true", "interfaces: 1 ipv4: auto-dns: false dns-resolver: config: search: - example.com - example.org server: - 8.8.8.8", "interfaces: - name: eth1 description: Static routing on eth1 type: ethernet state: up ipv4: dhcp: false address: - ip: 192.0.2.251 1 prefix-length: 24 enabled: true routes: config: - destination: 198.51.100.0/24 metric: 150 next-hop-address: 192.0.2.1 2 next-hop-interface: eth1 table-id: 254", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ens01-bridge-testfail spec: desiredState: interfaces: - name: br1 description: Linux bridge with the wrong port type: linux-bridge state: up ipv4: dhcp: true enabled: true bridge: options: stp: enabled: false port: - name: ens01", "oc apply -f ens01-bridge-testfail.yaml", "nodenetworkconfigurationpolicy.nmstate.io/ens01-bridge-testfail created", "oc get nncp", "NAME STATUS ens01-bridge-testfail FailedToConfigure", "oc get nnce", "NAME STATUS control-plane-1.ens01-bridge-testfail FailedToConfigure control-plane-2.ens01-bridge-testfail FailedToConfigure control-plane-3.ens01-bridge-testfail FailedToConfigure compute-1.ens01-bridge-testfail FailedToConfigure compute-2.ens01-bridge-testfail FailedToConfigure compute-3.ens01-bridge-testfail FailedToConfigure", "oc get nnce compute-1.ens01-bridge-testfail -o jsonpath='{.status.conditions[?(@.type==\"Failing\")].message}'", "error reconciling NodeNetworkConfigurationPolicy at desired state apply: , failed to execute nmstatectl set --no-commit --timeout 480: 'exit status 1' '' libnmstate.error.NmstateVerificationError: desired ======= --- name: br1 type: linux-bridge state: up bridge: options: group-forward-mask: 0 mac-ageing-time: 300 multicast-snooping: true stp: enabled: false forward-delay: 15 hello-time: 2 max-age: 20 priority: 32768 port: - name: ens01 description: Linux bridge with the wrong port ipv4: address: [] auto-dns: true auto-gateway: true 
auto-routes: true dhcp: true enabled: true ipv6: enabled: false mac-address: 01-23-45-67-89-AB mtu: 1500 current ======= --- name: br1 type: linux-bridge state: up bridge: options: group-forward-mask: 0 mac-ageing-time: 300 multicast-snooping: true stp: enabled: false forward-delay: 15 hello-time: 2 max-age: 20 priority: 32768 port: [] description: Linux bridge with the wrong port ipv4: address: [] auto-dns: true auto-gateway: true auto-routes: true dhcp: true enabled: true ipv6: enabled: false mac-address: 01-23-45-67-89-AB mtu: 1500 difference ========== --- desired +++ current @@ -13,8 +13,7 @@ hello-time: 2 max-age: 20 priority: 32768 - port: - - name: ens01 + port: [] description: Linux bridge with the wrong port ipv4: address: [] line 651, in _assert_interfaces_equal\\n current_state.interfaces[ifname],\\nlibnmstate.error.NmstateVerificationError:", "oc get nns control-plane-1 -o yaml", "- ipv4: name: ens1 state: up type: ethernet", "oc edit nncp ens01-bridge-testfail", "port: - name: ens1", "oc get nncp", "NAME STATUS ens01-bridge-testfail SuccessfullyConfigured", "oc logs <virt-launcher-name>", "oc get events", "oc describe vm <vm>", "oc describe vmi <vmi>", "oc describe pod virt-launcher-<name>", "oc describe dv <DataVolume>", "Status: Conditions: Last Heart Beat Time: 2020-07-15T03:58:24Z Last Transition Time: 2020-07-15T03:58:24Z Message: PVC win10-rootdisk Bound Reason: Bound Status: True Type: Bound Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Bound 24s datavolume-controller PVC example-dv Bound", "Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Message: Import Complete Reason: Completed Status: False Type: Running Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning Error 12s (x2 over 14s) datavolume-controller Unable to connect to http data source: expected status code 200, got 404. Status: 404 Not Found", "Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Status: True Type: Ready", "spec: readinessProbe: httpGet: 1 port: 1500 2 path: /healthz 3 httpHeaders: - name: Custom-Header value: Awesome initialDelaySeconds: 120 4 periodSeconds: 20 5 timeoutSeconds: 10 6 failureThreshold: 3 7 successThreshold: 3 8", "oc create -f <file_name>.yaml", "spec: readinessProbe: initialDelaySeconds: 120 1 periodSeconds: 20 2 tcpSocket: 3 port: 1500 4 timeoutSeconds: 10 5", "oc create -f <file_name>.yaml", "spec: livenessProbe: initialDelaySeconds: 120 1 periodSeconds: 20 2 httpGet: 3 port: 1500 4 path: /healthz 5 httpHeaders: - name: Custom-Header value: Awesome timeoutSeconds: 10 6", "oc create -f <file_name>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: special: vm-fedora name: vm-fedora spec: template: metadata: labels: special: vm-fedora spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk resources: requests: memory: 1024M readinessProbe: httpGet: port: 1500 initialDelaySeconds: 120 periodSeconds: 20 timeoutSeconds: 10 failureThreshold: 3 successThreshold: 3 terminationGracePeriodSeconds: 180 volumes: - name: containerdisk containerDisk: image: kubevirt/fedora-cloud-registry-disk-demo - cloudInitNoCloud: userData: |- #cloud-config password: fedora chpasswd: { expire: False } bootcmd: - setenforce 0 - dnf install -y nmap-ncat - systemd-run --unit=httpserver nc -klp 1500 -e '/usr/bin/echo -e HTTP/1.1 200 OK\\\\n\\\\nHello World!' 
name: cloudinitdisk", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_vcpu_wait_seconds[6m]))) > 0 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_network_receive_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_network_transmit_bytes_total[6m]))) > 0 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_read_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_write_traffic_bytes_total[6m]))) > 0 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_read_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_write_total[6m]))) > 0 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_in_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_out_traffic_bytes_total[6m]))) > 0 1", "kind: Service apiVersion: v1 metadata: name: node-exporter-service 1 namespace: dynamation 2 labels: servicetype: metrics 3 spec: ports: - name: exmet 4 protocol: TCP port: 9100 5 targetPort: 9100 6 type: ClusterIP selector: monitor: metrics 7", "oc create -f node-exporter-service.yaml", "wget https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz", "sudo tar xvf node_exporter-1.3.1.linux-amd64.tar.gz --directory /usr/bin --strip 1 \"*/node_exporter\"", "[Unit] Description=Prometheus Metrics Exporter After=network.target StartLimitIntervalSec=0 [Service] Type=simple Restart=always RestartSec=1 User=root ExecStart=/usr/bin/node_exporter [Install] WantedBy=multi-user.target", "sudo systemctl enable node_exporter.service sudo systemctl start node_exporter.service", "curl http://localhost:9100/metrics", "go_gc_duration_seconds{quantile=\"0\"} 1.5244e-05 go_gc_duration_seconds{quantile=\"0.25\"} 3.0449e-05 go_gc_duration_seconds{quantile=\"0.5\"} 3.7913e-05", "spec: template: metadata: labels: monitor: metrics", "oc get service -n <namespace> <node-exporter-service>", "curl http://<172.30.226.162:9100>/metrics | grep -vE \"^#|^USD\"", "node_arp_entries{device=\"eth0\"} 1 node_boot_time_seconds 1.643153218e+09 node_context_switches_total 4.4938158e+07 node_cooling_device_cur_state{name=\"0\",type=\"Processor\"} 0 node_cooling_device_max_state{name=\"0\",type=\"Processor\"} 0 node_cpu_guest_seconds_total{cpu=\"0\",mode=\"nice\"} 0 node_cpu_guest_seconds_total{cpu=\"0\",mode=\"user\"} 0 node_cpu_seconds_total{cpu=\"0\",mode=\"idle\"} 1.10586485e+06 node_cpu_seconds_total{cpu=\"0\",mode=\"iowait\"} 37.61 node_cpu_seconds_total{cpu=\"0\",mode=\"irq\"} 233.91 node_cpu_seconds_total{cpu=\"0\",mode=\"nice\"} 551.47 node_cpu_seconds_total{cpu=\"0\",mode=\"softirq\"} 87.3 node_cpu_seconds_total{cpu=\"0\",mode=\"steal\"} 86.12 node_cpu_seconds_total{cpu=\"0\",mode=\"system\"} 464.15 node_cpu_seconds_total{cpu=\"0\",mode=\"user\"} 1075.2 node_disk_discard_time_seconds_total{device=\"vda\"} 0 node_disk_discard_time_seconds_total{device=\"vdb\"} 0 node_disk_discarded_sectors_total{device=\"vda\"} 0 node_disk_discarded_sectors_total{device=\"vdb\"} 0 node_disk_discards_completed_total{device=\"vda\"} 0 node_disk_discards_completed_total{device=\"vdb\"} 0 node_disk_discards_merged_total{device=\"vda\"} 0 node_disk_discards_merged_total{device=\"vdb\"} 0 node_disk_info{device=\"vda\",major=\"252\",minor=\"0\"} 1 node_disk_info{device=\"vdb\",major=\"252\",minor=\"16\"} 1 node_disk_io_now{device=\"vda\"} 0 node_disk_io_now{device=\"vdb\"} 0 node_disk_io_time_seconds_total{device=\"vda\"} 174 node_disk_io_time_seconds_total{device=\"vdb\"} 
0.054 node_disk_io_time_weighted_seconds_total{device=\"vda\"} 259.79200000000003 node_disk_io_time_weighted_seconds_total{device=\"vdb\"} 0.039 node_disk_read_bytes_total{device=\"vda\"} 3.71867136e+08 node_disk_read_bytes_total{device=\"vdb\"} 366592 node_disk_read_time_seconds_total{device=\"vda\"} 19.128 node_disk_read_time_seconds_total{device=\"vdb\"} 0.039 node_disk_reads_completed_total{device=\"vda\"} 5619 node_disk_reads_completed_total{device=\"vdb\"} 96 node_disk_reads_merged_total{device=\"vda\"} 5 node_disk_reads_merged_total{device=\"vdb\"} 0 node_disk_write_time_seconds_total{device=\"vda\"} 240.66400000000002 node_disk_write_time_seconds_total{device=\"vdb\"} 0 node_disk_writes_completed_total{device=\"vda\"} 71584 node_disk_writes_completed_total{device=\"vdb\"} 0 node_disk_writes_merged_total{device=\"vda\"} 19761 node_disk_writes_merged_total{device=\"vdb\"} 0 node_disk_written_bytes_total{device=\"vda\"} 2.007924224e+09 node_disk_written_bytes_total{device=\"vdb\"} 0", "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: k8s-app: node-exporter-metrics-monitor name: node-exporter-metrics-monitor 1 namespace: dynamation 2 spec: endpoints: - interval: 30s 3 port: exmet 4 scheme: http selector: matchLabels: servicetype: metrics", "oc create -f node-exporter-metrics-monitor.yaml", "oc expose service -n <namespace> <node_exporter_service_name>", "oc get route -o=custom-columns=NAME:.metadata.name,DNS:.spec.host", "NAME DNS node-exporter-service node-exporter-service-dynamation.apps.cluster.example.org", "curl -s http://node-exporter-service-dynamation.apps.cluster.example.org/metrics", "go_gc_duration_seconds{quantile=\"0\"} 1.5382e-05 go_gc_duration_seconds{quantile=\"0.25\"} 3.1163e-05 go_gc_duration_seconds{quantile=\"0.5\"} 3.8546e-05 go_gc_duration_seconds{quantile=\"0.75\"} 4.9139e-05 go_gc_duration_seconds{quantile=\"1\"} 0.000189423", "export KMP_NAMESPACE=\"USD(oc get pod -A --no-headers -l control-plane=mac-controller-manager | awk '{print USD1}')\"", "export KMP_NAME=\"USD(oc get pod -A --no-headers -l control-plane=mac-controller-manager | awk '{print USD2}')\"", "oc describe pod -n USDKMP_NAMESPACE USDKMP_NAME", "oc logs -n USDKMP_NAMESPACE USDKMP_NAME", "export NAMESPACE=\"USD(oc get deployment -A | grep ssp-operator | awk '{print USD1}')\"", "oc -n USDNAMESPACE describe pods -l control-plane=ssp-operator", "oc -n USDNAMESPACE logs --tail=-1 -l control-plane=ssp-operator", "export NAMESPACE=\"USD(USD oc get deployment -A | grep ssp-operator | awk '{print USD1}')\"", "oc -n USDNAMESPACE get pods -l name=virt-template-validator", "oc -n USDNAMESPACE describe pods -l name=virt-template-validator", "oc -n USDNAMESPACE logs --tail=-1 -l name=virt-template-validator", "export NAMESPACE=\"USD(oc get deployment -A | grep ssp-operator | awk '{print USD1}')\"", "oc -n USDNAMESPACE get pods -l control-plane=ssp-operator", "oc -n USDNAMESPACE describe pods -l control-plane=ssp-operator", "oc -n USDNAMESPACE logs --tail=-1 -l control-plane=ssp-operator", "export NAMESPACE_SSP=\"USD(oc get deployment -A | grep ssp-operator | awk '{print USD1}')\"", "export NAMESPACE=\"USD(oc get deployment -A | grep virt-template-validator | awk '{print USD1}')\"", "oc -n USDNAMESPACE get pods -l name=virt-template-validator", "oc -n USDNAMESPACE_SSP describe pods -l name=ssp-operator", "oc -n USDNAMESPACE_SSP logs --tail=-1 -l name=ssp-operator", "oc -n USDNAMESPACE describe pods -l name=virt-template-validator", "oc -n USDNAMESPACE logs --tail=-1 -l 
name=virt-template-validator", "export NAMESPACE=\"USD(oc get kubevirt -A -o custom-columns=\"\":.metadata.namespace)\"", "oc -n USDNAMESPACE get pods -l kubevirt.io=virt-operator", "oc -n USDNAMESPACE logs <pod-name>", "oc -n USDNAMESPACE describe pod <pod-name>", "oc -n USDNAMESPACE logs <pod-name> |grep lead", "{\"component\":\"virt-operator\",\"level\":\"info\",\"msg\":\"Attempting to acquire leader status\",\"pos\":\"application.go:400\",\"timestamp\":\"2021-11-30T12:15:18.635387Z\"} I1130 12:15:18.635452 1 leaderelection.go:243] attempting to acquire leader lease <namespace>/virt-operator I1130 12:15:19.216582 1 leaderelection.go:253] successfully acquired lease <namespace>/virt-operator", "{\"component\":\"virt-operator\",\"level\":\"info\",\"msg\":\"Started leading\",\"pos\":\"application.go:385\",\"timestamp\":\"2021-11-30T12:15:19.216836Z\"}", "oc -n USDNAMESPACE logs <pod-name> |grep lead", "{\"component\":\"virt-operator\",\"level\":\"info\",\"msg\":\"Attempting to acquire leader status\",\"pos\":\"application.go:400\",\"timestamp\":\"2021-11-30T12:15:20.533696Z\"} I1130 12:15:20.533792 1 leaderelection.go:243] attempting to acquire leader lease <namespace>/virt-operator", "oc -n USDNAMESPACE get deployment virt-controller -o yaml", "get pods -n USDNAMESPACE |grep virt-controller", "oc -n USDNAMESPACE describe pods <virt-controller pod>", "oc -n USDNAMESPACE logs <virt-controller pod>", "oc get nodes", "oc -n USDNAMESPACE get deployment virt-operator -o yaml", "oc -n USDNAMESPACE describe pods <virt-operator pod>", "oc -n USDNAMESPACE logs <virt-operator pod>", "oc get nodes", "export NAMESPACE=\"USD(oc get kubevirt -A -o custom-columns=\"\":.metadata.namespace)\"", "oc -n USDNAMESPACE get pods -l kubevirt.io=virt-api", "oc -n USDNAMESPACE get deployment virt-api -o yaml", "oc -n USDNAMESPACE describe deployment virt-api", "oc get nodes", "export NAMESPACE=\"USD(oc get kubevirt -A -o custom-columns=\"\":.metadata.namespace)\"", "oc -n USDNAMESPACE get pods -l kubevirt.io=virt-api", "oc -n USDNAMESPACE get deployment virt-api -o yaml", "oc -n USDNAMESPACE describe deployment virt-api", "oc get nodes", "export NAMESPACE=\"USD(oc get kubevirt -A -o custom-columns=\"\":.metadata.namespace)\"", "oc get deployment -n USDNAMESPACE virt-controller -o yaml", "oc -n USDNAMESPACE describe pods <virt-controller pod>", "oc -n USDNAMESPACE logs <virt-controller pod>", "oc get logs <virt-controller-pod>", "export NAMESPACE=\"USD(oc get kubevirt -A -o custom-columns=\"\":.metadata.namespace)\"", "oc get deployment -n USDNAMESPACE virt-operator -o yaml", "oc -n USDNAMESPACE describe pods <virt-operator pod>", "oc -n USDNAMESPACE logs <virt-operator pod>", "oc get logs <virt-operator-pod>", "export NAMESPACE=\"USD(oc get kubevirt -A -o custom-columns=\"\":.metadata.namespace)\"", "oc -n USDNAMESPACE get pods -l kubevirt.io=virt-operator", "oc -n USDNAMESPACE logs <pod-name>", "oc -n USDNAMESPACE describe pod <pod-name>", "oc adm must-gather --image-stream=openshift/must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.10.10", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.10.10 -- <environment_variable_1> <environment_variable_2> <script_name>", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.10.10 -- NS=mynamespace VM=my-vm gather_vms_details 1", "oc adm must-gather 
--image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.10.10 -- PROS=3 gather", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.10.10 -- gather_images" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html-single/virtualization/index
Chapter 96. Kamelet Main
Chapter 96. Kamelet Main Since Camel 3.11 A main class that is opinionated to bootstrap and run Camel standalone with Kamelets (or plain YAML routes) for development and demo purposes. 96.1. Initial configuration The KameletMain is pre-configured with the following properties: camel.component.kamelet.location = classpath:/kamelets,github:apache:camel-kamelets/kamelets camel.component.rest.consumerComponentName = platform-http camel.component.rest.producerComponentName = vertx-http You can override these settings by updating the configuration in application.properties . 96.2. Automatic dependencies downloading The Kamelet Main can automatically download Kamelet YAML files from a remote location over http/https, and from github as well. The official Kamelets from the Apache Camel Kamelet Catalog are stored on github and can be used out of the box as-is. For example, a Camel route can be coded in YAML that uses the Earthquake Kamelet from the catalog, as shown below: - route: from: "kamelet:earthquake-source" steps: - unmarshal: json: {} - log: "Earthquake with magnitude ${body[properties][mag]} at ${body[properties][place]}" In the above example, the earthquake Kamelet is downloaded from github, along with its required dependencies. For more information, see the Kamelet Main example.
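Tying the two sections together: overriding camel.component.kamelet.location in application.properties lets a route such as the one above resolve Kamelets from a local folder before falling back to the bundled and github locations. The following is a minimal sketch; the my-kamelets directory name is illustrative, and it assumes the file: resource prefix is available to Camel's resource loader:

# application.properties - illustrative override of the pre-configured location
camel.component.kamelet.location = file:my-kamelets,classpath:/kamelets,github:apache:camel-kamelets/kamelets

Kamelets found in my-kamelets take precedence; anything not found locally is still resolved from the classpath or downloaded from github as described above.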
[ "camel.component.kamelet.location = classpath:/kamelets,github:apache:camel-kamelets/kamelets camel.component.rest.consumerComponentName = platform-http camel.component.rest.producerComponentName = vertx-http", "- route: from: \"kamelet:earthquake-source\" steps: - unmarshal: json: {} - log: \"Earthquake with magnitude ${body[properties][mag]} at ${body[properties][place]}\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-kamelet-main-component-starter
14.6. Compatibility with Older Systems
14.6. Compatibility with Older Systems If an ACL has been set on any file on a given file system, that file system has the ext_attr attribute. This attribute can be seen using the following command: A file system that has acquired the ext_attr attribute can be mounted with older kernels, but those kernels do not enforce any ACLs which have been set. Versions of the e2fsck utility included in version 1.22 and higher of the e2fsprogs package (including the versions in Red Hat Enterprise Linux 2.1 and 4) can check a file system with the ext_attr attribute. Older versions refuse to check it.
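As a minimal sketch of that check (the device name /dev/sda1 is a placeholder for your file system device), filter the tune2fs output for the feature list and look for ext_attr:

# List the file system features; ext_attr appears once an ACL has been set
tune2fs -l /dev/sda1 | grep 'Filesystem features'

If ext_attr shows up in the reported features, the file system can still be mounted by an older kernel, but only a kernel that supports ACLs will enforce them, as described above.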
[ "tune2fs -l <filesystem-device>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/access_control_lists-compatibility_with_older_systems
OpenShift AI tutorial - Fraud detection example
OpenShift AI tutorial - Fraud detection example Red Hat OpenShift AI Cloud Service 1 Use OpenShift AI to train an example model in JupyterLab, deploy the model, and refine the model by using automated pipelines
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/openshift_ai_tutorial_-_fraud_detection_example/index
Chapter 13. Security
Chapter 13. Security TLS 1.2 support added to basic system components With these updates, basic system tools, such as yum , stunnel , vsftpd , Git , or Postfix have been modified to support the 1.2 version of the TLS protocol. This is to ensure that the tools are not vulnerable to security exploits that exist for older versions of the protocol. (BZ#1253743) NSS now enables the TLS version 1.2 protocol by default In order to satisfy current best security practices, the Transport Layer Security (TLS) 1.2 protocol has been enabled by default in NSS. This means that it is no longer necessary to explicitly enable it in applications that use NSS library defaults. If both sides of TLS connection enable TLS 1.2, this protocol version is now used automatically. (BZ# 1272504 ) pycurl now provides options to require TLSv1.1 or 1.2 With this update, pycurl has been enhanced to support options that make it possible to require the use of the 1.1 or 1.2 versions of the TLS protocol, which improves the security of communication. (BZ# 1260406 ) PHP cURL module now supports TLS 1.1 and TLS 1.2 Support for the TLS protocol version 1.1 and 1.2, which was previously made available in the curl library, has been added to the PHP cURL extension. (BZ# 1255920 ) openswan deprecated in favor of libreswan The openswan packages have been deprecated, and libreswan packages have been introduced as a direct replacement for openswan . libreswan is a more stable and secure VPN solution for Red Hat Enterprise Linux 6. libreswan is already available as the VPN endpoint solution for Red Hat Enterprise Linux 7. openswan will be replaced by libreswan during system upgrade. See https://access.redhat.com/articles/2089191 for instructions on how to migrate from openswan to libreswan . Note that the openswan packages remain available in the repository. To install openswan instead of libreswan , use the -x option of yum to exclude libreswan : yum install openswan -x libreswan . (BZ#1266222) SELinux support added for GlusterFS With this update, the SELinux mandatory access control is provided for the glusterd (GlusterFS Management Service) and glusterfsd (NFS server) processes as a part of Red Hat Gluster Storage. (BZ#1241112) shadow-utils rebased to version 4.1.5.1 The shadow-utils package, which provides utilities for managing user and group accounts, has been rebased to version 4.1.5.1. This is the same as the version of shadow-utils in Red Hat Enterprise Linux 7. Enhancements include improved auditing, which was corrected to provide a better record of system-administrator actions on the user-account database. The main new feature added to this package is the support for operation in chroot environments using the --root option of the respective tools. (BZ#1257643) audit rebased to version 2.4.5 The audit package, which provides the user-space utilities for storing and searching the audit records generated by the audit subsystem in the Linux kernel, has been rebased to version 2.4.5. This update includes enhanced event interpretation facilities that provide more system-call names and arguments to make the understanding of events easier. This update also has an important behavior change in the way that auditd records events to disk. If you are using either data or sync modes for the flush setting in auditd.conf , you will see a performance decrease in auditd's ability to log events. This is because it was previously not properly informing the kernel that full synchronous writes should be used. 
This was corrected, which has improved the reliability of the operation, but this has come at the expense of performance. If the performance drop is not tolerable, the flush setting should be changed to incremental and the freq setting will control how often auditd instructs the kernel to synchronize all records to disk. A freq setting of 100 should give good performance while making sure that new records are flushed to disk periodically. (BZ# 1257650 ) LWP now supports host name and certificate verification Certificate and host-name verification, which is disabled by default, has been implemented in the World Wide Web library for Perl (LWP, also called libwww-perl). This allows users of the LWP::UserAgent Perl module to verify the identity of HTTPS servers. To enable the verification, make sure the IO::Socket::SSL Perl module is installed and the PERL_LWP_SSL_VERIFY_HOSTNAME environment variable set to 1 or that the application is modified to set the ssl_opts option correctly. See LWP::UserAgent POD for more details. (BZ# 745800 ) Perl Net:SSLeay now supports elliptic curve parameters Support for elliptic-curve parameters has been added to the Perl Net:SSLeay module, which contains bindings to the OpenSSL library. Namely, the EC_KEY_new_by_curve_name() , EC_KEY_free*() , SSL_CTX_set_tmp_ecdh() , and OBJ_txt2nid() subroutines have been ported from upstream. This is required for the support of the Elliptic Curve Diffie-Hellman Exchange (ECDHE) key exchange in the IO::Socket::SSL Perl module. (BZ# 1044401 ) Perl IO::Socket::SSL now supports ECDHE Support for Elliptic Curve Diffie-Hellman Exchange (ECDHE) has been added to the IO::Socket::SSL Perl module. The new SSL_ecdh_curve option can be used for specifying a suitable curve by the Object Identifier (OID) or Name Identifier (NID). As a result, it is now possible to override the default elliptic curve parameters when implementing a TLS client using IO::Socket:SSL . (BZ# 1078084 ) openscap rebased to version 1.2.8 OpenSCAP, a set of libraries providing a path for the integration of SCAP standards, has been rebased to 1.2.8, the latest upstream version. Notable enhancements include support for the OVAL-5.11 and OVAL-5.11.1 language versions, the introduction of a verbose mode, which helps to understand the details of running scans, two new commands, oscap-ssh and oscap-vm , for scanning over SSH and scanning of inactive virtual systems respectively, native support for bz2 archives, and a modern interface for HTML reports and guides. (BZ# 1259037 ) scap-workbench rebased to version 1.1.1 The scap-workbench package has been rebased to version 1.1.1, which provides a new SCAP Security Guide integration dialog. It can help the administrator choose a product that needs to be scanned instead of choosing content files. The new version also offers a number of performance and user-experience improvements, including improved rule searching in the tailoring window and the possibility to fetch remote resources in SCAP content using the GUI. (BZ#1269551) scap-security-guide rebased to version 0.1.28 The scap-security-guide package has been rebased to the latest upstream version (0.1.28), which offers a number of important fixes and enhancements. These include several improved or completely new profiles for both Red Hat Enterprise Linux 6 and 7, added automated checks and remediation scripts for many rules, human readable OVAL IDs that are consistent between releases, or HTML-formatted guides accompanying each profile. 
(BZ# 1267509 ) Support for SSLv3 and RC4 disabled in luci The use of the insecure SSLv3 protocol and RC4 algorithm has been disabled in luci , the web-based high availability administration application. By default, only TLSv1.0 and higher protocol versions are allowed, and the digest algorithm used for self-managed certificates has been updated to SHA256. It is possible to re-enable SSLv3 (by uncommenting the allow_insecure options in relevant sections of the /etc/sysconfig/luci configuration file), but that is only for unlikely and unpredictable cases and should be used with extreme caution. This update also adds the possibility to adjust the most important SSL/TLS properties (in addition to the mentioned allow_insecure ): the path to the certificate pair and the cipher list. These settings can be used either globally, or independently for both secure channels (HTTPS web UI access and connection with ricci instances). (BZ# 1156167 )
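Returning to the LWP item above, the environment-variable route is the quickest way to switch verification on for an existing script without changing its code. A minimal sketch, where my-lwp-client.pl stands in for your own LWP::UserAgent-based script and IO::Socket::SSL is assumed to be installed as noted:

# Enable host name and certificate verification for LWP::UserAgent-based code
export PERL_LWP_SSL_VERIFY_HOSTNAME=1
perl my-lwp-client.pl

Applications that need finer control can instead set the ssl_opts option on LWP::UserAgent directly, as the LWP item mentions.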
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.8_release_notes/new_features_security
Chapter 140. KafkaBridgeTemplate schema reference
Chapter 140. KafkaBridgeTemplate schema reference Used in: KafkaBridgeSpec Property Property type Description deployment DeploymentTemplate Template for Kafka Bridge Deployment . pod PodTemplate Template for Kafka Bridge Pods . apiService InternalServiceTemplate Template for Kafka Bridge API Service . podDisruptionBudget PodDisruptionBudgetTemplate Template for Kafka Bridge PodDisruptionBudget . bridgeContainer ContainerTemplate Template for the Kafka Bridge container. clusterRoleBinding ResourceTemplate Template for the Kafka Bridge ClusterRoleBinding. serviceAccount ResourceTemplate Template for the Kafka Bridge service account. initContainer ContainerTemplate Template for the Kafka Bridge init container.
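A minimal sketch of where this template block sits in a KafkaBridge resource follows; the apiVersion, resource name, and the nested metadata.labels and env fields (which belong to the referenced PodTemplate and ContainerTemplate schemas) are illustrative assumptions rather than part of the table above:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  # ...existing Kafka Bridge configuration...
  template:
    pod:
      metadata:
        labels:
          app.kubernetes.io/part-of: my-bridge  # assumed PodTemplate field, for illustration
    bridgeContainer:
      env:  # assumed ContainerTemplate field, for illustration
        - name: EXAMPLE_ENV
          value: example

Only the template, pod, and bridgeContainer property names come from the schema reference above; the nested values simply show the shape a customization could take.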
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-KafkaBridgeTemplate-reference
Chapter 111. Platform HTTP
Chapter 111. Platform HTTP Since Camel 3.0 Only consumer is supported The Platform HTTP is used to allow Camel to use the existing HTTP server from the runtime, for example when running Camel on Spring Boot, Quarkus, or other runtimes. 111.1. Dependencies When using platform-http with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-platform-http-starter</artifactId> </dependency> 111.2. Platform HTTP Provider To use Platform HTTP a provider (engine) is required to be available on the classpath. The purpose is to have drivers for different runtimes such as Quarkus, VertX, or Spring Boot. At this moment there is only support for Quarkus and VertX by camel-platform-http-vertx . This JAR must be on the classpath otherwise the Platform HTTP component cannot be used and an exception will be thrown on startup. <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-platform-http-vertx</artifactId> <version>4.8.3.redhat-00004</version> <!-- use the same version as your Camel core version --> </dependency> 111.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 111.3.1. Configuring Component Options At the component level, you set general and shared configurations that are, then, inherited by the endpoints. It is the highest configuration level. For example, a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. You can configure components using: the Component DSL . in a configuration file (application.properties, *.yaml files, etc). directly in the Java code. 111.3.2. Configuring Endpoint Options You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders . Property placeholders provide a few benefits: They help prevent using hardcoded urls, port numbers, sensitive information, and other settings. They allow externalizing the configuration from the code. They help the code to become more flexible and reusable. The following two sections list all the options, firstly for the component followed by the endpoint. 111.4. Component Options The Platform HTTP component supports 5 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. 
false boolean handleWriteResponseError (consumer) When Camel is complete processing the message, and the HTTP server is writing response. This option controls whether Camel should catch any failure during writing response and store this on the Exchange, which allows onCompletion/UnitOfWork to regard the Exchange as failed and have access to the caused exception from the HTTP server. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean engine (advanced) An HTTP Server engine implementation to serve the requests. PlatformHttpEngine headerFilterStrategy (filter) To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy 111.4.1. Endpoint Options The Platform HTTP endpoint is configured using URI syntax: with the following path and query parameters: 111.4.1.1. Path Parameters (1 parameters) Name Description Default Type path (consumer) Required The path under which this endpoint serves the HTTP requests, for proxy use 'proxy'. String 111.4.1.2. Query Parameters (21 parameters) Name Description Default Type consumes (consumer) The content type this endpoint accepts as an input, such as application/xml or application/json. null or / mean no restriction. String cookieDomain (consumer) Sets which server can receive cookies. String cookieHttpOnly (consumer) Sets whether to prevent client side scripts from accessing created cookies. false boolean cookieMaxAge (consumer) Sets the maximum cookie age in seconds. Long cookiePath (consumer) Sets the URL path that must exist in the requested URL in order to send the Cookie. / String cookieSameSite (consumer) Sets whether to prevent the browser from sending cookies along with cross-site requests. Enum values: STRICT LAX NONE Lax CookieSameSite cookieSecure (consumer) Sets whether the cookie is only sent to the server with an encrypted request over HTTPS. false boolean handleWriteResponseError (consumer) When Camel is complete processing the message, and the HTTP server is writing response. This option controls whether Camel should catch any failure during writing response and store this on the Exchange, which allows onCompletion/UnitOfWork to regard the Exchange as failed and have access to the caused exception from the HTTP server. false boolean httpMethodRestrict (consumer) A comma separated list of HTTP methods to serve, e.g. GET,POST . If no methods are specified, all methods will be served. String matchOnUriPrefix (consumer) Whether or not the consumer should try to find a target consumer by matching the URI prefix if no exact match is found. false boolean muteException (consumer) If enabled and an Exchange failed processing on the consumer side the response's body won't contain the exception's stack trace. true boolean produces (consumer) The content type this endpoint produces, such as application/xml or application/json. String returnHttpRequestHeaders (consumer) Whether to include HTTP request headers (Accept, User-Agent, etc.) into HTTP response produced by this endpoint. 
false boolean useCookieHandler (consumer) Whether to enable the Cookie Handler that allows Cookie addition, expiry, and retrieval (currently only supported by camel-platform-http-vertx). false boolean useStreaming (consumer) Whether to use streaming for large requests and responses (currently only supported by camel-platform-http-vertx). false boolean bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern fileNameExtWhitelist (consumer (advanced)) A comma or whitespace separated list of file extensions. Uploads having these extensions will be stored locally. Null value or asterisk () will allow all files. String headerFilterStrategy (advanced) To use a custom HeaderFilterStrategy to filter headers to and from Camel message. HeaderFilterStrategy platformHttpEngine (advanced) An HTTP Server engine implementation to serve the requests of this endpoint. PlatformHttpEngine 111.5. Implementing a reverse proxy Platform HTTP component can act as a reverse proxy, in that case some headers are populated from the absolute URL received on the request line of the HTTP request. Those headers are specific to the underlining platform. At this moment, this feature is only supported for Vert.x in camel-platform-http-vertx component. 111.6. Spring Boot Auto-Configuration The component supports 6 options, which are listed below. Name Description Default Type camel.component.platform-http.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.platform-http.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.platform-http.enabled Whether to enable auto configuration of the platform-http component. This is enabled by default. Boolean camel.component.platform-http.engine An HTTP Server engine implementation to serve the requests. The option is a org.apache.camel.component.platform.http.spi.PlatformHttpEngine type. 
PlatformHttpEngine camel.component.platform-http.handle-write-response-error When Camel is complete processing the message, and the HTTP server is writing response. This option controls whether Camel should catch any failure during writing response and store this on the Exchange, which allows onCompletion/UnitOfWork to regard the Exchange as failed and have access to the caused exception from the HTTP server. false Boolean camel.component.platform-http.header-filter-strategy To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type. HeaderFilterStrategy
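The Spring Boot auto-configuration options above map directly onto application.properties keys; the following is a minimal sketch with illustrative values rather than recommended settings:

# application.properties - illustrative use of the auto-configuration options listed above
camel.component.platform-http.bridge-error-handler = true
camel.component.platform-http.handle-write-response-error = true

A route then consumes from the endpoint URI shown earlier, for example platform-http:/hello, where the path part names the HTTP path the endpoint serves.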
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-platform-http-starter</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-platform-http-vertx</artifactId> <version>4.8.3.redhat-00004</version> <!-- use the same version as your Camel core version --> </dependency>", "platform-http:path" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-platform-http-component-starter
Chapter 14. Using the Stream Control Transmission Protocol (SCTP) on a bare metal cluster
Chapter 14. Using the Stream Control Transmission Protocol (SCTP) on a bare metal cluster As a cluster administrator, you can use the Stream Control Transmission Protocol (SCTP) on a cluster. 14.1. Support for Stream Control Transmission Protocol (SCTP) on OpenShift Container Platform As a cluster administrator, you can enable SCTP on the hosts in the cluster. On Red Hat Enterprise Linux CoreOS (RHCOS), the SCTP module is disabled by default. SCTP is a reliable message based protocol that runs on top of an IP network. When enabled, you can use SCTP as a protocol with pods, services, and network policy. A Service object must be defined with the type parameter set to either the ClusterIP or NodePort value. 14.1.1. Example configurations using SCTP protocol You can configure a pod or service to use SCTP by setting the protocol parameter to the SCTP value in the pod or service object. In the following example, a pod is configured to use SCTP: apiVersion: v1 kind: Pod metadata: namespace: project1 name: example-pod spec: containers: - name: example-pod ... ports: - containerPort: 30100 name: sctpserver protocol: SCTP In the following example, a service is configured to use SCTP: apiVersion: v1 kind: Service metadata: namespace: project1 name: sctpserver spec: ... ports: - name: sctpserver protocol: SCTP port: 30100 targetPort: 30100 type: ClusterIP In the following example, a NetworkPolicy object is configured to apply to SCTP network traffic on port 80 from any pods with a specific label: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-sctp-on-http spec: podSelector: matchLabels: role: web ingress: - ports: - protocol: SCTP port: 80 14.2. Enabling Stream Control Transmission Protocol (SCTP) As a cluster administrator, you can load and enable the blacklisted SCTP kernel module on worker nodes in your cluster. Prerequisites Install the OpenShift CLI ( oc ). Access to the cluster as a user with the cluster-admin role. Procedure Create a file named load-sctp-module.yaml that contains the following YAML definition: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: load-sctp-module labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/modprobe.d/sctp-blacklist.conf mode: 0644 overwrite: true contents: source: data:, - path: /etc/modules-load.d/sctp-load.conf mode: 0644 overwrite: true contents: source: data:,sctp To create the MachineConfig object, enter the following command: USD oc create -f load-sctp-module.yaml Optional: To watch the status of the nodes while the MachineConfig Operator applies the configuration change, enter the following command. When the status of a node transitions to Ready , the configuration update is applied. USD oc get nodes 14.3. Verifying Stream Control Transmission Protocol (SCTP) is enabled You can verify that SCTP is working on a cluster by creating a pod with an application that listens for SCTP traffic, associating it with a service, and then connecting to the exposed service. Prerequisites Access to the internet from the cluster to install the nc package. Install the OpenShift CLI ( oc ). Access to the cluster as a user with the cluster-admin role. 
Procedure Create a pod that starts an SCTP listener: Create a file named sctp-server.yaml that defines a pod with the following YAML: apiVersion: v1 kind: Pod metadata: name: sctpserver labels: app: sctpserver spec: containers: - name: sctpserver image: registry.access.redhat.com/ubi8/ubi command: ["/bin/sh", "-c"] args: ["dnf install -y nc && sleep inf"] ports: - containerPort: 30102 name: sctpserver protocol: SCTP Create the pod by entering the following command: USD oc create -f sctp-server.yaml Create a service for the SCTP listener pod. Create a file named sctp-service.yaml that defines a service with the following YAML: apiVersion: v1 kind: Service metadata: name: sctpservice labels: app: sctpserver spec: type: NodePort selector: app: sctpserver ports: - name: sctpserver protocol: SCTP port: 30102 targetPort: 30102 To create the service, enter the following command: USD oc create -f sctp-service.yaml Create a pod for the SCTP client. Create a file named sctp-client.yaml with the following YAML: apiVersion: v1 kind: Pod metadata: name: sctpclient labels: app: sctpclient spec: containers: - name: sctpclient image: registry.access.redhat.com/ubi8/ubi command: ["/bin/sh", "-c"] args: ["dnf install -y nc && sleep inf"] To create the Pod object, enter the following command: USD oc apply -f sctp-client.yaml Run an SCTP listener on the server. To connect to the server pod, enter the following command: USD oc rsh sctpserver To start the SCTP listener, enter the following command: USD nc -l 30102 --sctp Connect to the SCTP listener on the server. Open a new terminal window or tab in your terminal program. Obtain the IP address of the sctpservice service. Enter the following command: USD oc get services sctpservice -o go-template='{{.spec.clusterIP}}{{"\n"}}' To connect to the client pod, enter the following command: USD oc rsh sctpclient To start the SCTP client, enter the following command. Replace <cluster_IP> with the cluster IP address of the sctpservice service. # nc <cluster_IP> 30102 --sctp
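As an optional extra spot-check that is not part of the documented procedure, you can confirm that the sctp kernel module loaded by the MachineConfig is present on a worker node. This sketch assumes the usual oc debug node pattern for running a host command, with <node_name> as a placeholder for one of your worker nodes:

oc debug node/<node_name> -- chroot /host sh -c 'lsmod | grep sctp'

If the module is loaded, the output includes an sctp entry; if it is absent, re-check that the MachineConfig from the enabling procedure has been applied to the node's machine config pool.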
[ "apiVersion: v1 kind: Pod metadata: namespace: project1 name: example-pod spec: containers: - name: example-pod ports: - containerPort: 30100 name: sctpserver protocol: SCTP", "apiVersion: v1 kind: Service metadata: namespace: project1 name: sctpserver spec: ports: - name: sctpserver protocol: SCTP port: 30100 targetPort: 30100 type: ClusterIP", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-sctp-on-http spec: podSelector: matchLabels: role: web ingress: - ports: - protocol: SCTP port: 80", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: load-sctp-module labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/modprobe.d/sctp-blacklist.conf mode: 0644 overwrite: true contents: source: data:, - path: /etc/modules-load.d/sctp-load.conf mode: 0644 overwrite: true contents: source: data:,sctp", "oc create -f load-sctp-module.yaml", "oc get nodes", "apiVersion: v1 kind: Pod metadata: name: sctpserver labels: app: sctpserver spec: containers: - name: sctpserver image: registry.access.redhat.com/ubi8/ubi command: [\"/bin/sh\", \"-c\"] args: [\"dnf install -y nc && sleep inf\"] ports: - containerPort: 30102 name: sctpserver protocol: SCTP", "oc create -f sctp-server.yaml", "apiVersion: v1 kind: Service metadata: name: sctpservice labels: app: sctpserver spec: type: NodePort selector: app: sctpserver ports: - name: sctpserver protocol: SCTP port: 30102 targetPort: 30102", "oc create -f sctp-service.yaml", "apiVersion: v1 kind: Pod metadata: name: sctpclient labels: app: sctpclient spec: containers: - name: sctpclient image: registry.access.redhat.com/ubi8/ubi command: [\"/bin/sh\", \"-c\"] args: [\"dnf install -y nc && sleep inf\"]", "oc apply -f sctp-client.yaml", "oc rsh sctpserver", "nc -l 30102 --sctp", "oc get services sctpservice -o go-template='{{.spec.clusterIP}}{{\"\\n\"}}'", "oc rsh sctpclient", "nc <cluster_IP> 30102 --sctp" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/networking/using-sctp
Red Hat build of Apache Camel for Quarkus Reference
Red Hat build of Apache Camel for Quarkus Reference
Red Hat build of Apache Camel 4.8
Red Hat build of Apache Camel for Quarkus provided by Red Hat
[ "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-amqp</artifactId> </dependency>", "<dependency> <groupId>io.quarkiverse.messaginghub</groupId> <artifactId>quarkus-pooled-jms</artifactId> </dependency>", "quarkus.qpid-jms.wrap=true", "@RegisterForReflection(targets = { IllegalStateException.class, MyCustomException.class }, serialization = true)", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-attachments</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-avro</artifactId> </dependency>", "<plugin> <groupId>io.quarkus</groupId> <artifactId>quarkus-maven-plugin</artifactId> <executions> <execution> <id>generate-code-and-build</id> <goals> <goal>generate-code</goal> <goal>build</goal> </goals> </execution> </executions> </plugin>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-aws2-cw</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-aws2-ddb</artifactId> </dependency>", "quarkus.dynamodb.sync-client.type=apache", "import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.arc.Unremovable; import software.amazon.awssdk.services.dynamodb.DynamoDbClient; @ApplicationScoped @Unremovable class UnremovableDynamoDbClient { @Inject DynamoDbClient dynamoDbClient; }", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-aws2-kinesis</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-aws2-lambda</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-aws2-s3</artifactId> </dependency>", "quarkus.s3.sync-client.type=apache", "import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.arc.Unremovable; import software.amazon.awssdk.services.s3.S3Client; @ApplicationScoped @Unremovable class UnremovableS3Client { @Inject S3Client s3Client; }", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-aws2-sns</artifactId> </dependency>", "quarkus.sns.sync-client.type=apache", "import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.arc.Unremovable; import software.amazon.awssdk.services.sns.SnsClient; @ApplicationScoped @Unremovable class UnremovableSnsClient { @Inject SnsClient snsClient; }", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-aws2-sqs</artifactId> </dependency>", "quarkus.sqs.sync-client.type=apache", "import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.arc.Unremovable; import software.amazon.awssdk.services.sqs.SqsClient; @ApplicationScoped @Unremovable class UnremovableSqsClient { @Inject SqsClient sqsClient; }", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-azure-eventhubs</artifactId> </dependency>", "<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-micrometer</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-azure-key-vault</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-azure-servicebus</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-azure-storage-blob</artifactId> </dependency>", "<dependency> <groupId>io.quarkus</groupId> 
<artifactId>quarkus-micrometer</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-azure-storage-queue</artifactId> </dependency>", "<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-micrometer</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-bean-validator</artifactId> </dependency>", "@RegisterForReflection public interface OptionalChecks { }", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-bean</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-beanio</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-bindy</artifactId> </dependency>", "BindyDataFormat dataFormat = new BindyDataFormat(); dataFormat.setLocale(\"ar\");", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-browse</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-cassandraql</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-cli-connector</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-controlbus</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-management</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-bean</artifactId> </dependency>", "template.sendBody( \"controlbus:language:simple\", \"USD{camelContext.getRouteController().stopRoute('foo')}\" );", "quarkus.camel.native.reflection.include-patterns = org.apache.camel.spi.RouteController", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-core</artifactId> </dependency>", "--- simple(\"USD{body.address}\") ---", "--- simple(\"USD{body} is 'java.nio.ByteBuffer'\") ---", "from(\"direct:start\").transform().simple(\"resource:classpath:mysimple.txt\");", "quarkus.native.resources.includes = mysimple.txt", "--- camel.beans.customBeanWithSetterInjection = #class:org.example.PropertiesCustomBeanWithSetterInjection camel.beans.customBeanWithSetterInjection.counter = 123 ---", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-cron</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-crypto</artifactId> </dependency>", "<dependency> <groupId>org.bouncycastle</groupId> <artifactId>bc-fips</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-security</artifactId> </dependency>", "quarkus.security.security-providers=BCFIPS", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-cxf-soap</artifactId> </dependency>", "import org.apache.camel.builder.RouteBuilder; import jakarta.enterprise.context.ApplicationScoped; import jakarta.enterprise.context.SessionScoped; import jakarta.enterprise.inject.Produces; import jakarta.inject.Named; @ApplicationScoped public class CxfSoapClientRoutes extends RouteBuilder { @Override public void configure() { /* You can either configure the client inline */ from(\"direct:cxfUriParamsClient\") 
.to(\"cxf://http://localhost:8082/calculator-ws?wsdlURL=wsdl/CalculatorService.wsdl&dataFormat=POJO&serviceClass=org.foo.CalculatorService\"); /* Or you can use a named bean produced below by beanClient() method */ from(\"direct:cxfBeanClient\") .to(\"cxf:bean:beanClient?dataFormat=POJO\"); } @Produces @SessionScoped @Named CxfEndpoint beanClient() { final CxfEndpoint result = new CxfEndpoint(); result.setServiceClass(CalculatorService.class); result.setAddress(\"http://localhost:8082/calculator-ws\"); result.setWsdlURL(\"wsdl/CalculatorService.wsdl\"); // a resource in the class path return result; } }", "import jakarta.jws.WebMethod; import jakarta.jws.WebService; @WebService(targetNamespace = CalculatorService.TARGET_NS) 1 public interface CalculatorService { public static final String TARGET_NS = \"http://acme.org/wscalculator/Calculator\"; @WebMethod 2 public int add(int intA, int intB); @WebMethod 3 public int subtract(int intA, int intB); @WebMethod 4 public int divide(int intA, int intB); @WebMethod 5 public int multiply(int intA, int intB); }", "docker run -p 8082:8080 quay.io/l2x6/calculator-ws:1.2", "import org.apache.camel.builder.RouteBuilder; import jakarta.enterprise.context.ApplicationScoped; import jakarta.enterprise.inject.Produces; import jakarta.inject.Named; @ApplicationScoped public class CxfSoapRoutes extends RouteBuilder { @Override public void configure() { /* A CXF Service configured through a CDI bean */ from(\"cxf:bean:helloBeanEndpoint\") .setBody().simple(\"Hello USD{body} from CXF service\"); /* A CXF Service configured through Camel URI parameters */ from(\"cxf:///hello-inline?wsdlURL=wsdl/HelloService.wsdl&serviceClass=org.foo.HelloService\") .setBody().simple(\"Hello USD{body} from CXF service\"); } @Produces @ApplicationScoped @Named CxfEndpoint helloBeanEndpoint() { final CxfEndpoint result = new CxfEndpoint(); result.setServiceClass(HelloService.class); result.setAddress(\"/hello-bean\"); result.setWsdlURL(\"wsdl/HelloService.wsdl\"); return result; } }", "quarkus.cxf.path = /soap-services", "import org.apache.camel.builder.RouteBuilder; import org.apache.cxf.ext.logging.LoggingFeature; import jakarta.enterprise.context.ApplicationScoped; import jakarta.enterprise.context.SessionScoped; import jakarta.enterprise.inject.Produces; import jakarta.inject.Named; @ApplicationScoped public class MyBeans { @Produces @ApplicationScoped @Named(\"prettyLoggingFeature\") public LoggingFeature prettyLoggingFeature() { final LoggingFeature result = new LoggingFeature(); result.setPrettyLogging(true); return result; } @Inject @Named(\"prettyLoggingFeature\") LoggingFeature prettyLoggingFeature; @Produces @SessionScoped @Named CxfEndpoint cxfBeanClient() { final CxfEndpoint result = new CxfEndpoint(); result.setServiceClass(CalculatorService.class); result.setAddress(\"https://acme.org/calculator\"); result.setWsdlURL(\"wsdl/CalculatorService.wsdl\"); result.getFeatures().add(prettyLoggingFeature); return result; } @Produces @ApplicationScoped @Named CxfEndpoint helloBeanEndpoint() { final CxfEndpoint result = new CxfEndpoint(); result.setServiceClass(HelloService.class); result.setAddress(\"/hello-bean\"); result.setWsdlURL(\"wsdl/HelloService.wsdl\"); result.getFeatures().add(prettyLoggingFeature); return result; } }", "quarkus.cxf.codegen.wsdl2java.additional-params = -validate", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-dataformat</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> 
<artifactId>camel-quarkus-dataset</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-direct</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-elasticsearch</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-elasticsearch-rest-client</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-fhir</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-file</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-file-cluster-service</artifactId> </dependency>", "from(\"master:ns:timer:test?period=100\").log(\"Timer invoked on a single JVM at a time\");", "quarkus.camel.cluster.file.root = target/cluster-folder-where-lock-file-will-be-held", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-flink</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-ftp</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-google-bigquery</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-google-pubsub</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-google-secret-manager</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-graphql</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-grpc</artifactId> </dependency>", "<build> <plugins> <plugin> <groupId>io.quarkus</groupId> <artifactId>quarkus-maven-plugin</artifactId> <version>USD{quarkus.platform.version}</version> <extensions>true</extensions> <executions> <execution> <goals> <goal>build</goal> <goal>generate-code</goal> <goal>generate-code-tests</goal> </goals> </execution> </executions> </plugin> </plugins> </build>", "quarkus.camel.grpc.codegen.scan-for-proto=org.my.groupId1:my-artifact-id-1,org.my.groupId2:my-artifact-id-2", "quarkus.camel.grpc.codegen.scan-for-proto-includes.\"<groupId>\\:<artifactId>\"=foo/**,bar/**,baz/a-proto.proto quarkus.camel.grpc.codegen.scan-for-proto-excludes.\"<groupId>\\:<artifactId>\"=foo/private/**,baz/another-proto.proto", "quarkus.native.resources.includes = certs/*.pem,certs.*.key", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-gson</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-hl7</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-http</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-hashicorp-vault</artifactId> </dependency>", "@RegisterForReflection public class Credentials { private String username; private String password; // Getters & setters }", "from(\"direct:createSecret\") .process(new Processor() { @Override public void process(Exchange exchange) { Credentials credentials = new Credentials(); credentials.setUsername(\"admin\"); credentials.setPassword(\"2s3cr3t\"); exchange.getMessage().setBody(credentials); } }) 
.to(\"hashicorp-vault:secret?operation=createSecret&token=my-token&secretPath=my-secret\")", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-infinispan</artifactId> </dependency>", "quarkus.infinispan-client.devservices.create-default-client=false", "dev / test mode Quarkus Infinispan Dev services configuration quarkus.infinispan-client.devservices.port=31222 %dev,test.camel.component.infinispan.username=admin %dev,test.camel.component.infinispan.password=password %dev,test.camel.component.infinispan.secure=true %dev,test.camel.component.infinispan.hosts=localhost:31222 Example prod mode configuration %prod.camel.component.infinispan.username=prod-user %prod.camel.component.infinispan.password=prod-password %prod.camel.component.infinispan.secure=true %prod.camel.component.infinispan.hosts=infinispan.prod:11222", "public class Routes extends RouteBuilder { // Injects the default unnamed RemoteCacheManager @Inject RemoteCacheManager cacheManager; // If configured, injects an optional named RemoteCacheManager @Inject @InfinispanClientName(\"myNamedClient\") RemoteCacheManager namedCacheManager; @Override public void configure() { // Route configuration here } }", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-jackson-avro</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-jackson-protobuf</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-jackson</artifactId> </dependency>", "import com.fasterxml.jackson.databind.ObjectMapper; import org.apache.camel.builder.RouteBuilder; import org.apache.camel.component.jackson.JacksonDataFormat; public class Routes extends RouteBuilder { public void configure() { ObjectMapper mapper = new ObjectMapper(); JacksonDataFormat dataFormat = new JacksonDataFormat(); dataFormat.setObjectMapper(mapper); // Use the dataFormat instance in a route definition from(\"direct:my-direct\").marshal(dataFormat) } }", "import org.apache.camel.builder.RouteBuilder; import org.apache.camel.component.jackson.JacksonDataFormat; public class Routes extends RouteBuilder { public void configure() { JacksonDataFormat dataFormat = new JacksonDataFormat(); // Make JacksonDataFormat discover the Quarkus Jackson `ObjectMapper` from the Camel registry dataFormat.setAutoDiscoverObjectMapper(true); // Use the dataFormat instance in a route definition from(\"direct:my-direct\").marshal(dataFormat) } }", "import org.apache.camel.builder.RouteBuilder; @ApplicationScoped public class Routes extends RouteBuilder { public void configure() { restConfiguration().dataFormatProperty(\"autoDiscoverObjectMapper\", \"true\"); // REST definition follows } }", "import com.fasterxml.jackson.databind.ObjectMapper; import io.quarkus.jackson.ObjectMapperCustomizer; @Singleton public class RegisterCustomModuleCustomizer implements ObjectMapperCustomizer { public void customize(ObjectMapper mapper) { mapper.registerModule(new CustomModule()); } }", "import com.fasterxml.jackson.databind.ObjectMapper; import org.apache.camel.builder.RouteBuilder; import org.apache.camel.component.jackson.JacksonDataFormat; @ApplicationScoped public class Routes extends RouteBuilder { @Inject ObjectMapper mapper; public void configure() { JacksonDataFormat dataFormat = new JacksonDataFormat(); dataFormat.setObjectMapper(mapper); // Use the dataFormat instance in a route definition from(\"direct:my-direct\").marshal(dataFormat) } }", 
"<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-jacksonxml</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-jasypt</artifactId> </dependency>", "jbang org.apache.camel:camel-jasypt:{camel-version} -c encrypt -p secret-password -i \"Some secret content\"", "my.secret = ENC(BoDSRQfdBME4V/AcugPOkaR+IcyKufGz)", "public class MySecureRoute extends RouteBuilder { @Override public void configure() { from(\"timer:tick?period=5s\") .to(\"{{my.secret}}\"); } }", "@ApplicationScoped public class MySecureRoute extends RouteBuilder { @ConfigInject(\"my.secret\") String mySecret; @Override public void configure() { from(\"timer:tick?period=5s\") .to(mySecret); } }", "package org.acme; import org.apache.camel.quarkus.component.jasypt.JasyptConfigurationCustomizer; import org.jasypt.encryption.pbe.config.EnvironmentStringPBEConfig; import org.jasypt.iv.RandomIvGenerator; import org.jasypt.salt.RandomSaltGenerator; public class JasyptConfigurationCustomizer implements JasyptConfigurationCustomizer { public void customize(EnvironmentStringPBEConfig config) { // Custom algorithms config.setAlgorithm(\"PBEWithHmacSHA256AndAES_256\"); config.setSaltGenerator(new RandomSaltGenerator(\"PKCS11\")); config.setIvGenerator(new RandomIvGenerator(\"PKCS11\")); // Additional customizations } }", "quarkus.camel.jasypt.configuration-customizer-class-name = org.acme.MyJasyptEncryptorCustomizer", "import org.apache.camel.CamelContext; import org.apache.camel.component.jasypt.JasyptPropertiesParser; import org.apache.camel.component.properties.PropertiesComponent; public class MySecureRoute extends RouteBuilder { @Override public void configure() { JasyptPropertiesParser jasypt = new JasyptPropertiesParser(); jasypt.setPassword(\"secret\"); PropertiesComponent component = (PropertiesComponent) getContext().getPropertiesComponent(); jasypt.setPropertiesComponent(component); component.setPropertiesParser(jasypt); from(\"timer:tick?period=5s\") .to(\"{{my.secret}}\"); } }", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-java-joor-dsl</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-jaxb</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-jdbc</artifactId> </dependency>", "quarkus.datasource.camel.db-kind=postgresql quarkus.datasource.camel.username=your-username quarkus.datasource.camel.password=your-password quarkus.datasource.camel.jdbc.url=jdbc:postgresql://localhost:5432/your-database quarkus.datasource.camel.jdbc.max-size=16", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-jira</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-jms</artifactId> </dependency>", "quarkus.pooled-jms.max-connections = 8", "<dependency> <groupId>io.quarkiverse.messaginghub</groupId> <artifactId>quarkus-pooled-jms</artifactId> </dependency>", "<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-narayana-jta</artifactId> </dependency>", "quarkus.pooled-jms.transaction=xa quarkus.transaction-manager.enable-recovery=true", "@Produces public ConnectionFactory createXAConnectionFactory(PooledJmsWrapper wrapper) { MQXAConnectionFactory mq = new MQXAConnectionFactory(); try { mq.setHostName(ConfigProvider.getConfig().getValue(\"ibm.mq.host\", String.class)); 
mq.setPort(ConfigProvider.getConfig().getValue(\"ibm.mq.port\", Integer.class)); mq.setChannel(ConfigProvider.getConfig().getValue(\"ibm.mq.channel\", String.class)); mq.setQueueManager(ConfigProvider.getConfig().getValue(\"ibm.mq.queueManagerName\", String.class)); mq.setTransportType(WMQConstants.WMQ_CM_CLIENT); mq.setStringProperty(WMQConstants.USERID, ConfigProvider.getConfig().getValue(\"ibm.mq.user\", String.class)); mq.setStringProperty(WMQConstants.PASSWORD, ConfigProvider.getConfig().getValue(\"ibm.mq.password\", String.class)); } catch (Exception e) { throw new RuntimeException(\"Unable to create new IBM MQ connection factory\", e); } return wrapper.wrapConnectionFactory(mq); }", "@Inject TransactionManager transactionManager; @Override public void configure() throws Exception { from(\"jms:queue:DEV.QUEUE.XA?transactionManager=#jtaTransactionManager\"); } @Named(\"jtaTransactionManager\") public PlatformTransactionManager getTransactionManager() { return new JtaTransactionManager(transactionManager); }", "WARN [com.arj.ats.jta] (executor-thread-1) ARJUNA016045: attempted rollback of < formatId=131077, gtrid_length=35, bqual_length=36, tx_uid=0:ffffc0a86510:aed3:650915d7:16, node_name=quarkus, branch_uid=0:ffffc0a86510:aed3:650915d7:1f, subordinatenodename=null, eis_name=0 > (com.ibm.mq.jmqi.JmqiXAResource@79786dde) failed with exception code XAException.XAER_NOTA: javax.transaction.xa.XAException: The method 'xa_rollback' has failed with errorCode '-4'.", "it may be ignored and can be assumed that MQ has discarded the transaction's work. Refer to https://access.redhat.com/solutions/1250743[Red Hat Knowledgebase] for more information.", "@RegisterForReflection(targets = { IllegalStateException.class, MyCustomException.class }, serialization = true)", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-jpa</artifactId> </dependency>", "@Inject EntityManagerFactory entityManagerFactory; @Inject TransactionStrategy transactionStrategy; from(\"direct:idempotent\") .idempotentConsumer( header(\"messageId\"), new JpaMessageIdRepository(entityManagerFactory, transactionStrategy, \"idempotentProcessor\"));", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-jslt</artifactId> </dependency>", "from(\"direct:start\").to(\"jslt:transformation.json\");", "quarkus.native.resources.includes = *.json", "@RegisterForReflection public class MathFunctionStub { public static double pow(double a, double b) { return java.lang.Math.pow(a, b); } }", "@Named JsltComponent jsltWithFunction() throws ClassNotFoundException { JsltComponent component = new JsltComponent(); component.setFunctions(singleton(wrapStaticMethod(\"power\", \"org.apache.cq.example.MathFunctionStub\", \"pow\"))); return component; }", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-jsonpath</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-jta</artifactId> </dependency>", "from(\"direct:transaction\") .transacted() .to(\"sql:INSERT INTO A TABLE ...?dataSource=#ds1\") .to(\"sql:INSERT INTO A TABLE ...?dataSource=#ds2\") .log(\"all data are in the ds1 and ds2\")", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-jt400</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-jq</artifactId> </dependency>", "@RegisterForReflection public class Book { }", "public class MyJQRoutes extends 
RouteBuilder { @Override public void configure() { from(\"direct:jq\") .transform().jq(\".book\", Book.class); } }", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-kafka</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-kamelet</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.kamelets</groupId> <artifactId>camel-kamelets</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.kamelets</groupId> <artifactId>camel-kamelets-utils</artifactId> <exclusions> <exclusion> <groupId>org.apache.camel</groupId> <artifactId>*</artifactId> </exclusion> </exclusions> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-kubernetes</artifactId> </dependency>", "from(\"direct:pods\") .to(\"kubernetes-pods:///?kubernetesClient=#kubernetesClient&operation=listPods\")", "quarkus.kubernetes-client.master-url=https://my.k8s.host quarkus.kubernetes-client.namespace=my-namespace", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-kubernetes-cluster-service</artifactId> </dependency>", "from(\"master:ns:timer:test?period=100\").log(\"Timer invoked on a single pod at a time\");", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-kudu</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-language</artifactId> </dependency>", "quarkus.native.resources.includes=script.txt", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-ldap</artifactId> </dependency>", "@RegisterForReflection public class CustomSSLSocketFactory extends SSLSocketFactory { // The class definition is the same as in the above link. 
}", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-lra</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-log</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-mail</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-management</artifactId> </dependency>", "quarkus.native.monitoring=jmxserver", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-mapstruct</artifactId> </dependency>", "<plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <annotationProcessorPaths> <path> <groupId>org.mapstruct</groupId> <artifactId>mapstruct-processor</artifactId> <version>{mapstruct-version}</version> </path> </annotationProcessorPaths> </configuration> </plugin> </plugins>", "dependencies { annotationProcessor 'org.mapstruct:mapstruct-processor:{mapstruct-version}' testAnnotationProcessor 'org.mapstruct:mapstruct-processor:{mapstruct-version}' }", "camel.component.mapstruct.mapper-package-name = com.first.package,org.second.package", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-master</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-micrometer</artifactId> </dependency>", "<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-micrometer-registry-prometheus</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-microprofile-fault-tolerance</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-microprofile-health</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-minio</artifactId> </dependency>", "minio:foo?minioClient=#minioClient", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-mllp</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-mock</artifactId> </dependency>", "import jakarta.inject.Inject; import org.apache.camel.CamelContext; import org.apache.camel.ProducerTemplate; import org.apache.camel.component.mock.MockEndpoint; import org.junit.jupiter.api.Test; import io.quarkus.test.junit.QuarkusTest; @QuarkusTest public class MockJvmTest { @Inject CamelContext camelContext; @Inject ProducerTemplate producerTemplate; @Test public void test() throws InterruptedException { producerTemplate.sendBody(\"direct:start\", \"Hello World\"); MockEndpoint mockEndpoint = camelContext.getEndpoint(\"mock:result\", MockEndpoint.class); mockEndpoint.expectedBodiesReceived(\"Hello World\"); mockEndpoint.assertIsSatisfied(); } }", "import jakarta.enterprise.context.ApplicationScoped; import org.apache.camel.builder.RouteBuilder; @ApplicationScoped public class MockRoute extends RouteBuilder { @Override public void configure() throws Exception { from(\"direct:start\").to(\"mock:result\"); } }", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-mongodb</artifactId> </dependency>", "from(\"direct:start\") .to(\"mongodb:camelMongoClient?database=myDb&collection=myCollection&operation=findAll\")", "//application.properties quarkus.mongodb.mongoClient1.connection-string = 
mongodb://root:example@localhost:27017/", "//Routes.java @ApplicationScoped public class Routes extends RouteBuilder { @Inject @MongoClientName(\"mongoClient1\") MongoClient mongoClient1; @Override public void configure() throws Exception { from(\"direct:defaultServer\") .to(\"mongodb:camelMongoClient?database=myDb&collection=myCollection&operation=findAll\") from(\"direct:otherServer\") .to(\"mongodb:mongoClient1?database=myOtherDb&collection=myOtherCollection&operation=findAll\"); } }", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-mybatis</artifactId> </dependency>", "quarkus.mybatis.xmlconfig.enable=true quarkus.mybatis.xmlconfig.path=SqlMapConfig.xml", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-netty-http</artifactId> </dependency>", "@RegisterForReflection(targets = { IllegalStateException.class, MyCustomException.class }, serialization = true)", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-netty</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-openapi-java</artifactId> </dependency>", "quarkus.camel.openapi.expose.enabled=true", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-opentelemetry</artifactId> </dependency>", "Identifier for the origin of spans created by the application quarkus.application.name=my-camel-application OTLP exporter endpoint quarkus.opentelemetry.tracer.exporter.otlp.endpoint=http://localhost:4317", "Exclude all direct & netty-http endpoints from tracing quarkus.camel.opentelemetry.exclude-patterns=direct:*,netty-http:*", "@ApplicationScoped @Named(\"myBean\") public class MyBean { @WithSpan public String greet() { return \"Hello World!\"; } }", "public class MyRoutes extends RouteBuilder { @Override public void configure() throws Exception { from(\"direct:executeBean\") .to(\"bean:myBean?method=greet\"); } }", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-paho-mqtt5</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-paho</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-platform-http</artifactId> </dependency>", "from(\"platform-http:/hello\").setBody(simple(\"Hello USD{header.name}\"));", "from(\"platform-http:/hello?httpMethodRestrict=GET\").setBody(simple(\"Hello USD{header.name}\"));", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-rest</artifactId> </dependency>", "rest() .get(\"/my-get-endpoint\") .to(\"direct:handleGetRequest\"); .post(\"/my-post-endpoint\") .to(\"direct:handlePostRequest\");", "from(\"platform-http:/upload/multipart?fileNameExtWhitelist=adoc,txt&httpMethodRestrict=POST\") .to(\"log:multipart\") .process(e -> { final AttachmentMessage am = e.getMessage(AttachmentMessage.class); if (am.hasAttachments()) { am.getAttachments().forEach((fileName, dataHandler) -> { try (InputStream in = dataHandler.getInputStream()) { // do something with the input stream } catch (IOException ioe) { throw new RuntimeException(ioe); } }); } });", "from(\"platform-http:/secure\") .process(e -> { Message message = e.getMessage(); QuarkusHttpUser user = message.getHeader(VertxPlatformHttpConstants.AUTHENTICATED_USER, QuarkusHttpUser.class); SecurityIdentity securityIdentity = user.getSecurityIdentity(); Principal principal = 
securityIdentity.getPrincipal(); // Do something useful with SecurityIdentity / Principal. E.g check user roles etc. });", "from(\"platform-http:proxy\") .toD(\"http://\" + \"USD{headers.\" + Exchange.HTTP_HOST + \"}\");", "onException(InvalidOrderTotalException.class) .handled(true) .setHeader(Exchange.HTTP_RESPONSE_CODE).constant(500) .setHeader(Exchange.CONTENT_TYPE).constant(\"text/plain\") .setBody().constant(\"The order total was not greater than 100\"); from(\"platform-http:/orders\") .choice().when().xpath(\"//order/total > 100\") .to(\"direct:processOrder\") .otherwise() .throwException(new InvalidOrderTotalException());", "void initRouter(@Observes Router router) { // Custom 404 handler router.errorHandler(404, new Handler<RoutingContext>() { @Override public void handle(RoutingContext event) { event.response() .setStatusCode(404) .putHeader(\"Content-Type\", \"text/plain\") .end(\"Sorry - resource not found\"); } }); }", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-quartz</artifactId> </dependency>", "Quartz configuration quarkus.quartz.clustered=true quarkus.quartz.store-type=jdbc-cmt quarkus.scheduler.start-mode=forced Datasource configuration quarkus.datasource.db-kind=postgresql quarkus.datasource.username=quarkus_test quarkus.datasource.password=quarkus_test quarkus.datasource.jdbc.url=jdbc:postgresql://localhost/quarkus_test Optional automatic creation of Quartz tables quarkus.flyway.connect-retries=10 quarkus.flyway.table=flyway_quarkus_history quarkus.flyway.migrate-at-start=true quarkus.flyway.baseline-on-migrate=true quarkus.flyway.baseline-version=1.0 quarkus.flyway.baseline-description=Quartz", "<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-jdbc-postgresql</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-agroal</artifactId> </dependency>", "<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-flyway</artifactId> </dependency>", "@Produces @Singleton @Named(\"quartz\") public QuartzComponent quartzComponent(Scheduler scheduler) { QuartzComponent component = new QuartzComponent(); component.setScheduler(scheduler); return component; }", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-qute</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-ref</artifactId> </dependency>", "@ApplicationScoped public class MyEndpointProducers { @Inject CamelContext context; @Singleton @Produces @Named(\"endpoint1\") public Endpoint directStart() { return context.getEndpoint(\"direct:start\"); } @Singleton @Produces @Named(\"endpoint2\") public Endpoint logEnd() { return context.getEndpoint(\"log:end\"); } }", "public class MyRefRoutes extends RouteBuilder { @Override public void configure() { // direct:start -> log:end from(\"ref:endpoint1\") .to(\"ref:endpoint2\"); } }", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-rest-openapi</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-http</artifactId> </dependency>", "quarkus.native.resources.includes=openapi.json", "<plugin> <groupId>io.quarkus</groupId> <artifactId>quarkus-maven-plugin</artifactId> <executions> <execution> <goals> <goal>generate-code</goal> </goals> </execution> </executions> </plugin>", "quarkus.camel.openapi.codegen.model-package=org.acme", "<build> <resources> <resource> <directory>src/main/openapi</directory> </resource> 
<resource> <directory>src/main/resources</directory> </resource> </resources> </build>", "quarkus.native.resources.includes=contract.json", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-rest</artifactId> </dependency>", "import org.apache.camel.builder.RouteBuilder; public class CamelRoute extends RouteBuilder { @Override public void configure() { rest(\"/api\") // Dash '-' is not allowed by default .get(\"/dashed/param/{my-param}\") .to(\"direct:greet\") // The non-dashed path parameter works by default .get(\"/undashed/param/{myParam}\") .to(\"direct:greet\"); from(\"direct:greet\") .setBody(constant(\"Hello World\")); } }", "import org.apache.camel.builder.RouteBuilder; public class CamelRoute extends RouteBuilder { @Override public void configure() { restConfiguration() .component(\"servlet\"); } }", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-salesforce</artifactId> </dependency>", "<plugin> <groupId>org.apache.camel.maven</groupId> <artifactId>camel-salesforce-maven-plugin</artifactId> <version>{camel-version}</version> <executions> <execution> <goals> <goal>generate</goal> </goals> <configuration> <clientId>USD{env.SALESFORCE_CLIENTID}</clientId> <clientSecret>USD{env.SALESFORCE_CLIENTSECRET}</clientSecret> <userName>USD{env.SALESFORCE_USERNAME}</userName> <password>USD{env.SALESFORCE_PASSWORD}</password> <loginUrl>https://login.salesforce.com</loginUrl> <packageName>org.apache.camel.quarkus.component.salesforce.generated</packageName> <outputDirectory>src/main/java</outputDirectory> <includes> <include>Account</include> </includes> </configuration> </execution> </executions> </plugin>", "from(\"salesforce:pubSubSubscribe:/event/TestEvent__e?pubSubDeserializeType=POJO&pubSubPojoClass=org.foo.TestEvent\") .log(\"Received Salesforce POJO topic message: USD{body}\");", "package org.foo; import io.quarkus.runtime.annotations.RegisterForReflection; @RegisterForReflection public class TestEvent { // Getters / setters etc }", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-saga</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-sap</artifactId> </dependency>", "Caused by: java.lang.ExceptionInInitializerError: JCo initialization failed with java.lang.ExceptionInInitializerError: Illegal JCo archive \"sap-1.0.0-SNAPSHOT-runner.jar\". 
It is not allowed to rename or repackage the original archive \"sapjco3.jar\".", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-saxon</artifactId> </dependency>", "from(\"direct:start\").transform().xquery(\"resource:classpath:myxquery.txt\", String.class); from(\"direct:start\").to(\"xquery:another-xquery.txt\");", "quarkus.native.resources.includes = *.txt", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-aws-secrets-manager</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-scheduler</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-seda</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-servlet</artifactId> </dependency>", "quarkus.camel.servlet.url-patterns = /*", "from(\"servlet://greet\") .setBody().constant(\"Hello World\");", "quarkus.camel.servlet.servlet-name = My Custom Name", "quarkus.camel.servlet.servlet-class = org.acme.MyCustomServlet", "quarkus.camel.servlet.my-servlet-a.servlet-name = my-custom-a quarkus.camel.servlet.my-servlet-a.url-patterns = /custom/a/* quarkus.camel.servlet.my-servlet-b.servlet-name = my-custom-b quarkus.camel.servlet.my-servlet-b.servlet-class = org.acme.CustomServletB quarkus.camel.servlet.my-servlet-b.url-patterns = /custom/b/*", "from(\"servlet://greet?servletName=my-custom-a\") .setBody().constant(\"Hello World\"); from(\"servlet://goodbye?servletName=my-custom-b\") .setBody().constant(\"Goodbye World\");", "import jakarta.servlet.annotation.WebServlet; import org.apache.camel.component.servlet.CamelHttpTransportServlet; @WebServlet( urlPatterns = {\"/*\"}, initParams = { @WebInitParam(name = \"myParam\", value = \"myValue\") } ) public class MyCustomServlet extends CamelHttpTransportServlet { }", "<web-app> <servlet> <servlet-name>CamelServlet</servlet-name> <servlet-class>org.apache.camel.component.servlet.CamelHttpTransportServlet</servlet-class> </servlet> <servlet-mapping> <servlet-name>CamelServlet</servlet-name> <url-pattern>/services/*</url-pattern> </servlet-mapping> </web-app>", "@RegisterForReflection(targets = { IllegalStateException.class, MyCustomException.class }, serialization = true)", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-slack</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-smb</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-snmp</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-soap</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-splunk</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-splunk-hec</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-spring-rabbitmq</artifactId> </dependency>", "quarkus.native.additional-build-args = -H:+InlineBeforeAnalysis", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-sql</artifactId> </dependency>", "quarkus.datasource.db-kind=postgresql quarkus.datasource.username=your-username quarkus.datasource.password=your-password 
quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/your-database quarkus.datasource.jdbc.max-size=16", "quarkus.native.resources.includes = queries.sql, sql/*.sql", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-telegram</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-rest</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-timer</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-validator</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-velocity</artifactId> </dependency>", "@RegisterForReflection public interface CustomBody { }", "from(\"direct:start\").to(\"velocity://template/simple.vm\");", "quarkus.native.resources.includes = template/*.vm", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-vertx-http</artifactId> </dependency>", "@RegisterForReflection(targets = { IllegalStateException.class, MyCustomException.class }, serialization = true)", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-vertx-websocket</artifactId> </dependency>", "from(\"vertx-websocket:/my-websocket-path\") .setBody().constant(\"Hello World\");", "from(\"vertx-websocket:/my-websocket-path\") .log(\"Got body: USD{body}\"); from(\"direct:sendToWebSocket\") .log(\"vertx-websocket:/my-websocket-path\");", "from(\"direct:sendToWebSocket\") .log(\"vertx-websocket:{{quarkus.http.host}}:{{quarkus.http.port}}/my-websocket-path\");", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-xj</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-xml-io-dsl</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-xml-jaxp</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-xpath</artifactId> </dependency>", "from(\"direct:start\").transform().xpath(\"resource:classpath:myxpath.txt\");", "quarkus.native.resources.includes = *.txt", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-xslt-saxon</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-xslt</artifactId> </dependency>", "quarkus.camel.xslt.sources = transform.xsl, classpath:path/to/my/file.xsl", "from(\"file:src/test/resources?noop=true&sortBy=file:name&antInclude=*.xml\") .routeId(\"aggregate\").noAutoStartup() .aggregate(new XsltSaxonAggregationStrategy(\"xslt/aggregate.xsl\")) .constant(true) .completionFromBatchConsumer() .log(\"after aggregate body: USD{body}\") .to(\"mock:transformed\");", "quarkus.camel.xslt.features.\"http\\://javax.xml.XMLConstants/feature/secure-processing\"=false", "@RegisterForReflection(targets = { my.Functions.class }) public class FunctionsConfiguration { }", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-yaml-dsl</artifactId> </dependency>", "- beans: - name: \"greetingBean\" type: \"org.acme.GreetingBean\" properties: greeting: \"Hello World!\" - route: id: \"my-yaml-route\" from: uri: \"timer:from-yaml?period=1000\" steps: - to: \"bean:greetingBean\"", "@RegisterForReflection public class GreetingBean { }", "- on-exception: handled: constant: 
\"true\" exception: - \"org.acme.MyHandledException\" steps: - transform: constant: \"Sorry something went wrong\"", "@RegisterForReflection public class MyHandledException { }", "- route: id: \"my-yaml-route\" from: uri: \"direct:start\" steps: - choice: when: - simple: \"USD{body} == 'bad value'\" steps: - throw-exception: exception-type: \"org.acme.ForcedException\" message: \"Forced exception\" otherwise: steps: - to: \"log:end\"", "@RegisterForReflection public class ForcedException { }", "- route: id: \"my-yaml-route2\" from: uri: \"direct:tryCatch\" steps: - do-try: steps: - to: \"direct:readFile\" do-catch: - exception: - \"java.io.FileNotFoundException\" steps: - transform: constant: \"do-catch caught an exception\"", "@RegisterForReflection(targets = FileNotFoundException.class) public class MyClass { }", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-yaml-io</artifactId> </dependency>", "camel.main.dump-routes = yaml", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-zip-deflater</artifactId> </dependency>", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-zipfile</artifactId> </dependency>", "<dependency> <groupId>io.quarkiverse.cxf</groupId> <artifactId>quarkus-cxf</artifactId> </dependency>", "Parameters for foo.wsdl quarkus.cxf.codegen.wsdl2java.foo-params.includes = wsdl/foo.wsdl quarkus.cxf.codegen.wsdl2java.foo-params.wsdl-location = wsdl/foo.wsdl Parameters for bar.wsdl quarkus.cxf.codegen.wsdl2java.bar-params.includes = wsdl/bar.wsdl quarkus.cxf.codegen.wsdl2java.bar-params.wsdl-location = wsdl/bar.wsdl quarkus.cxf.codegen.wsdl2java.bar-params.xjc = ts", "Parameters for the foo package quarkus.cxf.java2ws.foo-params.includes = org.foo.* quarkus.cxf.java2ws.foo-params.additional-params = -servicename,FruitService Parameters for the bar package quarkus.cxf.java2ws.bar-params.includes = org.bar.* quarkus.cxf.java2ws.bar-params.additional-params = -servicename,HelloService", "quarkus.cxf.decoupled-endpoint-base = https://api.example.com:USD{quarkus.http.ssl-port}USD{quarkus.cxf.path} or for plain HTTP quarkus.cxf.decoupled-endpoint-base = http://api.example.com:USD{quarkus.http.port}USD{quarkus.cxf.path}", "import java.util.Map; import jakarta.inject.Inject; import jakarta.ws.rs.POST; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.Context; import jakarta.ws.rs.core.MediaType; import jakarta.ws.rs.core.UriInfo; import jakarta.xml.ws.BindingProvider; import io.quarkiverse.cxf.annotation.CXFClient; import org.eclipse.microprofile.config.inject.ConfigProperty; @Path(\"/my-rest\") public class MyRestEasyResource { @Inject @CXFClient(\"hello\") HelloService helloService; @ConfigProperty(name = \"quarkus.cxf.path\") String quarkusCxfPath; @POST @Path(\"/hello\") @Produces(MediaType.TEXT_PLAIN) public String hello(String body, @Context UriInfo uriInfo) throws IOException { // You may consider doing this only once if you are sure that your service is accessed // through a single hostname String decoupledEndpointBase = uriInfo.getBaseUriBuilder().path(quarkusCxfPath); Map>String, Object< requestContext = ((BindingProvider) helloService).getRequestContext(); requestContext.put(\"org.apache.cxf.ws.addressing.decoupled.endpoint.base\", decoupledEndpointBase); return wsrmHelloService.hello(body); } }", "Parameters for foo.wsdl quarkus.cxf.codegen.wsdl2java.foo-params.includes = wsdl/foo.wsdl quarkus.cxf.codegen.wsdl2java.foo-params.wsdl-location 
= wsdl/foo.wsdl Parameters for bar.wsdl quarkus.cxf.codegen.wsdl2java.bar-params.includes = wsdl/bar.wsdl quarkus.cxf.codegen.wsdl2java.bar-params.wsdl-location = wsdl/bar.wsdl quarkus.cxf.codegen.wsdl2java.bar-params.xjc = ts", "Parameters for the foo package quarkus.cxf.java2ws.foo-params.includes = org.foo.* quarkus.cxf.java2ws.foo-params.additional-params = -servicename,FruitService Parameters for the bar package quarkus.cxf.java2ws.bar-params.includes = org.bar.* quarkus.cxf.java2ws.bar-params.additional-params = -servicename,HelloService", "quarkus.cxf.endpoint.\"/hello\".features = org.apache.cxf.ext.logging.LoggingFeature quarkus.cxf.endpoint.\"/fruit\".features = #myCustomLoggingFeature", "import org.apache.cxf.ext.logging.LoggingFeature; import javax.enterprise.context.ApplicationScoped; import javax.enterprise.inject.Produces; class Producers { @Produces @ApplicationScoped LoggingFeature myCustomLoggingFeature() { LoggingFeature loggingFeature = new LoggingFeature(); loggingFeature.setPrettyLogging(true); return loggingFeature; } }", "quarkus.cxf.endpoint.\"/my-endpoint\".features = org.apache.cxf.ext.logging.LoggingFeature", "<dependency> <groupId>io.quarkiverse.cxf</groupId> <artifactId>quarkus-cxf-rt-features-metrics</artifactId> </dependency>", "<dependency> <groupId>io.quarkiverse.cxf</groupId> <artifactId>quarkus-cxf-rt-features-metrics</artifactId> </dependency>", "<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-micrometer-registry-prometheus</artifactId> </dependency>", "quarkus.micrometer.export.json.enabled = true quarkus.micrometer.export.json.path = metrics/json quarkus.micrometer.export.prometheus.path = metrics/prometheus", "mvn quarkus:dev", "curl -d '<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"><soap:Body><ns2:helloResponse xmlns:ns2=\"http://it.server.metrics.cxf.quarkiverse.io/\"><return>Hello Joe!</return></ns2:helloResponse></soap:Body></soap:Envelope>' -H 'Content-Type: text/xml' -X POST http://localhost:8080/metrics/client/hello", "curl http://localhost:8080/q/metrics/json metrics: { \"cxf.server.requests\": { \"count;exception=None;faultCode=None;method=POST;operation=hello;outcome=SUCCESS;status=200;uri=/soap/hello\": 2, \"elapsedTime;exception=None;faultCode=None;method=POST;operation=hello;outcome=SUCCESS;status=200;uri=/soap/hello\": 64.0 }, }", "<dependency> <groupId>io.quarkiverse.cxf</groupId> <artifactId>quarkus-cxf-integration-tracing-opentelemetry</artifactId> </dependency>", "<dependency> <groupId>io.quarkiverse.cxf</groupId> <artifactId>quarkus-cxf-rt-ws-security</artifactId> </dependency>", "<?xml version=\"1.0\" encoding=\"UTF-8\" ?> <wsp:Policy wsu:Id=\"SecurityServiceEncryptThenSignPolicy\" xmlns:wsp=\"http://www.w3.org/ns/ws-policy\" xmlns:wsu=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\" xmlns:sp=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702\"> <wsp:ExactlyOne> <wsp:All> 1 <sp:AsymmetricBinding xmlns:sp=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702\"> <wsp:Policy> 2 <sp:InitiatorToken> <wsp:Policy> <sp:X509Token sp:IncludeToken=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702/IncludeToken/AlwaysToRecipient\"> <wsp:Policy> <sp:WssX509V3Token11/> </wsp:Policy> </sp:X509Token> </wsp:Policy> </sp:InitiatorToken> <sp:RecipientToken> <wsp:Policy> <sp:X509Token sp:IncludeToken=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702/IncludeToken/Never\"> <wsp:Policy> <sp:WssX509V3Token11/> </wsp:Policy> </sp:X509Token> 
</wsp:Policy> </sp:RecipientToken> <sp:AlgorithmSuite> <wsp:Policy> <sp:Basic256/> </wsp:Policy> </sp:AlgorithmSuite> <sp:Layout> <wsp:Policy> <sp:Strict/> </wsp:Policy> </sp:Layout> <sp:IncludeTimestamp/> <sp:ProtectTokens/> <sp:OnlySignEntireHeadersAndBody/> <sp:EncryptBeforeSigning/> </wsp:Policy> </sp:AsymmetricBinding> 3 <sp:SignedParts xmlns:sp=\"http://schemas.xmlsoap.org/ws/2005/07/securitypolicy\"> <sp:Body/> </sp:SignedParts> 4 <sp:EncryptedParts xmlns:sp=\"http://schemas.xmlsoap.org/ws/2005/07/securitypolicy\"> <sp:Body/> </sp:EncryptedParts> <sp:Wss10 xmlns:sp=\"http://schemas.xmlsoap.org/ws/2005/07/securitypolicy\"> <wsp:Policy> <sp:MustSupportRefIssuerSerial/> </wsp:Policy> </sp:Wss10> </wsp:All> </wsp:ExactlyOne> </wsp:Policy>", "@WebService(serviceName = \"EncryptSignPolicyHelloService\") @Policy(placement = Policy.Placement.BINDING, uri = \"encrypt-sign-policy.xml\") public interface EncryptSignPolicyHelloService extends AbstractHelloService { }", "A service with encrypt-sign-policy.xml set quarkus.cxf.endpoint.\"/helloEncryptSign\".implementor = io.quarkiverse.cxf.it.security.policy.EncryptSignPolicyHelloServiceImpl Signature settings quarkus.cxf.endpoint.\"/helloEncryptSign\".security.signature.username = bob quarkus.cxf.endpoint.\"/helloEncryptSign\".security.signature.password = bob-keystore-password quarkus.cxf.endpoint.\"/helloEncryptSign\".security.signature.properties.\"org.apache.ws.security.crypto.provider\" = org.apache.ws.security.components.crypto.Merlin quarkus.cxf.endpoint.\"/helloEncryptSign\".security.signature.properties.\"org.apache.ws.security.crypto.merlin.keystore.type\" = pkcs12 quarkus.cxf.endpoint.\"/helloEncryptSign\".security.signature.properties.\"org.apache.ws.security.crypto.merlin.keystore.password\" = bob-keystore-password quarkus.cxf.endpoint.\"/helloEncryptSign\".security.signature.properties.\"org.apache.ws.security.crypto.merlin.keystore.alias\" = bob quarkus.cxf.endpoint.\"/helloEncryptSign\".security.signature.properties.\"org.apache.ws.security.crypto.merlin.file\" = bob-keystore.pkcs12 Encryption settings quarkus.cxf.endpoint.\"/helloEncryptSign\".security.encryption.username = alice quarkus.cxf.endpoint.\"/helloEncryptSign\".security.encryption.properties.\"org.apache.ws.security.crypto.provider\" = org.apache.ws.security.components.crypto.Merlin quarkus.cxf.endpoint.\"/helloEncryptSign\".security.encryption.properties.\"org.apache.ws.security.crypto.merlin.keystore.type\" = pkcs12 quarkus.cxf.endpoint.\"/helloEncryptSign\".security.encryption.properties.\"org.apache.ws.security.crypto.merlin.keystore.password\" = bob-keystore-password quarkus.cxf.endpoint.\"/helloEncryptSign\".security.encryption.properties.\"org.apache.ws.security.crypto.merlin.keystore.alias\" = bob quarkus.cxf.endpoint.\"/helloEncryptSign\".security.encryption.properties.\"org.apache.ws.security.crypto.merlin.file\" = bob-keystore.pkcs12", "A client with encrypt-sign-policy.xml set quarkus.cxf.client.helloEncryptSign.client-endpoint-url = https://localhost:USD{quarkus.http.test-ssl-port}/services/helloEncryptSign quarkus.cxf.client.helloEncryptSign.service-interface = io.quarkiverse.cxf.it.security.policy.EncryptSignPolicyHelloService quarkus.cxf.client.helloEncryptSign.features = #messageCollector The client-endpoint-url above is HTTPS, so we have to setup the server's SSL certificates quarkus.cxf.client.helloEncryptSign.trust-store = client-truststore.pkcs12 quarkus.cxf.client.helloEncryptSign.trust-store-password = client-truststore-password Signature 
settings quarkus.cxf.client.helloEncryptSign.security.signature.username = alice quarkus.cxf.client.helloEncryptSign.security.signature.password = alice-keystore-password quarkus.cxf.client.helloEncryptSign.security.signature.properties.\"org.apache.ws.security.crypto.provider\" = org.apache.ws.security.components.crypto.Merlin quarkus.cxf.client.helloEncryptSign.security.signature.properties.\"org.apache.ws.security.crypto.merlin.keystore.type\" = pkcs12 quarkus.cxf.client.helloEncryptSign.security.signature.properties.\"org.apache.ws.security.crypto.merlin.keystore.password\" = alice-keystore-password quarkus.cxf.client.helloEncryptSign.security.signature.properties.\"org.apache.ws.security.crypto.merlin.keystore.alias\" = alice quarkus.cxf.client.helloEncryptSign.security.signature.properties.\"org.apache.ws.security.crypto.merlin.file\" = alice-keystore.pkcs12 Encryption settings quarkus.cxf.client.helloEncryptSign.security.encryption.username = bob quarkus.cxf.client.helloEncryptSign.security.encryption.properties.\"org.apache.ws.security.crypto.provider\" = org.apache.ws.security.components.crypto.Merlin quarkus.cxf.client.helloEncryptSign.security.encryption.properties.\"org.apache.ws.security.crypto.merlin.keystore.type\" = pkcs12 quarkus.cxf.client.helloEncryptSign.security.encryption.properties.\"org.apache.ws.security.crypto.merlin.keystore.password\" = alice-keystore-password quarkus.cxf.client.helloEncryptSign.security.encryption.properties.\"org.apache.ws.security.crypto.merlin.keystore.alias\" = alice quarkus.cxf.client.helloEncryptSign.security.encryption.properties.\"org.apache.ws.security.crypto.merlin.file\" = alice-keystore.pkcs12", "Clone the repository git clone https://github.com/quarkiverse/quarkus-cxf.git -o upstream cd quarkus-cxf Build the whole source tree mvn clean install -DskipTests -Dquarkus.build.skip Run the test cd integration-tests/ws-security-policy mvn clean test -Dtest=EncryptSignPolicyTest", "[prefix].signature.properties.\"org.apache.ws.security.crypto.provider\" = org.apache.ws.security.components.crypto.Merlin [prefix].signature.properties.\"org.apache.ws.security.crypto.merlin.keystore.password\" = password [prefix].signature.properties.\"org.apache.ws.security.crypto.merlin.file\" = certs/alice.jks", "[prefix].encryption.properties.\"org.apache.ws.security.crypto.provider\" = org.apache.ws.security.components.crypto.Merlin [prefix].encryption.properties.\"org.apache.ws.security.crypto.merlin.keystore.password\" = password [prefix].encryption.properties.\"org.apache.ws.security.crypto.merlin.file\" = certs/alice.jks", "<wsp:Policy wsu:Id=\"SecurityServiceEncryptThenSignPolicy\" xmlns:wsp=\"http://www.w3.org/ns/ws-policy\" xmlns:wsu=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\" xmlns:sp=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702\"> <wsp:ExactlyOne> <wsp:All> <sp:AsymmetricBinding xmlns:sp=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702\"> <wsp:Policy> <sp:AlgorithmSuite> <wsp:Policy> <sp:CustomAlgorithmSuite/> </wsp:Policy> </sp:AlgorithmSuite> </wsp:Policy> </sp:AsymmetricBinding> </wsp:All> </wsp:ExactlyOne> </wsp:Policy>", "[prefix].encryption.properties.\"org.apache.ws.security.crypto.provider\" = org.apache.ws.security.components.crypto.Merlin [prefix].encryption.properties.\"org.apache.ws.security.crypto.merlin.keystore.password\" = password [prefix].encryption.properties.\"org.apache.ws.security.crypto.merlin.file\" = certs/alice.jks", 
"[prefix].token.properties.\"org.apache.ws.security.crypto.provider\" = org.apache.ws.security.components.crypto.Merlin [prefix].token.properties.\"org.apache.ws.security.crypto.merlin.keystore.password\" = password [prefix].token.properties.\"org.apache.ws.security.crypto.merlin.file\" = certs/alice.jks", "[prefix].signature.properties.\"org.apache.ws.security.crypto.provider\" = org.apache.ws.security.components.crypto.Merlin [prefix].signature.properties.\"org.apache.ws.security.crypto.merlin.keystore.password\" = password [prefix].signature.properties.\"org.apache.ws.security.crypto.merlin.file\" = certs/alice.jks", "[prefix].encryption.properties.\"org.apache.ws.security.crypto.provider\" = org.apache.ws.security.components.crypto.Merlin [prefix].encryption.properties.\"org.apache.ws.security.crypto.merlin.keystore.password\" = password [prefix].encryption.properties.\"org.apache.ws.security.crypto.merlin.file\" = certs/alice.jks", "<wsp:Policy wsu:Id=\"SecurityServiceEncryptThenSignPolicy\" xmlns:wsp=\"http://www.w3.org/ns/ws-policy\" xmlns:wsu=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\" xmlns:sp=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702\"> <wsp:ExactlyOne> <wsp:All> <sp:AsymmetricBinding xmlns:sp=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702\"> <wsp:Policy> <sp:AlgorithmSuite> <wsp:Policy> <sp:CustomAlgorithmSuite/> </wsp:Policy> </sp:AlgorithmSuite> </wsp:Policy> </sp:AsymmetricBinding> </wsp:All> </wsp:ExactlyOne> </wsp:Policy>", "<dependency> <groupId>io.quarkiverse.cxf</groupId> <artifactId>quarkus-cxf-rt-ws-rm</artifactId> </dependency>", "cd test-util-parent/test-ws-rm-server-jvm mvn clean install", "cd ../../integration-tests/ws-rm-client mvn clean test", "<dependency> <groupId>io.quarkiverse.cxf</groupId> <artifactId>quarkus-cxf-services-sts</artifactId> </dependency>", "<sp:IssuedToken sp:IncludeToken=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702/IncludeToken/AlwaysToRecipient\"> <sp:RequestSecurityTokenTemplate> <t:TokenType>http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</t:TokenType> <t:KeyType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/PublicKey</t:KeyType> </sp:RequestSecurityTokenTemplate> <wsp:Policy> <sp:RequireInternalReference /> </wsp:Policy> <sp:Issuer> <wsaws:Address>http://localhost:8081/services/sts</wsaws:Address> <wsaws:Metadata xmlns:wsdli=\"http://www.w3.org/2006/01/wsdl-instance\" wsdli:wsdlLocation=\"http://localhost:8081/services/sts?wsdl\"> <wsaw:ServiceName xmlns:wsaw=\"http://www.w3.org/2006/05/addressing/wsdl\" xmlns:stsns=\"http://docs.oasis-open.org/ws-sx/ws-trust/200512/\" EndpointName=\"UT_Port\">stsns:SecurityTokenService</wsaw:ServiceName> </wsaws:Metadata> </sp:Issuer> </sp:IssuedToken>", "@WebServiceProvider(serviceName = \"SecurityTokenService\", portName = \"UT_Port\", targetNamespace = \"http://docs.oasis-open.org/ws-sx/ws-trust/200512/\", wsdlLocation = \"ws-trust-1.4-service.wsdl\") public class Sts extends SecurityTokenServiceProvider { public Sts() throws Exception { super(); StaticSTSProperties props = new StaticSTSProperties(); props.setSignatureCryptoProperties(\"stsKeystore.properties\"); props.setSignatureUsername(\"sts\"); props.setCallbackHandlerClass(StsCallbackHandler.class.getName()); props.setIssuer(\"SampleSTSIssuer\"); List<ServiceMBean> services = new LinkedList<ServiceMBean>(); StaticService service = new StaticService(); final Config config = ConfigProvider.getConfig(); final int port = 
LaunchMode.current().equals(LaunchMode.TEST) ? config.getValue(\"quarkus.http.test-port\", Integer.class) : config.getValue(\"quarkus.http.port\", Integer.class); service.setEndpoints(Arrays.asList( \"http://localhost:\" + port + \"/services/hello-ws-trust\", \"http://localhost:\" + port + \"/services/hello-ws-trust-actas\", \"http://localhost:\" + port + \"/services/hello-ws-trust-onbehalfof\")); services.add(service); TokenIssueOperation issueOperation = new TokenIssueOperation(); issueOperation.setServices(services); issueOperation.getTokenProviders().add(new SAMLTokenProvider()); // required for OnBehalfOf issueOperation.getTokenValidators().add(new UsernameTokenValidator()); // added for OnBehalfOf and ActAs issueOperation.getDelegationHandlers().add(new UsernameTokenDelegationHandler()); issueOperation.setStsProperties(props); TokenValidateOperation validateOperation = new TokenValidateOperation(); validateOperation.getTokenValidators().add(new SAMLTokenValidator()); validateOperation.setStsProperties(props); this.setIssueOperation(issueOperation); this.setValidateOperation(validateOperation); } }", "quarkus.cxf.endpoint.\"/sts\".implementor = io.quarkiverse.cxf.it.ws.trust.sts.Sts quarkus.cxf.endpoint.\"/sts\".logging.enabled = pretty quarkus.cxf.endpoint.\"/sts\".security.signature.username = sts quarkus.cxf.endpoint.\"/sts\".security.signature.password = password quarkus.cxf.endpoint.\"/sts\".security.validate.token = false quarkus.cxf.endpoint.\"/sts\".security.signature.properties.\"org.apache.ws.security.crypto.provider\" = org.apache.ws.security.components.crypto.Merlin quarkus.cxf.endpoint.\"/sts\".security.signature.properties.\"org.apache.ws.security.crypto.merlin.keystore.type\" = pkcs12 quarkus.cxf.endpoint.\"/sts\".security.signature.properties.\"org.apache.ws.security.crypto.merlin.keystore.password\" = password quarkus.cxf.endpoint.\"/sts\".security.signature.properties.\"org.apache.ws.security.crypto.merlin.keystore.file\" = sts.pkcs12", "@WebService(portName = \"TrustHelloServicePort\", serviceName = \"TrustHelloService\", targetNamespace = \"https://quarkiverse.github.io/quarkiverse-docs/quarkus-cxf/test/ws-trust\", endpointInterface = \"io.quarkiverse.cxf.it.ws.trust.server.TrustHelloService\") public class TrustHelloServiceImpl implements TrustHelloService { @WebMethod @Override public String hello(String person) { return \"Hello \" + person + \"!\"; } }", "@WebService(targetNamespace = \"https://quarkiverse.github.io/quarkiverse-docs/quarkus-cxf/test/ws-trust\") @Policy(placement = Policy.Placement.BINDING, uri = \"classpath:/asymmetric-saml2-policy.xml\") public interface TrustHelloService { @WebMethod @Policies({ @Policy(placement = Policy.Placement.BINDING_OPERATION_INPUT, uri = \"classpath:/io-policy.xml\"), @Policy(placement = Policy.Placement.BINDING_OPERATION_OUTPUT, uri = \"classpath:/io-policy.xml\") }) String hello(String person); }", "quarkus.cxf.endpoint.\"/hello-ws-trust\".implementor = io.quarkiverse.cxf.it.ws.trust.server.TrustHelloServiceImpl quarkus.cxf.endpoint.\"/hello-ws-trust\".logging.enabled = pretty quarkus.cxf.endpoint.\"/hello-ws-trust\".security.signature.username = service quarkus.cxf.endpoint.\"/hello-ws-trust\".security.signature.password = password quarkus.cxf.endpoint.\"/hello-ws-trust\".security.signature.properties.\"org.apache.ws.security.crypto.provider\" = org.apache.ws.security.components.crypto.Merlin quarkus.cxf.endpoint.\"/hello-ws-trust\".security.signature.properties.\"org.apache.ws.security.crypto.merlin.keystore.type\" 
= pkcs12 quarkus.cxf.endpoint.\"/hello-ws-trust\".security.signature.properties.\"org.apache.ws.security.crypto.merlin.keystore.password\" = password quarkus.cxf.endpoint.\"/hello-ws-trust\".security.signature.properties.\"org.apache.ws.security.crypto.merlin.keystore.alias\" = service quarkus.cxf.endpoint.\"/hello-ws-trust\".security.signature.properties.\"org.apache.ws.security.crypto.merlin.file\" = service.pkcs12 quarkus.cxf.endpoint.\"/hello-ws-trust\".security.encryption.properties.\"org.apache.ws.security.crypto.provider\" = org.apache.ws.security.components.crypto.Merlin quarkus.cxf.endpoint.\"/hello-ws-trust\".security.encryption.properties.\"org.apache.ws.security.crypto.merlin.keystore.type\" = pkcs12 quarkus.cxf.endpoint.\"/hello-ws-trust\".security.encryption.properties.\"org.apache.ws.security.crypto.merlin.keystore.password\" = password quarkus.cxf.endpoint.\"/hello-ws-trust\".security.encryption.properties.\"org.apache.ws.security.crypto.merlin.keystore.alias\" = service quarkus.cxf.endpoint.\"/hello-ws-trust\".security.encryption.properties.\"org.apache.ws.security.crypto.merlin.file\" = service.pkcs12", "quarkus.cxf.client.hello-ws-trust.security.sts.client.wsdl = http://localhost:USD{quarkus.http.test-port}/services/sts?wsdl quarkus.cxf.client.hello-ws-trust.security.sts.client.service-name = {http://docs.oasis-open.org/ws-sx/ws-trust/200512/}SecurityTokenService quarkus.cxf.client.hello-ws-trust.security.sts.client.endpoint-name = {http://docs.oasis-open.org/ws-sx/ws-trust/200512/}UT_Port quarkus.cxf.client.hello-ws-trust.security.sts.client.username = client quarkus.cxf.client.hello-ws-trust.security.sts.client.password = password quarkus.cxf.client.hello-ws-trust.security.sts.client.encryption.username = sts quarkus.cxf.client.hello-ws-trust.security.sts.client.encryption.properties.\"org.apache.ws.security.crypto.provider\" = org.apache.ws.security.components.crypto.Merlin quarkus.cxf.client.hello-ws-trust.security.sts.client.encryption.properties.\"org.apache.ws.security.crypto.merlin.keystore.type\" = pkcs12 quarkus.cxf.client.hello-ws-trust.security.sts.client.encryption.properties.\"org.apache.ws.security.crypto.merlin.keystore.password\" = password quarkus.cxf.client.hello-ws-trust.security.sts.client.encryption.properties.\"org.apache.ws.security.crypto.merlin.keystore.alias\" = client quarkus.cxf.client.hello-ws-trust.security.sts.client.encryption.properties.\"org.apache.ws.security.crypto.merlin.keystore.file\" = client.pkcs12 quarkus.cxf.client.hello-ws-trust.security.sts.client.token.username = client quarkus.cxf.client.hello-ws-trust.security.sts.client.token.properties.\"org.apache.ws.security.crypto.provider\" = org.apache.ws.security.components.crypto.Merlin quarkus.cxf.client.hello-ws-trust.security.sts.client.token.properties.\"org.apache.ws.security.crypto.merlin.keystore.type\" = pkcs12 quarkus.cxf.client.hello-ws-trust.security.sts.client.token.properties.\"org.apache.ws.security.crypto.merlin.keystore.password\" = password quarkus.cxf.client.hello-ws-trust.security.sts.client.token.properties.\"org.apache.ws.security.crypto.merlin.keystore.alias\" = client quarkus.cxf.client.hello-ws-trust.security.sts.client.token.properties.\"org.apache.ws.security.crypto.merlin.keystore.file\" = client.pkcs12 quarkus.cxf.client.hello-ws-trust.security.sts.client.token.usecert = true", "quarkus.cxf.client.hello-ws-trust-bean.security.sts.client = #stsClientBean", "import jakarta.enterprise.context.ApplicationScoped; import jakarta.enterprise.inject.Produces; 
import jakarta.inject.Named; import org.apache.cxf.ws.security.SecurityConstants; import io.quarkiverse.cxf.ws.security.sts.client.STSClientBean; public class BeanProducers { /** * Create and configure an STSClient for use by the TrustHelloService client. */ @Produces @ApplicationScoped @Named(\"stsClientBean\") STSClientBean createSTSClient() { /* * We cannot use org.apache.cxf.ws.security.trust.STSClient as a return type of this bean producer method * because it does not have a no-args constructor. STSClientBean is a subclass of STSClient having one. */ STSClientBean stsClient = STSClientBean.create(); stsClient.setWsdlLocation(\"http://localhost:8081/services/sts?wsdl\"); stsClient.setServiceQName(new QName(\"http://docs.oasis-open.org/ws-sx/ws-trust/200512/\", \"SecurityTokenService\")); stsClient.setEndpointQName(new QName(\"http://docs.oasis-open.org/ws-sx/ws-trust/200512/\", \"UT_Port\")); Map<String, Object> props = stsClient.getProperties(); props.put(SecurityConstants.USERNAME, \"client\"); props.put(SecurityConstants.PASSWORD, \"password\"); props.put(SecurityConstants.ENCRYPT_PROPERTIES, Thread.currentThread().getContextClassLoader().getResource(\"clientKeystore.properties\")); props.put(SecurityConstants.ENCRYPT_USERNAME, \"sts\"); props.put(SecurityConstants.STS_TOKEN_USERNAME, \"client\"); props.put(SecurityConstants.STS_TOKEN_PROPERTIES, Thread.currentThread().getContextClassLoader().getResource(\"clientKeystore.properties\")); props.put(SecurityConstants.STS_TOKEN_USE_CERT_FOR_KEYINFO, \"true\"); return stsClient; } }", "<dependency> <groupId>io.quarkiverse.cxf</groupId> <artifactId>quarkus-cxf-rt-transports-http-hc5</artifactId> </dependency>", "<?xml version=\"1.0\"?> <bindings xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\" xmlns=\"https://jakarta.ee/xml/ns/jaxws\" wsdlLocation=\"CalculatorService.wsdl\"> <bindings node=\"wsdl:definitions\"> <enableAsyncMapping>true</enableAsyncMapping> </bindings> </bindings>", "quarkus.cxf.codegen.wsdl2java.includes = wsdl/*.wsdl quarkus.cxf.codegen.wsdl2java.additional-params = -b,src/main/resources/wsdl/async-binding.xml", "package io.quarkiverse.cxf.hc5.it; import java.util.concurrent.Future; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.QueryParam; import jakarta.ws.rs.core.MediaType; import org.apache.cxf.endpoint.Client; import org.apache.cxf.frontend.ClientProxy; import org.jboss.eap.quickstarts.wscalculator.calculator.AddResponse; import org.jboss.eap.quickstarts.wscalculator.calculator.CalculatorService; import io.quarkiverse.cxf.annotation.CXFClient; import io.smallrye.mutiny.Uni; @Path(\"/hc5\") public class Hc5Resource { @Inject @CXFClient(\"myCalculator\") // name used in application.properties CalculatorService myCalculator; @SuppressWarnings(\"unchecked\") @Path(\"/add-async\") @GET @Produces(MediaType.TEXT_PLAIN) public Uni<Integer> addAsync(@QueryParam(\"a\") int a, @QueryParam(\"b\") int b) { return Uni.createFrom() .future( (Future<AddResponse>) myCalculator .addAsync(a, b, res -> { })) .map(addResponse -> addResponse.getReturn()); } }", "<dependency> <groupId>io.quarkiverse.cxf</groupId> <artifactId>quarkus-cxf-xjc-plugins</artifactId> </dependency>", "<project ...> <properties> <quarkus.platform.artifact-id>quarkus-bom</quarkus.platform.artifact-id> <quarkus.platform.group-id>io.quarkus.platform</quarkus.platform.group-id> <quarkus.platform.version><!-- Check the latest 
https://repo1.maven.org/maven2/io/quarkus/platform/quarkus-cxf-bom/ --></quarkus.platform.version> </properties> <dependencyManagement> <dependencies> <dependency> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>USD{quarkus.platform.artifact-id}</artifactId> <version>USD{quarkus.platform.version}</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>quarkus-cxf-bom</artifactId> <version>USD{quarkus.platform.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement>", "package io.quarkiverse.cxf.it.server; import jakarta.jws.WebMethod; import jakarta.jws.WebService; /** * The simplest Hello service. */ @WebService(name = \"HelloService\", serviceName = \"HelloService\") public interface HelloService { @WebMethod String hello(String text); }", "package io.quarkiverse.cxf.it.server; import jakarta.jws.WebMethod; import jakarta.jws.WebService; /** * The simplest Hello service implementation. */ @WebService(serviceName = \"HelloService\") public class HelloServiceImpl implements HelloService { @WebMethod @Override public String hello(String text) { return \"Hello \" + text + \"!\"; } }", "The context path under which all services will be available quarkus.cxf.path = /soap Publish \"HelloService\" under the context path /USD{quarkus.cxf.path}/hello quarkus.cxf.endpoint.\"/hello\".implementor = io.quarkiverse.cxf.it.server.HelloServiceImpl quarkus.cxf.endpoint.\"/hello\".features = org.apache.cxf.ext.logging.LoggingFeature", "mvn quarkus:dev", "curl http://localhost:8080/soap/hello?wsdl <?xml version='1.0' encoding='UTF-8'?> <wsdl:definitions xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:tns=\"http://server.it.cxf.quarkiverse.io/\" xmlns:soap=\"http://schemas.xmlsoap.org/wsdl/soap/\" xmlns:ns1=\"http://schemas.xmlsoap.org/soap/http\" name=\"HelloService\" targetNamespace=\"http://server.it.cxf.quarkiverse.io/\"> <wsdl:types> <xsd:schema xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns:tns=\"http://server.it.cxf.quarkiverse.io/\" attributeFormDefault=\"unqualified\" elementFormDefault=\"unqualified\" targetNamespace=\"http://server.it.cxf.quarkiverse.io/\"> <xsd:element name=\"hello\" type=\"tns:hello\"/> <xsd:complexType name=\"hello\"> <xsd:sequence> <xsd:element minOccurs=\"0\" name=\"arg0\" type=\"xsd:string\"/> </xsd:sequence> </xsd:complexType> <xsd:element name=\"helloResponse\" type=\"tns:helloResponse\"/> <xsd:complexType name=\"helloResponse\"> <xsd:sequence> <xsd:element minOccurs=\"0\" name=\"return\" type=\"xsd:string\"/> </xsd:sequence> </xsd:complexType> </xsd:schema> </wsdl:types> <wsdl:message name=\"helloResponse\"> <wsdl:part element=\"tns:helloResponse\" name=\"parameters\"> </wsdl:part> </wsdl:message> <wsdl:message name=\"hello\"> <wsdl:part element=\"tns:hello\" name=\"parameters\"> </wsdl:part> </wsdl:message> <wsdl:portType name=\"HelloService\"> <wsdl:operation name=\"hello\"> <wsdl:input message=\"tns:hello\" name=\"hello\"> </wsdl:input> <wsdl:output message=\"tns:helloResponse\" name=\"helloResponse\"> </wsdl:output> </wsdl:operation> </wsdl:portType> <wsdl:binding name=\"HelloServiceSoapBinding\" type=\"tns:HelloService\"> <soap:binding style=\"document\" transport=\"http://schemas.xmlsoap.org/soap/http\"/> <wsdl:operation name=\"hello\"> <soap:operation soapAction=\"\" style=\"document\"/> <wsdl:input name=\"hello\"> <soap:body use=\"literal\"/> </wsdl:input> 
<wsdl:output name=\"helloResponse\"> <soap:body use=\"literal\"/> </wsdl:output> </wsdl:operation> </wsdl:binding> <wsdl:service name=\"HelloService\"> <wsdl:port binding=\"tns:HelloServiceSoapBinding\" name=\"HelloServicePort\"> <soap:address location=\"http://localhost:8080/soap/hello\"/> </wsdl:port> </wsdl:service> </wsdl:definitions>", "curl -v -X POST -H \"Content-Type: text/xml;charset=UTF-8\" -d '<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"> <soap:Body><ns2:hello xmlns:ns2=\"http://server.it.cxf.quarkiverse.io/\"><arg0>World</arg0></ns2:hello></soap:Body> </soap:Envelope>' http://localhost:8080/soap/hello <soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"> <soap:Body> <ns1:helloResponse xmlns:ns1=\"http://server.it.cxf.quarkiverse.io/\"> <return>Hello World!</return> </ns1:helloResponse> </soap:Body> </soap:Envelope>", "<dependency> <groupId>io.quarkiverse.cxf</groupId> <artifactId>quarkus-cxf-rt-features-logging</artifactId> </dependency>", "quarkus.cxf.endpoint.\"/hello\".features=org.apache.cxf.ext.logging.LoggingFeature", "2023-01-11 22:12:21,315 INFO [org.apa.cxf.ser.Hel.REQ_IN] (vert.x-worker-thread-0) REQ_IN Address: http://localhost:8080/soap/hello HttpMethod: POST Content-Type: text/xml;charset=UTF-8 ExchangeId: af10747a-8477-4c17-bf5f-2a4a3a95d61c ServiceName: HelloService PortName: HelloServicePort PortTypeName: HelloService Headers: {Accept=*/*, User-Agent=curl/7.79.1, content-type=text/xml;charset=UTF-8, Host=localhost:8080, Content-Length=203, x-quarkus-hot-deployment-done=true} Payload: <soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"> <soap:Body><ns2:hello xmlns:ns2=\"http://server.it.cxf.quarkiverse.io/\"><arg0>World</arg0></ns2:hello></soap:Body> </soap:Envelope> 2023-01-11 22:12:21,327 INFO [org.apa.cxf.ser.Hel.RESP_OUT] (vert.x-worker-thread-0) RESP_OUT Address: http://localhost:8080/soap/hello Content-Type: text/xml ResponseCode: 200 ExchangeId: af10747a-8477-4c17-bf5f-2a4a3a95d61c ServiceName: HelloService PortName: HelloServicePort PortTypeName: HelloService Headers: {} Payload: <soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"><soap:Body><ns1:helloResponse xmlns:ns1=\"http://server.it.cxf.quarkiverse.io/\"><return>Hello World!</return></ns1:helloResponse></soap:Body></soap:Envelope>", "docker run -p 8082:8080 quay.io/l2x6/calculator-ws:1.0", "curl -s http://localhost:8082/calculator-ws/CalculatorService?wsdl <?xml version=\"1.0\" ?> <wsdl:definitions xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:tns=\"http://www.jboss.org/eap/quickstarts/wscalculator/Calculator\" xmlns:soap=\"http://schemas.xmlsoap.org/wsdl/soap/\" xmlns:ns1=\"http://schemas.xmlsoap.org/soap/http\" name=\"CalculatorService\" targetNamespace=\"http://www.jboss.org/eap/quickstarts/wscalculator/Calculator\"> <wsdl:binding name=\"CalculatorServiceSoapBinding\" type=\"tns:CalculatorService\"> <soap:binding style=\"document\" transport=\"http://schemas.xmlsoap.org/soap/http\"></soap:binding> <wsdl:operation name=\"add\"> <soap:operation soapAction=\"\" style=\"document\"></soap:operation> <wsdl:input name=\"add\"> <soap:body use=\"literal\"></soap:body> </wsdl:input> <wsdl:output name=\"addResponse\"> <soap:body use=\"literal\"></soap:body> </wsdl:output> </wsdl:operation> <wsdl:operation name=\"subtract\"> <soap:operation soapAction=\"\" style=\"document\"></soap:operation> <wsdl:input name=\"subtract\"> <soap:body use=\"literal\"></soap:body> 
</wsdl:input> <wsdl:output name=\"subtractResponse\"> <soap:body use=\"literal\"></soap:body> </wsdl:output> </wsdl:operation> </wsdl:binding> </wsdl:definitions>", "curl -s -X POST -H \"Content-Type: text/xml;charset=UTF-8\" -d '<Envelope xmlns=\"http://schemas.xmlsoap.org/soap/envelope/\"> <Body> <add xmlns=\"http://www.jboss.org/eap/quickstarts/wscalculator/Calculator\"> <arg0 xmlns=\"\">7</arg0> 1 <arg1 xmlns=\"\">4</arg1> </add> </Body> </Envelope>' http://localhost:8082/calculator-ws/CalculatorService <soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"> <soap:Body> <ns2:addResponse xmlns:ns2=\"http://www.jboss.org/eap/quickstarts/wscalculator/Calculator\"> <return>11</return> 2 </ns2:addResponse> </soap:Body> </soap:Envelope>", "package io.quarkiverse.cxf.client.it; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.QueryParam; import jakarta.ws.rs.core.MediaType; import org.jboss.eap.quickstarts.wscalculator.calculator.CalculatorService; import io.quarkiverse.cxf.annotation.CXFClient; @Path(\"/cxf/calculator-client\") public class CxfClientRestResource { @CXFClient(\"myCalculator\") 1 CalculatorService myCalculator; @GET @Path(\"/add\") @Produces(MediaType.TEXT_PLAIN) public int add(@QueryParam(\"a\") int a, @QueryParam(\"b\") int b) { return myCalculator.add(a, b); 2 } }", "cxf.it.calculator.baseUri=http://localhost:8082 quarkus.cxf.client.myCalculator.wsdl = USD{cxf.it.calculator.baseUri}/calculator-ws/CalculatorService?wsdl quarkus.cxf.client.myCalculator.client-endpoint-url = USD{cxf.it.calculator.baseUri}/calculator-ws/CalculatorService quarkus.cxf.client.myCalculator.service-interface = org.jboss.eap.quickstarts.wscalculator.calculator.CalculatorService", "mvn quarkus:dev INFO [io.quarkus] (Quarkus Main Thread) ... Listening on: http://localhost:8080", "curl -s 'http://localhost:8080/cxf/calculator-client/add?a=5&b=6' 11", "bean reference by type quarkus.cxf.endpoint.\"/hello\".features = org.apache.cxf.ext.logging.LoggingFeature", "bean reference by bean name quarkus.cxf.endpoint.\"/fruit\".features = #myCustomLoggingFeature", "import org.apache.cxf.ext.logging.LoggingFeature; import javax.enterprise.context.ApplicationScoped; import javax.enterprise.inject.Produces; class Producers { @Produces @ApplicationScoped @Named(\"myCustomLoggingFeature\") LoggingFeature myCustomLoggingFeature() { LoggingFeature loggingFeature = new LoggingFeature(); loggingFeature.setPrettyLogging(true); return loggingFeature; } }", "mvn package", "ls -lh target/quarkus-app drwxr-xr-x. 2 ppalaga ppalaga 4.0K Jan 12 22:29 app drwxr-xr-x. 4 ppalaga ppalaga 4.0K Jan 12 22:29 lib drwxr-xr-x. 2 ppalaga ppalaga 4.0K Jan 12 22:29 quarkus -rw-r-r--. 1 ppalaga ppalaga 6.1K Jan 12 22:29 quarkus-app-dependencies.txt -rw-r-r--. 1 ppalaga ppalaga 678 Jan 12 22:29 quarkus-run.jar", "java -jar target/quarkus-app/quarkus-run.jar", "<profile> <id>native</id> <activation> <property> <name>native</name> </property> </activation> <properties> <skipITs>false</skipITs> <quarkus.package.type>native</quarkus.package.type> </properties> </profile>", "Make sure USDGRAALVM_HOME is set properly echo USDGRAALVM_HOME /home/{user}/.sdkman/candidates/java/{major}.{minor}.r{java-version}-grl Produce the native executable mvn package -Pnative", "Produce the native executable mvn package -Pnative -Dquarkus.native.container-build=true", "ls -l target -rwxr-xr-x. 
1 ppalaga ppalaga 71M Jan 11 22:42 quarkus-cxf-integration-test-server-1.8.0-SNAPSHOT-runner", "target/*-runner INFO [io.quarkus] (main) quarkus-cxf-integration-test-server 1.8.0-SNAPSHOT native (powered by Quarkus 2.15.2.Final) started in 0.042s. Listening on: http://0.0.0.0:8080", "Global settings quarkus.cxf.logging.enabled-for = both quarkus.cxf.logging.pretty = true", "For a service: quarkus.cxf.endpoint.\"/hello\".logging.enabled = true quarkus.cxf.endpoint.\"/hello\".logging.pretty = true For a client: quarkus.cxf.client.hello.logging.enabled = true quarkus.cxf.client.hello.logging.pretty = true", "For a service: quarkus.cxf.endpoint.\"/hello\".features = org.apache.cxf.ext.logging.LoggingFeature For a client: quarkus.cxf.client.\"myClient\".features = org.apache.cxf.ext.logging.LoggingFeature", "@org.apache.cxf.feature.Features (features = {\"org.apache.cxf.ext.logging.LoggingFeature\"}) @WebService(endpointInterface = \"org.acme.SayHi\", targetNamespace = \"uri:org.acme\") public class SayHiImplementation implements SayHi { public long sayHi(long arg) { return arg; } // }", "import org.apache.cxf.ext.logging.LoggingFeature; import jakarta.enterprise.context.ApplicationScoped; import jakarta.enterprise.inject.Produces; class Producers { @Produces @ApplicationScoped @Named(\"limitedLoggingFeature\") // \"limitedLoggingFeature\" is redundant if the name of the method is the same LoggingFeature limitedLoggingFeature() { LoggingFeature loggingFeature = new LoggingFeature(); loggingFeature.setPrettyLogging(true); loggingFeature.setLimit(1024); return loggingFeature; } }", "For a service: quarkus.cxf.endpoint.\"/hello\".features = #limitedLoggingFeature For a client: quarkus.cxf.client.hello.features = #limitedLoggingFeature", "package io.quarkiverse.cxf.it.server; import java.util.Objects; import jakarta.xml.bind.annotation.XmlElement; import jakarta.xml.bind.annotation.XmlRootElement; import jakarta.xml.bind.annotation.XmlType; @XmlType(name = \"Fruit\") @XmlRootElement public class Fruit { private String name; private String description; public Fruit() { } public Fruit(String name, String description) { this.name = name; this.description = description; } public String getName() { return name; } @XmlElement public void setName(String name) { this.name = name; } public String getDescription() { return description; } @XmlElement public void setDescription(String description) { this.description = description; } @Override public boolean equals(Object obj) { if (!(obj instanceof Fruit)) { return false; } Fruit other = (Fruit) obj; return Objects.equals(other.getName(), this.getName()); } @Override public int hashCode() { return Objects.hash(this.getName()); } }", "package io.quarkiverse.cxf.it.server; import java.util.Set; import jakarta.jws.WebMethod; import jakarta.jws.WebService; @WebService public interface FruitService { @WebMethod Set<Fruit> list(); @WebMethod Set<Fruit> add(Fruit fruit); @WebMethod Set<Fruit> delete(Fruit fruit); }", "package io.quarkiverse.cxf.it.server; import java.util.Collections; import java.util.LinkedHashSet; import java.util.Set; import jakarta.jws.WebService; @WebService(serviceName = \"FruitService\") public class FruitServiceImpl implements FruitService { private Set<Fruit> fruits = Collections.synchronizedSet(new LinkedHashSet<>()); public FruitServiceImpl() { fruits.add(new Fruit(\"Apple\", \"Winter fruit\")); fruits.add(new Fruit(\"Pineapple\", \"Tropical fruit\")); } @Override public Set<Fruit> list() { return fruits; } @Override public Set<Fruit> 
add(Fruit fruit) { fruits.add(fruit); return fruits; } @Override public Set<Fruit> delete(Fruit fruit) { fruits.remove(fruit); return fruits; } }", "quarkus.cxf.endpoint.\"/fruits\".implementor = io.quarkiverse.cxf.it.server.FruitServiceImpl quarkus.cxf.endpoint.\"/fruits\".features = org.apache.cxf.ext.logging.LoggingFeature", "mvn quarkus:dev INFO [io.quarkus] (Quarkus Main Thread) ... Listening on: http://localhost:8080", "curl -v -X POST -H \"Content-Type: text/xml;charset=UTF-8\" -d '<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\" xmlns:ns1=\"http://server.it.cxf.quarkiverse.io/\"> <soapenv:Body> <ns1:list/> </soapenv:Body> </soapenv:Envelope>' http://localhost:8080/soap/fruits <soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"> <soap:Body> <ns1:listResponse xmlns:ns1=\"http://server.it.cxf.quarkiverse.io/\"> <return xmlns:ns2=\"http://server.it.cxf.quarkiverse.io/\"> <description>Winter fruit</description> <name>Apple</name> </return> <return xmlns:ns2=\"http://server.it.cxf.quarkiverse.io/\"> <description>Tropical fruit</description> <name>Pineapple</name> </return> </ns1:listResponse> </soap:Body> </soap:Envelope>", "curl -v -X POST -H \"Content-Type: text/xml;charset=UTF-8\" -d '<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"> <soap:Body> <ns2:add xmlns:ns2=\"http://server.it.cxf.quarkiverse.io/\"> <arg0> <description>Mediterranean fruit</description> <name>Orange</name> </arg0> </ns2:add> </soap:Body></soap:Envelope>' http://localhost:8080/soap/fruits <soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"> <soap:Body> <ns1:addResponse xmlns:ns1=\"http://server.it.cxf.quarkiverse.io/\"> <return xmlns:ns2=\"http://server.it.cxf.quarkiverse.io/\"> <description>Winter fruit</description> <name>Apple</name> </return> <return xmlns:ns2=\"http://server.it.cxf.quarkiverse.io/\"> <description>Tropical fruit</description> <name>Pineapple</name> </return> <return xmlns:ns2=\"http://server.it.cxf.quarkiverse.io/\"> <description>Mediterranean fruit</description> <name>Orange</name> </return> </ns1:addResponse> </soap:Body> </soap:Envelope>", "<plugin> <groupId>io.quarkus</groupId> <artifactId>quarkus-maven-plugin</artifactId> <executions> <execution> <goals> <goal>build</goal> <goal>generate-code</goal> </goals> </execution> </executions> </plugin>", "quarkus.cxf.codegen.wsdl2java.includes = wsdl/*.wsdl", "Parameters for foo.wsdl quarkus.cxf.codegen.wsdl2java.foo-params.includes = wsdl/foo.wsdl quarkus.cxf.codegen.wsdl2java.foo-params.wsdl-location = wsdl/foo.wsdl Parameters for bar.wsdl quarkus.cxf.codegen.wsdl2java.bar-params.includes = wsdl/bar.wsdl quarkus.cxf.codegen.wsdl2java.bar-params.wsdl-location = wsdl/bar.wsdl quarkus.cxf.codegen.wsdl2java.bar-params.xjc = ts", "[quarkus-dalkia-ticket-loader-1.0.0-SNAPSHOT-runner:26] compile: 161 459,15 ms, 8,54 GB [quarkus-dalkia-ticket-loader-1.0.0-SNAPSHOT-runner:26] image: 158 272,73 ms, 8,43 GB [quarkus-dalkia-ticket-loader-1.0.0-SNAPSHOT-runner:26] write: 205,82 ms, 8,43 GB Fatal error:com.oracle.svm.core.util.VMErrorUSDHostedError: java.lang.RuntimeException: oops : expected ASCII string! 
com.oracle.svm.reflect.OperationOrderStatusType_CREE_f151156b0d42ecdbdfb919501d8a86dda8733012_1456.hashCode at com.oracle.svm.core.util.VMError.shouldNotReachHere(VMError.java:72)", "@XmlType(name = \"OperationOrderStatusType\") @XmlEnum public enum OperationOrderStatusType { @XmlEnumValue(\"Cr\\u00e9\\u00e9\") CREE(\"Cr\\u00e9\\u00e9\"), @XmlEnumValue(\"A communiquer\") A_COMMUNIQUER(\"A communiquer\"), @XmlEnumValue(\"En attente de r\\u00e9ponse\") EN_ATTENTE_DE_REPONSE(\"En attente de r\\u00e9ponse\"), @XmlEnumValue(\"Attribu\\u00e9\") ATTRIBUE(\"Attribu\\u00e9\"), @XmlEnumValue(\"Clotur\\u00e9\") CLOTURE(\"Clotur\\u00e9\"), @XmlEnumValue(\"Annul\\u00e9\") ANNULE(\"Annul\\u00e9\"); private final String value; OperationOrderStatusType(String v) { value = v; } public String value() { return value; } public static OperationOrderStatusType fromValue(String v) { for (OperationOrderStatusType c: OperationOrderStatusType.values()) { if (c.value.equals(v)) { return c; } } throw new IllegalArgumentException(v); } }", "@XmlType(name = \"OperationOrderStatusType\") @XmlEnum public enum OperationOrderStatusType { @XmlEnumValue(\"Cree\") CREE(\"Cree\"), @XmlEnumValue(\"A communiquer\") A_COMMUNIQUER(\"A communiquer\"), @XmlEnumValue(\"En attente de reponse\") EN_ATTENTE_DE_REPONSE(\"En attente de reponse\"), @XmlEnumValue(\"Attribue\") ATTRIBUE(\"Attribue\"), @XmlEnumValue(\"Cloture\") CLOTURE(\"Cloture\"), @XmlEnumValue(\"Annule\") ANNULE(\"Annule\"); private final String value; OperationOrderStatusType(String v) { value = v; } public String value() { return value; } public static OperationOrderStatusType fromValue(String v) { for (OperationOrderStatusType c: OperationOrderStatusType.values()) { if (c.value.equals(v)) { return c; } } throw new IllegalArgumentException(v); } }", "quarkus.cxf.java2ws.includes = io.quarkiverse.cxf.it.server.HelloServiceImpl,io.quarkiverse.cxf.it.server.FaultyHelloServiceImpl quarkus.cxf.java2ws.wsdl-name-template = %TARGET_DIR%/Java2wsTest/%SIMPLE_CLASS_NAME%-from-java2ws.wsdl", "@org.apache.cxf.feature.Features (features = {\"org.apache.cxf.ext.logging.LoggingFeature\"}) @org.apache.cxf.interceptor.InInterceptors (interceptors = {\"org.acme.Test1Interceptor\" }) @org.apache.cxf.interceptor.InFaultInterceptors (interceptors = {\"org.acme.Test2Interceptor\" }) @org.apache.cxf.interceptor.OutInterceptors (interceptors = {\"org.acme.Test1Interceptor\" }) @org.apache.cxf.interceptor.InFaultInterceptors (interceptors = {\"org.acme.Test2Interceptor\",\"org.acme.Test3Intercetpor\" }) @WebService(endpointInterface = \"org.acme.SayHi\", targetNamespace = \"uri:org.acme\") public class SayHiImplementation implements SayHi { public long sayHi(long arg) { return arg; } // }", "quarkus.cxf.endpoint.\"/greeting-service\".features=org.apache.cxf.ext.logging.LoggingFeature quarkus.cxf.endpoint.\"/greeting-service\".in-interceptors=org.acme.Test1Interceptor quarkus.cxf.endpoint.\"/greeting-service\".out-interceptors=org.acme.Test1Interceptor quarkus.cxf.endpoint.\"/greeting-service\".in-fault-interceptors=org.acme.Test2Interceptor,org.acme.Test3Intercetpor quarkus.cxf.endpoint.\"/greeting-service\".out-fault-interceptors=org.acme.Test1Intercetpor", "A web service endpoint with multiple Handler classes quarkus.cxf.endpoint.\"/greeting-service\".handlers=org.acme.MySOAPHandler,org.acme.AnotherSOAPHandler A web service client with a single Handler quarkus.cxf.client.\"greeting-client\".handlers=org.acme.MySOAPHandler", "import jakarta.xml.ws.handler.soap.SOAPHandler; import 
jakarta.xml.ws.handler.soap.SOAPMessageContext; public class MySOAPHandler implements SOAPHandler<SOAPMessageContext> { public boolean handleMessage(SOAPMessageContext messageContext) { SOAPMessage msg = messageContext.getMessage(); return true; } // other methods }", "The context path under which all services will be available quarkus.cxf.path = /soap Publish \"HelloService\" under the context path /USD{quarkus.cxf.path}/hello quarkus.cxf.endpoint.\"/hello\".implementor = io.quarkiverse.cxf.it.server.HelloServiceImpl quarkus.cxf.endpoint.\"/hello\".features = org.apache.cxf.ext.logging.LoggingFeature", "package io.quarkiverse.cxf.it.annotation.cxfendpoint; import jakarta.jws.WebService; import io.quarkiverse.cxf.annotation.CXFEndpoint; import io.quarkiverse.cxf.it.HelloService; @CXFEndpoint(\"/path-annotation\") 1 @WebService(serviceName = \"HelloService\", targetNamespace = HelloService.NS) public class PathAnnotationHelloServiceImpl implements HelloService { @Override public String hello(String person) { return \"Hello \" + person + \" from PathAnnotationHelloServiceImpl!\"; } }", "package io.quarkiverse.cxf.it.annotation.cxfendpoint; import org.assertj.core.api.Assertions; import org.junit.jupiter.api.Test; import org.mockito.Mockito; import io.quarkiverse.cxf.annotation.CXFClient; import io.quarkiverse.cxf.annotation.CXFEndpoint; import io.quarkiverse.cxf.it.HelloService; import io.quarkus.test.junit.QuarkusTest; @QuarkusTest public class MockedEndpointTest { @CXFEndpoint(\"/helloMock\") 1 HelloService helloMockService() { final HelloService result = Mockito.mock(HelloService.class); Mockito.when(result.hello(\"Mock\")).thenReturn(\"Hello Mock!\"); return result; } @CXFClient(\"helloMock\") 2 HelloService helloMockClient; @Test void helloMock() { Assertions.assertThat(helloMockClient.hello(\"Mock\")).isEqualTo(\"Hello Mock!\"); 3 } }", "import jakarta.xml.transform.stream.StreamSource; import jakarta.xml.ws.BindingType; import jakarta.xml.ws.Provider; import jakarta.xml.ws.Service; import jakarta.xml.ws.ServiceMode; import jakarta.xml.ws.WebServiceProvider; import java.io.StringReader; @WebServiceProvider @ServiceMode(value = Service.Mode.PAYLOAD) public class StreamSourcePayloadProvider implements Provider<StreamSource> { public StreamSourcePayloadProvider() { } public StreamSource invoke(StreamSource request) { String payload = StaxUtils.toString(request); // Do some interesting things StreamSource response = new StreamSource(new StringReader(payload)); return response; } }", "A web service endpoint with the Provider implementation class quarkus.cxf.endpoint.\"/stream-source\".implementor=org.acme.StreamSourcePayloadProvider", "package org.acme.cxf; import @Slf4j @WebService(endpointInterface = \"org.acme.cxf.WeatherWebService\") public class WeatherWebServiceImpl implements WeatherWebService { @Inject BackEndWeatherService backEndWeatherService; private Map<String, DailyTemperature> dailyTempByZipCode = Collections.synchronizedMap(new LinkedHashMap<>()); public WeatherWebServiceImpl() { this.dailyTempByZipCode.addAll( this.backEndWeatherService.getDailyForecast(Instant.now())); } @Override public DailyTemperature estimationTemperatures(String zipCode) { log.info(\"Daily estimation temperatures forecast called with '{}' zip code paramter\", zipCode); return this.dailyTempByZipCode.get(zipCode); } }", "quarkus.cxf.path=/soap quarkus.cxf.endpoint.\"/weather\".implementor=org.acme.cxf.WeatherWebServiceImpl", "package org.acme.reasteasy; import @Slf4j @Path(\"/healthcheck\") public 
class HealthCheckResource { @Inject BackEndWeatherService backEndWeatherService; @GET public Response doHealthCheck() { if(this.backEndWeatherService.isAvailable()) { return Response.ok().build(); } else { return Response.status(Response.Status.SERVICE_UNAVAILABLE); } } }", "quarkus.resteasy.path=/rest", "quarkus.http.proxy.proxy-address-forwarding = true quarkus.http.proxy.enable-forwarded-host = true quarkus.http.proxy.enable-forwarded-prefix = true", "X-Forwarded-Proto: https X-Forwarded-Host: api.example.com X-Forwarded-Port: 443 X-Forwarded-Prefix: /my-prefix", "<soap:address location=\"https://api.example.com:443/my-prefix/services/my-service\"/>", "cxf.it.calculator.baseUri = http://localhost:8082 quarkus.cxf.client.myCalculator.wsdl = USD{cxf.it.calculator.baseUri}/calculator-ws/CalculatorService?wsdl quarkus.cxf.client.myCalculator.client-endpoint-url = USD{cxf.it.calculator.baseUri}/calculator-ws/CalculatorService quarkus.cxf.client.myCalculator.service-interface = org.jboss.eap.quickstarts.wscalculator.calculator.CalculatorService another client quarkus.cxf.client.anotherCalculator.wsdl = https://acme.com/ws/WeatherService?wsdl quarkus.cxf.client.anotherCalculator.client-endpoint-url = https://acme.com/ws/WeatherService quarkus.cxf.client.anotherCalculator.service-interface = org.jboss.eap.quickstarts.wscalculator.calculator.CalculatorService", "import io.quarkus.runtime.StartupEvent; import jakarta.enterprise.event.Observes; import org.apache.cxf.Bus; import org.apache.cxf.BusFactory; import org.apache.cxf.transport.http.HTTPConduit; import org.apache.cxf.transport.http.HTTPConduitConfigurer; void onStart(@Observes StartupEvent ev) { HTTPConduitConfigurer httpConduitConfigurer = new HTTPConduitConfigurer() { public void configure(String name, String address, HTTPConduit conduit) { conduit.getClient().setAllowChunking(false); conduit.getClient().setAutoRedirect(true); } }; final Bus bus = BusFactory.getDefaultBus(); bus.setExtension(httpConduitConfigurer, HTTPConduitConfigurer.class); }", "package io.quarkiverse.cxf.client.it; import java.util.Map; import jakarta.enterprise.context.RequestScoped; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.QueryParam; import jakarta.ws.rs.core.MediaType; import jakarta.xml.ws.BindingProvider; import org.jboss.eap.quickstarts.wscalculator.calculator.CalculatorService; import io.quarkiverse.cxf.annotation.CXFClient; /* * The @RequestScoped annotation causes that the REST resource is instantiated * anew for every call of the add() method. Therefore also a new client instance * is injected into the calculator field for every request served by add(). 
*/ @RequestScoped @Path(\"/cxf/dynamic-client\") public class DynamicClientConfigRestResource { @CXFClient(\"requestScopedVertxHttpClient\") CalculatorService calculator; @GET @Path(\"/add\") @Produces(MediaType.TEXT_PLAIN) public int add(@QueryParam(\"a\") int a, @QueryParam(\"b\") int b, @QueryParam(\"baseUri\") String baseUri) { Map<String, Object> ctx = ((BindingProvider) calculator).getRequestContext(); /* We are setting the remote URL safely, because the client is associated exclusively with the current request */ ctx.put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, baseUri + \"/calculator-ws/CalculatorService\"); return calculator.add(a, b); } }", "quarkus.http.host-enabled = false", "import java.net.URL; import javax.xml.namespace.QName; import jakarta.xml.ws.Service; import java.io.Closeable; final URL serviceUrl = new URL(\"http://localhost/myService?wsdl\"); final QName qName = new QName(\"http://acme.org/myNamespace\", \"MyService\"); final Service service = jakarta.xml.ws.Service.create(serviceUrl, qName); final MyService proxy = service.getPort(MyService.class); try { proxy.doSomething(); } finally { ((Closeable) proxy).close(); }", "Client side SSL quarkus.cxf.client.hello.client-endpoint-url = https://localhost:USD{quarkus.http.test-ssl-port}/services/hello quarkus.cxf.client.hello.service-interface = io.quarkiverse.cxf.it.security.policy.HelloService 1 quarkus.cxf.client.hello.trust-store-type = pkcs12 2 quarkus.cxf.client.hello.trust-store = client-truststore.pkcs12 quarkus.cxf.client.hello.trust-store-password = client-truststore-password", "Server side SSL quarkus.tls.key-store.p12.path = localhost-keystore.pkcs12 quarkus.tls.key-store.p12.password = localhost-keystore-password quarkus.tls.key-store.p12.alias = localhost quarkus.tls.key-store.p12.alias-password = localhost-keystore-password", "Server keystore for Simple TLS quarkus.tls.localhost-pkcs12.key-store.p12.path = localhost-keystore.pkcs12 quarkus.tls.localhost-pkcs12.key-store.p12.password = localhost-keystore-password quarkus.tls.localhost-pkcs12.key-store.p12.alias = localhost quarkus.tls.localhost-pkcs12.key-store.p12.alias-password = localhost-keystore-password Server truststore for Mutual TLS quarkus.tls.localhost-pkcs12.trust-store.p12.path = localhost-truststore.pkcs12 quarkus.tls.localhost-pkcs12.trust-store.p12.password = localhost-truststore-password Select localhost-pkcs12 as the TLS configuration for the HTTP server quarkus.http.tls-configuration-name = localhost-pkcs12 Do not allow any clients which do not prove their identity through an SSL certificate quarkus.http.ssl.client-auth = required CXF service quarkus.cxf.endpoint.\"/mTls\".implementor = io.quarkiverse.cxf.it.auth.mtls.MTlsHelloServiceImpl CXF client with a properly set certificate for mTLS quarkus.cxf.client.mTls.client-endpoint-url = https://localhost:USD{quarkus.http.test-ssl-port}/services/mTls quarkus.cxf.client.mTls.service-interface = io.quarkiverse.cxf.it.security.policy.HelloService quarkus.cxf.client.mTls.key-store = target/classes/client-keystore.pkcs12 quarkus.cxf.client.mTls.key-store-type = pkcs12 quarkus.cxf.client.mTls.key-store-password = client-keystore-password quarkus.cxf.client.mTls.key-password = client-keystore-password quarkus.cxf.client.mTls.trust-store = target/classes/client-truststore.pkcs12 quarkus.cxf.client.mTls.trust-store-type = pkcs12 quarkus.cxf.client.mTls.trust-store-password = client-truststore-password Include the keystores in the native executable quarkus.native.resources.includes =
*.pkcs12,*.jks", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <wsp:Policy wsp:Id=\"HttpsSecurityServicePolicy\" xmlns:wsp=\"http://schemas.xmlsoap.org/ws/2004/09/policy\" xmlns:sp=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702\" xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"> <wsp:ExactlyOne> <wsp:All> <sp:TransportBinding> <wsp:Policy> <sp:TransportToken> <wsp:Policy> <sp:HttpsToken RequireClientCertificate=\"false\" /> </wsp:Policy> </sp:TransportToken> <sp:IncludeTimestamp /> <sp:AlgorithmSuite> <wsp:Policy> <sp:Basic128 /> </wsp:Policy> </sp:AlgorithmSuite> </wsp:Policy> </sp:TransportBinding> </wsp:All> </wsp:ExactlyOne> </wsp:Policy>", "package io.quarkiverse.cxf.it.security.policy; import jakarta.jws.WebMethod; import jakarta.jws.WebService; import org.apache.cxf.annotations.Policy; /** * A service implementation with a transport policy set */ @WebService(serviceName = \"HttpsPolicyHelloService\") @Policy(placement = Policy.Placement.BINDING, uri = \"https-policy.xml\") public interface HttpsPolicyHelloService extends AbstractHelloService { @WebMethod @Override public String hello(String text); }", "ERROR [org.apa.cxf.ws.pol.PolicyVerificationInInterceptor] Inbound policy verification failed: These policy alternatives can not be satisfied: {http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702}TransportBinding: TLS is not enabled", "quarkus.cxf.client.basicAuth.wsdl = http://localhost:USD{quarkus.http.test-port}/soap/basicAuth?wsdl quarkus.cxf.client.basicAuth.client-endpoint-url = http://localhost:USD{quarkus.http.test-port}/soap/basicAuth quarkus.cxf.client.basicAuth.username = bob quarkus.cxf.client.basicAuth.password = bob234", "quarkus.cxf.client.basicAuthSecureWsdl.wsdl = http://localhost:USD{quarkus.http.test-port}/soap/basicAuth?wsdl quarkus.cxf.client.basicAuthSecureWsdl.client-endpoint-url = http://localhost:USD{quarkus.http.test-port}/soap/basicAuthSecureWsdl quarkus.cxf.client.basicAuthSecureWsdl.username = bob quarkus.cxf.client.basicAuthSecureWsdl.password = USD{client-server.bob.password} quarkus.cxf.client.basicAuthSecureWsdl.secure-wsdl-access = true", "quarkus.http.auth.basic = true quarkus.security.users.embedded.enabled = true quarkus.security.users.embedded.plain-text = true quarkus.security.users.embedded.users.alice = alice123 quarkus.security.users.embedded.roles.alice = admin quarkus.security.users.embedded.users.bob = bob234 quarkus.security.users.embedded.roles.bob = app-user", "package io.quarkiverse.cxf.it.auth.basic; import jakarta.annotation.security.RolesAllowed; import jakarta.jws.WebService; import io.quarkiverse.cxf.it.HelloService; @WebService(serviceName = \"HelloService\", targetNamespace = HelloService.NS) @RolesAllowed(\"app-user\") public class BasicAuthHelloServiceImpl implements HelloService { @Override public String hello(String person) { return \"Hello \" + person + \"!\"; } }", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <wsp:Policy wsp:Id=\"UsernameTokenSecurityServicePolicy\" xmlns:wsp=\"http://schemas.xmlsoap.org/ws/2004/09/policy\" xmlns:sp=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702\" xmlns:sp13=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200802\" xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"> <wsp:ExactlyOne> <wsp:All> <sp:SupportingTokens> <wsp:Policy> <sp:UsernameToken sp:IncludeToken=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702/IncludeToken/AlwaysToRecipient\"> <wsp:Policy> <sp:WssUsernameToken11 /> <sp13:Created /> <sp13:Nonce /> </wsp:Policy> 
</sp:UsernameToken> </wsp:Policy> </sp:SupportingTokens> </wsp:All> </wsp:ExactlyOne> </wsp:Policy>", "@WebService(serviceName = \"UsernameTokenPolicyHelloService\") @Policy(placement = Policy.Placement.BINDING, uri = \"username-token-policy.xml\") public interface UsernameTokenPolicyHelloService extends AbstractHelloService { }", "A service with a UsernameToken policy assertion quarkus.cxf.endpoint.\"/helloUsernameToken\".implementor = io.quarkiverse.cxf.it.security.policy.UsernameTokenPolicyHelloServiceImpl quarkus.cxf.endpoint.\"/helloUsernameToken\".security.callback-handler = #usernameTokenPasswordCallback These properties are used in UsernameTokenPasswordCallback and in the configuration of the helloUsernameToken below wss.user = cxf-user wss.password = secret A client with a UsernameToken policy assertion quarkus.cxf.client.helloUsernameToken.client-endpoint-url = https://localhost:USD{quarkus.http.test-ssl-port}/services/helloUsernameToken quarkus.cxf.client.helloUsernameToken.service-interface = io.quarkiverse.cxf.it.security.policy.UsernameTokenPolicyHelloService quarkus.cxf.client.helloUsernameToken.security.username = USD{wss.user} quarkus.cxf.client.helloUsernameToken.security.password = USD{wss.password}", "package io.quarkiverse.cxf.it.security.policy; import java.io.IOException; import javax.security.auth.callback.Callback; import javax.security.auth.callback.CallbackHandler; import javax.security.auth.callback.UnsupportedCallbackException; import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Named; import org.apache.wss4j.common.ext.WSPasswordCallback; import org.eclipse.microprofile.config.inject.ConfigProperty; @ApplicationScoped @Named(\"usernameTokenPasswordCallback\") /* We refer to this bean by this name from application.properties */ public class UsernameTokenPasswordCallback implements CallbackHandler { /* These two configuration properties are set in application.properties */ @ConfigProperty(name = \"wss.password\") String password; @ConfigProperty(name = \"wss.user\") String user; @Override public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException { if (callbacks.length < 1) { throw new IllegalStateException(\"Expected a \" + WSPasswordCallback.class.getName() + \" at possition 0 of callbacks. Got array of length \" + callbacks.length); } if (!(callbacks[0] instanceof WSPasswordCallback)) { throw new IllegalStateException( \"Expected a \" + WSPasswordCallback.class.getName() + \" at possition 0 of callbacks. 
Got an instance of \" + callbacks[0].getClass().getName() + \" at possition 0\"); } final WSPasswordCallback pc = (WSPasswordCallback) callbacks[0]; if (user.equals(pc.getIdentifier())) { pc.setPassword(password); } else { throw new IllegalStateException(\"Unexpected user \" + user); } } }", "package io.quarkiverse.cxf.it.security.policy; import org.assertj.core.api.Assertions; import org.junit.jupiter.api.Test; import io.quarkiverse.cxf.annotation.CXFClient; import io.quarkus.test.junit.QuarkusTest; @QuarkusTest public class UsernameTokenTest { @CXFClient(\"helloUsernameToken\") UsernameTokenPolicyHelloService helloUsernameToken; @Test void helloUsernameToken() { Assertions.assertThat(helloUsernameToken.hello(\"CXF\")).isEqualTo(\"Hello CXF from UsernameToken!\"); } }", "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"> <soap:Header> <wsse:Security xmlns:wsse=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd\" soap:mustUnderstand=\"1\"> <wsse:UsernameToken xmlns:wsu=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\" wsu:Id=\"UsernameToken-bac4f255-147e-42a4-aeec-e0a3f5cd3587\"> <wsse:Username>cxf-user</wsse:Username> <wsse:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText\">secret</wsse:Password> <wsse:Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">3uX15dZT08jRWFWxyWmfhg==</wsse:Nonce> <wsu:Created>2024-10-02T17:32:10.497Z</wsu:Created> </wsse:UsernameToken> </wsse:Security> </soap:Header> <soap:Body> <ns2:hello xmlns:ns2=\"http://policy.security.it.cxf.quarkiverse.io/\"> <arg0>CXF</arg0> </ns2:hello> </soap:Body> </soap:Envelope>", "export USDCAMEL_VAULT_AWS_ACCESS_KEY=accessKey export USDCAMEL_VAULT_AWS_SECRET_KEY=secretKey export USDCAMEL_VAULT_AWS_REGION=region", "camel.vault.aws.accessKey = accessKey camel.vault.aws.secretKey = secretKey camel.vault.aws.region = region", "export USDCAMEL_VAULT_AWS_USE_DEFAULT_CREDENTIALS_PROVIDER=true export USDCAMEL_VAULT_AWS_REGION=region", "camel.vault.aws.defaultCredentialsProvider = true camel.vault.aws.region = region", "export USDCAMEL_VAULT_AWS_USE_PROFILE_CREDENTIALS_PROVIDER=true export USDCAMEL_VAULT_AWS_PROFILE_NAME=test-account export USDCAMEL_VAULT_AWS_REGION=region", "camel.vault.aws.profileCredentialsProvider = true camel.vault.aws.profileName = test-account camel.vault.aws.region = region", "<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{aws:route}}\"/> </route> </camelContext>", "<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{aws:route:default}}\"/> </route> </camelContext>", "{ \"username\": \"admin\", \"password\": \"password123\", \"engine\": \"postgres\", \"host\": \"127.0.0.1\", \"port\": \"3128\", \"dbname\": \"db\" }", "<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{aws:database/username}}\"/> </route> </camelContext>", "<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{aws:database/username:admin}}\"/> </route> </camelContext>", "export USDCAMEL_VAULT_GCP_SERVICE_ACCOUNT_KEY=file:////path/to/service.accountkey export USDCAMEL_VAULT_GCP_PROJECT_ID=projectId", "camel.vault.gcp.serviceAccountKey = accessKey camel.vault.gcp.projectId = secretKey", "export USDCAMEL_VAULT_GCP_USE_DEFAULT_INSTANCE=true export USDCAMEL_VAULT_GCP_PROJECT_ID=projectId", "camel.vault.gcp.useDefaultInstance = true 
camel.vault.aws.projectId = region", "<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{gcp:route}}\"/> </route> </camelContext>", "<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{gcp:route:default}}\"/> </route> </camelContext>", "{ \"username\": \"admin\", \"password\": \"password123\", \"engine\": \"postgres\", \"host\": \"127.0.0.1\", \"port\": \"3128\", \"dbname\": \"db\" }", "<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{gcp:database/username}}\"/> </route> </camelContext>", "<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{gcp:database/username:admin}}\"/> </route> </camelContext>", "export USDCAMEL_VAULT_AZURE_TENANT_ID=tenantId export USDCAMEL_VAULT_AZURE_CLIENT_ID=clientId export USDCAMEL_VAULT_AZURE_CLIENT_SECRET=clientSecret export USDCAMEL_VAULT_AZURE_VAULT_NAME=vaultName", "camel.vault.azure.tenantId = accessKey camel.vault.azure.clientId = clientId camel.vault.azure.clientSecret = clientSecret camel.vault.azure.vaultName = vaultName", "export USDCAMEL_VAULT_AZURE_IDENTITY_ENABLED=true export USDCAMEL_VAULT_AZURE_VAULT_NAME=vaultName", "camel.vault.azure.azureIdentityEnabled = true camel.vault.azure.vaultName = vaultName", "<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{azure:route}}\"/> </route> </camelContext>", "<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{azure:route:default}}\"/> </route> </camelContext>", "{ \"username\": \"admin\", \"password\": \"password123\", \"engine\": \"postgres\", \"host\": \"127.0.0.1\", \"port\": \"3128\", \"dbname\": \"db\" }", "<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{azure:database/username}}\"/> </route> </camelContext>", "<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{azure:database/username:admin}}\"/> </route> </camelContext>", "export USDCAMEL_VAULT_HASHICORP_TOKEN=token export USDCAMEL_VAULT_HASHICORP_HOST=host export USDCAMEL_VAULT_HASHICORP_PORT=port export USDCAMEL_VAULT_HASHICORP_SCHEME=http/https", "camel.vault.hashicorp.token = token camel.vault.hashicorp.host = host camel.vault.hashicorp.port = port camel.vault.hashicorp.scheme = scheme", "<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{hashicorp:secret:route}}\"/> </route> </camelContext>", "<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{hashicorp:secret:route:default}}\"/> </route> </camelContext>", "{ \"username\": \"admin\", \"password\": \"password123\", \"engine\": \"postgres\", \"host\": \"127.0.0.1\", \"port\": \"3128\", \"dbname\": \"db\" }", "<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{hashicorp:secret:database/username}}\"/> </route> </camelContext>", "<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{hashicorp:secret:database/username:admin}}\"/> </route> </camelContext>", "<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{hashicorp:secret:route@2}}\"/> </route> </camelContext>", "<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{hashicorp:route:default@2}}\"/> </route> </camelContext>", "<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{hashicorp:secret:database/username:admin@2}}\"/> </route> </camelContext>", "export USDCAMEL_VAULT_AWS_USE_DEFAULT_CREDENTIALS_PROVIDER=accessKey export USDCAMEL_VAULT_AWS_REGION=region", "camel.vault.aws.useDefaultCredentialProvider = true camel.vault.aws.region = 
region", "camel.vault.aws.refreshEnabled=true camel.vault.aws.refreshPeriod=60000 camel.vault.aws.secrets=Secret camel.main.context-reload-enabled = true", "{ \"source\": [\"aws.secretsmanager\"], \"detail-type\": [\"AWS API Call via CloudTrail\"], \"detail\": { \"eventSource\": [\"secretsmanager.amazonaws.com\"] } }", "{ \"Policy\": \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Id\\\":\\\"<queue_arn>/SQSDefaultPolicy\\\",\\\"Statement\\\":[{\\\"Sid\\\": \\\"EventsToMyQueue\\\", \\\"Effect\\\": \\\"Allow\\\", \\\"Principal\\\": {\\\"Service\\\": \\\"events.amazonaws.com\\\"}, \\\"Action\\\": \\\"sqs:SendMessage\\\", \\\"Resource\\\": \\\"<queue_arn>\\\", \\\"Condition\\\": {\\\"ArnEquals\\\": {\\\"aws:SourceArn\\\": \\\"<eventbridge_rule_arn>\\\"}}}]}\" }", "aws sqs set-queue-attributes --queue-url <queue_url> --attributes file://policy.json", "camel.vault.aws.refreshEnabled=true camel.vault.aws.refreshPeriod=60000 camel.vault.aws.secrets=Secret camel.main.context-reload-enabled = true camel.vault.aws.useSqsNotification=true camel.vault.aws.sqsQueueUrl=<queue_url>", "export USDCAMEL_VAULT_GCP_USE_DEFAULT_INSTANCE=true export USDCAMEL_VAULT_GCP_PROJECT_ID=projectId", "camel.vault.gcp.useDefaultInstance = true camel.vault.aws.projectId = projectId", "camel.vault.gcp.projectId= projectId camel.vault.gcp.refreshEnabled=true camel.vault.gcp.refreshPeriod=60000 camel.vault.gcp.secrets=hello* camel.vault.gcp.subscriptionName=subscriptionName camel.main.context-reload-enabled = true", "export USDCAMEL_VAULT_AZURE_TENANT_ID=tenantId export USDCAMEL_VAULT_AZURE_CLIENT_ID=clientId export USDCAMEL_VAULT_AZURE_CLIENT_SECRET=clientSecret export USDCAMEL_VAULT_AZURE_VAULT_NAME=vaultName", "camel.vault.azure.tenantId = accessKey camel.vault.azure.clientId = clientId camel.vault.azure.clientSecret = clientSecret camel.vault.azure.vaultName = vaultName", "export USDCAMEL_VAULT_AZURE_IDENTITY_ENABLED=true export USDCAMEL_VAULT_AZURE_VAULT_NAME=vaultName", "camel.vault.azure.azureIdentityEnabled = true camel.vault.azure.vaultName = vaultName", "camel.vault.azure.refreshEnabled=true camel.vault.azure.refreshPeriod=60000 camel.vault.azure.secrets=Secret camel.vault.azure.eventhubConnectionString=eventhub_conn_string camel.vault.azure.blobAccountName=blob_account_name camel.vault.azure.blobContainerName=blob_container_name camel.vault.azure.blobAccessKey=blob_access_key camel.main.context-reload-enabled = true", "<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-jta</artifactId> </dependency>", "<dependency> <groupId>io.quarkiverse.messaginghub</groupId> <artifactId>quarkus-pooled-jms</artifactId> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html-single/red_hat_build_of_apache_camel_for_quarkus_reference//camel-quarkus-extensions-overview
Chapter 4. Reviewing and managing policies
Chapter 4. Reviewing and managing policies You can review and manage all created policies (enabled and disabled) by navigating to Operations > Policies . You can filter the list of policies by name and by active state. You can click the options menu next to a policy to perform the following operations: Enable and disable Edit Duplicate Delete Additionally, you can perform the following operations in bulk by selecting multiple policies from the list of policies and clicking the options menu located next to the Create policy button at the top: Delete policies Enable policies Disable policies Note If you see a warning message that you have not opted in to email alerts, set your User preferences to receive email from your policies.
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/monitoring_and_reacting_to_configuration_changes_using_policies/managing-policies_intro-policies
Chapter 13. Enabling the Red Hat OpenShift Data Foundation console plugin
Chapter 13. Enabling the Red Hat OpenShift Data Foundation console plugin The Data Foundation console plugin is enabled by default. If this option was unchecked during OpenShift Data Foundation Operator installation, use the following instructions to enable the console plugin after deployment, either from the graphical user interface (GUI) or from the command-line interface. Prerequisites You have administrative access to the OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. Procedure From user interface In the OpenShift Web Console, click Operators > Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click the OpenShift Data Foundation operator. Enable the console plugin option. In the Details tab, click the pencil icon under the Console plugin . Select Enable , and click Save . From command-line interface Execute the following command to enable the console plugin option: Verification steps After the console plugin option is enabled, a pop-up with the message Web console update is available appears in the GUI. Click Refresh web console in this pop-up for the console changes to take effect. In the Web Console, navigate to Storage and verify that Data Foundation is available.
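If you prefer to confirm the state of the plugin from the command-line interface instead of waiting for the GUI pop-up, you can read the plugins list back from the console Operator resource. This is only a minimal sketch and assumes the plugin is registered under the name odf-console, as in the patch command shown with this procedure:

oc get console.operator cluster -o jsonpath='{.spec.plugins}'
# The output should include "odf-console" once the change has been applied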
[ "oc patch console.operator cluster -n openshift-storage --type json -p '[{\"op\": \"add\", \"path\": \"/spec/plugins\", \"value\": [\"odf-console\"]}]'" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/troubleshooting_openshift_data_foundation/enabling-the-red-hat-openshift-data-foundation-console-plugin-option_rhodf
Chapter 10. Using the Stream Control Transmission Protocol (SCTP) on a bare metal cluster
Chapter 10. Using the Stream Control Transmission Protocol (SCTP) on a bare metal cluster As a cluster administrator, you can use the Stream Control Transmission Protocol (SCTP) on a cluster. 10.1. Support for Stream Control Transmission Protocol (SCTP) on OpenShift Container Platform As a cluster administrator, you can enable SCTP on the hosts in the cluster. On Red Hat Enterprise Linux CoreOS (RHCOS), the SCTP module is disabled by default. SCTP is a reliable message based protocol that runs on top of an IP network. When enabled, you can use SCTP as a protocol with pods, services, and network policy. A Service object must be defined with the type parameter set to either the ClusterIP or NodePort value. 10.1.1. Example configurations using SCTP protocol You can configure a pod or service to use SCTP by setting the protocol parameter to the SCTP value in the pod or service object. In the following example, a pod is configured to use SCTP: apiVersion: v1 kind: Pod metadata: namespace: project1 name: example-pod spec: containers: - name: example-pod ... ports: - containerPort: 30100 name: sctpserver protocol: SCTP In the following example, a service is configured to use SCTP: apiVersion: v1 kind: Service metadata: namespace: project1 name: sctpserver spec: ... ports: - name: sctpserver protocol: SCTP port: 30100 targetPort: 30100 type: ClusterIP In the following example, a NetworkPolicy object is configured to apply to SCTP network traffic on port 80 from any pods with a specific label: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-sctp-on-http spec: podSelector: matchLabels: role: web ingress: - ports: - protocol: SCTP port: 80 10.2. Enabling Stream Control Transmission Protocol (SCTP) As a cluster administrator, you can load and enable the blacklisted SCTP kernel module on worker nodes in your cluster. Prerequisites Install the OpenShift CLI ( oc ). Access to the cluster as a user with the cluster-admin role. Procedure Create a file named load-sctp-module.yaml that contains the following YAML definition: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: load-sctp-module labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/modprobe.d/sctp-blacklist.conf mode: 0644 overwrite: true contents: source: data:, - path: /etc/modules-load.d/sctp-load.conf mode: 0644 overwrite: true contents: source: data:,sctp To create the MachineConfig object, enter the following command: USD oc create -f load-sctp-module.yaml Optional: To watch the status of the nodes while the MachineConfig Operator applies the configuration change, enter the following command. When the status of a node transitions to Ready , the configuration update is applied. USD oc get nodes 10.3. Verifying Stream Control Transmission Protocol (SCTP) is enabled You can verify that SCTP is working on a cluster by creating a pod with an application that listens for SCTP traffic, associating it with a service, and then connecting to the exposed service. Prerequisites Access to the internet from the cluster to install the nc package. Install the OpenShift CLI ( oc ). Access to the cluster as a user with the cluster-admin role. 
Procedure Create a pod that starts an SCTP listener: Create a file named sctp-server.yaml that defines a pod with the following YAML: apiVersion: v1 kind: Pod metadata: name: sctpserver labels: app: sctpserver spec: containers: - name: sctpserver image: registry.access.redhat.com/ubi8/ubi command: ["/bin/sh", "-c"] args: ["dnf install -y nc && sleep inf"] ports: - containerPort: 30102 name: sctpserver protocol: SCTP Create the pod by entering the following command: USD oc create -f sctp-server.yaml Create a service for the SCTP listener pod. Create a file named sctp-service.yaml that defines a service with the following YAML: apiVersion: v1 kind: Service metadata: name: sctpservice labels: app: sctpserver spec: type: NodePort selector: app: sctpserver ports: - name: sctpserver protocol: SCTP port: 30102 targetPort: 30102 To create the service, enter the following command: USD oc create -f sctp-service.yaml Create a pod for the SCTP client. Create a file named sctp-client.yaml with the following YAML: apiVersion: v1 kind: Pod metadata: name: sctpclient labels: app: sctpclient spec: containers: - name: sctpclient image: registry.access.redhat.com/ubi8/ubi command: ["/bin/sh", "-c"] args: ["dnf install -y nc && sleep inf"] To create the Pod object, enter the following command: USD oc apply -f sctp-client.yaml Run an SCTP listener on the server. To connect to the server pod, enter the following command: USD oc rsh sctpserver To start the SCTP listener, enter the following command: USD nc -l 30102 --sctp Connect to the SCTP listener on the server. Open a new terminal window or tab in your terminal program. Obtain the IP address of the sctpservice service. Enter the following command: USD oc get services sctpservice -o go-template='{{.spec.clusterIP}}{{"\n"}}' To connect to the client pod, enter the following command: USD oc rsh sctpclient To start the SCTP client, enter the following command. Replace <cluster_IP> with the cluster IP address of the sctpservice service. # nc <cluster_IP> 30102 --sctp
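In addition to the client/server test above, it can be useful to confirm at the node level that the sctp kernel module was actually loaded after the MachineConfig was applied. The following spot check is only a sketch; <node_name> is a placeholder for one of your worker nodes:

oc debug node/<node_name> -- chroot /host sh -c 'lsmod | grep sctp'
# A line of output beginning with "sctp" indicates that the module is loaded on that node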
[ "apiVersion: v1 kind: Pod metadata: namespace: project1 name: example-pod spec: containers: - name: example-pod ports: - containerPort: 30100 name: sctpserver protocol: SCTP", "apiVersion: v1 kind: Service metadata: namespace: project1 name: sctpserver spec: ports: - name: sctpserver protocol: SCTP port: 30100 targetPort: 30100 type: ClusterIP", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-sctp-on-http spec: podSelector: matchLabels: role: web ingress: - ports: - protocol: SCTP port: 80", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: load-sctp-module labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/modprobe.d/sctp-blacklist.conf mode: 0644 overwrite: true contents: source: data:, - path: /etc/modules-load.d/sctp-load.conf mode: 0644 overwrite: true contents: source: data:,sctp", "oc create -f load-sctp-module.yaml", "oc get nodes", "apiVersion: v1 kind: Pod metadata: name: sctpserver labels: app: sctpserver spec: containers: - name: sctpserver image: registry.access.redhat.com/ubi8/ubi command: [\"/bin/sh\", \"-c\"] args: [\"dnf install -y nc && sleep inf\"] ports: - containerPort: 30102 name: sctpserver protocol: SCTP", "oc create -f sctp-server.yaml", "apiVersion: v1 kind: Service metadata: name: sctpservice labels: app: sctpserver spec: type: NodePort selector: app: sctpserver ports: - name: sctpserver protocol: SCTP port: 30102 targetPort: 30102", "oc create -f sctp-service.yaml", "apiVersion: v1 kind: Pod metadata: name: sctpclient labels: app: sctpclient spec: containers: - name: sctpclient image: registry.access.redhat.com/ubi8/ubi command: [\"/bin/sh\", \"-c\"] args: [\"dnf install -y nc && sleep inf\"]", "oc apply -f sctp-client.yaml", "oc rsh sctpserver", "nc -l 30102 --sctp", "oc get services sctpservice -o go-template='{{.spec.clusterIP}}{{\"\\n\"}}'", "oc rsh sctpclient", "nc <cluster_IP> 30102 --sctp" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/networking/using-sctp
Chapter 6. Best practices for running containers using local sources
Chapter 6. Best practices for running containers using local sources You can access content hosted in an internal registry that requires a custom Transport Layer Security (TLS) root certificate when running RHEL bootc images. There are two options available to install content to a container by using only local resources: Bind mounts: Use, for example, -v /etc/pki:/etc/pki to override the container's store with the host's. Derived image: Create a new container image with your custom certificates by building it using a Containerfile . You can use the same techniques to run a bootc-image-builder container or a bootc container when appropriate. 6.1. Importing a custom certificate to a container by using bind mounts Use bind mounts to override the container's store with the host's. Procedure Run a RHEL bootc image and use a bind mount, for example -v /etc/pki:/etc/pki , to override the container's store with the host's: Verification List the certificates inside the container: 6.2. Importing custom certificates to a container by using a Containerfile Create a new container image with your custom certificates by building it using a Containerfile . Procedure Create a Containerfile : Build the custom image: Run the <your_image> : Verification List the certificates inside the container:
[ "podman run --rm -it --privileged --pull=newer --security-opt label=type:unconfined_t -v USD(pwd)/output:/output -v /etc/pki:/etc/pki localhost/ <image> --type iso --config /config.toml quay.io/ <namespace>/<image>:<tag>", "ls -l /etc/pki", "FROM <internal_repository>/<image> RUN mkdir -p /etc/pki/ca-trust/extracted/pem/ COPY tls-ca-bundle.pem /etc/pki/ca-trust/extracted/pem/ RUN rm -rf /etc/yum.repos.d/* COPY echo-rhel9_4.repo /etc/yum.repos.d/", "podman build -t <your_image> .", "podman run -it --rm <your_image>", "ls -l /etc/pki/ca-trust/extracted/pem/ tls-ca-bundle.pem" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_image_mode_for_rhel_to_build_deploy_and_manage_operating_systems/best-practices-for-running-containers-using-local-sources_using-image-mode-for-rhel-to-build-deploy-and-manage-operating-systems
Managing resources
Managing resources Red Hat OpenShift AI Self-Managed 2.18 Manage administration tasks from the OpenShift AI dashboard
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/managing_resources/index
Preface
Preface The Red Hat Enterprise Linux 6.2 Technical Notes list and document the changes made to the Red Hat Enterprise Linux 6 operating system and its accompanying applications between minor release Red Hat Enterprise Linux 6.1 and minor release Red Hat Enterprise Linux 6.2. For system administrators and others planning Red Hat Enterprise Linux 6.2 upgrades and deployments, the Technical Notes provide a single, organized record of the bugs fixed in, features added to, and Technology Previews included with this new release of Red Hat Enterprise Linux. For auditors and compliance officers, the Red Hat Enterprise Linux 6.2 Technical Notes provide a single, organized source for change tracking and compliance testing. For every user, the Red Hat Enterprise Linux 6.2 Technical Notes provide details of what has changed in this new release. Note The Package Manifest is available as a separate document.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/pref-test
13.2.20. Creating Domains: Proxy
13.2.20. Creating Domains: Proxy A proxy with SSSD is just a relay, an intermediary configuration. SSSD connects to its proxy service, and then that proxy loads the specified libraries. This allows SSSD to use some resources that it otherwise would not be able to use. For example, SSSD only supports LDAP and Kerberos as authentication providers, but using a proxy allows SSSD to use alternative authentication methods like a fingerprint scanner or smart card. Table 13.9. Proxy Domain Configuration Parameters Parameter Description proxy_pam_target Specifies the target to which PAM must proxy as an authentication provider. The PAM target is a file containing PAM stack information in the default PAM directory, /etc/pam.d/ . This is used to proxy an authentication provider. Important Ensure that the proxy PAM stack does not recursively include pam_sss.so . proxy_lib_name Specifies which existing NSS library to proxy identity requests through. This is used to proxy an identity provider. Example 13.10. Proxy Identity and Kerberos Authentication The proxy library is loaded using the proxy_lib_name parameter. This library can be anything as long as it is compatible with the given authentication service. For a Kerberos authentication provider, it must be a Kerberos-compatible library, like NIS. Example 13.11. LDAP Identity and Proxy Authentication The proxy library is loaded using the proxy_pam_target parameter. This library must be a PAM module that is compatible with the given identity provider. For example, this uses a PAM fingerprint module with LDAP: After the SSSD domain is configured, make sure that the specified PAM files are configured. In this example, the target is sssdpamproxy , so create a /etc/pam.d/sssdpamproxy file and load the PAM/LDAP modules: Example 13.12. Proxy Identity and Authentication SSSD can have a domain with both identity and authentication proxies. The only configuration given then are the proxy settings, proxy_pam_target for the authentication PAM module and proxy_lib_name for the service, like NIS or LDAP. This example illustrates a possible configuration, but this is not a realistic configuration. If LDAP is used for identity and authentication, then both the identity and authentication providers should be set to the LDAP configuration, not a proxy. Once the SSSD domain is added, then update the system settings to configure the proxy service: Create a /etc/pam.d/sssdproxyldap file which requires the pam_ldap.so module: Make sure the nss-pam-ldapd package is installed. Edit the /etc/nslcd.conf file, the configuration file for the LDAP name service daemon, to contain the information for the LDAP directory:
[ "[domain/PROXY_KRB5] auth_provider = krb5 krb5_server = kdc.example.com krb5_realm = EXAMPLE.COM id_provider = proxy proxy_lib_name = nis cache_credentials = true", "[domain/LDAP_PROXY] id_provider = ldap ldap_uri = ldap://example.com ldap_search_base = dc=example,dc=com auth_provider = proxy proxy_pam_target = sssdpamproxy cache_credentials = true", "auth required pam_frprint.so account required pam_frprint.so password required pam_frprint.so session required pam_frprint.so", "[domain/PROXY_PROXY] auth_provider = proxy id_provider = proxy proxy_lib_name = ldap proxy_pam_target = sssdproxyldap cache_credentials = true", "auth required pam_ldap.so account required pam_ldap.so password required pam_ldap.so session required pam_ldap.so", "~]# yum install nss-pam-ldapd", "uid nslcd gid ldap uri ldaps://ldap.example.com:636 base dc=example,dc=com ssl on tls_cacertdir /etc/openldap/cacerts" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/Domain_Configuration_Options-Configuring_a_Proxy_Domain
23.9. Synchronize to PTP or NTP Time Using timemaster
23.9. Synchronize to PTP or NTP Time Using timemaster When there are multiple PTP domains available on the network, or fallback to NTP is needed, the timemaster program can be used to synchronize the system clock to all available time sources. The PTP time is provided by phc2sys and ptp4l via the shared memory driver (SHM reference clocks) to chronyd or ntpd , depending on the NTP daemon that has been configured on the system. The NTP daemon can then compare all time sources, both PTP and NTP , and use the best sources to synchronize the system clock. On start, timemaster reads a configuration file that specifies the NTP and PTP time sources, checks which network interfaces have their own or share a PTP hardware clock (PHC), generates configuration files for ptp4l and chronyd or ntpd , and starts the ptp4l , phc2sys , and chronyd or ntpd processes as needed. It writes the generated configuration files for chronyd , ntpd , and ptp4l to /var/run/timemaster/ and removes them on exit. 23.9.1. Starting timemaster as a Service To start timemaster as a service, issue the following command as root : This will read the options in /etc/timemaster.conf . For more information on managing system services in Red Hat Enterprise Linux 6, see Managing Services with systemd.
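The section above describes what timemaster reads from /etc/timemaster.conf but does not show the file itself, so the following fragment is only an illustrative sketch of such a configuration; the NTP server address, the PTP domain number, the interface name, and the choice of chronyd over ntpd are placeholders and assumptions for your environment:

# /etc/timemaster.conf (illustrative example)
[ntp_server ntp-server.example.com]
minpoll 4
maxpoll 4

[ptp_domain 0]
interfaces eth0

[timemaster]
ntp_program chronyd

[chrony.conf]
include /etc/chrony.conf

[chronyd]
path /usr/sbin/chronyd

[ptp4l]
path /usr/sbin/ptp4l

[phc2sys]
path /usr/sbin/phc2sys

With a file like this, timemaster polls the listed NTP server directly and runs ptp4l and phc2sys for PTP domain 0 on eth0, feeding both sources to chronyd for comparison.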
[ "~]# service timemaster start" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-synchronize_to_ptp_or_ntp_time_using_timemaster
Appendix D. Producer configuration parameters
Appendix D. Producer configuration parameters key.serializer Type: class Importance: high Serializer class for key that implements the org.apache.kafka.common.serialization.Serializer interface. value.serializer Type: class Importance: high Serializer class for value that implements the org.apache.kafka.common.serialization.Serializer interface. acks Type: string Default: 1 Valid Values: [all, -1, 0, 1] Importance: high The number of acknowledgments the producer requires the leader to have received before considering a request complete. This controls the durability of records that are sent. The following settings are allowed: acks=0 If set to zero then the producer will not wait for any acknowledgment from the server at all. The record will be immediately added to the socket buffer and considered sent. No guarantee can be made that the server has received the record in this case, and the retries configuration will not take effect (as the client won't generally know of any failures). The offset given back for each record will always be set to -1 . acks=1 This will mean the leader will write the record to its local log but will respond without awaiting full acknowledgement from all followers. In this case should the leader fail immediately after acknowledging the record but before the followers have replicated it then the record will be lost. acks=all This means the leader will wait for the full set of in-sync replicas to acknowledge the record. This guarantees that the record will not be lost as long as at least one in-sync replica remains alive. This is the strongest available guarantee. This is equivalent to the acks=-1 setting. bootstrap.servers Type: list Default: "" Valid Values: non-null string Importance: high A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping-this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,... . Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down). buffer.memory Type: long Default: 33554432 Valid Values: [0,... ] Importance: high The total bytes of memory the producer can use to buffer records waiting to be sent to the server. If records are sent faster than they can be delivered to the server the producer will block for max.block.ms after which it will throw an exception. This setting should correspond roughly to the total memory the producer will use, but is not a hard bound since not all memory the producer uses is used for buffering. Some additional memory will be used for compression (if compression is enabled) as well as for maintaining in-flight requests. compression.type Type: string Default: none Importance: high The compression type for all data generated by the producer. The default is none (i.e. no compression). Valid values are none , gzip , snappy , lz4 , or zstd . Compression is of full batches of data, so the efficacy of batching will also impact the compression ratio (more batching means better compression). retries Type: int Default: 2147483647 Valid Values: [0,... 
,2147483647] Importance: high Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error. Note that this retry is no different than if the client resent the record upon receiving the error. Allowing retries without setting max.in.flight.requests.per.connection to 1 will potentially change the ordering of records because if two batches are sent to a single partition, and the first fails and is retried but the second succeeds, then the records in the second batch may appear first. Note additionally that produce requests will be failed before the number of retries has been exhausted if the timeout configured by delivery.timeout.ms expires first before successful acknowledgement. Users should generally prefer to leave this config unset and instead use delivery.timeout.ms to control retry behavior. ssl.key.password Type: password Default: null Importance: high The password of the private key in the key store file or the PEM key specified in `ssl.keystore.key'. This is required for clients only if two-way authentication is configured. ssl.keystore.certificate.chain Type: password Default: null Importance: high Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list of X.509 certificates. ssl.keystore.key Type: password Default: null Importance: high Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. If the key is encrypted, key password must be specified using 'ssl.key.password'. ssl.keystore.location Type: string Default: null Importance: high The location of the key store file. This is optional for client and can be used for two-way authentication for client. ssl.keystore.password Type: password Default: null Importance: high The store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. Key store password is not supported for PEM format. ssl.truststore.certificates Type: password Default: null Importance: high Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X.509 certificates. ssl.truststore.location Type: string Default: null Importance: high The location of the trust store file. ssl.truststore.password Type: password Default: null Importance: high The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format. batch.size Type: int Default: 16384 Valid Values: [0,... ] Importance: medium The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition. This helps performance on both the client and the server. This configuration controls the default batch size in bytes. No attempt will be made to batch records larger than this size. Requests sent to brokers will contain multiple batches, one for each partition with data available to be sent. A small batch size will make batching less common and may reduce throughput (a batch size of zero will disable batching entirely). A very large batch size may use memory a bit more wastefully as we will always allocate a buffer of the specified batch size in anticipation of additional records. 
client.dns.lookup Type: string Default: use_all_dns_ips Valid Values: [default, use_all_dns_ips, resolve_canonical_bootstrap_servers_only] Importance: medium Controls how the client uses DNS lookups. If set to use_all_dns_ips , connect to each returned IP address in sequence until a successful connection is established. After a disconnection, the IP is used. Once all IPs have been used once, the client resolves the IP(s) from the hostname again (both the JVM and the OS cache DNS name lookups, however). If set to resolve_canonical_bootstrap_servers_only , resolve each bootstrap address into a list of canonical names. After the bootstrap phase, this behaves the same as use_all_dns_ips . If set to default (deprecated), attempt to connect to the first IP address returned by the lookup, even if the lookup returns multiple IP addresses. client.id Type: string Default: "" Importance: medium An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging. connections.max.idle.ms Type: long Default: 540000 (9 minutes) Importance: medium Close idle connections after the number of milliseconds specified by this config. delivery.timeout.ms Type: int Default: 120000 (2 minutes) Valid Values: [0,... ] Importance: medium An upper bound on the time to report success or failure after a call to send() returns. This limits the total time that a record will be delayed prior to sending, the time to await acknowledgement from the broker (if expected), and the time allowed for retriable send failures. The producer may report failure to send a record earlier than this config if either an unrecoverable error is encountered, the retries have been exhausted, or the record is added to a batch which reached an earlier delivery expiration deadline. The value of this config should be greater than or equal to the sum of request.timeout.ms and linger.ms . linger.ms Type: long Default: 0 Valid Values: [0,... ] Importance: medium The producer groups together any records that arrive in between request transmissions into a single batched request. Normally this occurs only under load when records arrive faster than they can be sent out. However in some circumstances the client may want to reduce the number of requests even under moderate load. This setting accomplishes this by adding a small amount of artificial delay-that is, rather than immediately sending out a record the producer will wait for up to the given delay to allow other records to be sent so that the sends can be batched together. This can be thought of as analogous to Nagle's algorithm in TCP. This setting gives the upper bound on the delay for batching: once we get batch.size worth of records for a partition it will be sent immediately regardless of this setting, however if we have fewer than this many bytes accumulated for this partition we will 'linger' for the specified time waiting for more records to show up. This setting defaults to 0 (i.e. no delay). Setting linger.ms=5 , for example, would have the effect of reducing the number of requests sent but would add up to 5ms of latency to records sent in the absence of load. max.block.ms Type: long Default: 60000 (1 minute) Valid Values: [0,... 
] Importance: medium The configuration controls how long the KafkaProducer's `send() , partitionsFor() , initTransactions() , sendOffsetsToTransaction() , commitTransaction() and abortTransaction() methods will block. For send() this timeout bounds the total time waiting for both metadata fetch and buffer allocation (blocking in the user-supplied serializers or partitioner is not counted against this timeout). For partitionsFor() this timeout bounds the time spent waiting for metadata if it is unavailable. The transaction-related methods always block, but may timeout if the transaction coordinator could not be discovered or did not respond within the timeout. max.request.size Type: int Default: 1048576 Valid Values: [0,... ] Importance: medium The maximum size of a request in bytes. This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests. This is also effectively a cap on the maximum uncompressed record batch size. Note that the server has its own cap on the record batch size (after compression if compression is enabled) which may be different from this. partitioner.class Type: class Default: org.apache.kafka.clients.producer.internals.DefaultPartitioner Importance: medium Partitioner class that implements the org.apache.kafka.clients.producer.Partitioner interface. receive.buffer.bytes Type: int Default: 32768 (32 kibibytes) Valid Values: [-1,... ] Importance: medium The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used. request.timeout.ms Type: int Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: medium The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. This should be larger than replica.lag.time.max.ms (a broker configuration) to reduce the possibility of message duplication due to unnecessary producer retries. sasl.client.callback.handler.class Type: class Default: null Importance: medium The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface. sasl.jaas.config Type: password Default: null Importance: medium JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here . The format for the value is: 'loginModuleClass controlFlag (optionName=optionValue)*;'. For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;. sasl.kerberos.service.name Type: string Default: null Importance: medium The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config. sasl.login.callback.handler.class Type: class Default: null Importance: medium The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler. 
sasl.login.class Type: class Default: null Importance: medium The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin. sasl.mechanism Type: string Default: GSSAPI Importance: medium SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism. security.protocol Type: string Default: PLAINTEXT Importance: medium Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. send.buffer.bytes Type: int Default: 131072 (128 kibibytes) Valid Values: [-1,... ] Importance: medium The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used. socket.connection.setup.timeout.max.ms Type: long Default: 127000 (127 seconds) Importance: medium The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value. socket.connection.setup.timeout.ms Type: long Default: 10000 (10 seconds) Importance: medium The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel. ssl.enabled.protocols Type: list Default: TLSv1.2,TLSv1.3 Importance: medium The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for ssl.protocol . ssl.keystore.type Type: string Default: JKS Importance: medium The file format of the key store file. This is optional for client. ssl.protocol Type: string Default: TLSv1.3 Importance: medium The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'. ssl.provider Type: string Default: null Importance: medium The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. ssl.truststore.type Type: string Default: JKS Importance: medium The file format of the trust store file. enable.idempotence Type: boolean Default: false Importance: low When set to 'true', the producer will ensure that exactly one copy of each message is written in the stream. 
If 'false', producer retries due to broker failures, etc., may write duplicates of the retried message in the stream. Note that enabling idempotence requires max.in.flight.requests.per.connection to be less than or equal to 5, retries to be greater than 0 and acks must be 'all'. If these values are not explicitly set by the user, suitable values will be chosen. If incompatible values are set, a ConfigException will be thrown. interceptor.classes Type: list Default: "" Valid Values: non-null string Importance: low A list of classes to use as interceptors. Implementing the org.apache.kafka.clients.producer.ProducerInterceptor interface allows you to intercept (and possibly mutate) the records received by the producer before they are published to the Kafka cluster. By default, there are no interceptors. max.in.flight.requests.per.connection Type: int Default: 5 Valid Values: [1,... ] Importance: low The maximum number of unacknowledged requests the client will send on a single connection before blocking. Note that if this setting is set to be greater than 1 and there are failed sends, there is a risk of message re-ordering due to retries (i.e., if retries are enabled). metadata.max.age.ms Type: long Default: 300000 (5 minutes) Valid Values: [0,... ] Importance: low The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions. metadata.max.idle.ms Type: long Default: 300000 (5 minutes) Valid Values: [5000,... ] Importance: low Controls how long the producer will cache metadata for a topic that's idle. If the elapsed time since a topic was last produced to exceeds the metadata idle duration, then the topic's metadata is forgotten and the access to it will force a metadata fetch request. metric.reporters Type: list Default: "" Valid Values: non-null string Importance: low A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. metrics.num.samples Type: int Default: 2 Valid Values: [1,... ] Importance: low The number of samples maintained to compute metrics. metrics.recording.level Type: string Default: INFO Valid Values: [INFO, DEBUG, TRACE] Importance: low The highest recording level for metrics. metrics.sample.window.ms Type: long Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: low The window of time a metrics sample is computed over. reconnect.backoff.max.ms Type: long Default: 1000 (1 second) Valid Values: [0,... ] Importance: low The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms. reconnect.backoff.ms Type: long Default: 50 Valid Values: [0,... ] Importance: low The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker. retry.backoff.ms Type: long Default: 100 Valid Values: [0,... ] Importance: low The amount of time to wait before attempting to retry a failed request to a given topic partition. 
This avoids repeatedly sending requests in a tight loop under some failure scenarios. sasl.kerberos.kinit.cmd Type: string Default: /usr/bin/kinit Importance: low Kerberos kinit command path. sasl.kerberos.min.time.before.relogin Type: long Default: 60000 Importance: low Login thread sleep time between refresh attempts. sasl.kerberos.ticket.renew.jitter Type: double Default: 0.05 Importance: low Percentage of random jitter added to the renewal time. sasl.kerberos.ticket.renew.window.factor Type: double Default: 0.8 Importance: low Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket. sasl.login.refresh.buffer.seconds Type: short Default: 300 Valid Values: [0,... ,3600] Importance: low The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.min.period.seconds Type: short Default: 60 Valid Values: [0,... ,900] Importance: low The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.factor Type: double Default: 0.8 Valid Values: [0.5,... ,1.0] Importance: low Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.jitter Type: double Default: 0.05 Valid Values: [0.0,... ,0.25] Importance: low The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER. security.providers Type: string Default: null Importance: low A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the org.apache.kafka.common.security.auth.SecurityProviderCreator interface. ssl.cipher.suites Type: list Default: null Importance: low A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported. ssl.endpoint.identification.algorithm Type: string Default: https Importance: low The endpoint identification algorithm to validate server hostname using server certificate. 
ssl.engine.factory.class Type: class Default: null Importance: low The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory. ssl.keymanager.algorithm Type: string Default: SunX509 Importance: low The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine. ssl.secure.random.implementation Type: string Default: null Importance: low The SecureRandom PRNG implementation to use for SSL cryptography operations. ssl.trustmanager.algorithm Type: string Default: PKIX Importance: low The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine. transaction.timeout.ms Type: int Default: 60000 (1 minute) Importance: low The maximum amount of time in ms that the transaction coordinator will wait for a transaction status update from the producer before proactively aborting the ongoing transaction.If this value is larger than the transaction.max.timeout.ms setting in the broker, the request will fail with a InvalidTxnTimeoutException error. transactional.id Type: string Default: null Valid Values: non-empty string Importance: low The TransactionalId to use for transactional delivery. This enables reliability semantics which span multiple producer sessions since it allows the client to guarantee that transactions using the same TransactionalId have been completed prior to starting any new transactions. If no TransactionalId is provided, then the producer is limited to idempotent delivery. If a TransactionalId is configured, enable.idempotence is implied. By default the TransactionId is not configured, which means transactions cannot be used. Note that, by default, transactions require a cluster of at least three brokers which is the recommended setting for production; for development you can change this, by adjusting broker setting transaction.state.log.replication.factor .
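As a reference, the parameters above are typically supplied to the producer as plain string properties. The following minimal Java sketch shows how a handful of them (bootstrap.servers, the serializers, acks, linger.ms, and batch.size) fit together when constructing a producer; the broker addresses and topic name are placeholders, and the values chosen here are only examples rather than recommendations:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerConfigExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Connection and (required) serializer settings
        props.put("bootstrap.servers", "broker1:9092,broker2:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        // Durability and batching settings described in this appendix
        props.put("acks", "all");
        props.put("linger.ms", "5");
        props.put("batch.size", "16384");

        // Send a single record; closing the producer flushes any buffered batches
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        }
    }
}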
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/using_amq_streams_on_rhel/producer-configuration-parameters-str
Chapter 46. Kafka Topic Name Matches Filter Action
Chapter 46. Kafka Topic Name Matches Filter Action Filter based on the Kafka topic name compared to a regex 46.1. Configuration Options The following table summarizes the configuration options available for the topic-name-matches-filter-action Kamelet: Property Name Description Type Default Example regex * Regex The regex to evaluate against the Kafka topic name string Note Fields marked with an asterisk (*) are mandatory. 46.2. Dependencies At runtime, the topic-name-matches-filter-action Kamelet relies upon the presence of the following dependencies: camel:core camel:kamelet 46.3. Usage This section describes how you can use the topic-name-matches-filter-action . 46.3.1. Kafka Action You can use the topic-name-matches-filter-action Kamelet as an intermediate step in a Kafka binding. topic-name-matches-filter-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: topic-name-matches-filter-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: topic-name-matches-filter-action properties: regex: "The Regex" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 46.3.1.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 46.3.1.2. Procedure for using the cluster CLI Save the topic-name-matches-filter-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f topic-name-matches-filter-action-binding.yaml 46.3.1.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step topic-name-matches-filter-action -p "step-0.regex=The Regex" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 46.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/topic-name-matches-filter-action.kamelet.yaml
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: topic-name-matches-filter-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: topic-name-matches-filter-action properties: regex: \"The Regex\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic", "apply -f topic-name-matches-filter-action-binding.yaml", "kamel bind timer-source?message=Hello --step topic-name-matches-filter-action -p \"step-0.regex=The Regex\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/kafka-topic-name-matches-filter-action
Chapter 1. Extension APIs
Chapter 1. Extension APIs 1.1. APIService [apiregistration.k8s.io/v1] Description APIService represents a server for a particular GroupVersion. Name must be "version.group". Type object 1.2. CustomResourceDefinition [apiextensions.k8s.io/v1] Description CustomResourceDefinition represents a resource that should be exposed on the API server. Its name MUST be in the format <.spec.name>.<.spec.group>. Type object 1.3. MutatingWebhookConfiguration [admissionregistration.k8s.io/v1] Description MutatingWebhookConfiguration describes the configuration of an admission webhook that accepts or rejects, and may change, the object. Type object 1.4. ValidatingWebhookConfiguration [admissionregistration.k8s.io/v1] Description ValidatingWebhookConfiguration describes the configuration of an admission webhook that accepts or rejects an object without changing it. Type object
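To make the "version.group" naming rule for APIService objects concrete, the following manifest is a hypothetical sketch of an aggregated API registration; the group metrics.example.com, the version v1beta1, and the backing service name and namespace are all placeholders, and the CA bundle is left as a stand-in value:

apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  # The name must be "<version>.<group>"
  name: v1beta1.metrics.example.com
spec:
  group: metrics.example.com
  version: v1beta1
  groupPriorityMinimum: 100
  versionPriority: 100
  service:
    name: example-api
    namespace: example-namespace
  caBundle: <base64_encoded_ca_certificate>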
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/extension_apis/extension-apis
Chapter 7. Installation configuration parameters for AWS
Chapter 7. Installation configuration parameters for AWS Before you deploy an OpenShift Container Platform cluster on AWS, you provide parameters to customize your cluster and the platform that hosts it. When you create the install-config.yaml file, you provide values for the required parameters through the command line. You can then modify the install-config.yaml file to customize your cluster further. 7.1. Available installation configuration parameters for AWS The following tables specify the required, optional, and AWS-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 7.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 7.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 7.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 7.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. OVNKubernetes . OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. 
The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OVN-Kubernetes network plugins supports only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. Configures the IPv4 join subnet that is used internally by ovn-kubernetes . This subnet must not overlap with any other subnet that OpenShift Container Platform is using, including the node network. The size of the subnet must be larger than the number of nodes. You cannot change the value after installation. An IP network block in CIDR notation. The default value is 100.64.0.0/16 . 7.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 7.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. 
To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. Mint , Passthrough , Manual or an empty string ( "" ). Enable or disable FIPS mode. The default is false (disabled). 
If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings Required to set the NLB load balancer type in AWS. Valid values are Classic or NLB . If no value is specified, the installation program defaults to Classic . The installation program sets the value provided here in the ingress cluster configuration object. If you do not specify a load balancer type for other Ingress Controllers, they use the type set in this parameter. Classic or NLB . The default value is Classic . How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . + Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough , or Manual . + Important Setting this parameter to Manual enables alternatives to storing administrator-level secrets in the kube-system project, which require additional configuration steps. For more information, see "Alternatives to storing administrator-level secrets in the kube-system project". 7.1.4. Optional AWS configuration parameters Optional AWS configuration parameters are described in the following table: Table 7.4. Optional AWS parameters Parameter Description Values The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. The name of the IAM instance profile that you use for the machine. If you want the installation program to create the IAM instance profile for you, do not use the iamProfile parameter. You can specify either the iamProfile or iamRole parameter, but you cannot specify both. String The name of the IAM instance role that you use for the machine. 
When you specify an IAM role, the installation program creates an instance profile. If you want the installation program to create the IAM instance role for you, do not select the iamRole parameter. You can specify either the iamRole or iamProfile parameter, but you cannot specify both. String The Input/Output Operations Per Second (IOPS) that is reserved for the root volume. Integer, for example 4000 . The size in GiB of the root volume. Integer, for example 500 . The type of the root volume. Valid AWS EBS volume type , such as io1 . The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt operating system volumes of worker nodes with a specific KMS key. Valid key ID or the key ARN . The EC2 instance type for the compute machines. Valid AWS instance type, such as m4.2xlarge . See the Supported AWS machine types table that follows. The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone. A list of valid AWS availability zones, such as us-east-1c , in a YAML sequence . The AWS region that the installation program creates compute resources in. Any valid AWS region , such as us-east-1 . You can use the AWS CLI to access the regions available based on your selected instance type. For example: aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge Important When running on ARM based AWS instances, ensure that you enter a region where AWS Graviton processors are available. See Global availability map in the AWS documentation. Currently, AWS Graviton3 processors are only available in some regions. The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. The name of the IAM instance profile that you use for the machine. If you want the installation program to create the IAM instance profile for you, do not use the iamProfile parameter. You can specify either the iamProfile or iamRole parameter, but you cannot specify both. String The name of the IAM instance role that you use for the machine. When you specify an IAM role, the installation program creates an instance profile. If you want the installation program to create the IAM instance role for you, do not use the iamRole parameter. You can specify either the iamRole or iamProfile parameter, but you cannot specify both. String The Input/Output Operations Per Second (IOPS) that is reserved for the root volume on control plane machines. Integer, for example 4000 . The size in GiB of the root volume for control plane machines. Integer, for example 500 . The type of the root volume for control plane machines. Valid AWS EBS volume type , such as io1 . The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt operating system volumes of control plane nodes with a specific KMS key. Valid key ID and the key ARN . The EC2 instance type for the control plane machines. Valid AWS instance type, such as m6i.xlarge . See the Supported AWS machine types table that follows. The availability zones where the installation program creates machines for the control plane machine pool. A list of valid AWS availability zones, such as us-east-1c , in a YAML sequence . The AWS region that the installation program creates control plane resources in. 
Valid AWS region , such as us-east-1 . The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. An existing Route 53 private hosted zone for the cluster. You can only use a pre-existing hosted zone when also supplying your own VPC. The hosted zone must already be associated with the user-provided VPC before installation. Also, the domain of the hosted zone must be the cluster domain or a parent of the cluster domain. If undefined, the installation program creates a new hosted zone. String, for example Z3URY6TWQ91KVV . An Amazon Resource Name (ARN) for an existing IAM role in the account containing the specified hosted zone. The installation program and cluster operators will assume this role when performing operations on the hosted zone. This parameter should only be used if you are installing a cluster into a shared VPC. String, for example arn:aws:iam::1234567890:role/shared-vpc-role . The AWS service endpoint name and URL. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services. Valid AWS service endpoint name and valid AWS service endpoint URL. A map of keys and values that the installation program adds as tags to all resources that it creates. Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation. Note You can add up to 25 user defined tags during installation. The remaining 25 tags are reserved for OpenShift Container Platform. A flag that directs in-cluster Operators to include the specified user tags in the tags of the AWS resources that the Operators create. Boolean values, for example true or false . If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone. For clusters that use AWS Local Zones, you must add AWS Local Zone subnets to this list to ensure edge machine pool creation. Valid subnet IDs. The public IPv4 pool ID that is used to allocate Elastic IPs (EIPs) when publish is set to External . You must provision and advertise the pool in the same AWS account and region of the cluster. You must ensure that you have 2n + 1 IPv4 available in the pool where n is the total number of AWS zones used to deploy the Network Load Balancer (NLB) for API, NAT gateways, and bootstrap node. For more information about bring your own IP addresses (BYOIP) in AWS, see Onboard your BYOIP . A valid public IPv4 pool id Note BYOIP can be enabled only for customized installations that have no network restrictions. Prevents the S3 bucket from being deleted after completion of bootstrapping. true or false . The default value is false , which results in the S3 bucket being deleted.
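As a hedged illustration of how several of the parameters described above fit together, the following is a minimal install-config.yaml sketch for AWS. The base domain, cluster name, region, and instance type are placeholder assumptions rather than values from this document, and most optional parameters are omitted.

apiVersion: v1
baseDomain: example.com            # assumption: replace with your own base domain
metadata:
  name: test-cluster               # assumption: any valid cluster name
credentialsMode: Mint              # one of Mint, Passthrough, or Manual; required if SCPs are enabled
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14            # default cluster network block
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16                  # default service network block
  machineNetwork:
  - cidr: 10.0.0.0/16              # default machine network block
compute:
- name: worker
  replicas: 3
  platform:
    aws:
      type: m6i.xlarge             # assumption: any supported AWS instance type
controlPlane:
  name: master
  replicas: 3
platform:
  aws:
    region: us-east-1              # assumption: choose a region that offers your instance types
    lbType: NLB                    # optional; defaults to Classic
publish: External
pullSecret: '{"auths": ...}'       # obtain from the Red Hat OpenShift Cluster Manager
sshKey: ssh-ed25519 AAAA...        # optional SSH public key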
[ "apiVersion:", "baseDomain:", "metadata:", "metadata: name:", "platform:", "pullSecret:", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking:", "networking: networkType:", "networking: clusterNetwork:", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: clusterNetwork: cidr:", "networking: clusterNetwork: hostPrefix:", "networking: serviceNetwork:", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork:", "networking: machineNetwork: - cidr: 10.0.0.0/16", "networking: machineNetwork: cidr:", "networking: ovnKubernetesConfig: ipv4: internalJoinSubnet:", "additionalTrustBundle:", "capabilities:", "capabilities: baselineCapabilitySet:", "capabilities: additionalEnabledCapabilities:", "cpuPartitioningMode:", "compute:", "compute: architecture:", "compute: hyperthreading:", "compute: name:", "compute: platform:", "compute: replicas:", "featureSet:", "controlPlane:", "controlPlane: architecture:", "controlPlane: hyperthreading:", "controlPlane: name:", "controlPlane: platform:", "controlPlane: replicas:", "credentialsMode:", "fips:", "imageContentSources:", "imageContentSources: source:", "imageContentSources: mirrors:", "platform: aws: lbType:", "publish:", "sshKey:", "compute: platform: aws: amiID:", "compute: platform: aws: iamProfile:", "compute: platform: aws: iamRole:", "compute: platform: aws: rootVolume: iops:", "compute: platform: aws: rootVolume: size:", "compute: platform: aws: rootVolume: type:", "compute: platform: aws: rootVolume: kmsKeyARN:", "compute: platform: aws: type:", "compute: platform: aws: zones:", "compute: aws: region:", "aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge", "controlPlane: platform: aws: amiID:", "controlPlane: platform: aws: iamProfile:", "controlPlane: platform: aws: iamRole:", "controlPlane: platform: aws: rootVolume: iops:", "controlPlane: platform: aws: rootVolume: size:", "controlPlane: platform: aws: rootVolume: type:", "controlPlane: platform: aws: rootVolume: kmsKeyARN:", "controlPlane: platform: aws: type:", "controlPlane: platform: aws: zones:", "controlPlane: aws: region:", "platform: aws: amiID:", "platform: aws: hostedZone:", "platform: aws: hostedZoneRole:", "platform: aws: serviceEndpoints: - name: url:", "platform: aws: userTags:", "platform: aws: propagateUserTags:", "platform: aws: subnets:", "platform: aws: publicIpv4Pool:", "platform: aws: preserveBootstrapIgnition:" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_aws/installation-config-parameters-aws
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/using_jlink_to_customize_java_runtime_environment/making-open-source-more-inclusive
Chapter 38. General Updates
Chapter 38. General Updates The systemd-importd VM and container image import and export service The latest systemd version now contains the systemd-importd daemon, which was not enabled in the earlier build and therefore caused the machinectl pull-* commands to fail. Note that the systemd-importd daemon is offered as a Technology Preview and should not be considered stable. (BZ#1284974)
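As a hedged sketch of the commands this daemon enables, the image URL and local image name below are placeholders, not values from this release note:

# Pull a raw guest image over HTTPS and list the locally imported images.
machinectl pull-raw --verify=no https://example.com/images/rhel7-guest.qcow2 rhel7-guest
machinectl list-images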
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/technology_previews_general_updates
3.2. Required Maven Repositories
3.2. Required Maven Repositories Red Hat JBoss Data Grid Quickstarts require the following Maven repositories to be set up as a prerequisite: the JBoss Data Grid Maven Repository and the techpreview-all-repository ( https://maven.repository.redhat.com/techpreview/all/ ). Both Maven repositories are installed in the same way. As a result, the subsequent instructions apply to both repositories.
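For illustration only, a repository entry for the techpreview-all-repository might look like the following in a pom.xml or settings.xml profile; the repository id is an arbitrary placeholder, and the URL is the one given above:

<repository>
  <id>redhat-techpreview-all</id>  <!-- arbitrary id -->
  <url>https://maven.repository.redhat.com/techpreview/all/</url>
  <releases>
    <enabled>true</enabled>
  </releases>
  <snapshots>
    <enabled>false</enabled>
  </snapshots>
</repository>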
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/required_maven_repositories
7.2. Performing Remote Queries via the Hot Rod Java Client
7.2. Performing Remote Queries via the Hot Rod Java Client Remote querying over Hot Rod can be enabled once the RemoteCacheManager has been configured with the Protobuf marshaller. The following procedure describes how to enable remote querying over its caches. Prerequisites RemoteCacheManager must be configured to use the Protobuf Marshaller. Procedure 7.1. Enabling Remote Querying via Hot Rod Add the infinispan-remote.jar The infinispan-remote.jar is an uberjar, and therefore no other dependencies are required for this feature. Enable indexing on the cache configuration. Indexing is not mandatory for Remote Queries, but it is highly recommended because it makes searches on caches that contain large amounts of data significantly faster. Indexing can be configured at any time. Enabling and configuring indexing is the same as for Library mode. Add the following configuration within the cache-container element located inside the Infinispan subsystem element. Register the Protobuf schema definition files Register the Protobuf schema definition files by adding them in the ___protobuf_metadata system cache. The cache key is a string that denotes the file name and the value is the .proto file, as a string. Alternatively, protobuf schemas can also be registered by invoking the registerProtofile methods of the server's ProtobufMetadataManager MBean. There is one instance of this MBean per cache container and it is backed by the ___protobuf_metadata cache, so the two approaches are equivalent. For an example of providing the protobuf schema via the ___protobuf_metadata system cache, see Example 7.6, "Registering a Protocol Buffers schema file" . The following example demonstrates how to invoke the registerProtofile methods of the ProtobufMetadataManager MBean. Example 7.1. Registering Protobuf schema definition files via JMX Result All data placed in the cache is immediately searchable, whether or not indexing is in use. Entries do not need to be annotated, unlike embedded queries. The entity classes are only meaningful to the Java client and do not exist on the server. Once remote querying has been enabled, the QueryFactory can be obtained as follows: Example 7.2. Obtaining the QueryFactory Queries can now be run over Hot Rod similarly to Library mode.
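The procedure above assumes a RemoteCacheManager configured with the Protobuf marshaller and a schema registered through the ___protobuf_metadata cache. A minimal Java sketch of those two steps follows; the server address and the user.proto schema are assumptions for illustration, and the class names are those of the JBoss Data Grid 6.x Hot Rod client. Registering Protobuf marshallers for your own entity classes with the client's serialization context is also required for queries but is not shown here.

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;
import org.infinispan.client.hotrod.marshall.ProtoStreamMarshaller;

// Configure the Hot Rod client to use the Protobuf marshaller (the prerequisite above).
ConfigurationBuilder builder = new ConfigurationBuilder();
builder.addServer().host("127.0.0.1").port(11222);   // assumed server address
builder.marshaller(new ProtoStreamMarshaller());
RemoteCacheManager remoteCacheManager = new RemoteCacheManager(builder.build());

// Register a schema: the cache key is the file name, the value is the .proto source as a String.
String schema = "package sample;\n" +
                "message User { required string name = 1; required int32 age = 2; }";
RemoteCache<String, String> metadataCache =
        remoteCacheManager.getCache("___protobuf_metadata");
metadataCache.put("user.proto", schema);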
[ "<!-- A basic example of an indexed local cache that uses the RAM Lucene directory provider --> <local-cache name=\"an-indexed-cache\" start=\"EAGER\"> <!-- Enable indexing using the RAM Lucene directory provider --> <indexing index=\"ALL\"> <property name=\"default.directory_provider\">ram</property> </indexing> </local-cache>", "import javax.management.MBeanServerConnection; import javax.management.ObjectName; import javax.management.remote.JMXConnector; import javax.management.remote.JMXServiceURL; String serverHost = ... // The address of your JDG server int serverJmxPort = ... // The JMX port of your server String cacheContainerName = ... // The name of your cache container String schemaFileName = ... // The name of the schema file String schemaFileContents = ... // The Protobuf schema file contents JMXConnector jmxConnector = JMXConnectorFactory.connect(new JMXServiceURL( \"service:jmx:remoting-jmx://\" + serverHost + \":\" + serverJmxPort)); MBeanServerConnection jmxConnection = jmxConnector.getMBeanServerConnection(); ObjectName protobufMetadataManagerObjName = new ObjectName(\"jboss.infinispan:type=RemoteQuery,name=\" + ObjectName.quote(cacheContainerName) + \",component=ProtobufMetadataManager\"); jmxConnection.invoke(protobufMetadataManagerObjName, \"registerProtofile\", new Object[]{schemaFileName, schemaFileContents}, new String[]{String.class.getName(), String.class.getName()}); jmxConnector.close();", "import org.infinispan.client.hotrod.Search; import org.infinispan.query.dsl.QueryFactory; import org.infinispan.query.dsl.Query; import org.infinispan.query.dsl.SortOrder; remoteCache.put(2, new User(\"John\", 33)); remoteCache.put(3, new User(\"Alfred\", 40)); remoteCache.put(4, new User(\"Jack\", 56)); remoteCache.put(4, new User(\"Jerry\", 20)); QueryFactory qf = Search.getQueryFactory(remoteCache); Query query = qf.from(User.class) .orderBy(\"age\", SortOrder.ASC) .having(\"name\").like(\"J%\") .and().having(\"age\").gte(33) .toBuilder().build(); List<User> list = query.list(); assertEquals(2, list.size()); assertEquals(\"John\", list.get(0).getName()); assertEquals(33, list.get(0).getAge()); assertEquals(\"Jack\", list.get(1).getName()); assertEquals(56, list.get(1).getAge());" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/infinispan_query_guide/remote_querying_via_hot_rod
B.2. Red Hat Access GUI
B.2. Red Hat Access GUI Another highly recommended source of information is Red Hat Access GUI , a desktop application that lets you find help and answers and use diagnostic services backed by the Red Hat Knowledgebase, resources, and functionality. If you have an active account on the Red Hat Customer Portal , you can access additional Knowledgebase information and tips, easily browsable by keywords. Red Hat Access GUI is already installed if you selected the GNOME Desktop during installation. For more information on the benefits, installation, and usage of this tool, see Red Hat Access GUI .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/red-hat-access-gui
Chapter 13. Data Grid Modules for Red Hat JBoss EAP
Chapter 13. Data Grid Modules for Red Hat JBoss EAP To use Data Grid inside applications deployed to Red Hat JBoss EAP, you should install Data Grid modules that: Let you deploy applications without packaging Data Grid JAR files in your WAR or EAR file. Allow you to use a Data Grid version that is independent to the one bundled with Red Hat JBoss EAP. Important Red Hat JBoss EAP (EAP) applications can directly handle the infinispan subsystem without the need to separately install Data Grid modules. Red Hat provides support for this functionality since EAP version 7.4. However, your deployment requires the EAP modules to use advanced capabilities such as indexing and querying. 13.1. Installing Data Grid Modules Download and install Data Grid modules for Red Hat JBoss EAP. Prerequisites JDK 8 or later. An existing Red Hat JBoss EAP installation. Procedure Log in to the Red Hat customer portal. Download the ZIP archive for the modules from the Data Grid software downloads . Extract the ZIP archive and copy the contents of modules to the modules directory of your Red Hat JBoss EAP installation so that you get the resulting structure: USDEAP_HOME/modules/system/add-ons/rhdg/org/infinispan/rhdg-8.4 13.2. Configuring Applications to Use Data Grid Modules After you install Data Grid modules for Red Hat JBoss EAP, configure your application to use Data Grid functionality. Procedure In your project pom.xml file, mark the required Data Grid dependencies as provided . Configure your artifact archiver to generate the appropriate MANIFEST.MF file. pom.xml <dependencies> <dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-core</artifactId> <scope>provided</scope> </dependency> <dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-cachestore-jdbc</artifactId> <scope>provided</scope> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-war-plugin</artifactId> <configuration> <archive> <manifestEntries> <Dependencies>org.infinispan:rhdg-8.4 services</Dependencies> </manifestEntries> </archive> </configuration> </plugin> </plugins> </build> Data Grid functionality is packaged as a single module, org.infinispan , that you can add as an entry to your application's manifest as follows: MANIFEST.MF AWS dependencies If you require AWS dependencies, such as S3_PING, add the following module to your application's manifest:
[ "<dependencies> <dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-core</artifactId> <scope>provided</scope> </dependency> <dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-cachestore-jdbc</artifactId> <scope>provided</scope> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-war-plugin</artifactId> <configuration> <archive> <manifestEntries> <Dependencies>org.infinispan:rhdg-8.4 services</Dependencies> </manifestEntries> </archive> </configuration> </plugin> </plugins> </build>", "Manifest-Version: 1.0 Dependencies: org.infinispan:rhdg-8.4 services", "Manifest-Version: 1.0 Dependencies: com.amazonaws.aws-java-sdk:rhdg-8.4 services" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/embedding_data_grid_in_java_applications/ispn_modules
Chapter 10. Scaling storage nodes
Chapter 10. Scaling storage nodes To scale the storage capacity of OpenShift Data Foundation, you can do either of the following: Scale up storage nodes - Add storage capacity to the existing OpenShift Data Foundation worker nodes Scale out storage nodes - Add new worker nodes containing storage capacity 10.1. Requirements for scaling storage nodes Before you proceed to scale the storage nodes, refer to the following sections to understand the node requirements for your specific Red Hat OpenShift Data Foundation instance: Platform requirements Storage device requirements Dynamic storage devices Capacity planning Warning Always ensure that you have plenty of storage capacity. If storage ever fills completely, it is not possible to add capacity or delete or migrate content away from the storage to free up space. Completely full storage is very difficult to recover. Capacity alerts are issued when cluster storage capacity reaches 75% (near-full) and 85% (full) of total capacity. Always address capacity warnings promptly, and review your storage regularly to ensure that you do not run out of storage space. If you do run out of storage space completely, contact Red Hat Customer Support. 10.2. Scaling up storage by adding capacity to your OpenShift Data Foundation nodes on Red Hat OpenStack Platform infrastructure To increase the storage capacity in a dynamically created storage cluster on an user-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. The disk should be of the same size and type as used during initial deployment. Procedure Log in to the OpenShift Web Console. Click Operators Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action Menu (...) on the far right of the storage system name to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class . Choose the storage class which you wish to use to provision new storage devices. The storage class should be set to standard if you are using the default storage class generated during deployment. If you have created other storage classes, select whichever is appropriate. + The Raw Capacity field shows the size set during storage class creation. The total amount of storage consumed is three times this amount, because OpenShift Data Foundation uses a replica count of 3. Click Add . To check the status, navigate to Storage Data Foundation and verify that the Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. 
To view the state of the PVCs: Click Storage Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected hosts. <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 10.3. Scaling out storage capacity by adding new nodes To scale out storage capacity, you need to perform the following: Add a new node to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs, which is the increment of 3 OSDs of the capacity selected during initial configuration. Verify that the new node is added successfully Scale up the storage capacity after the node is added 10.3.1. Adding a node to an installer-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Navigate to Compute Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the amount of nodes, and click Save . Click Compute Nodes and confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In case of bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster . Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 10.3.2. Scaling up storage capacity After you add a new node to OpenShift Data Foundation, you must scale up the storage capacity as described in Scaling up storage by adding capacity .
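The console steps above can also be performed from the command line. A hedged sketch follows, where the machine set name, replica count, and node name are placeholders for your environment:

# Scale out the worker machine set that backs the new storage nodes.
oc scale machineset <machineset_name> --replicas=<new_count> -n openshift-machine-api

# Confirm that the new node reaches the Ready state.
oc get nodes

# Apply the OpenShift Data Foundation label to the new node.
oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""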
[ "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm", "NODE compute-1", "oc debug node/ <node-name>", "chroot /host", "lsblk", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/scaling-storage-nodes_osp
Chapter 3. Optimize workload performance domains
Chapter 3. Optimize workload performance domains One of the key benefits of Ceph storage is the ability to support different types of workloads within the same cluster using Ceph performance domains. Dramatically different hardware configurations can be associated with each performance domain. Ceph system administrators can deploy storage pools on the appropriate performance domain, providing applications with storage tailored to specific performance and cost profiles. Selecting appropriately sized and optimized servers for these performance domains is an essential aspect of designing a Red Hat Ceph Storage cluster. The following lists provide the criteria Red Hat uses to identify optimal Red Hat Ceph Storage cluster configurations on storage servers. These categories are provided as general guidelines for hardware purchases and configuration decisions, and can be adjusted to satisfy unique workload blends. Actual hardware configurations chosen will vary depending on specific workload mix and vendor capabilities. IOPS optimized An IOPS-optimized storage cluster typically has the following properties: Lowest cost per IOPS. Highest IOPS per GB. 99th percentile latency consistency. Typical uses for an IOPS-optimized storage cluster are: Typically block storage. 3x replication for hard disk drives (HDDs) or 2x replication for solid state drives (SSDs). MySQL on OpenStack clouds. Throughput optimized A throughput-optimized storage cluster typically has the following properties: Lowest cost per MBps (throughput). Highest MBps per TB. Highest MBps per BTU. Highest MBps per Watt. 97th percentile latency consistency. Typical uses for a throughput-optimized storage cluster are: Block or object storage. 3x replication. Active performance storage for video, audio, and images. Streaming media. Cost and capacity optimized A cost- and capacity-optimized storage cluster typically has the following properties: Lowest cost per TB. Lowest BTU per TB. Lowest Watts required per TB. Typical uses for a cost- and capacity-optimized storage cluster are: Typically object storage. Erasure coding, which is common for maximizing usable capacity. Object archive. Video, audio, and image object repositories. How performance domains work To the Ceph client interface that reads and writes data, a Ceph storage cluster appears as a simple pool where the client stores data. However, the storage cluster performs many complex operations in a manner that is completely transparent to the client interface. Ceph clients and Ceph object storage daemons (Ceph OSDs, or simply OSDs) both use the controlled replication under scalable hashing (CRUSH) algorithm for storage and retrieval of objects. OSDs run on OSD hosts, the storage servers within the cluster. A CRUSH map describes a topography of cluster resources, and the map exists both on client nodes as well as Ceph Monitor (MON) nodes within the cluster. Ceph clients and Ceph OSDs both use the CRUSH map and the CRUSH algorithm. Ceph clients communicate directly with OSDs, eliminating a centralized object lookup and a potential performance bottleneck. With awareness of the CRUSH map and communication with their peers, OSDs can handle replication, backfilling, and recovery, allowing for dynamic failure recovery. Ceph uses the CRUSH map to implement failure domains. Ceph also uses the CRUSH map to implement performance domains, which simply take the performance profile of the underlying hardware into consideration. 
The CRUSH map describes how Ceph stores data, and it is implemented as a simple hierarchy (acyclic graph) and a ruleset. The CRUSH map can support multiple hierarchies to separate one type of hardware performance profile from another. The following examples describe performance domains. Hard disk drives (HDDs) are typically appropriate for cost- and capacity-focused workloads. Throughput-sensitive workloads typically use HDDs with Ceph write journals on solid state drives (SSDs). IOPS-intensive workloads such as MySQL and MariaDB often use SSDs. All of these performance domains can coexist in a Ceph storage cluster.
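As a hedged illustration of separating performance domains with CRUSH device classes, the pool names, placement group counts, and device classes below are assumptions for the example, not recommendations:

# Create CRUSH rules that place data only on a given device class.
ceph osd crush rule create-replicated fast-rule default host ssd
ceph osd crush rule create-replicated capacity-rule default host hdd

# Create pools bound to those rules, so IOPS-sensitive and archive
# workloads land on different hardware within the same cluster.
ceph osd pool create block-fast 128 128 replicated fast-rule
ceph osd pool create object-archive 128 128 replicated capacity-rule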
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/hardware_guide/optimize-workload-performance-domains_hw
Chapter 14. Configuring virtual GPUs for instances
Chapter 14. Configuring virtual GPUs for instances To support GPU-based rendering on your instances, you can define and manage virtual GPU (vGPU) resources according to your available physical GPU devices and your hypervisor type. You can use this configuration to divide the rendering workloads between all your physical GPU devices more effectively, and to have more control over scheduling your vGPU-enabled instances. To enable vGPU in the Compute (nova) service, create flavors that your cloud users can use to create Red Hat Enterprise Linux (RHEL) instances with vGPU devices. Each instance can then support GPU workloads with virtual GPU devices that correspond to the physical GPU devices. The Compute service tracks the number of vGPU devices that are available for each GPU profile you define on each host. The Compute service schedules instances to these hosts based on the flavor, attaches the devices, and monitors usage on an ongoing basis. When an instance is deleted, the Compute service adds the vGPU devices back to the available pool. Important Red Hat enables the use of NVIDIA vGPU in RHOSP without the requirement for support exceptions. However, Red Hat does not provide technical support for the NVIDIA vGPU drivers. The NVIDIA vGPU drivers are shipped and supported by NVIDIA. You require an NVIDIA Certified Support Services subscription to obtain NVIDIA Enterprise Support for NVIDIA vGPU software. For issues that result from the use of NVIDIA vGPUs where you are unable to reproduce the issue on a supported component, the following support policies apply: When Red Hat does not suspect that the third-party component is involved in the issue, the normal Scope of Support and Red Hat SLA apply. When Red Hat suspects that the third-party component is involved in the issue, the customer will be directed to NVIDIA in line with the Red Hat third party support and certification policies . For more information, see the Knowledge Base article Obtaining Support from NVIDIA . 14.1. Supported configurations and limitations Supported GPU cards For a list of supported NVIDIA GPU cards, see Virtual GPU Software Supported Products on the NVIDIA website. Limitations when using vGPU devices You can enable only one vGPU type on each Compute node. Each instance can use only one vGPU resource. Live migration of vGPU instances between hosts is not supported. Evacuation of vGPU instances is not supported. If you need to reboot the Compute node that hosts the vGPU instances, the vGPUs are not automatically reassigned to the recreated instances. You must either cold migrate the instances before you reboot the Compute node, or manually allocate each vGPU to the correct instance after reboot. To manually allocate each vGPU, you must retrieve the mdev UUID from the instance XML for each vGPU instance that runs on the Compute node before you reboot. You can use the following command to discover the mdev UUID for each instance: Replace <instance_name> with the libvirt instance name, OS-EXT-SRV-ATTR:instance_name , returned in a /servers request to the Compute API. Suspend operations on a vGPU-enabled instance is not supported due to a libvirt limitation. Instead, you can snapshot or shelve the instance. By default, vGPU types on Compute hosts are not exposed to API users. To grant access, add the hosts to a host aggregate. For more information, see Creating and managing host aggregates . If you use NVIDIA accelerator hardware, you must comply with the NVIDIA licensing requirements. 
For example, NVIDIA vGPU GRID requires a licensing server. For more information about the NVIDIA licensing requirements, see NVIDIA License Server Release Notes on the NVIDIA website. 14.2. Configuring vGPU on the Compute nodes To enable your cloud users to create instances that use a virtual GPU (vGPU), you must configure the Compute nodes that have the physical GPUs: Designate Compute nodes for vGPU. Configure the Compute node for vGPU. Deploy the overcloud. Create a vGPU flavor for launching instances that have vGPU. Tip If the GPU hardware is limited, you can also configure a host aggregate to optimize scheduling on the vGPU Compute nodes. To schedule only instances that request vGPUs on the vGPU Compute nodes, create a host aggregate of the vGPU Compute nodes, and configure the Compute scheduler to place only vGPU instances on the host aggregate. For more information, see Creating and managing host aggregates and Filtering by isolating host aggregates . Note To use an NVIDIA GRID vGPU, you must comply with the NVIDIA GRID licensing requirements and you must have the URL of your self-hosted license server. For more information, see the NVIDIA License Server Release Notes web page. 14.2.1. Prerequisites You have downloaded the NVIDIA GRID host driver RPM package that corresponds to your GPU device from the NVIDIA website. To determine which driver you need, see the NVIDIA Driver Downloads Portal . You must be a registered NVIDIA customer to download the drivers from the portal. You have built a custom overcloud image that has the NVIDIA GRID host driver installed. 14.2.2. Designating Compute nodes for vGPU To designate Compute nodes for vGPU workloads, you must create a new role file to configure the vGPU role, and configure a new overcloud flavor and resource class to use to tag the GPU-enabled Compute nodes. Procedure Log in to the undercloud as the stack user. Source the stackrc file: Generate a new roles data file named roles_data_gpu.yaml that includes the Controller , Compute , and ComputeGpu roles: Open roles_data_gpu.yaml and edit or add the following parameters and sections: Section/Parameter Current value New value Role comment Role: Compute Role: ComputeGpu Role name name: Compute name: ComputeGpu description Basic Compute Node role GPU Compute Node role ImageDefault n/a overcloud-full-gpu HostnameFormatDefault -compute- -computegpu- deprecated_nic_config_name compute.yaml compute-gpu.yaml Register the GPU-enabled Compute nodes for the overcloud by adding them to your node definition template, node.json or node.yaml . For more information, see Registering nodes for the overcloud in the Director Installation and Usage guide. Inspect the node hardware: For more information, see Creating an inventory of the bare-metal node hardware in the Director Installation and Usage guide. Create the compute-vgpu-nvidia overcloud flavor for vGPU Compute nodes: Replace <ram_size_mb> with the RAM of the bare metal node, in MB. Replace <disk_size_gb> with the size of the disk on the bare metal node, in GB. Replace <no_vcpus> with the number of CPUs on the bare metal node. Note These properties are not used for scheduling instances. However, the Compute scheduler does use the disk size to determine the root partition size. Tag each bare metal node that you want to designate for GPU workloads with a custom GPU resource class: Replace <node> with the ID of the baremetal node. 
Associate the compute-vgpu-nvidia flavor with the custom GPU resource class: To determine the name of a custom resource class that corresponds to a resource class of a Bare Metal service node, convert the resource class to uppercase, replace all punctuation with an underscore, and prefix with CUSTOM_ . Note A flavor can request only one instance of a bare metal resource class. Set the following flavor properties to prevent the Compute scheduler from using the bare metal flavor properties to schedule instances: To verify that the role was created, enter the following command: 14.2.3. Configuring the Compute node for vGPU and deploying the overcloud You need to retrieve and assign the vGPU type that corresponds to the physical GPU device in your environment, and prepare the environment files to configure the Compute node for vGPU. Procedure Install Red Hat Enterprise Linux and the NVIDIA GRID driver on a temporary Compute node and launch the node. On the Compute node, locate the vGPU type of the physical GPU device that you want to enable. For libvirt, virtual GPUs are mediated devices, or mdev type devices. To discover the supported mdev devices, enter the following command: Register the Net::SoftwareConfig of the ComputeGpu role in your network-environment.yaml file: Add the following parameters to the node-info.yaml file to specify the number of GPU Compute nodes, and the flavor to use for the GPU-designated Compute nodes: Create a gpu.yaml file to specify the vGPU type of your GPU device: Note Each physical GPU supports only one virtual GPU type. If you specify multiple vGPU types in this property, only the first type is used. Save the updates to your Compute environment file. Add your new role and environment files to the stack with your other environment files and deploy the overcloud: 14.3. Creating a custom GPU instance image To enable your cloud users to create instances that use a virtual GPU (vGPU), you can create a custom vGPU-enabled image for launching instances. Use the following procedure to create a custom vGPU-enabled instance image with the NVIDIA GRID guest driver and license file. Prerequisites You have configured and deployed the overcloud with GPU-enabled Compute nodes. Procedure Log in to the undercloud as the stack user. Source the overcloudrc credential file: Create an instance with the hardware and software profile that your vGPU instances require: Replace <flavor> with the name or ID of the flavor that has the hardware profile that your vGPU instances require. For information about creating a vGPU flavor, see Creating a vGPU flavor for instances . Replace <image> with the name or ID of the image that has the software profile that your vGPU instances require. For information about downloading RHEL cloud images, see Image service . Log in to the instance as a cloud-user. Create the gridd.conf NVIDIA GRID license file on the instance, following the NVIDIA guidance: Licensing an NVIDIA vGPU on Linux by Using a Configuration File . Install the GPU driver on the instance. For more information about installing an NVIDIA driver, see Installing the NVIDIA vGPU Software Graphics Driver on Linux . Note Use the hw_video_model image property to define the GPU driver type. You can choose none if you want to disable the emulated GPUs for your vGPU instances. For more information about supported drivers, see Image metadata . Create an image snapshot of the instance: Optional: Delete the instance. 14.4. 
Creating a vGPU flavor for instances To enable your cloud users to create instances for GPU workloads, you can create a GPU flavor that can be used to launch vGPU instances, and assign the vGPU resource to that flavor. Prerequisites You have configured and deployed the overcloud with GPU-designated Compute nodes. Procedure Create an NVIDIA GPU flavor, for example: Assign a vGPU resource to the flavor that you created. You can assign only one vGPU for each instance. 14.5. Launching a vGPU instance You can create a GPU-enabled instance for GPU workloads. Procedure Create an instance using a GPU flavor and image, for example: Log in to the instance as a cloud-user. To verify that the GPU is accessible from the instance, enter the following command from the instance: 14.6. Enabling PCI passthrough for a GPU device You can use PCI passthrough to attach a physical PCI device, such as a graphics card, to an instance. If you use PCI passthrough for a device, the instance reserves exclusive access to the device for performing tasks, and the device is not available to the host. Prerequisites The pciutils package is installed on the physical servers that have the PCI cards. The driver for the GPU device must be installed on the instance that the device is passed through to. Therefore, you need to have created a custom instance image that has the required GPU driver installed. For more information about how to create a custom instance image with the GPU driver installed, see Creating a custom GPU instance image . Procedure To determine the vendor ID and product ID for each passthrough device type, enter the following command on the physical server that has the PCI cards: For example, to determine the vendor and product ID for an NVIDIA GPU, enter the following command: To determine if each PCI device has Single Root I/O Virtualization (SR-IOV) capabilities, enter the following command on the physical server that has the PCI cards: To configure the Controller node on the overcloud for PCI passthrough, create an environment file, for example, pci_passthru_controller.yaml . Add PciPassthroughFilter to the NovaSchedulerDefaultFilters parameter in pci_passthru_controller.yaml : To specify the PCI alias for the devices on the Controller node, add the following configuration to pci_passthru_controller.yaml : If the PCI device has SR-IOV capabilities: If the PCI device does not have SR-IOV capabilities: For more information on configuring the device_type field, see PCI passthrough device type field . Note If the nova-api service is running in a role other than the Controller, then replace ControllerExtraConfig with the user role, in the format <Role>ExtraConfig . To configure the Compute node on the overcloud for PCI passthrough, create an environment file, for example, pci_passthru_compute.yaml . To specify the available PCIs for the devices on the Compute node, add the following to pci_passthru_compute.yaml : You must create a copy of the PCI alias on the Compute node for instance migration and resize operations. To specify the PCI alias for the devices on the Compute node, add the following to pci_passthru_compute.yaml : If the PCI device has SR-IOV capabilities: If the PCI device does not have SR-IOV capabilities: Note The Compute node aliases must be identical to the aliases on the Controller node. 
To enable IOMMU in the server BIOS of the Compute nodes to support PCI passthrough, add the KernelArgs parameter to pci_passthru_compute.yaml : Note When you first add the KernelArgs parameter to the configuration of a role, the overcloud nodes are automatically rebooted. If required, you can disable the automatic rebooting of nodes and instead perform node reboots manually after each overcloud deployment. For more information, see Configuring manual node reboot to define KernelArgs . Add your custom environment files to the stack with your other environment files and deploy the overcloud: Configure a flavor to request the PCI devices. The following example requests two devices, each with a vendor ID of 10de and a product ID of 13f2 : Verification Create an instance with a PCI passthrough device: Replace <custom_gpu> with the name of your custom instance image that has the required GPU driver installed. Log in to the instance as a cloud user. To verify that the GPU is accessible from the instance, enter the following command from the instance: To check the NVIDIA System Management Interface status, enter the following command from the instance: Example output:
[ "virsh dumpxml <instance_name> | grep mdev", "[stack@director ~]USD source ~/stackrc", "(undercloud)USD openstack overcloud roles generate -o /home/stack/templates/roles_data_gpu.yaml Compute:ComputeGpu Compute Controller", "(undercloud)USD openstack overcloud node introspect --all-manageable --provide", "(undercloud)USD openstack flavor create --id auto --ram <ram_size_mb> --disk <disk_size_gb> --vcpus <no_vcpus> compute-vgpu-nvidia", "(undercloud)USD openstack baremetal node set --resource-class baremetal.GPU <node>", "(undercloud)USD openstack flavor set --property resources:CUSTOM_BAREMETAL_GPU=1 compute-vgpu-nvidia", "(undercloud)USD openstack flavor set --property resources:VCPU=0 --property resources:MEMORY_MB=0 --property resources:DISK_GB=0 compute-vgpu-nvidia", "(undercloud)USD openstack overcloud profiles list", "ls /sys/class/mdev_bus/0000\\:06\\:00.0/mdev_supported_types/ nvidia-11 nvidia-12 nvidia-13 nvidia-14 nvidia-15 nvidia-16 nvidia-17 nvidia-18 nvidia-19 nvidia-20 nvidia-21 nvidia-210 nvidia-22 cat /sys/class/mdev_bus/0000\\:06\\:00.0/mdev_supported_types/nvidia-18/description num_heads=4, frl_config=60, framebuffer=2048M, max_resolution=4096x2160, max_instance=4", "resource_registry: OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute.yaml OS::TripleO::ComputeGpu::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute-gpu.yaml OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/nic-configs/controller.yaml", "parameter_defaults: OvercloudControllerFlavor: control OvercloudComputeFlavor: compute OvercloudComputeGpuFlavor: compute-vgpu-nvidia ControllerCount: 1 ComputeCount: 0 ComputeGpuCount: 1", "parameter_defaults: ComputeGpuExtraConfig: nova::compute::vgpu::enabled_vgpu_types: - nvidia-18", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -r /home/stack/templates/roles_data_gpu.yaml -e /home/stack/templates/network-environment.yaml -e /home/stack/templates/gpu.yaml -e /home/stack/templates/node-info.yaml", "source ~/overcloudrc", "(overcloud)USD openstack server create --flavor <flavor> --image <image> temp_vgpu_instance", "(overcloud)USD openstack server image create --name vgpu_image temp_vgpu_instance", "(overcloud)USD openstack flavor create --vcpus 6 --ram 8192 --disk 100 m1.small-gpu +----------------------------+--------------------------------------+ | Field | Value | +----------------------------+--------------------------------------+ | OS-FLV-DISABLED:disabled | False | | OS-FLV-EXT-DATA:ephemeral | 0 | | disk | 100 | | id | a27b14dd-c42d-4084-9b6a-225555876f68 | | name | m1.small-gpu | | os-flavor-access:is_public | True | | properties | | | ram | 8192 | | rxtx_factor | 1.0 | | swap | | | vcpus | 6 | +----------------------------+--------------------------------------+", "(overcloud)USD openstack flavor set m1.small-gpu --property \"resources:VGPU=1\" (overcloud)USD openstack flavor show m1.small-gpu +----------------------------+--------------------------------------+ | Field | Value | +----------------------------+--------------------------------------+ | OS-FLV-DISABLED:disabled | False | | OS-FLV-EXT-DATA:ephemeral | 0 | | access_project_ids | None | | disk | 100 | | id | a27b14dd-c42d-4084-9b6a-225555876f68 | | name | m1.small-gpu | | os-flavor-access:is_public | True | | properties | resources:VGPU='1' | | ram | 8192 | | rxtx_factor | 1.0 | | swap | | | vcpus | 6 | +----------------------------+--------------------------------------+", "(overcloud)USD 
openstack server create --flavor m1.small-gpu --image vgpu_image --security-group web --nic net-id=internal0 --key-name lambda vgpu-instance", "lspci -nn | grep <gpu_name>", "lspci -nn | grep -i <gpu_name>", "lspci -nn | grep -i nvidia 3b:00.0 3D controller [0302]: NVIDIA Corporation TU104GL [Tesla T4] [10de:1eb8] (rev a1) d8:00.0 3D controller [0302]: NVIDIA Corporation TU104GL [Tesla T4] [10de:1db4] (rev a1)", "lspci -v -s 3b:00.0 3b:00.0 3D controller: NVIDIA Corporation TU104GL [Tesla T4] (rev a1) Capabilities: [bcc] Single Root I/O Virtualization (SR-IOV)", "parameter_defaults: NovaSchedulerDefaultFilters: ['AvailabilityZoneFilter','ComputeFilter','ComputeCapabilitiesFilter','ImagePropertiesFilter','ServerGroupAntiAffinityFilter','ServerGroupAffinityFilter','PciPassthroughFilter','NUMATopologyFilter']", "ControllerExtraConfig: nova::pci::aliases: - name: \"t4\" product_id: \"1eb8\" vendor_id: \"10de\" device_type: \"type-PF\" - name: \"v100\" product_id: \"1db4\" vendor_id: \"10de\" device_type: \"type-PF\"", "ControllerExtraConfig: nova::pci::aliases: - name: \"t4\" product_id: \"1eb8\" vendor_id: \"10de\" - name: \"v100\" product_id: \"1db4\" vendor_id: \"10de\"", "parameter_defaults: NovaPCIPassthrough: - vendor_id: \"10de\" product_id: \"1eb8\"", "ComputeExtraConfig: nova::pci::aliases: - name: \"t4\" product_id: \"1eb8\" vendor_id: \"10de\" device_type: \"type-PF\" - name: \"v100\" product_id: \"1db4\" vendor_id: \"10de\" device_type: \"type-PF\"", "ComputeExtraConfig: nova::pci::aliases: - name: \"t4\" product_id: \"1eb8\" vendor_id: \"10de\" - name: \"v100\" product_id: \"1db4\" vendor_id: \"10de\"", "parameter_defaults: ComputeParameters: KernelArgs: \"intel_iommu=on iommu=pt\"", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/pci_passthru_controller.yaml -e /home/stack/templates/pci_passthru_compute.yaml", "openstack flavor set m1.large --property \"pci_passthrough:alias\"=\"t4:2\"", "openstack server create --flavor m1.large --image <custom_gpu> --wait test-pci", "lspci -nn | grep <gpu_name>", "nvidia-smi", "----------------------------------------------------------------------------- | NVIDIA-SMI 440.33.01 Driver Version: 440.33.01 CUDA Version: 10.2 | |------------------------------- ---------------------- ----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |=============================== ====================== ======================| | 0 Tesla T4 Off | 00000000:01:00.0 Off | 0 | | N/A 43C P0 20W / 70W | 0MiB / 15109MiB | 0% Default | ------------------------------- ---------------------- ---------------------- ----------------------------------------------------------------------------- | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | No running processes found | -----------------------------------------------------------------------------" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/configuring_the_compute_service_for_instance_creation/assembly_configuring-virtual-gpus-for-instances_vgpu
Chapter 3. Installer-provisioned infrastructure
Chapter 3. Installer-provisioned infrastructure 3.1. Preparing to install a cluster on Azure Stack Hub You prepare to install an OpenShift Container Platform cluster on Azure Stack Hub by completing the following steps: Verifying internet connectivity for your cluster. Configuring an Azure Stack Hub account . Generating an SSH key pair. You can use this key pair to authenticate into the OpenShift Container Platform cluster's nodes after it is deployed. Downloading the installation program. Installing the OpenShift CLI ( oc ). The Cloud Credential Operator (CCO) only supports your cloud provider in manual mode. As a result, you must manually manage cloud credentials by specifying the identity and access management (IAM) secrets for your cloud provider. 3.1.1. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.17, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 3.1.2. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
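If you are reusing an existing key pair, a quick way to confirm which public keys are already present in that directory is to list them; this check is optional and the file names it returns depend on how your keys were created: USD ls ~/.ssh/*.pub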
Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.1.3. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 
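After you extract the installation program, you can optionally confirm that the binary runs on your host before you continue; this is a quick sanity check rather than a required installation step: USD ./openshift-install version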
Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 3.1.4. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.17. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.17 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.17 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.17 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.17 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 3.1.5. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.17, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 3.2. 
Installing a cluster on Azure Stack Hub with customizations In OpenShift Container Platform version 4.17, you can install a cluster on Microsoft Azure Stack Hub with an installer-provisioned infrastructure. However, you must manually configure the install-config.yaml file to specify values that are specific to Azure Stack Hub. Note While you can select azure when using the installation program to deploy a cluster using installer-provisioned infrastructure, this option is only supported for the Azure Public Cloud. 3.2.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure Stack Hub account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You verified that you have approximately 16 GB of local disk space. Installing the cluster requires that you download the RHCOS virtual hard drive (VHD) cluster image and upload it to your Azure Stack Hub environment so that it is accessible during deployment. Decompressing the VHD files requires this amount of local disk space. 3.2.2. Uploading the RHCOS cluster image You must download the RHCOS virtual hard disk (VHD) cluster image and upload it to your Azure Stack Hub environment so that it is accessible during deployment. Prerequisites Generate the Ignition config files for your cluster. Procedure Obtain the RHCOS VHD cluster image: Export the URL of the RHCOS VHD to an environment variable. USD export COMPRESSED_VHD_URL=USD(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats."vhd.gz".disk.location') Download the compressed RHCOS VHD file locally. USD curl -O -L USD{COMPRESSED_VHD_URL} Decompress the VHD file. Note The decompressed VHD file is approximately 16 GB, so be sure that your host system has 16 GB of free space available. The VHD file can be deleted once you have uploaded it. Upload the local VHD to the Azure Stack Hub environment, making sure that the blob is publicly available. For example, you can upload the VHD to a blob using the az cli or the web portal. 3.2.3. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . 
Make the following modifications: Specify the required installation parameters. Update the platform.azure section to specify the parameters that are specific to Azure Stack Hub. Optional: Update one or more of the default configuration parameters to customize the installation. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for Azure Stack Hub 3.2.3.1. Sample customized install-config.yaml file for Azure Stack Hub You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Manual controlPlane: 2 3 name: master platform: azure: osDisk: diskSizeGB: 1024 4 diskType: premium_LRS replicas: 3 compute: 5 - name: worker platform: azure: osDisk: diskSizeGB: 512 6 diskType: premium_LRS replicas: 3 metadata: name: test-cluster 7 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: azure: armEndpoint: azurestack_arm_endpoint 10 11 baseDomainResourceGroupName: resource_group 12 13 region: azure_stack_local_region 14 15 resourceGroupName: existing_resource_group 16 outboundType: Loadbalancer cloudName: AzureStackCloud 17 clusterOSimage: https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd 18 19 pullSecret: '{"auths": ...}' 20 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- 1 7 10 12 14 17 18 20 Required. 2 5 If you do not provide these parameters and values, the installation program provides the default value. 3 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 4 6 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 8 The name of the cluster. 9 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 11 The Azure Resource Manager endpoint that your Azure Stack Hub operator provides. 13 The name of the resource group that contains the DNS zone for your base domain. 15 The name of your Azure Stack Hub local region. 16 The name of an existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 19 The URL of a storage blob in the Azure Stack environment that contains an RHCOS VHD. 21 The pull secret required to authenticate your cluster. 22 Whether to enable or disable FIPS mode. 
By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 23 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 24 If the Azure Stack Hub environment is using an internal Certificate Authority (CA), adding the CA certificate is required. 3.2.4. Manually manage cloud credentials The Cloud Credential Operator (CCO) only supports your cloud provider in manual mode. As a result, you must specify the identity and access management (IAM) secrets for your cloud provider. Procedure If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... secretRef: name: <component_secret> namespace: <component_namespace> ... 
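The data values in the Secret object that follows must be base64-encoded. As a minimal sketch, you can encode each credential value on the command line before adding it to the secret manifest; the value shown here is a placeholder, not a real credential: USD echo -n '<azure_subscription_id>' | base64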
Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. Additional resources Updating cloud provider resources with manually maintained credentials 3.2.5. Configuring the cluster to use an internal CA If the Azure Stack Hub environment is using an internal Certificate Authority (CA), update the cluster-proxy-01-config.yaml file to configure the cluster to use the internal CA. Prerequisites Create the install-config.yaml file and specify the certificate trust bundle in .pem format. Create the cluster manifests. Procedure From the directory in which the installation program creates files, go to the manifests directory. Add user-ca-bundle to the spec.trustedCA.name field. Example cluster-proxy-01-config.yaml file apiVersion: config.openshift.io/v1 kind: Proxy metadata: creationTimestamp: null name: cluster spec: trustedCA: name: user-ca-bundle status: {} Optional: Back up the manifests/ cluster-proxy-01-config.yaml file. The installation program consumes the manifests/ directory when you deploy the cluster. 3.2.6. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.2.7. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.2.8. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources Accessing the web console 3.2.9. steps Validating an installation Customize your cluster Optional: Opt out of remote health reporting Optional: Remove cloud provider credentials 3.3. 
Installing a cluster on Azure Stack Hub with network customizations In OpenShift Container Platform version 4.17, you can install a cluster with a customized network configuration on infrastructure that the installation program provisions on Azure Stack Hub. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. Note While you can select azure when using the installation program to deploy a cluster using installer-provisioned infrastructure, this option is only supported for the Azure Public Cloud. 3.3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure Stack Hub account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You verified that you have approximately 16 GB of local disk space. Installing the cluster requires that you download the RHCOS virtual hard drive (VHD) cluster image and upload it to your Azure Stack Hub environment so that it is accessible during deployment. Decompressing the VHD files requires this amount of local disk space. 3.3.2. Uploading the RHCOS cluster image You must download the RHCOS virtual hard disk (VHD) cluster image and upload it to your Azure Stack Hub environment so that it is accessible during deployment. Prerequisites Generate the Ignition config files for your cluster. Procedure Obtain the RHCOS VHD cluster image: Export the URL of the RHCOS VHD to an environment variable. USD export COMPRESSED_VHD_URL=USD(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats."vhd.gz".disk.location') Download the compressed RHCOS VHD file locally. USD curl -O -L USD{COMPRESSED_VHD_URL} Decompress the VHD file. Note The decompressed VHD file is approximately 16 GB, so be sure that your host system has 16 GB of free space available. The VHD file can be deleted once you have uploaded it. Upload the local VHD to the Azure Stack Hub environment, making sure that the blob is publicly available. For example, you can upload the VHD to a blob using the az cli or the web portal. 3.3.3. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . 
Note You must name this configuration file install-config.yaml . Make the following modifications: Specify the required installation parameters. Update the platform.azure section to specify the parameters that are specific to Azure Stack Hub. Optional: Update one or more of the default configuration parameters to customize the installation. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for Azure Stack Hub 3.3.3.1. Sample customized install-config.yaml file for Azure Stack Hub You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Manual controlPlane: 2 3 name: master platform: azure: osDisk: diskSizeGB: 1024 4 diskType: premium_LRS replicas: 3 compute: 5 - name: worker platform: azure: osDisk: diskSizeGB: 512 6 diskType: premium_LRS replicas: 3 metadata: name: test-cluster 7 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: azure: armEndpoint: azurestack_arm_endpoint 10 11 baseDomainResourceGroupName: resource_group 12 13 region: azure_stack_local_region 14 15 resourceGroupName: existing_resource_group 16 outboundType: Loadbalancer cloudName: AzureStackCloud 17 clusterOSimage: https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd 18 19 pullSecret: '{"auths": ...}' 20 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- 1 7 10 12 14 17 18 20 Required. 2 5 If you do not provide these parameters and values, the installation program provides the default value. 3 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 4 6 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 8 The name of the cluster. 9 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 11 The Azure Resource Manager endpoint that your Azure Stack Hub operator provides. 13 The name of the resource group that contains the DNS zone for your base domain. 15 The name of your Azure Stack Hub local region. 16 The name of an existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 19 The URL of a storage blob in the Azure Stack environment that contains an RHCOS VHD. 
21 The pull secret required to authenticate your cluster. 22 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 23 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 24 If the Azure Stack Hub environment is using an internal Certificate Authority (CA), adding the CA certificate is required. 3.3.4. Manually manage cloud credentials The Cloud Credential Operator (CCO) only supports your cloud provider in manual mode. As a result, you must specify the identity and access management (IAM) secrets for your cloud provider. Procedure If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... 
secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. Additional resources Updating a cluster using the web console Updating a cluster using the CLI 3.3.5. Configuring the cluster to use an internal CA If the Azure Stack Hub environment is using an internal Certificate Authority (CA), update the cluster-proxy-01-config.yaml file to configure the cluster to use the internal CA. Prerequisites Create the install-config.yaml file and specify the certificate trust bundle in .pem format. Create the cluster manifests. Procedure From the directory in which the installation program creates files, go to the manifests directory. Add user-ca-bundle to the spec.trustedCA.name field. Example cluster-proxy-01-config.yaml file apiVersion: config.openshift.io/v1 kind: Proxy metadata: creationTimestamp: null name: cluster spec: trustedCA: name: user-ca-bundle status: {} Optional: Back up the manifests/ cluster-proxy-01-config.yaml file. The installation program consumes the manifests/ directory when you deploy the cluster. 3.3.6. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information, see "Installation configuration parameters". Note Set the networking.machineNetwork to match the Classless Inter-Domain Routing (CIDR) where the preferred subnet is located. Important The CIDR range 172.17.0.0/16 is reserved by libVirt . You cannot use any other CIDR range that overlaps with the 172.17.0.0/16 CIDR range for networks in your cluster. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration. During phase 2, you cannot override the values that you specified in phase 1 in the install-config.yaml file. However, you can customize the network plugin during phase 2. 3.3.7. Specifying advanced network configuration You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. 
Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following example: Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. Remove the Kubernetes manifest files that define the control plane machines and compute MachineSets : USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the MachineSet files to create compute machines by using the machine API, but you must update references to them to match your environment. 3.3.8. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 3.3.8.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 3.1. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OVN-Kubernetes network plugin supports only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. 
If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 3.2. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. OpenShift SDN is no longer available as an installation choice for new clusters. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 3.3. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 3.4. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . 
internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 3.5. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd97::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is fd97::/64 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 3.6. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 3.7. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. 
For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . Note The default value of Restricted sets the IP forwarding to drop. ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 3.8. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Important For OpenShift Container Platform 4.17 and later versions, clusters use 169.254.0.0/17 as the default masquerade subnet. For upgraded clusters, there is no change to the default masquerade subnet. Table 3.9. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Important For OpenShift Container Platform 4.17 and later versions, clusters use fd69::/112 as the default masquerade subnet. For upgraded clusters, there is no change to the default masquerade subnet. Table 3.10. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full 3.3.9. Configuring hybrid networking with OVN-Kubernetes You can configure your cluster to use hybrid networking with the OVN-Kubernetes network plugin. This allows a hybrid cluster that supports different node networking configurations. Note This configuration is necessary to run both Linux and Windows nodes in the same cluster. Prerequisites You defined OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> where: <installation_directory> Specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: USD cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF where: <installation_directory> Specifies the directory name that contains the manifests/ directory for your cluster. 
Open the cluster-network-03-config.yml file in an editor and configure OVN-Kubernetes with hybrid networking, as in the following example: Specify a hybrid networking configuration apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2 1 Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR must not overlap with the clusterNetwork CIDR. 2 Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default 4789 port. For more information on this requirement, see the Microsoft documentation on Pod-to-pod connectivity between hosts is broken . Note Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom hybridOverlayVXLANPort value because this Windows server version does not support selecting a custom VXLAN port. Save the cluster-network-03-config.yml file and quit the text editor. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster. Note For more information about using Linux and Windows nodes in the same cluster, see Understanding Windows container workloads . 3.3.10. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.3.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.3.12. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources Accessing the web console 3.3.13. steps Validating an installation Customize your cluster Optional: Opt out of remote health reporting Optional: Remove cloud provider credentials
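As a final sanity check after logging in, you can confirm that the nodes and cluster Operators are healthy before moving on to post-installation tasks. The commands below are standard OpenShift CLI calls rather than anything specific to Azure Stack Hub, so treat them as an optional illustrative check:

# List nodes; every node should report a Ready status
oc get nodes

# List cluster Operators; each should show AVAILABLE True and DEGRADED False
oc get clusteroperators

If an Operator remains Progressing or Degraded for an extended period, inspect it with oc describe clusteroperator <name> before proceeding.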
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export COMPRESSED_VHD_URL=USD(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats.\"vhd.gz\".disk.location')", "curl -O -L USD{COMPRESSED_VHD_URL}", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Manual controlPlane: 2 3 name: master platform: azure: osDisk: diskSizeGB: 1024 4 diskType: premium_LRS replicas: 3 compute: 5 - name: worker platform: azure: osDisk: diskSizeGB: 512 6 diskType: premium_LRS replicas: 3 metadata: name: test-cluster 7 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: azure: armEndpoint: azurestack_arm_endpoint 10 11 baseDomainResourceGroupName: resource_group 12 13 region: azure_stack_local_region 14 15 resourceGroupName: existing_resource_group 16 outboundType: Loadbalancer cloudName: AzureStackCloud 17 clusterOSimage: https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd 18 19 pullSecret: '{\"auths\": ...}' 20 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: creationTimestamp: null name: cluster spec: trustedCA: name: user-ca-bundle status: {}", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None", "export COMPRESSED_VHD_URL=USD(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats.\"vhd.gz\".disk.location')", "curl -O -L USD{COMPRESSED_VHD_URL}", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Manual controlPlane: 2 3 name: master platform: azure: osDisk: diskSizeGB: 1024 4 diskType: premium_LRS replicas: 3 compute: 5 - name: worker platform: azure: osDisk: diskSizeGB: 512 6 diskType: premium_LRS replicas: 3 metadata: name: test-cluster 7 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: azure: armEndpoint: azurestack_arm_endpoint 10 11 baseDomainResourceGroupName: resource_group 12 13 region: azure_stack_local_region 14 15 resourceGroupName: existing_resource_group 16 outboundType: Loadbalancer cloudName: AzureStackCloud 17 clusterOSimage: https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd 18 19 pullSecret: '{\"auths\": ...}' 20 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 
23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: creationTimestamp: null name: cluster spec: trustedCA: name: user-ca-bundle status: {}", "./openshift-install create manifests --dir <installation_directory> 1", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full", "rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full", "./openshift-install create manifests --dir <installation_directory>", "cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_azure_stack_hub/installer-provisioned-infrastructure
Standalone CLIs
Standalone CLIs Red Hat Trusted Application Pipeline 1.4 Explore the standalone CLIs you can use with Red Hat Trusted Application Pipeline. Red Hat Trusted Application Pipeline Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.4/html-single/standalone_clis/index
Integrating Microsoft Azure data into cost management
Integrating Microsoft Azure data into cost management Cost Management Service 1-latest Learn how to add your Microsoft Azure integration and RHEL metering Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/integrating_microsoft_azure_data_into_cost_management/index
Chapter 1. Preparing to install on IBM Power
Chapter 1. Preparing to install on IBM Power 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . 1.2. Choosing a method to install OpenShift Container Platform on IBM Power You can install a cluster on IBM Power(R) infrastructure that you provision, by using one of the following methods: Installing a cluster on IBM Power(R) : You can install OpenShift Container Platform on IBM Power(R) infrastructure that you provision. Installing a cluster on IBM Power(R) in a restricted network : You can install OpenShift Container Platform on IBM Power(R) infrastructure that you provision in a restricted or disconnected network, by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_ibm_power/preparing-to-install-on-ibm-power
Chapter 19. Backing up and restoring Data Grid clusters
Chapter 19. Backing up and restoring Data Grid clusters Data Grid Operator lets you back up and restore Data Grid cluster state for disaster recovery and to migrate Data Grid resources between clusters. 19.1. Backup and Restore CRs Backup and Restore CRs save in-memory data at runtime so you can easily recreate Data Grid clusters. Applying a Backup or Restore CR creates a new pod that joins the Data Grid cluster as a zero-capacity member, which means it does not require cluster rebalancing or state transfer to join. For backup operations, the pod iterates over cache entries and other resources and creates an archive, a .zip file, in the /opt/infinispan/backups directory on the persistent volume (PV). Note Performing backups does not significantly impact performance because the other pods in the Data Grid cluster only need to respond to the backup pod as it iterates over cache entries. For restore operations, the pod retrieves Data Grid resources from the archive on the PV and applies them to the Data Grid cluster. When either the backup or restore operation completes, the pod leaves the cluster and is terminated. Reconciliation Data Grid Operator does not reconcile Backup and Restore CRs, which means that backup and restore operations are "one-time" events. Modifying an existing Backup or Restore CR instance does not perform an operation or have any effect. If you want to update .spec fields, you must create a new instance of the Backup or Restore CR. 19.2. Backing up Data Grid clusters Create a backup file that stores Data Grid cluster state to a persistent volume. Prerequisites Create an Infinispan CR with spec.service.type: DataGrid . Ensure there are no active client connections to the Data Grid cluster. Data Grid backups do not provide snapshot isolation and data modifications are not written to the archive after the cache is backed up. To archive the exact state of the cluster, you should always disconnect any clients before you back it up. Procedure Name the Backup CR with the metadata.name field. Specify the Data Grid cluster to back up with the spec.cluster field. Configure the persistent volume claim (PVC) that adds the backup archive to the persistent volume (PV) with the spec.volume.storage and spec.volume.storage.storageClassName fields. Optionally include spec.resources fields to specify which Data Grid resources you want to back up. If you do not include any spec.resources fields, the Backup CR creates an archive that contains all Data Grid resources. If you do specify spec.resources fields, the Backup CR creates an archive that contains those resources only. You can also use the * wildcard character as in the following example: Apply your Backup CR. Verification Check that the status.phase field has a status of Succeeded in the Backup CR and that Data Grid logs have the following message: Run the following command to check that the backup is successfully created: 19.3. Restoring Data Grid clusters Restore Data Grid cluster state from a backup archive. Prerequisites Create a Backup CR on a source cluster. Create a target Data Grid cluster of Data Grid service pods. Note If you restore an existing cache, the operation overwrites the data in the cache but not the cache configuration. For example, you back up a distributed cache named mycache on the source cluster. You then restore mycache on a target cluster where it already exists as a replicated cache.
In this case, the data from the source cluster is restored and mycache continues to have a replicated configuration on the target cluster. Ensure there are no active client connections to the target Data Grid cluster you want to restore. Cache entries that you restore from a backup can overwrite more recent cache entries. For example, a client performs a cache.put(k=2) operation and you then restore a backup that contains k=1 . Procedure Name the Restore CR with the metadata.name field. Specify a Backup CR to use with the spec.backup field. Specify the Data Grid cluster to restore with the spec.cluster field. Optionally add the spec.resources field to restore specific resources only. Apply your Restore CR. Verification Check that the status.phase field has a status of Succeeded in the Restore CR and that Data Grid logs have the following message: You should then open the Data Grid Console or establish a CLI connection to verify data and Data Grid resources are restored as expected. 19.4. Backup and restore status Backup and Restore CRs include a status.phase field that provides the status for each phase of the operation. Status Description Initializing The system has accepted the request and the controller is preparing the underlying resources to create the pod. Initialized The controller has prepared all underlying resources successfully. Running The pod is created and the operation is in progress on the Data Grid cluster. Succeeded The operation has completed successfully on the Data Grid cluster and the pod is terminated. Failed The operation did not successfully complete and the pod is terminated. Unknown The controller cannot obtain the status of the pod or determine the state of the operation. This condition typically indicates a temporary communication error with the pod. 19.4.1. Handling failed backup and restore operations If the status.phase field of the Backup or Restore CR is Failed , you should examine pod logs to determine the root cause before you attempt the operation again. Procedure Examine the logs for the pod that performed the failed operation. Pods are terminated but remain available until you delete the Backup or Restore CR. Resolve any error conditions or other causes of failure as indicated by the pod logs. Create a new instance of the Backup or Restore CR and attempt the operation again.
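To make the troubleshooting step more concrete, the following sketch shows one way to find the pod that ran a failed backup, review its logs, and recreate the CR. The namespace and file name are placeholders, and the exact pod name depends on what the Operator generated, so adjust them to match your environment:

# Find the pod created for the Backup CR; its name typically includes the CR name
oc get pods -n <namespace> | grep my-backup

# Review the pod logs to identify the root cause of the failure
oc logs <backup_pod_name> -n <namespace>

# After resolving the issue, remove the old CR and apply a new instance
oc delete Backup my-backup -n <namespace>
oc apply -f my-backup.yaml -n <namespace>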
[ "apiVersion: infinispan.org/v2alpha1 kind: Backup metadata: name: my-backup spec: cluster: source-cluster volume: storage: 1Gi storageClassName: my-storage-class", "spec: resources: templates: - distributed-sync-prod - distributed-sync-dev caches: - cache-one - cache-two counters: - counter-name protoSchemas: - authors.proto - books.proto tasks: - wordStream.js", "spec: resources: caches: - \"*\" protoSchemas: - \"*\"", "apply -f my-backup.yaml", "ISPN005044: Backup file created 'my-backup.zip'", "describe Backup my-backup", "apiVersion: infinispan.org/v2alpha1 kind: Restore metadata: name: my-restore spec: backup: my-backup cluster: target-cluster", "spec: resources: templates: - distributed-sync-prod - distributed-sync-dev caches: - cache-one - cache-two counters: - counter-name protoSchemas: - authors.proto - books.proto tasks: - wordStream.js", "apply -f my-restore.yaml", "ISPN005045: Restore 'my-backup' complete", "logs <backup|restore_pod_name>" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_operator_guide/backing-up-restoring
Chapter 3. Test environment
Chapter 3. Test environment The test environment is the platform where you run both the product undergoing certification and the certification tests. It must comply with the following requirements: Requirement Justification Red Hat Enterprise Linux (RHEL) must be installed on a certified platform (hardware, hypervisor, or cloud instance). Ensures that the underlying physical or virtual platform does not introduce issues that might impact testing. The test environment must not make any modifications to RHEL kernel and user packages beyond those identified as acceptable configuration changes in the RHEL documentation. Any non-Red Hat kernel modules are subject to further inspection. Changes to Red Hat components might impact supportability for our customers. RHEL must not contain components with critical or important vulnerabilities. Ensures that the product undergoing certification is compatible with the security updates that customers are expected to install in their environments. SELinux must be enabled and running in enforcing mode. Ensures that the product undergoing certification is compatible with the recommended security settings. Red Hat Insights must be installed and running. Ensures compatibility with the platform's solution for proactive risk management.
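Although this policy does not prescribe specific commands, a few standard RHEL utilities can be used to spot-check the requirements above before you start a test run; the snippet is an informal aid, not part of the certification tests:

# Confirm SELinux is enabled and running in enforcing mode
getenforce

# Confirm the Red Hat Insights client is installed and registered
insights-client --status

# A non-zero kernel taint value can indicate out-of-tree or unsigned kernel modules
cat /proc/sys/kernel/tainted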
null
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_enterprise_linux_software_certification_policy_guide/assembly_test-environment_container-requirements
Chapter 3. Managing role-based access controls (RBAC) using the Red Hat Developer Hub Web UI
Chapter 3. Managing role-based access controls (RBAC) using the Red Hat Developer Hub Web UI Policy administrators can use the Developer Hub web interface (Web UI) to allocate specific roles and permissions to individual users or groups. Allocating roles ensures that access to resources and functionalities is regulated across the Developer Hub. With the policy administrator role in Developer Hub, you can assign permissions to users and groups. This role allows you to view, create, modify, and delete the roles using Developer Hub Web UI. 3.1. Creating a role in the Red Hat Developer Hub Web UI You can create a role in the Red Hat Developer Hub using the Web UI. Prerequisites You have enabled RBAC, have a policy administrator role in Developer Hub, and have added plugins with permission . Procedure Go to Administration at the bottom of the sidebar in the Developer Hub. The RBAC tab appears, displaying all the created roles in the Developer Hub. (Optional) Click any role to view the role information on the OVERVIEW page. Click CREATE to create a role. Enter the name and description of the role in the given fields and click Next . Add users and groups using the search field, and click Next . Select Plugin and Permission from the drop-downs in the Add permission policies section. Select or clear the Policy that you want to set in the Add permission policies section, and click Next . Review the added information in the Review and create section. Click CREATE . Verification The created role appears in the list available in the RBAC tab. 3.2. Editing a role in the Red Hat Developer Hub Web UI You can edit a role in the Red Hat Developer Hub using the Web UI. Note The policies generated from a policy.csv or ConfigMap file cannot be edited or deleted using the Developer Hub Web UI. Prerequisites You have enabled RBAC, have a policy administrator role in Developer Hub, and have added plugins with permission . The role that you want to edit is created in the Developer Hub. Procedure Go to Administration at the bottom of the sidebar in the Developer Hub. The RBAC tab appears, displaying all the created roles in the Developer Hub. (Optional) Click any role to view the role information on the OVERVIEW page. Select the edit icon for the role that you want to edit. Edit the details of the role, such as name, description, users and groups, and permission policies, and click Next . Review the edited details of the role and click SAVE . After editing a role, you can view the edited details of a role on the OVERVIEW page of a role. You can also edit a role's users and groups or permissions by using the edit icon on the respective cards on the OVERVIEW page. 3.3. Deleting a role in the Red Hat Developer Hub Web UI You can delete a role in the Red Hat Developer Hub using the Web UI. Note The policies generated from a policy.csv or ConfigMap file cannot be edited or deleted using the Developer Hub Web UI. Prerequisites You have enabled RBAC and have a policy administrator role in Developer Hub . The role that you want to delete is created in the Developer Hub. Procedure Go to Administration at the bottom of the sidebar in the Developer Hub. The RBAC tab appears, displaying all the created roles in the Developer Hub. (Optional) Click any role to view the role information on the OVERVIEW page. Select the delete icon from the Actions column for the role that you want to delete. Delete this role? pop-up appears on the screen. Click DELETE .
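For context on the note about policy.csv , roles and permissions that are loaded from a file or ConfigMap are declared outside the Web UI and can only be changed at that source. The following snippet is purely illustrative — the role and user names are invented, and the available permission names depend on the plugins you have installed — but it shows the general shape of such a file:

p, role:default/catalog-reader, catalog-entity, read, allow
g, user:default/example-user, role:default/catalog-reader

Roles defined this way still appear in the RBAC tab, but the edit and delete actions described above do not apply to them.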
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/authorization/managing-authorizations-by-using-the-web-ui
Chapter 10. SELinux systemd Access Control
Chapter 10. SELinux systemd Access Control In Red Hat Enterprise Linux 7, system services are controlled by the systemd daemon. In previous releases of Red Hat Enterprise Linux, daemons could be started in two ways: At boot time, the System V init daemon launched an init.rc script and then this script launched the required daemon. For example, the Apache server, which was started at boot, got the following SELinux label: An administrator launched the init.rc script manually, causing the daemon to run. For example, when the service httpd restart command was invoked on the Apache server, the resulting SELinux label looked as follows: When launched manually, the process adopted the user portion of the SELinux label that started it, making the labeling in the two scenarios above inconsistent. With the systemd daemon, the transitions are very different. As systemd handles all the calls to start and stop daemons on the system, using the init_t type, it can override the user part of the label when a daemon is restarted manually. As a result, the labels in both scenarios above are system_u:system_r:httpd_t:s0 as expected and the SELinux policy could be improved to govern which domains are able to control which units. 10.1. SELinux Access Permissions for Services In previous versions of Red Hat Enterprise Linux, an administrator was able to control which users or applications were able to start or stop services based on the label of the System V Init script. Now, systemd starts and stops all services, and users and processes communicate with systemd using the systemctl utility. The systemd daemon has the ability to consult the SELinux policy and check the label of the calling process and the label of the unit file that the caller tries to manage, and then ask SELinux whether or not the caller is allowed the access. This approach strengthens access control to critical system capabilities, which include starting and stopping system services. For example, previously, administrators had to allow NetworkManager to execute systemctl to send a D-Bus message to systemd , which would in turn start or stop whatever service NetworkManager requested. In fact, NetworkManager was allowed to do everything systemctl could do. It was also impossible to set up confined administrators so that they could start or stop just particular services. To fix these issues, systemd also works as an SELinux Access Manager. It can retrieve the label of the process running systemctl or the process that sent a D-Bus message to systemd . The daemon then looks up the label of the unit file that the process wanted to configure. Finally, systemd can retrieve information from the kernel if the SELinux policy allows the specific access between the process label and the unit file label. This means a compromised application that needs to interact with systemd for a specific service can now be confined by SELinux. Policy writers can also use these fine-grained controls to confine administrators. Policy changes involve a new class called service , with the following permissions: For example, a policy writer can now allow a domain to get the status of a service or start and stop a service, but not enable or disable a service. Access control operations in SELinux and systemd do not match in all cases. A mapping was defined to line up systemd method calls with SELinux access checks.
Table 10.1, "Mapping of systemd unit file method calls on SELinux access checks" maps access checks on unit files while Table 10.2, "Mapping of systemd general system calls on SELinux access checks" covers access checks for the system in general. If no match is found in either table, then the undefined system check is called. Table 10.1. Mapping of systemd unit file method calls on SELinux access checks systemd unit file method SELinux access check DisableUnitFiles disable EnableUnitFiles enable GetUnit status GetUnitByPID status GetUnitFileState status Kill stop KillUnit stop LinkUnitFiles enable ListUnits status LoadUnit status MaskUnitFiles disable PresetUnitFiles enable ReenableUnitFiles enable Reexecute start Reload reload ReloadOrRestart start ReloadOrRestartUnit start ReloadOrTryRestart start ReloadOrTryRestartUnit start ReloadUnit reload ResetFailed stop ResetFailedUnit stop Restart start RestartUnit start Start start StartUnit start StartUnitReplace start Stop stop StopUnit stop TryRestart start TryRestartUnit start UnmaskUnitFiles enable Table 10.2. Mapping of systemd general system calls on SELinux access checks systemd general system call SELinux access check ClearJobs reboot FlushDevices halt Get status GetAll status GetJob status GetSeat status GetSession status GetSessionByPID status GetUser status Halt halt Introspect status KExec reboot KillSession halt KillUser halt ListJobs status ListSeats status ListSessions status ListUsers status LockSession halt PowerOff halt Reboot reboot SetUserLinger halt TerminateSeat halt TerminateSession halt TerminateUser halt Example 10.1. SELinux Policy for a System Service By using the sesearch utility, you can list policy rules for a system service. For example, calling the sesearch -A -s NetworkManager_t -c service command returns:
[ "system_u:system_r:httpd_t:s0", "unconfined_u:system_r:httpd_t:s0", "class service { start stop status reload kill load enable disable }", "allow NetworkManager_t dnsmasq_unit_file_t : service { start stop status reload kill load } ; allow NetworkManager_t nscd_unit_file_t : service { start stop status reload kill load } ; allow NetworkManager_t ntpd_unit_file_t : service { start stop status reload kill load } ; allow NetworkManager_t pppd_unit_file_t : service { start stop status reload kill load } ; allow NetworkManager_t polipo_unit_file_t : service { start stop status reload kill load } ;" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/chap-security-enhanced_linux-systemd_access_control
Part II. Part II: Setting up certificate services
Part II. Part II: Setting up certificate services Important For accountability purposes, direct modification of CS.cfg , server.xml or any configuration file post-installation is expressly prohibited in a certified environment. Such actions (direct modification of plain text configuration files) are allowed only during installation and the post-installation configuration that immediately follows installation, before the system goes live.
null
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide_common_criteria_edition/part_ii_setting_up_certificate_services
Chapter 5. Allocating additional resources to OpenShift AI users
Chapter 5. Allocating additional resources to OpenShift AI users As a cluster administrator, you can allocate additional resources to a cluster to support compute-intensive data science work. This support includes increasing the number of nodes in the cluster and changing the cluster's allocated machine pool. Prerequisites You have credentials for administering clusters in OpenShift Cluster Manager ( https://console.redhat.com/openshift/ ). For more information about configuring administrative access in OpenShift Cluster Manager, see Configuring access to clusters in OpenShift Cluster Manager . If you intend to increase the size of a machine pool by using accelerators, you have enabled GPU support in OpenShift AI. This includes installing the Node Feature Discovery operator and NVIDIA GPU Operators. For more information, see Installing the Node Feature Discovery operator and Enabling NVIDIA GPUs . You have an AWS or GCP instance with the capacity to create larger container sizes. For compute-intensive operations, your AWS or GCP instance has enough capacity to accommodate the largest container size, XL . Procedure Log in to OpenShift Cluster Manager ( https://console.redhat.com/openshift/ ). Click Clusters . The Clusters page opens. Click the name of the cluster you want to allocate additional resources to. Click Actions → Edit node count . Select a Machine pool from the list. Select the number of nodes assigned to the machine pool from the Node count list. Click Apply . Verification The additional resources that you allocated to the cluster appear on the Machine Pools tab.
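If the cluster is a Red Hat OpenShift Service on AWS (ROSA) cluster and you prefer to script this change instead of using the console, the ROSA CLI provides an equivalent operation. The cluster and machine pool names below are placeholders, and flag names can vary between CLI versions, so confirm them with rosa edit machinepool --help first:

# Scale an existing machine pool to three nodes
rosa edit machinepool --cluster=<cluster_name> --replicas=3 <machine_pool_name>

# Confirm the new node count
rosa list machinepools --cluster=<cluster_name>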
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/managing_openshift_ai/allocating-additional-resources-to-data-science-users_managing-rhoai
Chapter 1. Validating an installation
Chapter 1. Validating an installation You can check the status of an OpenShift Container Platform cluster after an installation by following the procedures in this document. 1.1. Reviewing the installation log You can review a summary of an installation in the OpenShift Container Platform installation log. If an installation succeeds, the information required to access the cluster is included in the log. Prerequisites You have access to the installation host. Procedure Review the .openshift_install.log log file in the installation directory on your installation host: USD cat <install_dir>/.openshift_install.log Example output Cluster credentials are included at the end of the log if the installation is successful, as outlined in the following example: ... time="2020-12-03T09:50:47Z" level=info msg="Install complete!" time="2020-12-03T09:50:47Z" level=info msg="To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'" time="2020-12-03T09:50:47Z" level=info msg="Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com" time="2020-12-03T09:50:47Z" level=info msg="Login to the console with user: \"kubeadmin\", and password: \"password\"" time="2020-12-03T09:50:47Z" level=debug msg="Time elapsed per stage:" time="2020-12-03T09:50:47Z" level=debug msg=" Infrastructure: 6m45s" time="2020-12-03T09:50:47Z" level=debug msg="Bootstrap Complete: 11m30s" time="2020-12-03T09:50:47Z" level=debug msg=" Bootstrap Destroy: 1m5s" time="2020-12-03T09:50:47Z" level=debug msg=" Cluster Operators: 17m31s" time="2020-12-03T09:50:47Z" level=info msg="Time elapsed: 37m26s" 1.2. Viewing the image pull source For clusters with unrestricted network connectivity, you can view the source of your pulled images by using a command on a node, such as crictl images . However, for disconnected installations, to view the source of pulled images, you must review the CRI-O logs to locate the Trying to access log entry, as shown in the following procedure. Other methods to view the image pull source, such as the crictl images command, show the non-mirrored image name, even though the image is pulled from the mirrored location. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Review the CRI-O logs for a master or worker node: USD oc adm node-logs <node_name> -u crio Example output The Trying to access log entry indicates where the image is being pulled from. ... Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1366]: time="2021-08-05 10:33:21.594930907Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-release:4.10.0-ppc64le" id=abcd713b-d0e1-4844-ac1c-474c5b60c07c name=/runtime.v1alpha2.ImageService/PullImage Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time="2021-03-17 02:52:50.194341109Z" level=info msg="Trying to access \"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\"" Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time="2021-03-17 02:52:50.226788351Z" level=info msg="Trying to access \"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\"" ... The log might show the image pull source twice, as shown in the preceding example. 
If your ImageContentSourcePolicy object lists multiple mirrors, OpenShift Container Platform attempts to pull the images in the order listed in the configuration, for example: 1.3. Getting cluster version, status, and update details You can view the cluster version and status by running the oc get clusterversion command. If the status shows that the installation is still progressing, you can review the status of the Operators for more information. You can also list the current update channel and review the available cluster updates. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Obtain the cluster version and overall status: USD oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.6.4 True False 6m25s Cluster version is 4.6.4 The example output indicates that the cluster has been installed successfully. If the cluster status indicates that the installation is still progressing, you can obtain more detailed progress information by checking the status of the Operators: USD oc get clusteroperators.config.openshift.io View a detailed summary of cluster specifications, update availability, and update history: USD oc describe clusterversion List the current update channel: USD oc get clusterversion -o jsonpath='{.items[0].spec}{"\n"}' Example output {"channel":"stable-4.6","clusterID":"245539c1-72a3-41aa-9cec-72ed8cf25c5c"} Review the available cluster updates: USD oc adm upgrade Example output Cluster version is 4.6.4 Updates: VERSION IMAGE 4.6.6 quay.io/openshift-release-dev/ocp-release@sha256:c7e8f18e8116356701bd23ae3a23fb9892dd5ea66c8300662ef30563d7104f39 Additional resources See Querying Operator status after installation for more information about querying Operator status if your installation is still progressing. See Troubleshooting Operator issues for information about investigating issues with Operators. See Updating a cluster using the web console for more information on updating your cluster. See Understanding update channels and releases for an overview about update release channels. 1.4. Verifying that a cluster uses short-term credentials You can verify that a cluster uses short-term security credentials for individual components by checking the Cloud Credential Operator (CCO) configuration and other values in the cluster. Prerequisites You deployed an OpenShift Container Platform cluster using the Cloud Credential Operator utility ( ccoctl ) to implement short-term credentials. You installed the OpenShift CLI ( oc ). You are logged in as a user with cluster-admin privileges. Procedure Verify that the CCO is configured to operate in manual mode by running the following command: USD oc get cloudcredentials cluster \ -o=jsonpath={.spec.credentialsMode} The following output confirms that the CCO is operating in manual mode: Example output Manual Verify that the cluster does not have root credentials by running the following command: USD oc get secrets \ -n kube-system <secret_name> where <secret_name> is the name of the root secret for your cloud provider. Platform Secret name Amazon Web Services (AWS) aws-creds Microsoft Azure azure-credentials Google Cloud Platform (GCP) gcp-credentials An error confirms that the root secret is not present on the cluster. 
Example output for an AWS cluster Error from server (NotFound): secrets "aws-creds" not found Verify that the components are using short-term security credentials for individual components by running the following command: USD oc get authentication cluster \ -o jsonpath \ --template='{ .spec.serviceAccountIssuer }' This command displays the value of the .spec.serviceAccountIssuer parameter in the cluster Authentication object. An output of a URL that is associated with your cloud provider indicates that the cluster is using manual mode with short-term credentials that are created and managed from outside of the cluster. Azure clusters: Verify that the components are assuming the Azure client ID that is specified in the secret manifests by running the following command: USD oc get secrets \ -n openshift-image-registry installer-cloud-credentials \ -o jsonpath='{.data}' An output that contains the azure_client_id and azure_federated_token_file fields confirms that the components are assuming the Azure client ID. Azure clusters: Verify that the pod identity webhook is running by running the following command: USD oc get pods \ -n openshift-cloud-credential-operator Example output NAME READY STATUS RESTARTS AGE cloud-credential-operator-59cf744f78-r8pbq 2/2 Running 2 71m pod-identity-webhook-548f977b4c-859lz 1/1 Running 1 70m 1.5. Querying the status of the cluster nodes by using the CLI You can verify the status of the cluster nodes after an installation. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure List the status of the cluster nodes. Verify that the output lists all of the expected control plane and compute nodes and that each node has a Ready status: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION compute-1.example.com Ready worker 33m v1.28.5 control-plane-1.example.com Ready master 41m v1.28.5 control-plane-2.example.com Ready master 45m v1.28.5 compute-2.example.com Ready worker 38m v1.28.5 compute-3.example.com Ready worker 33m v1.28.5 control-plane-3.example.com Ready master 41m v1.28.5 Review CPU and memory resource availability for each cluster node: USD oc adm top nodes Example output NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% compute-1.example.com 128m 8% 1132Mi 16% control-plane-1.example.com 801m 22% 3471Mi 23% control-plane-2.example.com 1718m 49% 6085Mi 40% compute-2.example.com 935m 62% 5178Mi 75% compute-3.example.com 111m 7% 1131Mi 16% control-plane-3.example.com 942m 26% 4100Mi 27% Additional resources See Verifying node health for more details about reviewing node health and investigating node issues. 1.6. Reviewing the cluster status from the OpenShift Container Platform web console You can review the following information in the Overview page in the OpenShift Container Platform web console: The general status of your cluster The status of the control plane, cluster Operators, and storage CPU, memory, file system, network transfer, and pod availability The API address of the cluster, the cluster ID, and the name of the provider Cluster version information Cluster update status, including details of the current update channel and available updates A cluster inventory detailing node, pod, storage class, and persistent volume claim (PVC) information A list of ongoing cluster activities and recent events Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the Administrator perspective, navigate to Home → Overview . 1.7.
Reviewing the cluster status from Red Hat OpenShift Cluster Manager From the OpenShift Container Platform web console, you can review detailed information about the status of your cluster on OpenShift Cluster Manager. Prerequisites You are logged in to OpenShift Cluster Manager . You have access to the cluster as a user with the cluster-admin role. Procedure Go to the Clusters list in OpenShift Cluster Manager and locate your OpenShift Container Platform cluster. Click the Overview tab for your cluster. Review the following information about your cluster: vCPU and memory availability and resource usage The cluster ID, status, type, region, and the provider name Node counts by node type Cluster version details, the creation date of the cluster, and the name of the cluster owner The life cycle support status of the cluster Subscription information, including the service level agreement (SLA) status, the subscription unit type, the production status of the cluster, the subscription obligation, and the service level Tip To view the history for your cluster, click the Cluster history tab. Navigate to the Monitoring page to review the following information: A list of any issues that have been detected A list of alerts that are firing The cluster Operator status and version The cluster's resource usage Optional: You can view information about your cluster that Red Hat Insights collects by navigating to the Overview menu. From this menu you can view the following information: Potential issues that your cluster might be exposed to, categorized by risk level Health-check status by category Additional resources See Using Insights to identify issues with your cluster for more information about reviewing potential issues with your cluster. 1.8. Checking cluster resource availability and utilization OpenShift Container Platform provides a comprehensive set of monitoring dashboards that help you understand the state of cluster components. In the Administrator perspective, you can access dashboards for core OpenShift Container Platform components, including: etcd Kubernetes compute resources Kubernetes network resources Prometheus Dashboards relating to cluster and node performance Figure 1.1. Example compute resources dashboard Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the Administrator perspective in the OpenShift Container Platform web console, navigate to Observe Dashboards . Choose a dashboard in the Dashboard list. Some dashboards, such as the etcd dashboard, produce additional sub-menus when selected. Optional: Select a time range for the graphs in the Time Range list. Select a pre-defined time period. Set a custom time range by selecting Custom time range in the Time Range list. Input or select the From and To dates and times. Click Save to save the custom time range. Optional: Select a Refresh Interval . Hover over each of the graphs within a dashboard to display detailed information about specific items. Additional resources See About OpenShift Container Platform monitoring for more information about the OpenShift Container Platform monitoring stack. 1.9. Listing alerts that are firing Alerts provide notifications when a set of defined conditions are true in an OpenShift Container Platform cluster. You can review the alerts that are firing in your cluster by using the Alerting UI in the OpenShift Container Platform web console. Prerequisites You have access to the cluster as a user with the cluster-admin role. 
Procedure In the Administrator perspective, navigate to the Observe → Alerting → Alerts page. Review the alerts that are firing, including their Severity , State , and Source . Select an alert to view more detailed information in the Alert Details page. Additional resources See Managing alerts as an Administrator for further details about alerting in OpenShift Container Platform. 1.10. Next steps See Troubleshooting installations if you experience issues when installing your cluster. After installing OpenShift Container Platform, you can further expand and customize your cluster .
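If you prefer to script these checks rather than step through the console, one possible approach — using only generic oc subcommands, with a timeout chosen arbitrarily for the example — is to wait for every cluster Operator to report Available and then repeat the usual spot checks:

# Block until all cluster Operators report Available=True, or fail after 20 minutes
oc wait clusteroperators --all --for=condition=Available=True --timeout=20m

# Re-run the basic verification commands
oc get clusterversion
oc get nodes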
[ "cat <install_dir>/.openshift_install.log", "time=\"2020-12-03T09:50:47Z\" level=info msg=\"Install complete!\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Login to the console with user: \\\"kubeadmin\\\", and password: \\\"password\\\"\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\"Time elapsed per stage:\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Infrastructure: 6m45s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\"Bootstrap Complete: 11m30s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Bootstrap Destroy: 1m5s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Cluster Operators: 17m31s\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Time elapsed: 37m26s\"", "oc adm node-logs <node_name> -u crio", "Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1366]: time=\"2021-08-05 10:33:21.594930907Z\" level=info msg=\"Pulling image: quay.io/openshift-release-dev/ocp-release:4.10.0-ppc64le\" id=abcd713b-d0e1-4844-ac1c-474c5b60c07c name=/runtime.v1alpha2.ImageService/PullImage Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time=\"2021-03-17 02:52:50.194341109Z\" level=info msg=\"Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"\" Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time=\"2021-03-17 02:52:50.226788351Z\" level=info msg=\"Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"\"", "Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\" Trying to access \\\"li0317gcp2.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.6.4 True False 6m25s Cluster version is 4.6.4", "oc get clusteroperators.config.openshift.io", "oc describe clusterversion", "oc get clusterversion -o jsonpath='{.items[0].spec}{\"\\n\"}'", "{\"channel\":\"stable-4.6\",\"clusterID\":\"245539c1-72a3-41aa-9cec-72ed8cf25c5c\"}", "oc adm upgrade", "Cluster version is 4.6.4 Updates: VERSION IMAGE 4.6.6 quay.io/openshift-release-dev/ocp-release@sha256:c7e8f18e8116356701bd23ae3a23fb9892dd5ea66c8300662ef30563d7104f39", "oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}", "Manual", "oc get secrets -n kube-system <secret_name>", "Error from server (NotFound): secrets \"aws-creds\" not found", "oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'", "oc get secrets -n openshift-image-registry installer-cloud-credentials -o jsonpath='{.data}'", "oc get pods -n openshift-cloud-credential-operator", "NAME READY STATUS RESTARTS AGE cloud-credential-operator-59cf744f78-r8pbq 2/2 Running 2 71m pod-identity-webhook-548f977b4c-859lz 1/1 Running 1 70m", "oc get nodes", "NAME STATUS ROLES AGE VERSION compute-1.example.com Ready worker 33m v1.28.5 control-plane-1.example.com Ready master 41m v1.28.5 
control-plane-2.example.com Ready master 45m v1.28.5 compute-2.example.com Ready worker 38m v1.28.5 compute-3.example.com Ready worker 33m v1.28.5 control-plane-3.example.com Ready master 41m v1.28.5", "oc adm top nodes", "NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% compute-1.example.com 128m 8% 1132Mi 16% control-plane-1.example.com 801m 22% 3471Mi 23% control-plane-2.example.com 1718m 49% 6085Mi 40% compute-2.example.com 935m 62% 5178Mi 75% compute-3.example.com 111m 7% 1131Mi 16% control-plane-3.example.com 942m 26% 4100Mi 27%" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/validation_and_troubleshooting/validating-an-installation
Chapter 3. Network verification for ROSA clusters
Chapter 3. Network verification for ROSA clusters Network verification checks run automatically when you deploy a Red Hat OpenShift Service on AWS (ROSA) cluster into an existing Virtual Private Cloud (VPC) or create an additional machine pool with a subnet that is new to your cluster. The checks validate your network configuration and highlight errors, enabling you to resolve configuration issues prior to deployment. You can also run the network verification checks manually to validate the configuration for an existing cluster. 3.1. Understanding network verification for ROSA clusters When you deploy a Red Hat OpenShift Service on AWS (ROSA) cluster into an existing Virtual Private Cloud (VPC) or create an additional machine pool with a subnet that is new to your cluster, network verification runs automatically. This helps you identify and resolve configuration issues prior to deployment. When you prepare to install your cluster by using Red Hat OpenShift Cluster Manager, the automatic checks run after you input a subnet into a subnet ID field on the Virtual Private Cloud (VPC) subnet settings page. If you create your cluster by using the ROSA CLI ( rosa ) with the interactive mode, the checks run after you provide the required VPC network information. If you use the CLI without the interactive mode, the checks begin immediately prior to the cluster creation. When you add a machine pool with a subnet that is new to your cluster, the automatic network verification checks the subnet to ensure that network connectivity is available before the machine pool is provisioned. After automatic network verification completes, a record is sent to the service log. The record provides the results of the verification check, including any network configuration errors. You can resolve the identified issues before a deployment and the deployment has a greater chance of success. You can also run the network verification manually for an existing cluster. This enables you to verify the network configuration for your cluster after making configuration changes. For steps to run the network verification checks manually, see Running the network verification manually . 3.2. Scope of the network verification checks The network verification includes checks for each of the following requirements: The parent Virtual Private Cloud (VPC) exists. All specified subnets belong to the VPC. The VPC has enableDnsSupport enabled. The VPC has enableDnsHostnames enabled. Egress is available to the required domain and port combinations that are specified in the AWS firewall prerequisites section. 3.3. Automatic network verification bypassing You can bypass the automatic network verification if you want to deploy a Red Hat OpenShift Service on AWS (ROSA) cluster with known network configuration issues into an existing Virtual Private Cloud (VPC). If you bypass the network verification when you create a cluster, the cluster has a limited support status. After installation, you can resolve the issues and then manually run the network verification. The limited support status is removed after the verification succeeds. Bypassing automatic network verification by using OpenShift Cluster Manager When you install a cluster into an existing VPC by using Red Hat OpenShift Cluster Manager, you can bypass the automatic verification by selecting Bypass network verification on the Virtual Private Cloud (VPC) subnet settings page. 3.4. 
Running the network verification manually After installing a Red Hat OpenShift Service on AWS (ROSA) cluster, you can run the network verification checks manually by using Red Hat OpenShift Cluster Manager or the ROSA CLI ( rosa ). Running the network verification manually using OpenShift Cluster Manager You can manually run the network verification checks for an existing Red Hat OpenShift Service on AWS (ROSA) cluster by using Red Hat OpenShift Cluster Manager. Prerequisites You have an existing ROSA cluster. You are the cluster owner or you have the cluster editor role. Procedure Navigate to OpenShift Cluster Manager and select your cluster. Select Verify networking from the Actions drop-down menu. Running the network verification manually using the CLI You can manually run the network verification checks for an existing Red Hat OpenShift Service on AWS (ROSA) cluster by using the ROSA CLI ( rosa ). When you run the network verification, you can specify a set of VPC subnet IDs or a cluster name. Prerequisites You have installed and configured the latest ROSA CLI ( rosa ) on your installation host. You have an existing ROSA cluster. You are the cluster owner or you have the cluster editor role. Procedure Verify the network configuration by using one of the following methods: Verify the network configuration by specifying the cluster name. The subnet IDs are automatically detected: USD rosa verify network --cluster <cluster_name> 1 1 Replace <cluster_name> with the name of your cluster. Example output I: Verifying the following subnet IDs are configured correctly: [subnet-03146b9b52b6024cb subnet-03146b9b52b2034cc] I: subnet-03146b9b52b6024cb: pending I: subnet-03146b9b52b2034cc: passed I: Run the following command to wait for verification to all subnets to complete: rosa verify network --watch --status-only --region us-east-1 --subnet-ids subnet-03146b9b52b6024cb,subnet-03146b9b52b2034cc Ensure that verification to all subnets has been completed: USD rosa verify network --watch \ 1 --status-only \ 2 --region <region_name> \ 3 --subnet-ids subnet-03146b9b52b6024cb,subnet-03146b9b52b2034cc 4 1 The watch flag causes the command to complete after all the subnets under test are in a failed or passed state. 2 The status-only flag does not trigger a run of network verification but returns the current state, for example, subnet-123 (verification still in-progress) . By default, without this option, a call to this command always triggers a verification of the specified subnets. 3 Use a specific AWS region that overrides the AWS_REGION environment variable. 4 Enter a list of subnet IDs separated by commas to verify. If any of the subnets do not exist, the error message Network verification for subnet 'subnet-<subnet_number> not found displays and no subnets are checked. Example output I: Checking the status of the following subnet IDs: [subnet-03146b9b52b6024cb subnet-03146b9b52b2034cc] I: subnet-03146b9b52b6024cb: passed I: subnet-03146b9b52b2034cc: passed Tip To output the full list of verification tests, you can include the --debug argument when you run the rosa verify network command. Verify the network configuration by specifying the VPC subnets IDs. 
Replace <region_name> with your AWS region and <AWS_account_ID> with your AWS account ID: USD rosa verify network --subnet-ids 03146b9b52b6024cb,subnet-03146b9b52b2034cc --region <region_name> --role-arn arn:aws:iam::<AWS_account_ID>:role/my-Installer-Role Example output I: Verifying the following subnet IDs are configured correctly: [subnet-03146b9b52b6024cb subnet-03146b9b52b2034cc] I: subnet-03146b9b52b6024cb: pending I: subnet-03146b9b52b2034cc: passed I: Run the following command to wait for verification to all subnets to complete: rosa verify network --watch --status-only --region us-east-1 --subnet-ids subnet-03146b9b52b6024cb,subnet-03146b9b52b2034cc Ensure that verification to all subnets has been completed: USD rosa verify network --watch --status-only --region us-east-1 --subnet-ids subnet-03146b9b52b6024cb,subnet-03146b9b52b2034cc Example output I: Checking the status of the following subnet IDs: [subnet-03146b9b52b6024cb subnet-03146b9b52b2034cc] I: subnet-03146b9b52b6024cb: passed I: subnet-03146b9b52b2034cc: passed
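If the verifier reports failures, you can approximate its DNS and subnet-ownership checks by hand while troubleshooting. The following is a minimal sketch using the AWS CLI; the VPC ID, region, and the instance you test egress from are placeholders, and the single endpoint shown stands in for the full list in the AWS firewall prerequisites.

    # Confirm the DNS attributes that the verification requires on the parent VPC
    aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsSupport --region us-east-1
    aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsHostnames --region us-east-1
    # Confirm that the specified subnets belong to that VPC
    aws ec2 describe-subnets --subnet-ids subnet-03146b9b52b6024cb subnet-03146b9b52b2034cc --query 'Subnets[].VpcId'
    # From an instance in the subnet, spot-check egress to one of the required domains
    curl -sI https://api.openshift.com --max-time 10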
[ "rosa verify network --cluster <cluster_name> 1", "I: Verifying the following subnet IDs are configured correctly: [subnet-03146b9b52b6024cb subnet-03146b9b52b2034cc] I: subnet-03146b9b52b6024cb: pending I: subnet-03146b9b52b2034cc: passed I: Run the following command to wait for verification to all subnets to complete: rosa verify network --watch --status-only --region us-east-1 --subnet-ids subnet-03146b9b52b6024cb,subnet-03146b9b52b2034cc", "rosa verify network --watch \\ 1 --status-only \\ 2 --region <region_name> \\ 3 --subnet-ids subnet-03146b9b52b6024cb,subnet-03146b9b52b2034cc 4", "I: Checking the status of the following subnet IDs: [subnet-03146b9b52b6024cb subnet-03146b9b52b2034cc] I: subnet-03146b9b52b6024cb: passed I: subnet-03146b9b52b2034cc: passed", "rosa verify network --subnet-ids 03146b9b52b6024cb,subnet-03146b9b52b2034cc --region <region_name> --role-arn arn:aws:iam::<AWS_account_ID>:role/my-Installer-Role", "I: Verifying the following subnet IDs are configured correctly: [subnet-03146b9b52b6024cb subnet-03146b9b52b2034cc] I: subnet-03146b9b52b6024cb: pending I: subnet-03146b9b52b2034cc: passed I: Run the following command to wait for verification to all subnets to complete: rosa verify network --watch --status-only --region us-east-1 --subnet-ids subnet-03146b9b52b6024cb,subnet-03146b9b52b2034cc", "rosa verify network --watch --status-only --region us-east-1 --subnet-ids subnet-03146b9b52b6024cb,subnet-03146b9b52b2034cc", "I: Checking the status of the following subnet IDs: [subnet-03146b9b52b6024cb subnet-03146b9b52b2034cc] I: subnet-03146b9b52b6024cb: passed I: subnet-03146b9b52b2034cc: passed" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/networking/rosa-network-verification_ingress-node-firewall-operator
Chapter 30. Getting started with an ext4 file system
Chapter 30. Getting started with an ext4 file system As a system administrator, you can create, mount, resize, backup, and restore an ext4 file system. The ext4 file system is a scalable extension of the ext3 file system. With Red Hat Enterprise Linux 9, it can support a maximum individual file size of 16 terabytes, and file system to a maximum of 50 terabytes. 30.1. Features of an ext4 file system Following are the features of an ext4 file system: Using extents: The ext4 file system uses extents, which improves performance when using large files and reduces metadata overhead for large files. Ext4 labels unallocated block groups and inode table sections accordingly, which allows the block groups and table sections to be skipped during a file system check. It leads to a quick file system check, which becomes more beneficial as the file system grows in size. Metadata checksum: By default, this feature is enabled in Red Hat Enterprise Linux 9. Allocation features of an ext4 file system: Persistent pre-allocation Delayed allocation Multi-block allocation Stripe-aware allocation Extended attributes ( xattr ): This allows the system to associate several additional name and value pairs per file. Quota journaling: This avoids the need for lengthy quota consistency checks after a crash. Note The only supported journaling mode in ext4 is data=ordered (default). For more information, see the Red Hat Knowledgebase solution Is the EXT journaling option "data=writeback" supported in RHEL? . Subsecond timestamps - This gives timestamps to the subsecond. Additional resources ext4 man page on your system 30.2. Creating an ext4 file system As a system administrator, you can create an ext4 file system on a block device using mkfs.ext4 command. Prerequisites A partition on your disk. For information about creating MBR or GPT partitions, see Creating a partition table on a disk with parted . Alternatively, use an LVM or MD volume. Procedure To create an ext4 file system: For a regular-partition device, an LVM volume, an MD volume, or a similar device, use the following command: Replace /dev/ block_device with the path to a block device. For example, /dev/sdb1 , /dev/disk/by-uuid/05e99ec8-def1-4a5e-8a9d-5945339ceb2a , or /dev/my-volgroup/my-lv . In general, the default options are optimal for most usage scenarios. For striped block devices (for example, RAID5 arrays), the stripe geometry can be specified at the time of file system creation. Using proper stripe geometry enhances the performance of an ext4 file system. For example, to create a file system with a 64k stride (that is, 16 x 4096) on a 4k-block file system, use the following command: In the given example: stride=value: Specifies the RAID chunk size stripe-width=value: Specifies the number of data disks in a RAID device, or the number of stripe units in the stripe. Note To specify a UUID when creating a file system: Replace UUID with the UUID you want to set: for example, 7cd65de3-e0be-41d9-b66d-96d749c02da7 . Replace /dev/ block_device with the path to an ext4 file system to have the UUID added to it: for example, /dev/sda8 . To specify a label when creating a file system: To view the created ext4 file system: Additional resources ext4 and mkfs.ext4 man pages on your system 30.3. Mounting an ext4 file system As a system administrator, you can mount an ext4 file system using the mount utility. Prerequisites An ext4 file system. For information about creating an ext4 file system, see Creating an ext4 file system . 
Procedure To create a mount point to mount the file system: Replace /mount/point with the directory name where mount point of the partition must be created. To mount an ext4 file system: To mount an ext4 file system with no extra options: To mount the file system persistently, see Persistently mounting file systems . To view the mounted file system: Additional resources mount , ext4 , and fstab man pages on your system Mounting file systems 30.4. Resizing an ext4 file system As a system administrator, you can resize an ext4 file system using the resize2fs utility. The resize2fs utility reads the size in units of file system block size, unless a suffix indicating a specific unit is used. The following suffixes indicate specific units: s (sectors) - 512 byte sectors K (kilobytes) - 1,024 bytes M (megabytes) - 1,048,576 bytes G (gigabytes) - 1,073,741,824 bytes T (terabytes) - 1,099,511,627,776 bytes Prerequisites An ext4 file system. For information about creating an ext4 file system, see Creating an ext4 file system . An underlying block device of an appropriate size to hold the file system after resizing. Procedure To resize an ext4 file system, take the following steps: To shrink and grow the size of an unmounted ext4 file system: Replace /dev/block_device with the path to the block device, for example /dev/sdb1 . Replace size with the required resize value using s , K , M , G , and T suffixes. An ext4 file system may be grown while mounted using the resize2fs command: Note The size parameter is optional (and often redundant) when expanding. The resize2fs automatically expands to fill the available space of the container, usually a logical volume or partition. To view the resized file system: Additional resources resize2fs , e2fsck , and ext4 man pages on your system 30.5. Comparison of tools used with ext4 and XFS This section compares which tools to use to accomplish common tasks on the ext4 and XFS file systems. Task ext4 XFS Create a file system mkfs.ext4 mkfs.xfs File system check e2fsck xfs_repair Resize a file system resize2fs xfs_growfs Save an image of a file system e2image xfs_metadump and xfs_mdrestore Label or tune a file system tune2fs xfs_admin Back up a file system tar and rsync xfsdump and xfsrestore Quota management quota xfs_quota File mapping filefrag xfs_bmap Note If you want a complete client-server solution for backups over network, you can use bacula backup utility that is available in RHEL 9. For more information about Bacula, see Bacula backup solution .
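As a worked example of the stripe-aware mkfs.ext4 options described earlier in this chapter, the following sketch assumes a hypothetical RAID 5 array with a 256 KiB chunk size, three data disks, and the default 4 KiB file system block size; /dev/md0 is a placeholder device name.

    # stride = chunk size / block size = 256 KiB / 4 KiB = 64
    # stripe-width = stride * number of data disks = 64 * 3 = 192
    mkfs.ext4 -E stride=64,stripe-width=192 /dev/md0
    # After growing the underlying device, expand the mounted file system online to fill it
    resize2fs /dev/md0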
[ "mkfs.ext4 /dev/ block_device", "mkfs.ext4 -E stride=16,stripe-width=64 /dev/ block_device", "mkfs.ext4 -U UUID /dev/ block_device", "mkfs.ext4 -L label-name /dev/ block_device", "blkid", "mkdir /mount/point", "mount /dev/ block_device /mount/point", "df -h", "umount /dev/ block_device e2fsck -f /dev/ block_device resize2fs /dev/ block_device size", "resize2fs /mount/device size", "df -h" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_file_systems/getting-started-with-an-ext4-file-system_managing-file-systems
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/updating_openshift_data_foundation/providing-feedback-on-red-hat-documentation_rhodf
Red Hat build of Apache Camel for Spring Boot Reference
Red Hat build of Apache Camel for Spring Boot Reference Red Hat build of Apache Camel 4.0 Red Hat build of Apache Camel for Spring Boot Reference Red Hat build of Apache Camel Documentation Team [email protected] Red Hat build of Apache Camel Support Team http://access.redhat.com/support
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/index
10.7. Cluster-Controlled Services Fail to Migrate
10.7. Cluster-Controlled Services Fail to Migrate If a cluster-controlled service fails to migrate to another node but the service will start on some specific node, check for the following conditions. Ensure that the resources required to run a given service are present on all nodes in the cluster that may be required to run that service. For example, if your clustered service assumes a script file in a specific location or a file system mounted at a specific mount point, then you must ensure that those resources are available in the expected places on all nodes in the cluster. Ensure that failover domains, service dependency, and service exclusivity are not configured in such a way that you are unable to migrate services to nodes as you would expect. If the service in question is a virtual machine resource, check the documentation to ensure that all of the correct configuration work has been completed. Increase the resource group manager's logging, as described in Section 10.6, "Cluster Service Will Not Start" , and then read the message logs to determine what is causing the service to fail to migrate.
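As an illustrative sketch only (the service and node names are placeholders), the rgmanager utilities can help narrow down why a relocation fails:

    # Check which node currently owns the service and its state
    clustat
    # Attempt to relocate the service to a specific member to reproduce the failure
    clusvcadm -r example-service -m node2.example.com
    # Dry-run the service start against the cluster configuration to surface resource errors
    rg_test test /etc/cluster/cluster.conf start service example-service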
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-clustnomigrate-ca
Appendix A. Revision History
Appendix A. Revision History Revision History Revision 3.1-4 Wed Dec 20 2023 Lenka Spackova Fixed broken links and anchors. Revision 3.1-3 Fri Nov 12 2021 Lenka Spackova Updated Section 4.9, "Database Connectors" . Revision 3.1-2 Tue Dec 18 2018 Lenka Spackova Fixed syntax of a command in the PostgreSQL migration chapter. Revision 3.1-1 Thu May 03 2018 Lenka Spackova Release of Red Hat Software Collections 3.1 Release Notes. Revision 3.1-0 Wed Mar 04 2018 Lenka Spackova Release of Red Hat Software Collections 3.1 Beta Release Notes.
null
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.1_release_notes/appe-documentation-3.1_release_notes-revision_history
RBAC APIs
RBAC APIs OpenShift Container Platform 4.17 Reference guide for RBAC APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/rbac_apis/index
12.3. Configuring File Associations
12.3. Configuring File Associations 12.3.1. What Are MIME Types? In GNOME, MIME ( Multipurpose Internet Mail Extension ) types are used to identify the format of a file. The GNOME Desktop uses MIME types to: Determine which application should open a specific file format by default. Register other applications that can also open a specific file format. Provide a string describing the type of a file, for example, in a file properties dialog of the Files application. Provide an icon representing a specific file format, for example, in a file properties dialog of the Files application. MIME type names follow a given format: Example 12.7. MIME Types Format image/jpeg is an example of a MIME type where image is the media type, and jpeg is the subtype identifier. GNOME follows the freedesktop.org Shared MIME Info specification to determine: The machine-wide and user-specific location to store all MIME type specification files. How to register a MIME type so that the desktop environment knows which applications can be used to open a specific file format. How the user can change which applications should open what file formats. 12.3.1.1. What Is the MIME Database? The MIME database is a collection of all MIME type specification files that GNOME uses to store information about known MIME types. The most important part of the MIME database from the system administrator's point of view is the /usr/share/mime/packages/ directory where the MIME type related files specifying information on known MIME types are stored. One example of such a file is /usr/share/mime/packages/freedesktop.org.xml , specifying information about the standard MIME types available on the system by default. That file is provided by the shared-mime-info package. Getting More Information For detailed information describing the MIME type system, see the freedesktop.org Shared MIME Info specification located at the freedesktop.org website: http://www.freedesktop.org/wiki/Specifications/shared-mime-info-spec/ 12.3.2. Adding a Custom MIME Type for All Users To add a custom MIME type for all users on the system and register a default application for that MIME type, you need to create a new MIME type specification file in the /usr/share/mime/packages/ directory and a .desktop file in the /usr/share/applications/ directory. Procedure 12.3. Adding a Custom application/x-newtype MIME Type for All Users Create the /usr/share/mime/packages/application-x-newtype.xml file: <?xml version="1.0" encoding="UTF-8"?> <mime-info xmlns="http://www.freedesktop.org/standards/shared-mime-info"> <mime-type type="application/x-newtype"> <comment>new mime type</comment> <glob pattern="*.xyz"/> </mime-type> </mime-info> The sample application-x-newtype.xml file above defines a new MIME type application/x-newtype and assigns file names with the .xyz extension to that MIME type. Create a new .desktop file named, for example, myapplication1.desktop , and place it in the /usr/share/applications/ directory: The sample myapplication1.desktop file above associates the application/x-newtype MIME type with an application named My Application 1 , which is run by the command myapplication1 . Based on how myapplication1 gets started, choose one respective field code from Desktop Entry Specification . 
For example, for an application capable of opening multiple files, use: As root, update the MIME database for your changes to take effect: As root, update the application database: To verify that you have successfully associated *.xyz files with the application/x-newtype MIME type, first create an empty file, for example test.xyz : Then run the gvfs-info command: To verify that myapplication1.desktop has been correctly set as the default registered application for the application/x-newtype MIME type, run the gvfs-mime --query command: 12.3.3. Adding a Custom MIME Type for Individual Users To add a custom MIME type for individual users and register a default application for that MIME type, you need to create a new MIME type specification file in the ~/.local/share/mime/packages/ directory and a .desktop file in the ~/.local/share/applications/ directory. Procedure 12.4. Adding a Custom application/x-newtype MIME Type for Individual Users Create the ~/.local/share/mime/packages/application-x-newtype.xml file: <?xml version="1.0" encoding="UTF-8"?> <mime-info xmlns="http://www.freedesktop.org/standards/shared-mime-info"> <mime-type type="application/x-newtype"> <comment>new mime type</comment> <glob pattern="*.xyz"/> </mime-type> </mime-info> The sample application-x-newtype.xml file above defines a new MIME type application/x-newtype and assigns file names with the .xyz extension to that MIME type. Create a new .desktop file named, for example, myapplication1.desktop , and place it in the ~/.local/share/applications/ directory: The sample myapplication1.desktop file above associates the application/x-newtype MIME type with an application named My Application 1 , which is run by the command myapplication1 . Based on how myapplication1 gets started, choose one respective field code from Desktop Entry Specification . For example, for an application capable of opening multiple files, use: Update the MIME database for your changes to take effect: Update the application database: To verify that you have successfully associated *.xyz files with the application/x-newtype MIME type, first create an empty file, for example test.xyz : Then run the gvfs-info command: To verify that myapplication1.desktop has been correctly set as the default registered application for the application/x-newtype MIME type, run the gvfs-mime --query command: 12.3.4. Overriding the Default Registered Application for All Users The /usr/share/applications/mimeapps.list and /usr/share/applications/[desktop environment name]-mimeapps.list file are the package-installed defaults, which specify which application is registered to open specific MIME types by default. To override the system defaults for all users on the system, system administrators need to create the /etc/xdg/mimeapps.list or /etc/xdg/[desktop environment name]-mimeapps.list file with a list of MIME types for which they want to override the default registered application. The order in which the configurations are applied is as follows: /usr/share/applications/ /etc/xdg/ Within a particular location, the configurations are applied in this order: mimeapps.list [desktop environment name]-mimeapps.list System administrator's configuration thus takes precedence over package configuration. And within each, desktop-specific configuration takes precedence over the configuration that does not specify the desktop environment. Note Red Hat Enterprise Linux versions prior to 7.5 used the defaults.list file instead of the mimeapps.list file. Procedure 12.5. 
Overriding the Default Registered Application for All Users Consult the /usr/share/applications/mimeapps.list file to determine the MIME types for which you want to change the default registered application. For example, the following sample of the mimeapps.list file specifies the default registered application for the text/html and application/xhtml+xml MIME types: The default application ( Firefox ) is defined by specifying its corresponding .desktop file ( firefox.desktop ). The default location for other applications' .desktop files is /usr/share/applications/ . Create the /etc/xdg/mimeapps.list file. In the file, specify the MIME types and their corresponding default registered applications: This sets the default registered application for the text/html MIME type to myapplication1.desktop , and the default registered application for the application/xhtml+xml MIME type to myapplication2.desktop . For these settings to function properly, ensure that both the myapplication1.desktop and myapplication2.desktop files are placed in the /usr/share/applications/ directory. You can use the gvfs-mime query command to verify that the default registered application has been set correctly: 12.3.5. Overriding the Default Registered Application for Individual Users The /usr/share/applications/mimeapps.list and /usr/share/applications/[desktop environment name]-mimeapps.list file are the package-installed defaults, which specify which application is registered to open specific MIME types by default. To override the system defaults for individual users, you need to create the ~/.local/share/applications/mimeapps.list or ~/.local/share/applications/[desktop environment id]-mimeapps.list file with a list of MIME types for which you want to override the default registered application. The order in which the configurations are applied is as follows: /usr/share/applications/ /etc/xdg/ ~/.local/share/application/ Within a particular location, the configurations are applied in this order: mimeapps.list [desktop environment name]-mimeapps.list User's configuration thus takes precedence over system administrator's configuration, and system administrator's configuration takes precedence over package configuration. And within each, desktop-specific configuration takes precedence over the configuration that does not specify the desktop environment. Note Red Hat Enterprise Linux versions prior to 7.5 used the defaults.list file instead of the mimeapps.list file. Procedure 12.6. Overriding the Default Registered Application for Individual Users Consult the /usr/share/applications/mimeapps.list file to determine the MIME types for which you want to change the default registered application. For example, the following sample of the mimeapps.list file specifies the default registered application for the text/html and application/xhtml+xml MIME types: The default application ( Firefox ) is defined by specifying its corresponding .desktop file ( firefox.desktop ). The system default location for other applications' .desktop files is /usr/share/applications/ . Individual users' .desktop files can be stored in ~/.local/share/applications/ . Create the ~/.local/share/applications/mimeapps.list file. In the file, specify the MIME types and their corresponding default registered applications: This sets the default registered application for the text/html MIME type to myapplication1.desktop , and the default registered application for the application/xhtml+xml MIME type to myapplication2.desktop . 
For these settings to function properly, ensure that both the myapplication1.desktop and myapplication2.desktop files are placed in the /usr/share/applications/ directory. You can use the gvfs-mime --query command to verify that the default registered application has been set correctly:
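Alternatively, the xdg-utils tools can query and set the same associations without editing mimeapps.list by hand. This is a minimal sketch; myapplication1.desktop and test.xyz reuse the example names from this section.

    # Determine the MIME type that a file resolves to
    xdg-mime query filetype test.xyz
    # Query the current default handler for the MIME type
    xdg-mime query default application/x-newtype
    # Set a per-user default association
    xdg-mime default myapplication1.desktop application/x-newtype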
[ "media-type / subtype-identifier", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <mime-info xmlns=\"http://www.freedesktop.org/standards/shared-mime-info\"> <mime-type type=\"application/x-newtype\"> <comment>new mime type</comment> <glob pattern=\"*.xyz\"/> </mime-type> </mime-info>", "[Desktop Entry] Type=Application MimeType=application/x-newtype Name= My Application 1 Exec= myapplication1 field_code", "Exec= myapplication1 %F", "update-mime-database /usr/share/mime", "update-desktop-database /usr/share/applications", "touch test.xyz", "gvfs-info test.xyz | grep \"standard::content-type\" standard::content-type: application/x-newtype", "gvfs-mime --query application/x-newtype Default application for 'application/x-newtype': myapplication1.desktop Registered applications: myapplication1.desktop Recommended applications: myapplication1.desktop", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <mime-info xmlns=\"http://www.freedesktop.org/standards/shared-mime-info\"> <mime-type type=\"application/x-newtype\"> <comment>new mime type</comment> <glob pattern=\"*.xyz\"/> </mime-type> </mime-info>", "[Desktop Entry] Type=Application MimeType=application/x-newtype Name= My Application 1 Exec= myapplication1 field_code", "Exec= myapplication1 %F", "update-mime-database ~/.local/share/mime", "update-desktop-database ~/.local/share/applications", "touch test.xyz", "gvfs-info test.xyz | grep \"standard::content-type\" standard::content-type: application/x-newtype", "gvfs-mime --query application/x-newtype Default application for 'application/x-newtype': myapplication1.desktop Registered applications: myapplication1.desktop Recommended applications: myapplication1.desktop", "[Default Applications] text/html=firefox.desktop application/xhtml+xml=firefox.desktop", "[Default Applications] text/html= myapplication1.desktop application/xhtml+xml= myapplication2.desktop", "gvfs-mime query text/html Default application for 'text/html': myapplication1.desktop Registered applications: myapplication1.desktop firefox.desktop Recommended applications: myapplication1.desktop firefox.desktop", "[Default Applications] text/html=firefox.desktop application/xhtml+xml=firefox.desktop", "[Default Applications] text/html= myapplication1.desktop application/xhtml+xml= myapplication2.desktop", "gvfs-mime --query text/html Default application for 'text/html': myapplication1.desktop Registered applications: myapplication1.desktop firefox.desktop Recommended applications: myapplication1.desktop firefox.desktop" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/File_Formats
Chapter 5. Downloading the test plan from Red Hat Certification Portal
Chapter 5. Downloading the test plan from Red Hat Certification Portal Procedure Log in to Red Hat Certification portal . Search for the case number related to your product certification, and copy it. For example, 02887238. Click Cases and enter the product case number. Optional: Click Test Plans . The test plan displays a list of components that will be tested during the test run. Click Download Test Plan . Next steps If you plan to use Cockpit to run the tests, see Chapter 6, Configuring the system and running tests by using Cockpit . If you plan to use CLI to run the tests, see Chapter 7, Configuring the system and running tests by using RHCert CLI Tool .
null
https://docs.redhat.com/en/documentation/red_hat_certified_cloud_and_service_provider_certification/2025/html/red_hat_certified_cloud_and_service_provider_certification_for_red_hat_enterprise_linux_for_sap_images_workflow_guide/proc_cloud-wf-downloading-the-test-plan-from-rhcert-connect_cloud-instance-wf-setting-test-environment
Developing and Managing Integrations Using Camel K
Developing and Managing Integrations Using Camel K Red Hat build of Apache Camel K 1.10.7 A developer's guide to Camel K Red Hat build of Apache Camel K Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/developing_and_managing_integrations_using_camel_k/index
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Please let us know how we could make it better. To do so: Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.4_release_notes/proc_providing-feedback-on-red-hat-documentation
function::user_string_n2_quoted
function::user_string_n2_quoted Name function::user_string_n2_quoted - Retrieves and quotes string from user space Synopsis Arguments addr the user space address to retrieve the string from inlen the maximum length of the string to read (if not null terminated) outlen the maximum length of the output string Description Reads up to inlen characters of a C string from the given user space memory address, and returns up to outlen characters, where any ASCII characters that are not printable are replaced by the corresponding escape sequence in the returned string. Note that the string will be surrounded by double quotes. In the rare cases when userspace data is not accessible at the given address, the address itself is returned as a string, without double quotes.
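A brief SystemTap sketch of how this function might be used, assuming a kernel that exposes do_sys_open with a user-space filename argument and that kernel debuginfo is installed:

    # Print quoted, escaped file names passed to open(), truncating output to 32 characters
    stap -e 'probe kernel.function("do_sys_open") {
      printf("%s: %s\n", execname(), user_string_n2_quoted($filename, 256, 32))
    }'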
[ "user_string_n2_quoted:string(addr:long,inlen:long,outlen:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-user-string-n2-quoted
Appendix B. Revision History
Appendix B. Revision History Revision History Revision 0.4-33 Wed Mar 08 2017 Jiri Herrmann Prepared the book for the 6.9 GA release Revision 0.4-31 Mon Dec 20 2016 Jiri Herrmann Prepared the book for the 6.9 Beta release Revision 0.4-30 Mon May 10 2016 Jiri Herrmann Prepared the book for the 6.8 GA release Revision 0.4-23 Tue Mar 01 2016 Jiri Herrmann Prepared the book for the 6.8 beta release Revision 0.4-22 Thu Oct 08 2015 Jiri Herrmann Cleaned up the Revision History Revision 0.4-21 Fri Oct 10 2014 Scott Radvan Version for 6.6 GA release.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_security_guide/appe-virtualization_security_guide-revision_history
probe::ioscheduler_trace.elv_abort_request
probe::ioscheduler_trace.elv_abort_request Name probe::ioscheduler_trace.elv_abort_request - Fires when a request is aborted. Synopsis Values disk_major Disk major no of request. rq Address of request. name Name of the probe point elevator_name The type of I/O elevator currently enabled. disk_minor Disk minor number of request. rq_flags Request flags.
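A short SystemTap sketch that prints the values listed above whenever the probe fires, assuming the corresponding I/O scheduler tracepoint is available in the running kernel:

    # Report aborted requests with the active elevator and the device major:minor numbers
    stap -e 'probe ioscheduler_trace.elv_abort_request {
      printf("%s: elevator=%s dev=%d:%d flags=0x%x\n",
             name, elevator_name, disk_major, disk_minor, rq_flags)
    }'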
[ "ioscheduler_trace.elv_abort_request" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-ioscheduler-trace-elv-abort-request
Chapter 3. Managing high availability services with Pacemaker
Chapter 3. Managing high availability services with Pacemaker The Pacemaker service manages core container and active-passive services, such as Galera, RabbitMQ, Redis, and HAProxy. You use Pacemaker to view and manage general information about the managed services, virtual IP addresses, power management, and fencing. 3.1. Pacemaker resource bundles and containers Pacemaker manages Red Hat OpenStack Platform (RHOSP) services as Bundle Set resources, or bundles. Most of these services are active-active services that start in the same way and always run on each Controller node. Pacemaker manages the following resource types: Bundle A bundle resource configures and replicates the same container on all Controller nodes, maps the necessary storage paths to the container directories, and sets specific attributes related to the resource itself. Container A container can run different kinds of resources, from simple systemd services like HAProxy to complex services like Galera, which requires specific resource agents that control and set the state of the service on the different nodes. Important You cannot use podman or systemctl to manage bundles or containers. You can use the commands to check the status of the services, but you must use Pacemaker to perform actions on these services. Podman containers that Pacemaker controls have a RestartPolicy set to no by Podman. This is to ensure that Pacemaker, and not Podman, controls the container start and stop actions. 3.1.1. Simple Bundle Set resources (simple bundles) A simple Bundle Set resource, or simple bundle, is a set of containers that each include the same Pacemaker services that you want to deploy across the Controller nodes. The following example shows a list of simple bundles from the output of the pcs status command: For each bundle, you can see the following details: The name that Pacemaker assigns to the service. The reference to the container that is associated with the bundle. The list and status of replicas that are running on the different Controller nodes. The following example shows the settings for the haproxy-bundle simple bundle: The example shows the following information about the containers in the bundle: image : Image used by the container, which refers to the local registry of the undercloud. network : Container network type, which is "host" in the example. options : Specific options for the container. replicas : Indicates how many copies of the container must run in the cluster. Each bundle includes three containers, one for each Controller node. run-command : System command used to spawn the container. Storage Mapping : Mapping of the local path on each host to the container. The haproxy configuration is located in the /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg file instead of the /etc/haproxy/haproxy.cfg file. Note Although HAProxy provides high availability services by load balancing traffic to selected services, you configure HAProxy as a highly available service by managing it as a Pacemaker bundle service. 3.1.2. Complex Bundle Set resources (complex bundles) Complex Bundle Set resources, or complex bundles, are Pacemaker services that specify a resource configuration in addition to the basic container configuration that is included in simple bundles. This configuration is needed to manage multi-state resources, which are services that can have different states depending on the Controller node they run on. 
This example shows a list of complex bundles from the output of the pcs status command: This output shows the following information about each complex bundle: RabbitMQ: All three Controller nodes run a standalone instance of the service, similar to a simple bundle. Galera: All three Controller nodes are running as Galera masters under the same constraints. Redis: The overcloud-controller-0 container is running as the master, while the other two Controller nodes are running as slaves. Each container type might run under different constraints. The following example shows the settings for the galera-bundle complex bundle: This output shows that, unlike in a simple bundle, the galera-bundle resource includes explicit resource configuration that determines all aspects of the multi-state resource. Note Although a service can run on multiple Controller nodes at the same time, the Controller node itself might not be listening at the IP address that is required to reach those services. For information about how to check the IP address of a service, see Section 3.4, "Viewing resource information for virtual IPs in a high availability cluster" . 3.2. Checking Pacemaker cluster status You can check the status of the Pacemaker cluster in any node where Pacemaker is running, and view information about the number of resources that are active and running. Prerequisites High availability is deployed and running. Procedure Log in to any Controller node as the tripleo-admin user. Run the pcs status command: Example output: The main sections of the output show the following information about the cluster: Cluster name : Name of the cluster. [NUM] nodes configured : Number of nodes that are configured for the cluster. [NUM] resources configured : Number of resources that are configured for the cluster. Online : Names of the Controller nodes that are currently online. GuestOnline : Names of the guest nodes that are currently online. Each guest node consists of a complex Bundle Set resource. For more information about bundle sets, see Section 3.1, "Pacemaker resource bundles and containers" . 3.3. Checking bundle status in a high availability cluster You can check the status of a bundle from an undercloud node or log in to one of the Controller nodes to check the bundle status directly. Prerequisites High availability is deployed and running. Procedure Use one of the following options: Log in to an undercloud node and check the bundle status, in this example haproxy-bundle : Example output: The output shows that the haproxy process is running inside the container. Log in to a Controller node and check the bundle status, in this example haproxy : Example output: 3.4. Viewing resource information for virtual IPs in a high availability cluster To check the status of all virtual IPs (VIPs) or a specific VIP, run the pcs resource show command with the relevant options. Each IPaddr2 resource sets a virtual IP address that clients use to request access to a service. If the Controller node with that IP address fails, the IPaddr2 resource reassigns the IP address to a different Controller node. Prerequisites High availability is deployed and running. Procedure Log in to any Controller node as the tripleo-admin user. Use one of the following options: Show all resources that use virtual IPs by running the pcs resource show command with the --full option: Example output: Each IP address is initially attached to a specific Controller node. For example, 192.168.1.150 is started on overcloud-controller-0 . 
However, if that Controller node fails, the IP address is reassigned to other Controller nodes in the cluster. The following table describes the IP addresses in the example output and shows the original allocation of each IP address. Table 3.1. IP address description and allocation source IP Address Description Allocated From 10.200.0.6 Controller virtual IP address Part of the dhcp_start and dhcp_end range set to 10.200.0.5-10.200.0.24 in the undercloud.conf file 192.168.1.150 Public IP address ExternalAllocationPools attribute in the network-environment.yaml file 172.16.0.10 Provides access to OpenStack API services on a Controller node InternalApiAllocationPools in the network-environment.yaml file 172.16.0.11 Provides access to Redis service on a Controller node InternalApiAllocationPools in the network-environment.yaml file 172.18.0.10 Storage virtual IP address that provides access to the Glance API and to Swift Proxy services StorageAllocationPools attribute in the network-environment.yaml file 172.19.0.10 Provides access to storage management StorageMgmtAlloctionPools in the network-environment.yaml file View a specific VIP address by running the pcs resource show command with the name of the resource that uses that VIP, in this example ip-192.168.1.150 : Example output: 3.5. Viewing network information for virtual IPs in a high availability cluster You can view the network interface information for a Controller node that is assigned to a specific virtual IP (VIP), and view port number assignments for a specific service. Prerequisites High availability is deployed and running. Procedure Log in to the Controller node that is assigned to the IP address you want to view and run the ip addr show command on the network interface, in this example vlan100 : Example output: Run the netstat command to show all processes that listen to the IP address, in this example 192.168.1.150.haproxy : Example output: Note Processes that are listening to all local addresses, such as 0.0.0.0 , are also available through 192.168.1.150 . These processes include sshd , mysqld , dhclient , ntpd . View the default port number assignments and the services they listen to by opening the configuration file for the HA service, in this example /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg : TCP port 6080: nova_novncproxy TCP port 9696: neutron TCP port 8000: heat_cfn TCP port 80: horizon TCP port 8776: cinder In this example, most services that are defined in the haproxy.cfg file listen to the 192.168.1.150 IP address on all three Controller nodes. However, only the controller-0 node is listening externally to the 192.168.1.150 IP address. Therefore, if the controller-0 node fails, HAProxy only needs to re-assign 192.168.1.150 to another Controller node and all other services will already be running on the fallback Controller node. 3.6. Checking fencing agent and Pacemaker daemon status You can check the status of the fencing agent and the status of the Pacemaker daemons in any node where Pacemaker is running, and view information about the number of Controller nodes that are active and running. Prerequisites High availability is deployed and running. Procedure Log in to any Controller node as the tripleo-admin user. 
Run the pcs status command: Example output: The output shows the following sections of the pcs status command output: my-ipmilan-for-controller : Shows the type of fencing for each Controller node ( stonith:fence_ipmilan ) and whether or not the IPMI service is stopped or running. PCSD Status : Shows that all three Controller nodes are currently online. Daemon Status : Shows the status of the three Pacemaker daemons: corosync , pacemaker , and pcsd . In the example, all three services are active and enabled. 3.7. Additional resources Configuring and Managing High Availability Clusters
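Because Pacemaker, not Podman or systemctl, must control these services, day-to-day actions go through pcs. The following is a hedged sketch of common operations, reusing resource names from the examples above; adjust them to your deployment before running anything.

    # Restart a Pacemaker-managed bundle across the Controller nodes
    sudo pcs resource restart haproxy-bundle
    # Clear recorded failures so that 'pcs status' reflects the current state
    sudo pcs resource cleanup galera-bundle
    # Temporarily take a single resource out of service, then bring it back
    sudo pcs resource disable openstack-cinder-volume
    sudo pcs resource enable openstack-cinder-volume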
[ "Podman container set: haproxy-bundle [192.168.24.1:8787/rhosp-rhel9/openstack-haproxy:pcmklatest] haproxy-bundle-podman-0 (ocf::heartbeat:podman): Started overcloud-controller-0 haproxy-bundle-podman-1 (ocf::heartbeat:podman): Started overcloud-controller-1 haproxy-bundle-podman-2 (ocf::heartbeat:podman): Started overcloud-controller-2", "sudo pcs resource show haproxy-bundle Bundle: haproxy-bundle Podman: image=192.168.24.1:8787/rhosp-rhel9/openstack-haproxy:pcmklatest network=host options=\"--user=root --log-driver=journald -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS\" replicas=3 run-command=\"/bin/bash /usr/local/bin/kolla_start\" Storage Mapping: options=ro source-dir=/var/lib/kolla/config_files/haproxy.json target-dir=/var/lib/kolla/config_files/config.json (haproxy-cfg-files) options=ro source-dir=/var/lib/config-data/puppet-generated/haproxy/ target-dir=/var/lib/kolla/config_files/src (haproxy-cfg-data) options=ro source-dir=/etc/hosts target-dir=/etc/hosts (haproxy-hosts) options=ro source-dir=/etc/localtime target-dir=/etc/localtime (haproxy-localtime) options=ro source-dir=/etc/pki/ca-trust/extracted target-dir=/etc/pki/ca-trust/extracted (haproxy-pki-extracted) options=ro source-dir=/etc/pki/tls/certs/ca-bundle.crt target-dir=/etc/pki/tls/certs/ca-bundle.crt (haproxy-pki-ca-bundle-crt) options=ro source-dir=/etc/pki/tls/certs/ca-bundle.trust.crt target-dir=/etc/pki/tls/certs/ca-bundle.trust.crt (haproxy-pki-ca-bundle-trust-crt) options=ro source-dir=/etc/pki/tls/cert.pem target-dir=/etc/pki/tls/cert.pem (haproxy-pki-cert) options=rw source-dir=/dev/log target-dir=/dev/log (haproxy-dev-log)", "Podman container set: rabbitmq-bundle [192.168.24.1:8787/rhosp-rhel9/openstack-rabbitmq:pcmklatest] rabbitmq-bundle-0 (ocf::heartbeat:rabbitmq-cluster): Started overcloud-controller-0 rabbitmq-bundle-1 (ocf::heartbeat:rabbitmq-cluster): Started overcloud-controller-1 rabbitmq-bundle-2 (ocf::heartbeat:rabbitmq-cluster): Started overcloud-controller-2 Podman container set: galera-bundle [192.168.24.1:8787/rhosp-rhel9/openstack-mariadb:pcmklatest] galera-bundle-0 (ocf::heartbeat:galera): Master overcloud-controller-0 galera-bundle-1 (ocf::heartbeat:galera): Master overcloud-controller-1 galera-bundle-2 (ocf::heartbeat:galera): Master overcloud-controller-2 Podman container set: redis-bundle [192.168.24.1:8787/rhosp-rhel9/openstack-redis:pcmklatest] redis-bundle-0 (ocf::heartbeat:redis): Master overcloud-controller-0 redis-bundle-1 (ocf::heartbeat:redis): Slave overcloud-controller-1 redis-bundle-2 (ocf::heartbeat:redis): Slave overcloud-controller-2", "[...] 
Bundle: galera-bundle Podman: image=192.168.24.1:8787/rhosp-rhel9/openstack-mariadb:pcmklatest masters=3 network=host options=\"--user=root --log-driver=journald -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS\" replicas=3 run-command=\"/bin/bash /usr/local/bin/kolla_start\" Network: control-port=3123 Storage Mapping: options=ro source-dir=/var/lib/kolla/config_files/mysql.json target-dir=/var/lib/kolla/config_files/config.json (mysql-cfg-files) options=ro source-dir=/var/lib/config-data/puppet-generated/mysql/ target-dir=/var/lib/kolla/config_files/src (mysql-cfg-data) options=ro source-dir=/etc/hosts target-dir=/etc/hosts (mysql-hosts) options=ro source-dir=/etc/localtime target-dir=/etc/localtime (mysql-localtime) options=rw source-dir=/var/lib/mysql target-dir=/var/lib/mysql (mysql-lib) options=rw source-dir=/var/log/mariadb target-dir=/var/log/mariadb (mysql-log-mariadb) options=rw source-dir=/dev/log target-dir=/dev/log (mysql-dev-log) Resource: galera (class=ocf provider=heartbeat type=galera) Attributes: additional_parameters=--open-files-limit=16384 cluster_host_map=overcloud-controller-0:overcloud-controller-0.internalapi.localdomain;overcloud-controller-1:overcloud-controller-1.internalapi.localdomain;overcloud-controller-2:overcloud-controller-2.internalapi.localdomain enable_creation=true wsrep_cluster_address=gcomm://overcloud-controller-0.internalapi.localdomain,overcloud-controller-1.internalapi.localdomain,overcloud-controller-2.internalapi.localdomain Meta Attrs: container-attribute-target=host master-max=3 ordered=true Operations: demote interval=0s timeout=120 (galera-demote-interval-0s) monitor interval=20 timeout=30 (galera-monitor-interval-20) monitor interval=10 role=Master timeout=30 (galera-monitor-interval-10) monitor interval=30 role=Slave timeout=30 (galera-monitor-interval-30) promote interval=0s on-fail=block timeout=300s (galera-promote-interval-0s) start interval=0s timeout=120 (galera-start-interval-0s) stop interval=0s timeout=120 (galera-stop-interval-0s) [...]", "ssh tripleo-admin@overcloud-controller-0", "[tripleo-admin@overcloud-controller-0 ~] USD sudo pcs status", "Cluster name: tripleo_cluster Stack: corosync Current DC: overcloud-controller-1 (version 2.0.1-4.el9-0eb7991564) - partition with quorum Last updated: Thu Feb 8 14:29:21 2018 Last change: Sat Feb 3 11:37:17 2018 by root via cibadmin on overcloud-controller-2 12 nodes configured 37 resources configured Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] GuestOnline: [ galera-bundle-0@overcloud-controller-0 galera-bundle-1@overcloud-controller-1 galera-bundle-2@overcloud-controller-2 rabbitmq-bundle-0@overcloud-controller-0 rabbitmq-bundle-1@overcloud-controller-1 rabbitmq-bundle-2@overcloud-controller-2 redis-bundle-0@overcloud-controller-0 redis-bundle-1@overcloud-controller-1 redis-bundle-2@overcloud-controller-2 ] Full list of resources: [...]", "sudo podman exec -it haproxy-bundle-podman-0 ps -efww | grep haproxy*", "root 7 1 0 06:08 ? 00:00:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -Ws haproxy 11 7 0 06:08 ? 00:00:17 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -Ws", "ps -ef | grep haproxy*", "root 17774 17729 0 06:08 ? 00:00:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -Ws 42454 17819 17774 0 06:08 ? 00:00:21 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -Ws root 288508 237714 0 07:04 pts/0 00:00:00 grep --color=auto haproxy* ps -ef | grep -e 17774 -e 17819 root 17774 17729 0 06:08 ? 
00:00:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -Ws 42454 17819 17774 0 06:08 ? 00:00:22 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -Ws root 301950 237714 0 07:07 pts/0 00:00:00 grep --color=auto -e 17774 -e 17819", "ssh tripleo-admin@overcloud-controller-0", "sudo pcs resource show --full", "ip-10.200.0.6 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1 ip-192.168.1.150 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0 ip-172.16.0.10 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1 ip-172.16.0.11 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0 ip-172.18.0.10 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2 ip-172.19.0.10 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2", "sudo pcs resource show ip-192.168.1.150", "Resource: ip-192.168.1.150 (class=ocf provider=heartbeat type=IPaddr2) Attributes: ip=192.168.1.150 cidr_netmask=32 Operations: start interval=0s timeout=20s (ip-192.168.1.150-start-timeout-20s) stop interval=0s timeout=20s (ip-192.168.1.150-stop-timeout-20s) monitor interval=10s timeout=20s (ip-192.168.1.150-monitor-interval-10s)", "ip addr show vlan100", "9: vlan100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether be:ab:aa:37:34:e7 brd ff:ff:ff:ff:ff:ff inet *192.168.1.151/24* brd 192.168.1.255 scope global vlan100 valid_lft forever preferred_lft forever inet *192.168.1.150/32* brd 192.168.1.255 scope global vlan100 valid_lft forever preferred_lft forever", "sudo netstat -tupln | grep \"192.168.1.150.haproxy\"", "tcp 0 0 192.168.1.150:8778 0.0.0.0:* LISTEN 61029/haproxy tcp 0 0 192.168.1.150:8042 0.0.0.0:* LISTEN 61029/haproxy tcp 0 0 192.168.1.150:9292 0.0.0.0:* LISTEN 61029/haproxy tcp 0 0 192.168.1.150:8080 0.0.0.0:* LISTEN 61029/haproxy tcp 0 0 192.168.1.150:80 0.0.0.0:* LISTEN 61029/haproxy tcp 0 0 192.168.1.150:8977 0.0.0.0:* LISTEN 61029/haproxy tcp 0 0 192.168.1.150:6080 0.0.0.0:* LISTEN 61029/haproxy tcp 0 0 192.168.1.150:9696 0.0.0.0:* LISTEN 61029/haproxy tcp 0 0 192.168.1.150:8000 0.0.0.0:* LISTEN 61029/haproxy tcp 0 0 192.168.1.150:8004 0.0.0.0:* LISTEN 61029/haproxy tcp 0 0 192.168.1.150:8774 0.0.0.0:* LISTEN 61029/haproxy tcp 0 0 192.168.1.150:5000 0.0.0.0:* LISTEN 61029/haproxy tcp 0 0 192.168.1.150:8776 0.0.0.0:* LISTEN 61029/haproxy tcp 0 0 192.168.1.150:8041 0.0.0.0:* LISTEN 61029/haproxy", "ssh tripleo-admin@overcloud-controller-0", "[tripleo-admin@overcloud-controller-0 ~] USD sudo pcs status", "my-ipmilan-for-controller-0 (stonith:fence_ipmilan): Started my-ipmilan-for-controller-0 my-ipmilan-for-controller-1 (stonith:fence_ipmilan): Started my-ipmilan-for-controller-1 my-ipmilan-for-controller-2 (stonith:fence_ipmilan): Started my-ipmilan-for-controller-2 PCSD Status: overcloud-controller-0: Online overcloud-controller-1: Online overcloud-controller-2: Online Daemon Status: corosync: active/enabled pacemaker: active/enabled openstack-cinder-volume (systemd:openstack-cinder-volume): Started overcloud-controller-0 pcsd: active/enabled" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/high_availability_deployment_and_usage/assembly_managing-ha-services-with-pacemaker_rhosp
8.176. perl-WWW-Curl
8.176. perl-WWW-Curl 8.176.1. RHBA-2014:0588 - perl-WWW-Curl bug fix update Updated perl-WWW-Curl packages that fix one bug are now available for Red Hat Enterprise Linux 6. The perl-WWW-Curl packages provide a Perl extension interface for libcurl. Bug Fix BZ# 984894 Previously, accessing the value of the CURLINFO_PRIVATE option caused a program to terminate unexpectedly with a segmentation fault. This update fixes this by ensuring that CURLINFO_PRIVATE is an accessible scalar string. As a result, programs can now access CURLINFO_PRIVATE as expected. Users of perl-WWW-Curl are advised to upgrade to these updated packages, which fix this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/perl-www-curl
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/security_architecture/making-open-source-more-inclusive
18.2. XML Representation of a Virtual Machine Pool
18.2. XML Representation of a Virtual Machine Pool Example 18.1. An XML representation of a virtual machine pool
[ "<vmpool href=\"/ovirt-engine/api/vmpools/2d2d5e26-1b6e-11e1-8cda-001320f76e8e\"> id=\"2d2d5e26-1b6e-11e1-8cda-001320f76e8e\" <actions> <link href=\"/ovirt-engine/api/vmpools/2d2d5e26-1b6e-11e1-8cda-001320f76e8e/allocatevm\" rel=\"allocatevm\"/> </actions> <name>VMPool1</name> <description>Virtual Machine Pool 1</description> <size>2</size> <cluster href=\"/ovirt-engine/api/clusters/99408929-82cf-4dc7-a532-9d998063fa95\"/> id=\"99408929-82cf-4dc7-a532-9d998063fa95\" <template href=\"/ovirt-engine/api/templates/00000000-0000-0000-0000-000000000000\"/> id=\"00000000-0000-0000-0000-000000000000\" <prestarted_vms>0</prestarted_vms> <max_user_vms>1</max_user_vms> </vmpool>" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/xml_representation_of_a_virtual_machine_pool
4.162. lohit-oriya-fonts
4.162. lohit-oriya-fonts 4.162.1. RHEA-2011:1137 - lohit-oriya-fonts enhancement update An updated lohit-oriya-fonts package which adds one enhancement is now available for Red Hat Enterprise Linux 6. The lohit-oriya-fonts package provides a free Oriya TrueType/OpenType font. Enhancement BZ# 691293 Unicode 6.0, the most recent major version of the Unicode standard, introduces the Indian Rupee Sign (U+20B9), the new official Indian currency symbol. With this update, the lohit-oriya-fonts package now includes a glyph for this new character. All users requiring the Indian rupee sign should install this updated package, which adds this enhancement.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/lohit-oriya-fonts
Chapter 2. Using Control Groups
Chapter 2. Using Control Groups The following sections provide an overview of tasks related to creation and management of control groups. This guide focuses on utilities provided by systemd that are preferred as a way of cgroup management and will be supported in the future. Previous versions of Red Hat Enterprise Linux used the libcgroup package for creating and managing cgroups. This package is still available to assure backward compatibility (see Warning ), but it will not be supported in future versions of Red Hat Enterprise Linux. 2.1. Creating Control Groups From systemd 's perspective, a cgroup is bound to a system unit configurable with a unit file and manageable with systemd's command-line utilities. Depending on the type of application, your resource management settings can be transient or persistent . To create a transient cgroup for a service, start the service with the systemd-run command. This way, it is possible to set limits on resources consumed by the service during its runtime. Applications can create transient cgroups dynamically by using API calls to systemd . See the section called "Online Documentation" for API reference. A transient unit is removed automatically as soon as the service is stopped. To assign a persistent cgroup to a service, edit its unit configuration file. The configuration is preserved after the system reboot, so it can be used to manage services that are started automatically. Note that scope units cannot be created in this way. 2.1.1. Creating Transient Cgroups with systemd-run The systemd-run command is used to create and start a transient service or scope unit and run a custom command in the unit. Commands executed in service units are started asynchronously in the background, where they are invoked from the systemd process. Commands run in scope units are started directly from the systemd-run process and thus inherit the execution environment of the caller. Execution in this case is synchronous. To run a command in a specified cgroup, type as root : The name stands for the name you want the unit to be known under. If --unit is not specified, a unit name will be generated automatically. It is recommended to choose a descriptive name, since it will represent the unit in the systemctl output. The name has to be unique during runtime of the unit. Use the optional --scope parameter to create a transient scope unit instead of the service unit that is created by default. With the --slice option, you can make your newly created service or scope unit a member of a specified slice. Replace slice_name with the name of an existing slice (as shown in the output of systemctl -t slice ), or create a new slice by passing a unique name. By default, services and scopes are created as members of the system.slice . Replace command with the command you wish to execute in the service unit. Place this command at the very end of the systemd-run syntax, so that the parameters of this command are not confused with parameters of systemd-run . Besides the above options, there are several other parameters available for systemd-run . For example, --description creates a description of the unit, --remain-after-exit allows you to collect runtime information after terminating the service's process. The --machine option executes the command in a confined container. See the systemd-run (1) manual page to learn more. Example 2.1. Starting a New Service with systemd-run Use the following command to run the top utility in a service unit in a new slice called test .
Type as root : The following message is displayed to confirm that you started the service successfully: Now, the name toptest.service can be used to monitor or to modify the cgroup with systemctl commands. 2.1.2. Creating Persistent Cgroups To configure a unit to be started automatically on system boot, execute the systemctl enable command (see the chapter called Managing Services with systemd in Red Hat Enterprise Linux 7 System Administrators Guide ). Running this command automatically creates a unit file in the /usr/lib/systemd/system/ directory. To make persistent changes to the cgroup, add or modify configuration parameters in its unit file. For more information, see Section 2.3.2, "Modifying Unit Files" .
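A minimal sketch of what a persistent setting can look like, assuming an httpd.service unit is present on the system; the unit name and the limit values are illustrative placeholders, not part of the original example. The systemctl set-property command writes the values to a persistent drop-in file under /etc/systemd/system/ , so they survive a reboot:
~]# systemctl set-property httpd.service CPUShares=600 MemoryLimit=500M
The same directives can instead be added to the [Service] section of the unit file, followed by a daemon reload and a restart of the service:
[Service]
CPUShares=600
MemoryLimit=500M
~]# systemctl daemon-reload
~]# systemctl restart httpd.service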
[ "~]# systemd-run --unit= name --scope --slice= slice_name command", "~]# systemd-run --unit= toptest --slice= test top -b", "Running as unit toptest.service" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/resource_management_guide/chap-Using_Control_Groups
Chapter 5. Using build strategies
Chapter 5. Using build strategies The following sections define the primary supported build strategies, and how to use them. 5.1. Docker build OpenShift Container Platform uses Buildah to build a container image from a Dockerfile. For more information on building container images with Dockerfiles, see the Dockerfile reference documentation . Tip If you set Docker build arguments by using the buildArgs array, see Understand how ARG and FROM interact in the Dockerfile reference documentation. 5.1.1. Replacing Dockerfile FROM image You can replace the FROM instruction of the Dockerfile with the from of the BuildConfig object. If the Dockerfile uses multi-stage builds, the image in the last FROM instruction will be replaced. Procedure To replace the FROM instruction of the Dockerfile with the from of the BuildConfig . strategy: dockerStrategy: from: kind: "ImageStreamTag" name: "debian:latest" 5.1.2. Using Dockerfile path By default, docker builds use a Dockerfile located at the root of the context specified in the BuildConfig.spec.source.contextDir field. The dockerfilePath field allows the build to use a different path to locate your Dockerfile, relative to the BuildConfig.spec.source.contextDir field. It can be a different file name than the default Dockerfile, such as MyDockerfile , or a path to a Dockerfile in a subdirectory, such as dockerfiles/app1/Dockerfile . Procedure To use the dockerfilePath field for the build to use a different path to locate your Dockerfile, set: strategy: dockerStrategy: dockerfilePath: dockerfiles/app1/Dockerfile 5.1.3. Using docker environment variables To make environment variables available to the docker build process and resulting image, you can add environment variables to the dockerStrategy definition of the build configuration. The environment variables defined there are inserted as a single ENV Dockerfile instruction right after the FROM instruction, so that it can be referenced later on within the Dockerfile. Procedure The variables are defined during build and stay in the output image, therefore they will be present in any container that runs that image as well. For example, defining a custom HTTP proxy to be used during build and runtime: dockerStrategy: ... env: - name: "HTTP_PROXY" value: "http://myproxy.net:5187/" You can also manage environment variables defined in the build configuration with the oc set env command. 5.1.4. Adding docker build arguments You can set docker build arguments using the buildArgs array. The build arguments are passed to docker when a build is started. Tip See Understand how ARG and FROM interact in the Dockerfile reference documentation. Procedure To set docker build arguments, add entries to the buildArgs array, which is located in the dockerStrategy definition of the BuildConfig object. For example: dockerStrategy: ... buildArgs: - name: "foo" value: "bar" Note Only the name and value fields are supported. Any settings on the valueFrom field are ignored. 5.1.5. Squashing layers with docker builds Docker builds normally create a layer representing each instruction in a Dockerfile. Setting the imageOptimizationPolicy to SkipLayers merges all instructions into a single layer on top of the base image. Procedure Set the imageOptimizationPolicy to SkipLayers : strategy: dockerStrategy: imageOptimizationPolicy: SkipLayers 5.1.6. Using build volumes You can mount build volumes to give running builds access to information that you don't want to persist in the output container image. 
Build volumes provide sensitive information, such as repository credentials, that the build environment or configuration only needs at build time. Build volumes are different from build inputs , whose data can persist in the output container image. The mount points of build volumes, from which the running build reads data, are functionally similar to pod volume mounts . Prerequisites You have added an input secret, config map, or both to a BuildConfig object . Procedure In the dockerStrategy definition of the BuildConfig object, add any build volumes to the volumes array. For example: spec: dockerStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8 - name: my-csi-volume 9 mounts: - destinationPath: /opt/app-root/src/some_path 10 source: type: CSI 11 csi: driver: csi.sharedresource.openshift.io 12 readOnly: true 13 volumeAttributes: 14 attribute: value 1 5 9 Required. A unique name. 2 6 10 Required. The absolute path of the mount point. It must not contain .. or : and must not collide with the destination path generated by the builder. The /opt/app-root/src is the default home directory for many Red Hat S2I-enabled images. 3 7 11 Required. The type of source, ConfigMap , Secret , or CSI . 4 8 Required. The name of the source. 12 Required. The driver that provides the ephemeral CSI volume. 13 Required. This value must be set to true . Provides a read-only volume. 14 Optional. The volume attributes of the ephemeral CSI volume. Consult the CSI driver's documentation for supported attribute keys and values. Note The Shared Resource CSI Driver is supported as a Technology Preview feature. 5.2. Source-to-image build Source-to-image (S2I) is a tool for building reproducible container images. It produces ready-to-run images by injecting application source into a container image and assembling a new image. The new image incorporates the base image, the builder, and built source and is ready to use with the buildah run command. S2I supports incremental builds, which re-use previously downloaded dependencies, previously built artifacts, and so on. 5.2.1. Performing source-to-image incremental builds Source-to-image (S2I) can perform incremental builds, which means it reuses artifacts from previously-built images. Procedure To create an incremental build, create a build configuration with the following modification to the strategy definition: strategy: sourceStrategy: from: kind: "ImageStreamTag" name: "incremental-image:latest" 1 incremental: true 2 1 Specify an image that supports incremental builds. Consult the documentation of the builder image to determine if it supports this behavior. 2 This flag controls whether an incremental build is attempted. If the builder image does not support incremental builds, the build will still succeed, but you will get a log message stating the incremental build was not successful because of a missing save-artifacts script. Additional resources See S2I Requirements for information on how to create a builder image supporting incremental builds. 5.2.2. Overriding source-to-image builder image scripts You can override the assemble , run , and save-artifacts source-to-image (S2I) scripts provided by the builder image.
Procedure To override the assemble , run , and save-artifacts S2I scripts provided by the builder image, either: Provide an assemble , run , or save-artifacts script in the .s2i/bin directory of your application source repository. Provide a URL of a directory containing the scripts as part of the strategy definition. For example: strategy: sourceStrategy: from: kind: "ImageStreamTag" name: "builder-image:latest" scripts: "http://somehost.com/scripts_directory" 1 1 This path will have run , assemble , and save-artifacts appended to it. If any or all scripts are found they will be used in place of the same named scripts provided in the image. Note Files located at the scripts URL take precedence over files located in .s2i/bin of the source repository. 5.2.3. Source-to-image environment variables There are two ways to make environment variables available to the source build process and resulting image. Environment files and BuildConfig environment values. Variables provided will be present during the build process and in the output image. 5.2.3.1. Using source-to-image environment files Source build enables you to set environment values, one per line, inside your application, by specifying them in a .s2i/environment file in the source repository. The environment variables specified in this file are present during the build process and in the output image. If you provide a .s2i/environment file in your source repository, source-to-image (S2I) reads this file during the build. This allows customization of the build behavior as the assemble script may use these variables. Procedure For example, to disable assets compilation for your Rails application during the build: Add DISABLE_ASSET_COMPILATION=true in the .s2i/environment file. In addition to builds, the specified environment variables are also available in the running application itself. For example, to cause the Rails application to start in development mode instead of production : Add RAILS_ENV=development to the .s2i/environment file. The complete list of supported environment variables is available in the using images section for each image. 5.2.3.2. Using source-to-image build configuration environment You can add environment variables to the sourceStrategy definition of the build configuration. The environment variables defined there are visible during the assemble script execution and will be defined in the output image, making them also available to the run script and application code. Procedure For example, to disable assets compilation for your Rails application: sourceStrategy: ... env: - name: "DISABLE_ASSET_COMPILATION" value: "true" Additional resources The build environment section provides more advanced instructions. You can also manage environment variables defined in the build configuration with the oc set env command. 5.2.4. Ignoring source-to-image source files Source-to-image (S2I) supports a .s2iignore file, which contains a list of file patterns that should be ignored. Files in the build working directory, as provided by the various input sources, that match a pattern found in the .s2iignore file will not be made available to the assemble script. 5.2.5. Creating images from source code with source-to-image Source-to-image (S2I) is a framework that makes it easy to write images that take application source code as an input and produce a new image that runs the assembled application as output. The main advantage of using S2I for building reproducible container images is the ease of use for developers. 
As a builder image author, you must understand two basic concepts in order for your images to provide the best S2I performance, the build process and S2I scripts. 5.2.5.1. Understanding the source-to-image build process The build process consists of the following three fundamental elements, which are combined into a final container image: Sources Source-to-image (S2I) scripts Builder image S2I generates a Dockerfile with the builder image as the first FROM instruction. The Dockerfile generated by S2I is then passed to Buildah. 5.2.5.2. How to write source-to-image scripts You can write source-to-image (S2I) scripts in any programming language, as long as the scripts are executable inside the builder image. S2I supports multiple options providing assemble / run / save-artifacts scripts. All of these locations are checked on each build in the following order: A script specified in the build configuration. A script found in the application source .s2i/bin directory. A script found at the default image URL with the io.openshift.s2i.scripts-url label. Both the io.openshift.s2i.scripts-url label specified in the image and the script specified in a build configuration can take one of the following forms: image:///path_to_scripts_dir : absolute path inside the image to a directory where the S2I scripts are located. file:///path_to_scripts_dir : relative or absolute path to a directory on the host where the S2I scripts are located. http(s)://path_to_scripts_dir : URL to a directory where the S2I scripts are located. Table 5.1. S2I scripts Script Description assemble The assemble script builds the application artifacts from a source and places them into appropriate directories inside the image. This script is required. The workflow for this script is: Optional: Restore build artifacts. If you want to support incremental builds, make sure to define save-artifacts as well. Place the application source in the desired location. Build the application artifacts. Install the artifacts into locations appropriate for them to run. run The run script executes your application. This script is required. save-artifacts The save-artifacts script gathers all dependencies that can speed up the build processes that follow. This script is optional. For example: For Ruby, gems installed by Bundler. For Java, .m2 contents. These dependencies are gathered into a tar file and streamed to the standard output. usage The usage script allows you to inform the user how to properly use your image. This script is optional. test/run The test/run script allows you to create a process to check if the image is working correctly. This script is optional. The proposed flow of that process is: Build the image. Run the image to verify the usage script. Run s2i build to verify the assemble script. Optional: Run s2i build again to verify the save-artifacts and assemble scripts save and restore artifacts functionality. Run the image to verify the test application is working. Note The suggested location to put the test application built by your test/run script is the test/test-app directory in your image repository. Example S2I scripts The following example S2I scripts are written in Bash. Each example assumes its tar contents are unpacked into the /tmp/s2i directory. assemble script: #!/bin/bash # restore build artifacts if [ "USD(ls /tmp/s2i/artifacts/ 2>/dev/null)" ]; then mv /tmp/s2i/artifacts/* USDHOME/. 
fi # move the application source mv /tmp/s2i/src USDHOME/src # build application artifacts pushd USD{HOME} make all # install the artifacts make install popd run script: #!/bin/bash # run the application /opt/application/run.sh save-artifacts script: #!/bin/bash pushd USD{HOME} if [ -d deps ]; then # all deps contents to tar stream tar cf - deps fi popd usage script: #!/bin/bash # inform the user how to use the image cat <<EOF This is a S2I sample builder image, to use it, install https://github.com/openshift/source-to-image EOF Additional resources S2I Image Creation Tutorial 5.2.6. Using build volumes You can mount build volumes to give running builds access to information that you don't want to persist in the output container image. Build volumes provide sensitive information, such as repository credentials, that the build environment or configuration only needs at build time. Build volumes are different from build inputs , whose data can persist in the output container image. The mount points of build volumes, from which the running build reads data, are functionally similar to pod volume mounts . Prerequisites You have added an input secret, config map, or both to a BuildConfig object . Procedure In the sourceStrategy definition of the BuildConfig object, add any build volumes to the volumes array. For example: spec: sourceStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8 - name: my-csi-volume 9 mounts: - destinationPath: /opt/app-root/src/some_path 10 source: type: CSI 11 csi: driver: csi.sharedresource.openshift.io 12 readOnly: true 13 volumeAttributes: 14 attribute: value 1 5 9 Required. A unique name. 2 6 10 Required. The absolute path of the mount point. It must not contain .. or : and doesn't collide with the destination path generated by the builder. The /opt/app-root/src is the default home directory for many Red Hat S2I-enabled images. 3 7 11 Required. The type of source, ConfigMap , Secret , or CSI . 4 8 Required. The name of the source. 12 Required. The driver that provides the ephemeral CSI volume. 13 Required. This value must be set to true . Provides a read-only volume. 14 Optional. The volume attributes of the ephemeral CSI volume. Consult the CSI driver's documentation for supported attribute keys and values. Note The Shared Resource CSI Driver is supported as a Technology Preview feature. 5.3. Custom build The custom build strategy allows developers to define a specific builder image responsible for the entire build process. Using your own builder image allows you to customize your build process. A custom builder image is a plain container image embedded with build process logic, for example for building RPMs or base images. Custom builds run with a high level of privilege and are not available to users by default. Only users who can be trusted with cluster administration permissions should be granted access to run custom builds. 5.3.1. Using FROM image for custom builds You can use the customStrategy.from section to indicate the image to use for the custom build Procedure Set the customStrategy.from section: strategy: customStrategy: from: kind: "DockerImage" name: "openshift/sti-image-builder" 5.3.2. 
Using secrets in custom builds In addition to secrets for source and images that can be added to all build types, custom strategies allow adding an arbitrary list of secrets to the builder pod. Procedure To mount each secret at a specific location, edit the secretSource and mountPath fields of the strategy YAML file: strategy: customStrategy: secrets: - secretSource: 1 name: "secret1" mountPath: "/tmp/secret1" 2 - secretSource: name: "secret2" mountPath: "/tmp/secret2" 1 secretSource is a reference to a secret in the same namespace as the build. 2 mountPath is the path inside the custom builder where the secret should be mounted. 5.3.3. Using environment variables for custom builds To make environment variables available to the custom build process, you can add environment variables to the customStrategy definition of the build configuration. The environment variables defined there are passed to the pod that runs the custom build. Procedure Define a custom HTTP proxy to be used during build: customStrategy: ... env: - name: "HTTP_PROXY" value: "http://myproxy.net:5187/" To manage environment variables defined in the build configuration, enter the following command: USD oc set env <enter_variables> 5.3.4. Using custom builder images OpenShift Container Platform's custom build strategy enables you to define a specific builder image responsible for the entire build process. When you need a build to produce individual artifacts such as packages, JARs, WARs, installable ZIPs, or base images, use a custom builder image using the custom build strategy. A custom builder image is a plain container image embedded with build process logic, which is used for building artifacts such as RPMs or base container images. Additionally, the custom builder allows implementing any extended build process, such as a CI/CD flow that runs unit or integration tests. 5.3.4.1. Custom builder image Upon invocation, a custom builder image receives the following environment variables with the information needed to proceed with the build: Table 5.2. Custom Builder Environment Variables Variable Name Description BUILD The entire serialized JSON of the Build object definition. If you must use a specific API version for serialization, you can set the buildAPIVersion parameter in the custom strategy specification of the build configuration. SOURCE_REPOSITORY The URL of a Git repository with source to be built. SOURCE_URI Uses the same value as SOURCE_REPOSITORY . Either can be used. SOURCE_CONTEXT_DIR Specifies the subdirectory of the Git repository to be used when building. Only present if defined. SOURCE_REF The Git reference to be built. ORIGIN_VERSION The version of the OpenShift Container Platform master that created this build object. OUTPUT_REGISTRY The container image registry to push the image to. OUTPUT_IMAGE The container image tag name for the image being built. PUSH_DOCKERCFG_PATH The path to the container registry credentials for running a podman push operation. 5.3.4.2. Custom builder workflow Although custom builder image authors have flexibility in defining the build process, your builder image must adhere to the following required steps necessary for running a build inside of OpenShift Container Platform: The Build object definition contains all the necessary information about input parameters for the build. Run the build process. If your build produces an image, push it to the output location of the build if it is defined. Other output locations can be passed with environment variables. 5.4. 
Pipeline build Important The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. The Pipeline build strategy allows developers to define a Jenkins pipeline for use by the Jenkins pipeline plugin. The build can be started, monitored, and managed by OpenShift Container Platform in the same way as any other build type. Pipeline workflows are defined in a jenkinsfile , either embedded directly in the build configuration, or supplied in a Git repository and referenced by the build configuration. 5.4.1. Understanding OpenShift Container Platform pipelines Important The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. Pipelines give you control over building, deploying, and promoting your applications on OpenShift Container Platform. Using a combination of the Jenkins Pipeline build strategy, jenkinsfiles , and the OpenShift Container Platform Domain Specific Language (DSL) provided by the Jenkins Client Plugin, you can create advanced build, test, deploy, and promote pipelines for any scenario. OpenShift Container Platform Jenkins Sync Plugin The OpenShift Container Platform Jenkins Sync Plugin keeps the build configuration and build objects in sync with Jenkins jobs and builds, and provides the following: Dynamic job and run creation in Jenkins. Dynamic creation of agent pod templates from image streams, image stream tags, or config maps. Injection of environment variables. Pipeline visualization in the OpenShift Container Platform web console. Integration with the Jenkins Git plugin, which passes commit information from OpenShift Container Platform builds to the Jenkins Git plugin. Synchronization of secrets into Jenkins credential entries. OpenShift Container Platform Jenkins Client Plugin The OpenShift Container Platform Jenkins Client Plugin is a Jenkins plugin which aims to provide a readable, concise, comprehensive, and fluent Jenkins Pipeline syntax for rich interactions with an OpenShift Container Platform API Server. The plugin uses the OpenShift Container Platform command line tool, oc , which must be available on the nodes executing the script. The Jenkins Client Plugin must be installed on your Jenkins master so the OpenShift Container Platform DSL will be available to use within the jenkinsfile for your application. This plugin is installed and enabled by default when using the OpenShift Container Platform Jenkins image. For OpenShift Container Platform Pipelines within your project, you must use the Jenkins Pipeline Build Strategy. This strategy defaults to using a jenkinsfile at the root of your source repository, but also provides the following configuration options: An inline jenkinsfile field within your build configuration. A jenkinsfilePath field within your build configuration that references the location of the jenkinsfile to use relative to the source contextDir .
Note The optional jenkinsfilePath field specifies the name of the file to use, relative to the source contextDir . If contextDir is omitted, it defaults to the root of the repository. If jenkinsfilePath is omitted, it defaults to jenkinsfile . 5.4.2. Providing the Jenkins file for pipeline builds Important The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. The jenkinsfile uses the standard groovy language syntax to allow fine grained control over the configuration, build, and deployment of your application. You can supply the jenkinsfile in one of the following ways: A file located within your source code repository. Embedded as part of your build configuration using the jenkinsfile field. When using the first option, the jenkinsfile must be included in your applications source code repository at one of the following locations: A file named jenkinsfile at the root of your repository. A file named jenkinsfile at the root of the source contextDir of your repository. A file name specified via the jenkinsfilePath field of the JenkinsPipelineStrategy section of your BuildConfig, which is relative to the source contextDir if supplied, otherwise it defaults to the root of the repository. The jenkinsfile is run on the Jenkins agent pod, which must have the OpenShift Container Platform client binaries available if you intend to use the OpenShift Container Platform DSL. Procedure To provide the Jenkins file, you can either: Embed the Jenkins file in the build configuration. Include in the build configuration a reference to the Git repository that contains the Jenkins file. Embedded Definition kind: "BuildConfig" apiVersion: "v1" metadata: name: "sample-pipeline" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: |- node('agent') { stage 'build' openshiftBuild(buildConfig: 'ruby-sample-build', showBuildLogs: 'true') stage 'deploy' openshiftDeploy(deploymentConfig: 'frontend') } Reference to Git Repository kind: "BuildConfig" apiVersion: "v1" metadata: name: "sample-pipeline" spec: source: git: uri: "https://github.com/openshift/ruby-hello-world" strategy: jenkinsPipelineStrategy: jenkinsfilePath: some/repo/dir/filename 1 1 The optional jenkinsfilePath field specifies the name of the file to use, relative to the source contextDir . If contextDir is omitted, it defaults to the root of the repository. If jenkinsfilePath is omitted, it defaults to jenkinsfile . 5.4.3. Using environment variables for pipeline builds Important The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. To make environment variables available to the Pipeline build process, you can add environment variables to the jenkinsPipelineStrategy definition of the build configuration. Once defined, the environment variables will be set as parameters for any Jenkins job associated with the build configuration. 
Procedure To define environment variables to be used during build, edit the YAML file: jenkinsPipelineStrategy: ... env: - name: "FOO" value: "BAR" You can also manage environment variables defined in the build configuration with the oc set env command. 5.4.3.1. Mapping between BuildConfig environment variables and Jenkins job parameters When a Jenkins job is created or updated based on changes to a Pipeline strategy build configuration, any environment variables in the build configuration are mapped to Jenkins job parameter definitions, where the default values for the Jenkins job parameter definitions are the current values of the associated environment variables. After the Jenkins job's initial creation, you can still add additional parameters to the job from the Jenkins console. The parameter names differ from the names of the environment variables in the build configuration. The parameters are honored when builds are started for those Jenkins jobs. How you start builds for the Jenkins job dictates how the parameters are set. If you start with oc start-build , the values of the environment variables in the build configuration are the parameters set for the corresponding job instance. Any changes you make to the parameters' default values from the Jenkins console are ignored. The build configuration values take precedence. If you start with oc start-build -e , the values for the environment variables specified in the -e option take precedence. If you specify an environment variable not listed in the build configuration, it will be added as a Jenkins job parameter definition. Any changes you make from the Jenkins console to the parameters corresponding to the environment variables are ignored. The build configuration and what you specify with oc start-build -e take precedence. If you start the Jenkins job with the Jenkins console, then you can control the setting of the parameters with the Jenkins console as part of starting a build for the job. Note It is recommended that you specify in the build configuration all possible environment variables to be associated with job parameters. Doing so reduces disk I/O and improves performance during Jenkins processing. 5.4.4. Pipeline build tutorial Important The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. This example demonstrates how to create an OpenShift Container Platform Pipeline that will build, deploy, and verify a Node.js/MongoDB application using the nodejs-mongodb.json template. Procedure Create the Jenkins master: USD oc project <project_name> Select the project that you want to use or create a new project with oc new-project <project_name> . USD oc new-app jenkins-ephemeral 1 If you want to use persistent storage, use jenkins-persistent instead. Create a file named nodejs-sample-pipeline.yaml with the following content: Note This creates a BuildConfig object that employs the Jenkins pipeline strategy to build, deploy, and scale the Node.js/MongoDB example application.
kind: "BuildConfig" apiVersion: "v1" metadata: name: "nodejs-sample-pipeline" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: <pipeline content from below> type: JenkinsPipeline After you create a BuildConfig object with a jenkinsPipelineStrategy , tell the pipeline what to do by using an inline jenkinsfile : Note This example does not set up a Git repository for the application. The following jenkinsfile content is written in Groovy using the OpenShift Container Platform DSL. For this example, include inline content in the BuildConfig object using the YAML Literal Style, though including a jenkinsfile in your source repository is the preferred method. def templatePath = 'https://raw.githubusercontent.com/openshift/nodejs-ex/master/openshift/templates/nodejs-mongodb.json' 1 def templateName = 'nodejs-mongodb-example' 2 pipeline { agent { node { label 'nodejs' 3 } } options { timeout(time: 20, unit: 'MINUTES') 4 } stages { stage('preamble') { steps { script { openshift.withCluster() { openshift.withProject() { echo "Using project: USD{openshift.project()}" } } } } } stage('cleanup') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.selector("all", [ template : templateName ]).delete() 5 if (openshift.selector("secrets", templateName).exists()) { 6 openshift.selector("secrets", templateName).delete() } } } } } } stage('create') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.newApp(templatePath) 7 } } } } } stage('build') { steps { script { openshift.withCluster() { openshift.withProject() { def builds = openshift.selector("bc", templateName).related('builds') timeout(5) { 8 builds.untilEach(1) { return (it.object().status.phase == "Complete") } } } } } } } stage('deploy') { steps { script { openshift.withCluster() { openshift.withProject() { def rm = openshift.selector("dc", templateName).rollout() timeout(5) { 9 openshift.selector("dc", templateName).related('pods').untilEach(1) { return (it.object().status.phase == "Running") } } } } } } } stage('tag') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.tag("USD{templateName}:latest", "USD{templateName}-staging:latest") 10 } } } } } } } 1 Path of the template to use. 1 2 Name of the template that will be created. 3 Spin up a node.js agent pod on which to run this build. 4 Set a timeout of 20 minutes for this pipeline. 5 Delete everything with this template label. 6 Delete any secrets with this template label. 7 Create a new application from the templatePath . 8 Wait up to five minutes for the build to complete. 9 Wait up to five minutes for the deployment to complete. 10 If everything else succeeded, tag the USD {templateName}:latest image as USD {templateName}-staging:latest . A pipeline build configuration for the staging environment can watch for the USD {templateName}-staging:latest image to change and then deploy it to the staging environment. Note The example was written using the declarative pipeline style, but the older scripted pipeline style is also supported. 
Create the Pipeline BuildConfig in your OpenShift Container Platform cluster: USD oc create -f nodejs-sample-pipeline.yaml If you do not want to create your own file, you can use the sample from the Origin repository by running: USD oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/jenkins/pipeline/nodejs-sample-pipeline.yaml Start the Pipeline: USD oc start-build nodejs-sample-pipeline Note Alternatively, you can start your pipeline with the OpenShift Container Platform web console by navigating to the Builds Pipeline section and clicking Start Pipeline , or by visiting the Jenkins Console, navigating to the Pipeline that you created, and clicking Build Now . Once the pipeline is started, you should see the following actions performed within your project: A job instance is created on the Jenkins server. An agent pod is launched, if your pipeline requires one. The pipeline runs on the agent pod, or the master if no agent is required. Any previously created resources with the template=nodejs-mongodb-example label will be deleted. A new application, and all of its associated resources, will be created from the nodejs-mongodb-example template. A build will be started using the nodejs-mongodb-example BuildConfig . The pipeline will wait until the build has completed to trigger the next stage. A deployment will be started using the nodejs-mongodb-example deployment configuration. The pipeline will wait until the deployment has completed to trigger the next stage. If the build and deploy are successful, the nodejs-mongodb-example:latest image will be tagged as nodejs-mongodb-example:stage . The agent pod is deleted, if one was required for the pipeline. Note The best way to visualize the pipeline execution is by viewing it in the OpenShift Container Platform web console. You can view your pipelines by logging in to the web console and navigating to Builds Pipelines. 5.5. Adding secrets with web console You can add a secret to your build configuration so that it can access a private repository. Procedure To add a secret to your build configuration so that it can access a private repository from the OpenShift Container Platform web console: Create a new OpenShift Container Platform project. Create a secret that contains credentials for accessing a private source code repository. Create a build configuration. On the build configuration editor page or in the create app from builder image page of the web console, set the Source Secret . Click Save . 5.6. Enabling pulling and pushing You can enable pulling from a private registry by setting the pull secret and pushing by setting the push secret in the build configuration. Procedure To enable pulling from a private registry: Set the pull secret in the build configuration. To enable pushing: Set the push secret in the build configuration.
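As a brief, hedged illustration of the pull and push secrets described in section 5.6, a BuildConfig can reference them in its output and strategy definitions; the secret names and the registry host used here are placeholders, not values taken from this document:
spec:
  output:
    to:
      kind: "DockerImage"
      name: "private.registry.example.com/namespace/app-image:latest"
    pushSecret:
      name: "push-secret"
  strategy:
    sourceStrategy:
      from:
        kind: "DockerImage"
        name: "private.registry.example.com/namespace/builder-image:latest"
      pullSecret:
        name: "pull-secret"
Both secrets are ordinary registry credential secrets created beforehand, for example with the oc create secret docker-registry command, and they are referenced by name in the build configuration as shown.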
[ "strategy: dockerStrategy: from: kind: \"ImageStreamTag\" name: \"debian:latest\"", "strategy: dockerStrategy: dockerfilePath: dockerfiles/app1/Dockerfile", "dockerStrategy: env: - name: \"HTTP_PROXY\" value: \"http://myproxy.net:5187/\"", "dockerStrategy: buildArgs: - name: \"foo\" value: \"bar\"", "strategy: dockerStrategy: imageOptimizationPolicy: SkipLayers", "spec: dockerStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8 - name: my-csi-volume 9 mounts: - destinationPath: /opt/app-root/src/some_path 10 source: type: CSI 11 csi: driver: csi.sharedresource.openshift.io 12 readOnly: true 13 volumeAttributes: 14 attribute: value", "strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"incremental-image:latest\" 1 incremental: true 2", "strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"builder-image:latest\" scripts: \"http://somehost.com/scripts_directory\" 1", "sourceStrategy: env: - name: \"DISABLE_ASSET_COMPILATION\" value: \"true\"", "#!/bin/bash restore build artifacts if [ \"USD(ls /tmp/s2i/artifacts/ 2>/dev/null)\" ]; then mv /tmp/s2i/artifacts/* USDHOME/. fi move the application source mv /tmp/s2i/src USDHOME/src build application artifacts pushd USD{HOME} make all install the artifacts make install popd", "#!/bin/bash run the application /opt/application/run.sh", "#!/bin/bash pushd USD{HOME} if [ -d deps ]; then # all deps contents to tar stream tar cf - deps fi popd", "#!/bin/bash inform the user how to use the image cat <<EOF This is a S2I sample builder image, to use it, install https://github.com/openshift/source-to-image EOF", "spec: sourceStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8 - name: my-csi-volume 9 mounts: - destinationPath: /opt/app-root/src/some_path 10 source: type: CSI 11 csi: driver: csi.sharedresource.openshift.io 12 readOnly: true 13 volumeAttributes: 14 attribute: value", "strategy: customStrategy: from: kind: \"DockerImage\" name: \"openshift/sti-image-builder\"", "strategy: customStrategy: secrets: - secretSource: 1 name: \"secret1\" mountPath: \"/tmp/secret1\" 2 - secretSource: name: \"secret2\" mountPath: \"/tmp/secret2\"", "customStrategy: env: - name: \"HTTP_PROXY\" value: \"http://myproxy.net:5187/\"", "oc set env <enter_variables>", "kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"sample-pipeline\" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: |- node('agent') { stage 'build' openshiftBuild(buildConfig: 'ruby-sample-build', showBuildLogs: 'true') stage 'deploy' openshiftDeploy(deploymentConfig: 'frontend') }", "kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"sample-pipeline\" spec: source: git: uri: \"https://github.com/openshift/ruby-hello-world\" strategy: jenkinsPipelineStrategy: jenkinsfilePath: some/repo/dir/filename 1", "jenkinsPipelineStrategy: env: - name: \"FOO\" value: \"BAR\"", "oc project <project_name>", "oc new-app jenkins-ephemeral 1", "kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"nodejs-sample-pipeline\" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: <pipeline content from below> type: JenkinsPipeline", "def 
templatePath = 'https://raw.githubusercontent.com/openshift/nodejs-ex/master/openshift/templates/nodejs-mongodb.json' 1 def templateName = 'nodejs-mongodb-example' 2 pipeline { agent { node { label 'nodejs' 3 } } options { timeout(time: 20, unit: 'MINUTES') 4 } stages { stage('preamble') { steps { script { openshift.withCluster() { openshift.withProject() { echo \"Using project: USD{openshift.project()}\" } } } } } stage('cleanup') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.selector(\"all\", [ template : templateName ]).delete() 5 if (openshift.selector(\"secrets\", templateName).exists()) { 6 openshift.selector(\"secrets\", templateName).delete() } } } } } } stage('create') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.newApp(templatePath) 7 } } } } } stage('build') { steps { script { openshift.withCluster() { openshift.withProject() { def builds = openshift.selector(\"bc\", templateName).related('builds') timeout(5) { 8 builds.untilEach(1) { return (it.object().status.phase == \"Complete\") } } } } } } } stage('deploy') { steps { script { openshift.withCluster() { openshift.withProject() { def rm = openshift.selector(\"dc\", templateName).rollout() timeout(5) { 9 openshift.selector(\"dc\", templateName).related('pods').untilEach(1) { return (it.object().status.phase == \"Running\") } } } } } } } stage('tag') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.tag(\"USD{templateName}:latest\", \"USD{templateName}-staging:latest\") 10 } } } } } } }", "oc create -f nodejs-sample-pipeline.yaml", "oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/jenkins/pipeline/nodejs-sample-pipeline.yaml", "oc start-build nodejs-sample-pipeline" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/builds_using_buildconfig/build-strategies
Chapter 3. Using Kerberos
Chapter 3. Using Kerberos Maintaining system security and integrity within a network is critical, and it encompasses every user, application, service, and server within the network infrastructure. It requires an understanding of everything that is running on the network and the manner in which these services are used. At the core of maintaining this security is maintaining access to these applications and services and enforcing that access. Kerberos provides a mechanism that allows both users and machines to identify themselves to network and receive defined, limited access to the areas and services that the administrator configured. Kerberos authenticates entities by verifying their identity, and Kerberos also secures this authenticating data so that it cannot be accessed and used or tampered with by an outsider. 3.1. About Kerberos Kerberos is a network authentication protocol created by MIT, and uses symmetric-key cryptography [1] to authenticate users to network services, which means passwords are never actually sent over the network. Consequently, when users authenticate to network services using Kerberos, unauthorized users attempting to gather passwords by monitoring network traffic are effectively thwarted. 3.1.1. How Kerberos Works Most conventional network services use password-based authentication schemes, where a user supplies a password to access a given network server. However, the transmission of authentication information for many services is unencrypted. For such a scheme to be secure, the network has to be inaccessible to outsiders, and all computers and users on the network must be trusted and trustworthy. With simple, password-based authentication, a network that is connected to the Internet cannot be assumed to be secure. Any attacker who gains access to the network can use a simple packet analyzer, or packet sniffer , to intercept usernames and passwords, compromising user accounts and, therefore, the integrity of the entire security infrastructure. Kerberos eliminates the transmission of unencrypted passwords across the network and removes the potential threat of an attacker sniffing the network. Rather than authenticating each user to each network service separately as with simple password authentication, Kerberos uses symmetric encryption and a trusted third party (a key distribution center or KDC) to authenticate users to a suite of network services. The computers managed by that KDC and any secondary KDCs constitute a realm . When a user authenticates to the KDC, the KDC sends a set of credentials (a ticket ) specific to that session back to the user's machine, and any Kerberos-aware services look for the ticket on the user's machine rather than requiring the user to authenticate using a password. As shown in Figure 3.1, "Kerberos Authentication, in Steps" , each user is identified to the KDC with a unique identity, called a principal . When a user on a Kerberos-aware network logs into his workstation, his principal is sent to the KDC as part of a request for a ticket-getting ticket (or TGT) from the authentication server. This request can be sent by the login program so that it is transparent to the user or can be sent manually by a user through the kinit program after the user logs in. The KDC then checks for the principal in its database. If the principal is found, the KDC creates a TGT, encrypts it using the user's key, and sends the TGT to that user. Figure 3.1. 
Kerberos Authentication, in Steps The login or kinit program on the client then decrypts the TGT using the user's key, which it computes from the user's password. The user's key is used only on the client machine and is not transmitted over the network. The ticket (or credentials) sent by the KDC are stored in a local file, the credentials cache , which can be checked by Kerberos-aware services. After authentication, servers can check an unencrypted list of recognized principals and their keys rather than checking kinit ; this list is kept in a keytab . The TGT is set to expire after a certain period of time (usually ten to twenty-four hours) and is stored in the client machine's credentials cache. An expiration time is set so that a compromised TGT is of use to an attacker for only a short period of time. After the TGT has been issued, the user does not have to re-enter their password until the TGT expires or until they log out and log in again. Whenever the user needs access to a network service, the client software uses the TGT to request a new ticket for that specific service from the ticket-granting server (TGS). The service ticket is then used to authenticate the user to that service transparently. Warning The Kerberos system can be compromised if a user on the network authenticates against a non-Kerberos aware service by transmitting a password in plain text. The use of non-Kerberos aware services (including telnet and FTP) is highly discouraged. Other encrypted protocols, such as SSH or SSL-secured services, are preferred to unencrypted services, but this is still not ideal. Kerberos relies on being able to resolve machine names and on accurate timestamps to issue and expire tickets. Thus, Kerberos requires both adequate clock synchronization and a working domain name service (DNS) to function correctly. Approximate clock synchronization between the machines on the network can be set up using a service such as ntpd , which is documented in /usr/share/doc/ntp- version-number /html/index.html . Both DNS entries and hosts on the network must be properly configured, which is covered in the Kerberos documentation in /usr/share/doc/krb5-server- version-number . 3.1.2. Considerations for Deploying Kerberos Although Kerberos removes a common and severe security threat, it is difficult to implement for a variety of reasons: Migrating user passwords from a standard UNIX password database, such as /etc/passwd or /etc/shadow , to a Kerberos password database can be tedious. There is no automated mechanism to perform this task. This is covered in question 2.23 in the online Kerberos FAQ for the US Navy. Kerberos assumes that each user is trusted but is using an untrusted host on an untrusted network. Its primary goal is to prevent unencrypted passwords from being transmitted across that network. However, if anyone other than the proper user has access to the one host that issues tickets used for authentication - the KDC - the entire Kerberos authentication system is at risk. For an application to use Kerberos, its source must be modified to make the appropriate calls into the Kerberos libraries. Applications modified in this way are considered to be Kerberos-aware , or kerberized . For some applications, this can be quite problematic due to the size of the application or its design. For other incompatible applications, changes must be made to the way in which the server and client communicate. Again, this can require extensive programming.
Closed-source applications that do not have Kerberos support by default are often the most problematic. Kerberos is an all-or-nothing solution. If Kerberos is used on the network, any unencrypted passwords transferred to a non-Kerberos aware service are at risk. Thus, the network gains no benefit from the use of Kerberos. To secure a network with Kerberos, one must either use Kerberos-aware versions of all client/server applications that transmit passwords unencrypted, or not use that client/server application at all. 3.1.3. Additional Resources for Kerberos Kerberos can be a complex service to implement, with a lot of flexibility in how it is deployed. Table 3.1, "External Kerberos Documentation" and Table 3.2, "Important Kerberos Manpages" list of a few of the most important or most useful sources for more information on using Kerberos. Table 3.1. External Kerberos Documentation Documentation Location Kerberos V5 Installation Guide (in both PostScript and HTML) /usr/share/doc/krb5-server- version-number Kerberos V5 System Administrator's Guide (in both PostScript and HTML) /usr/share/doc/krb5-server- version-number Kerberos V5 UNIX User's Guide (in both PostScript and HTML) /usr/share/doc/krb5-workstation- version-number "Kerberos: The Network Authentication Protocol" webpage from MIT http://web.mit.edu/kerberos/www/ The Kerberos Frequently Asked Questions (FAQ) http://www.cmf.nrl.navy.mil/CCS/people/kenh/kerberos-faq.html Designing an Authentication System: a Dialogue in Four Scenes , originally by Bill Bryant in 1988, modified by Theodore Ts'o in 1997. This document is a conversation between two developers who are thinking through the creation of a Kerberos-style authentication system. The conversational style of the discussion makes this a good starting place for people who are completely unfamiliar with Kerberos. http://web.mit.edu/kerberos/www/dialogue.html A how-to article for kerberizing a network. http://www.ornl.gov/~jar/HowToKerb.html Any of the manpage files can be opened by running man command_name . Table 3.2. Important Kerberos Manpages Manpage Description Client Applications kerberos An introduction to the Kerberos system which describes how credentials work and provides recommendations for obtaining and destroying Kerberos tickets. The bottom of the man page references a number of related man pages. kinit Describes how to use this command to obtain and cache a ticket-granting ticket. kdestroy Describes how to use this command to destroy Kerberos credentials. klist Describes how to use this command to list cached Kerberos credentials. Administrative Applications kadmin Describes how to use this command to administer the Kerberos V5 database. kdb5_util Describes how to use this command to create and perform low-level administrative functions on the Kerberos V5 database. Server Applications krb5kdc Describes available command line options for the Kerberos V5 KDC. kadmind Describes available command line options for the Kerberos V5 administration server. Configuration Files krb5.conf Describes the format and options available within the configuration file for the Kerberos V5 library. kdc.conf Describes the format and options available within the configuration file for the Kerberos V5 AS and KDC. [1] A system where both the client and the server share a common key that is used to encrypt and decrypt network communication.
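The ticket life cycle described in Section 3.1.1 can be exercised directly from a shell with the client tools listed in Table 3.2, "Important Kerberos Manpages". The following is a minimal sketch; the principal user@EXAMPLE.COM is a hypothetical placeholder rather than a value taken from this guide, and the commands assume the krb5-workstation client tools are installed.
# Request a ticket-granting ticket from the KDC; kinit prompts for the principal's password
kinit user@EXAMPLE.COM
# Inspect the credentials cache, including the TGT and its expiration time
klist
# Destroy the credentials cache when the session is finished
kdestroy
After kinit succeeds, Kerberos-aware services obtain service tickets on the user's behalf, and no further password entry is needed until the TGT expires.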
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_smart_cards/Using_Kerberos
Chapter 1. Introduction
Chapter 1. Introduction 1.1. About the User Interface Guide This guide is for architects, engineers, consultants, and others who want to use the Migration Toolkit for Applications (MTA) user interface to accelerate large-scale application modernization efforts across hybrid cloud environments on Red Hat OpenShift. This solution provides insight throughout the adoption process, at both the portfolio and application levels: inventory, assess, analyze, and manage applications for faster migration to OpenShift via the user interface. 1.2. About the Migration Toolkit for Applications What is the Migration Toolkit for Applications? Migration Toolkit for Applications (MTA) accelerates large-scale application modernization efforts across hybrid cloud environments on Red Hat OpenShift. This solution provides insight throughout the adoption process, at both the portfolio and application levels: inventory, assess, analyze, and manage applications for faster migration to OpenShift via the user interface. In MTA 7.1 and later, when you add an application to the Application Inventory , MTA automatically creates and executes language and technology discovery tasks. Language discovery identifies the programming languages used in the application. Technology discovery identifies technologies, such as Enterprise Java Beans (EJB), Spring, etc. Then, each task assigns appropriate tags to the application, reducing the time and effort you spend manually tagging the application. MTA uses an extensive default questionnaire as the basis for assessing your applications, or you can create your own custom questionnaire, enabling you to estimate the difficulty, time, and other resources needed to prepare an application for containerization. You can use the results of an assessment as the basis for discussions between stakeholders to determine which applications are good candidates for containerization, which require significant work first, and which are not suitable for containerization. MTA analyzes applications by applying one or more rulesets to each application considered to determine which specific lines of that application must be modified before it can be modernized. MTA examines application artifacts, including project source directories and application archives, and then produces an HTML report highlighting areas needing changes. How does the Migration Toolkit for Applications simplify migration? The Migration Toolkit for Applications looks for common resources and known trouble spots when migrating applications. It provides a high-level view of the technologies used by the application. MTA generates a detailed report evaluating a migration or modernization path. This report can help you to estimate the effort required for large-scale projects and to reduce the work involved. 1.3. About the user interface The user interface for the Migration Toolkit for Applications allows a team of users to assess and analyze applications for risks and suitability for migration to hybrid cloud environments on Red Hat OpenShift. Use the user interface to assess and analyze your applications to get insights about potential pitfalls in the adoption process, at both the portfolio and application levels as you inventory, assess, analyze, and manage applications for faster migration to OpenShift.
null
https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.2/html/user_interface_guide/mta-ui-guide-introduction
Chapter 5. Running routes inside Red Hat Fuse Tooling
Chapter 5. Running routes inside Red Hat Fuse Tooling There are two ways to run your routes using the tooling: Section 5.1, "Running routes as a local Camel context" Section 5.2, "Running routes using Maven" 5.1. Running routes as a local Camel context Overview The simplest way to run an Apache Camel route is as a Local Camel Context . This method enables you to launch the route directly from the Project Explorer view's context menu. When you run a route from the context menu, the tooling automatically creates a runtime profile for you. You can also create a custom runtime profile for running your route. Your route runs as if it were invoked directly from the command line and uses Apache Camel's embedded Spring container. You can configure a number of the runtime parameters by editing the runtime profile. Procedure To run a route as a local Camel context: In the Project Explorer view, select a routing context file. Right-click it to open the context menu, and then select Run As Local Camel Context . Note Selecting Local Camel Context (without tests) directs the tooling to run the project without performing validation tests, which may be faster. Result The Console view displays the output generated from running the route. Related topics Section 5.3.1, "Editing a Local Camel Context runtime profile" 5.2. Running routes using Maven Overview If the project containing your route is a Maven project, you can use the m2e plug-in to run your route. Using this option, you can execute any Maven goals, before the route runs. Procedure To run a route using Maven: In the Project Explorer view, select the root of the project . Right-click it to open the context menu, and then select Run As Maven build . The first time you run the project using Maven, the Edit Configuration and launch editor opens, so you can create a Maven runtime profile. To create the runtime profile, on the Maven tab: Make sure the route directory of your Apache Camel project appears in the Base directory: field. For example, on Linux the root of your project is similar to ~/workspace/simple-router . In the Goals: field, enter camel:run . Important If you created your project using the Java DSL, enter exec:java in the Goals: field. Click Apply and then Run . Subsequent Maven runs use this profile, unless you modify it between runs. Results The Console view displays the output from the Maven run. Related topics Section 5.3.2, "Editing a Maven runtime profile" 5.3. Working with runtime profiles Red Hat Fuse Tooling stores information about the runtime environments for each project in runtime profiles . The runtime profiles keep track of such information as which Maven goals to call, the Java runtime environment to use, any system variables that need to be set, and so on. A project can have more than one runtime profile. 5.3.1. Editing a Local Camel Context runtime profile Overview A Local Camel Context runtime profile configures how Apache Camel is invoked to execute a route. A Local Camel Context runtime profile stores the name of the context file in which your routes are defined, the name of the main to invoke, the command line options passed into the JVM, the JRE to use, the classpath to use, any environment variables that need to be set, and a few other pieces of information. The runtime configuration editor for a Local Camel Context runtime profile contains the following tabs: Camel Context File - specifies the name of the new configuration and the full path of the routing context file that contains your routes. 
JMX - specifies JMX connection details, including the JMX URI and the user name and password (optional) to use to access it. Main - specifies the fully qualified name of the project's base directory, a few options for locating the base directory, any goals required to execute before running the route, and the version of the Maven runtime to use. JRE - specifies the JRE and command line arguments to use when starting the JVM. Refresh - specifies how Maven refreshes the project's resource files after a run terminates. Environment - specifies any environment variables that need to be set. Common - specifies how the profile is stored and the output displayed. The first time an Apache Camel route is run as a Local Camel Context , Red Hat Fuse Tooling creates for the routing context file a default runtime profile, which should not require editing. Accessing the Local Camel Context's runtime configuration editor In the Project Explorer view, select the Camel context file for which you want to edit or create a custom runtime profile. Right-click it to open the context menu, and then select Run As Run Configurations to open the Run Configurations dialog. In the context selection pane, select Local Camel Context , and then click at the top, left of the context selection pane. In the Name field, enter a new name for your runtime profile. Figure 5.1. Runtime configuration editor for Local Camel Context Setting the camel context file The Camel Context File tab has one field, Select Camel Context file... . Enter the full path to the routing context file that contains your route definitions. The Browse button accesses the Open Resource dialog, which facilitates locating the target routing context file. This dialog is preconfigured to search for files that contain Apache Camel routes. Changing the command line options By default the only command line option passed to the JVM is: If you are using a custom main class you may need to pass in different options. To do so, on the Main tab, click the Add button to enter a parameter's name and value. You can click the Add Parameter dialog's Variables... button to display a list of variables that you can select. To add or modify JVM-specific arguments, edit the VM arguments field on the JRE tab. Changing where output is sent By default, the output generated from running the route is sent to the Console view. But you can redirect it to a file instead. To redirect output to a file: Select the Common tab. In the Standard Input and Output pane, click the checkbox to the Output File: field, and then enter the path to the file where you want to send the output. The Workspace , File System , and Variables buttons facilitate building the path to the output file. Related topics Section 5.1, "Running routes as a local Camel context" 5.3.2. Editing a Maven runtime profile Overview A Maven runtime profile configures how Maven invokes Apache Camel. A Maven runtime profile stores the Maven goals to execute, any Maven profiles to use, the version of Maven to use, the JRE to use, the classpath to use, any environment variables that need to be set, and a few other pieces of information. Important The first time an Apache Camel route is run using Maven, you must create a default runtime profile for it. 
The runtime configuration editor for a Fuse runtime profile contains the following tabs: Main - specifies the name of the new configuration, the fully qualified name of the project's base directory, a few options for locating the base directory, any goals required to execute before running the route, and the version of the Maven runtime to use. JRE - specifies the JRE and command line arguments to use when starting the JVM. Refresh - specifies how Maven refreshes the project's resource files after a run terminates. Source - specifies the location of any additional sources that the project requires. Environment - specifies any environment variables that need to be set. Common - specifies how the profile is stored and the output displayed. Accessing the Maven runtime configuration editor In the Project Explorer view, select the root of the project for which you want to edit or create a custom runtime profile. Right-click it to open the context menu, and then select Run As Run Configurations to open the Run Configurations dialog. In the context selection pane, select Maven Build , and then click at the top, left of the context selection pane. Figure 5.2. Runtime configuration editor for Maven Changing the Maven goal The most commonly used goal when running a route is camel:run . It loads the routes into a Spring container running in its own JVM. The Apache Camel plug-in also supports a camel:embedded goal that loads the Spring container into the same JVM used by Maven. The advantage of this is that the routes should bootstrap faster. Projects based on Java DSL use the exec:java goal. If your POM contains other goals, you can change the Maven goal used by clicking the Configure... button to the Maven Runtime field on the Main tab. On the Installations dialog, you edit the Global settings for <selected_runtime> installation field. Changing the version of Maven By default, Red Hat Fuse Tooling for Eclipse uses m2e, which is embedded in Eclipse. If you want to use a different version of Maven or have a newer version installed on your development machine, you can select it from the Maven Runtime drop-down menu on the Main tab. Changing where the output is sent By default, the output from the route execution is sent to the Console view. But you can redirect it to a file instead. To redirect output to a file: Select the Common tab. Click the checkbox to the Output File: field, and then enter the path to the file where you want to send the output. The Workspace , File System , and Variables buttons facilitate building the path to the output file. Related topics Section 5.2, "Running routes using Maven"
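The runtime profiles described in this section ultimately invoke the same Maven goals that you can run by hand from a terminal. The following is a rough sketch, assuming a Spring DSL project located at ~/workspace/simple-router (the placeholder path used earlier in this chapter) and a POM that configures the camel-maven-plugin, as projects generated from the Fuse archetypes do.
# Change to the project root and run the routes in an embedded Spring container
cd ~/workspace/simple-router
mvn camel:run
# For a project built with the Java DSL, use the exec plugin instead
mvn exec:java
Output appears in the terminal rather than the Console view, but the routes behave the same as when launched from the tooling.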
[ "-fa context-file" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/tooling_user_guide/riderrunning
Chapter 7. Sources
Chapter 7. Sources The updated Red Hat Ceph Storage source code packages are available at the following locations: For Red Hat Enterprise Linux 8: http://ftp.redhat.com/redhat/linux/enterprise/8Base/en/RHCEPH/SRPMS/ For Red Hat Enterprise Linux 9: https://ftp.redhat.com/redhat/linux/enterprise/9Base/en/RHCEPH/SRPMS/
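To download an individual source package from one of these locations, any HTTP client works. A sketch with wget is shown below; the package file name is a placeholder that you would copy from the directory listing.
# Fetch a specific source RPM from the RHEL 9 SRPMS directory
wget https://ftp.redhat.com/redhat/linux/enterprise/9Base/en/RHCEPH/SRPMS/<package-version-release>.src.rpm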
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/6.1_release_notes/sources
Chapter 2. Supported Configurations
Chapter 2. Supported Configurations For information on supported configurations, see Red Hat AMQ Broker 7 Supported Configurations . Minimum Java version At a minimum, AMQ Broker 7.11 requires Java version 11 to run. OpenWire support AMQ Broker 7 has provided support for the OpenWire protocol since its release in 2017 as a means to migrate client applications to AMQ 7. With the release of AMQ Broker 7.9.0 in 2021, the OpenWire protocol was deprecated and customers were encouraged to migrate their existing OpenWire client applications to one of the fully supported protocols of AMQ 7 (CORE, AMQP, MQTT, or STOMP). Starting with the AMQ Broker 8.0 release, the OpenWire protocol will be removed from AMQ Broker.
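Because the broker does not start on an older JVM, it is worth confirming the Java version on the host before installing or upgrading to AMQ Broker 7.11. For example:
# The reported major version must be 11 or later
java -version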
null
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.11/html/release_notes_for_red_hat_amq_broker_7.11/supported_configurations
Chapter 5. Upgrading the Migration Toolkit for Containers
Chapter 5. Upgrading the Migration Toolkit for Containers You can upgrade the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4.13 by using Operator Lifecycle Manager. You can upgrade MTC on OpenShift Container Platform 4.5, and earlier versions, by reinstalling the legacy Migration Toolkit for Containers Operator. Important If you are upgrading from MTC version 1.3, you must perform an additional procedure to update the MigPlan custom resource (CR). 5.1. Upgrading the Migration Toolkit for Containers on OpenShift Container Platform 4.13 You can upgrade the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4.13 by using the Operator Lifecycle Manager. Important When upgrading the MTC by using the Operator Lifecycle Manager, you must use a supported migration path. Migration paths Migrating from OpenShift Container Platform 3 to OpenShift Container Platform 4 requires a legacy MTC Operator and MTC 1.7.x. Migrating from MTC 1.7.x to MTC 1.8.x is not supported. You must use MTC 1.7.x to migrate anything with a source of OpenShift Container Platform 4.9 or earlier. MTC 1.7.x must be used on both source and destination. MTC 1.8.x only supports migrations from OpenShift Container Platform 4.10 or later to OpenShift Container Platform 4.10 or later. For migrations only involving cluster versions 4.10 and later, either 1.7.x or 1.8.x may be used. However, it must be the same MTC version on both source & destination. Migration from source MTC 1.7.x to destination MTC 1.8.x is unsupported. Migration from source MTC 1.8.x to destination MTC 1.7.x is unsupported. Migration from source MTC 1.7.x to destination MTC 1.7.x is supported. Migration from source MTC 1.8.x to destination MTC 1.8.x is supported Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform console, navigate to Operators Installed Operators . Operators that have a pending upgrade display an Upgrade available status. Click Migration Toolkit for Containers Operator . Click the Subscription tab. Any upgrades requiring approval are displayed to Upgrade Status . For example, it might display 1 requires approval . Click 1 requires approval , then click Preview Install Plan . Review the resources that are listed as available for upgrade and click Approve . Navigate back to the Operators Installed Operators page to monitor the progress of the upgrade. When complete, the status changes to Succeeded and Up to date . Click Workloads Pods to verify that the MTC pods are running. 5.2. Upgrading the Migration Toolkit for Containers to 1.8.0 To upgrade the Migration Toolkit for Containers to 1.8.0, complete the following steps. 
Procedure Determine subscription names and current channels to work with for upgrading by using one of the following methods: Determine the subscription names and channels by running the following command: USD oc -n openshift-migration get sub Example output NAME PACKAGE SOURCE CHANNEL mtc-operator mtc-operator mtc-operator-catalog release-v1.7 redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace redhat-oadp-operator mtc-operator-catalog stable-1.0 Or return the subscription names and channels in JSON by running the following command: USD oc -n openshift-migration get sub -o json | jq -r '.items[] | { name: .metadata.name, package: .spec.name, channel: .spec.channel }' Example output { "name": "mtc-operator", "package": "mtc-operator", "channel": "release-v1.7" } { "name": "redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace", "package": "redhat-oadp-operator", "channel": "stable-1.0" } For each subscription, patch to move from the MTC 1.7 channel to the MTC 1.8 channel by running the following command: USD oc -n openshift-migration patch subscription mtc-operator --type merge --patch '{"spec": {"channel": "release-v1.8"}}' Example output subscription.operators.coreos.com/mtc-operator patched 5.2.1. Upgrading OADP 1.0 to 1.2 for Migration Toolkit for Containers 1.8.0 To upgrade OADP 1.0 to 1.2 for Migration Toolkit for Containers 1.8.0, complete the following steps. Procedure For each subscription, patch the OADP operator from OADP 1.0 to OADP 1.2 by running the following command: USD oc -n openshift-migration patch subscription redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace --type merge --patch '{"spec": {"channel":"stable-1.2"}}' Note Sections indicating the user-specific returned NAME values that are used for the installation of MTC & OADP, respectively. Example output subscription.operators.coreos.com/redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace patched Note The returned value will be similar to redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace , which is used in this example. If the installPlanApproval parameter is set to Automatic , the Operator Lifecycle Manager (OLM) begins the upgrade process. If the installPlanApproval parameter is set to Manual , you must approve each installPlan before the OLM begins the upgrades. Verification Verify that the OLM has completed the upgrades of OADP and MTC by running the following command: USD oc -n openshift-migration get subscriptions.operators.coreos.com mtc-operator -o json | jq '.status | (."state"=="AtLatestKnown")' When a value of true is returned, verify the channel used for each subscription by running the following command: USD oc -n openshift-migration get sub -o json | jq -r '.items[] | {name: .metadata.name, channel: .spec.channel }' Example output { "name": "mtc-operator", "channel": "release-v1.8" } { "name": "redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace", "channel": "stable-1.2" } USD oc -n openshift-migration get csv Example output NAME DISPLAY VERSION REPLACES PHASE mtc-operator.v1.8.0 Migration Toolkit for Containers Operator 1.8.0 mtc-operator.v1.7.13 Succeeded oadp-operator.v1.2.2 OADP Operator 1.2.2 oadp-operator.v1.0.13 Succeeded 5.3. 
Upgrading the Migration Toolkit for Containers on OpenShift Container Platform versions 4.2 to 4.5 You can upgrade Migration Toolkit for Containers (MTC) on OpenShift Container Platform versions 4.2 to 4.5 by manually installing the legacy Migration Toolkit for Containers Operator. Prerequisites You must be logged in as a user with cluster-admin privileges. You must have access to registry.redhat.io . You must have podman installed. Procedure Log in to registry.redhat.io with your Red Hat Customer Portal credentials by entering the following command: USD podman login registry.redhat.io Download the operator.yml file by entering the following command: USD podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./ Replace the Migration Toolkit for Containers Operator by entering the following command: USD oc replace --force -f operator.yml Scale the migration-operator deployment to 0 to stop the deployment by entering the following command: USD oc scale -n openshift-migration --replicas=0 deployment/migration-operator Scale the migration-operator deployment to 1 to start the deployment and apply the changes by entering the following command: USD oc scale -n openshift-migration --replicas=1 deployment/migration-operator Verify that the migration-operator was upgraded by entering the following command: USD oc -o yaml -n openshift-migration get deployment/migration-operator | grep image: | awk -F ":" '{ print USDNF }' Download the controller.yml file by entering the following command: USD podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./ Create the migration-controller object by entering the following command: USD oc create -f controller.yml Verify that the MTC pods are running by entering the following command: USD oc get pods -n openshift-migration 5.4. Upgrading MTC 1.3 to 1.8 If you are upgrading Migration Toolkit for Containers (MTC) version 1.3.x to 1.8, you must update the MigPlan custom resource (CR) manifest on the cluster on which the MigrationController pod is running. Because the indirectImageMigration and indirectVolumeMigration parameters do not exist in MTC 1.3, their default value in version 1.4 is false , which means that direct image migration and direct volume migration are enabled. Because the direct migration requirements are not fulfilled, the migration plan cannot reach a Ready state unless these parameter values are changed to true . Important Migrating from OpenShift Container Platform 3 to OpenShift Container Platform 4 requires a legacy MTC Operator and MTC 1.7.x. Upgrading MTC 1.7.x to 1.8.x requires manually updating the OADP channel from stable-1.0 to stable-1.2 in order to successfully complete the upgrade from 1.7.x to 1.8.x. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure Log in to the cluster on which the MigrationController pod is running. Get the MigPlan CR manifest: USD oc get migplan <migplan> -o yaml -n openshift-migration Update the following parameter values and save the file as migplan.yaml : ... spec: indirectImageMigration: true indirectVolumeMigration: true Replace the MigPlan CR manifest to apply the changes: USD oc replace -f migplan.yaml -n openshift-migration Get the updated MigPlan CR manifest to verify the changes: USD oc get migplan <migplan> -o yaml -n openshift-migration
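If the installPlanApproval parameter on your subscriptions is set to Manual, the OLM pauses an upgrade until its install plan is approved. The following is a rough sketch of approving a pending install plan from the command line; the install plan name is a placeholder taken from the output of the first command.
# List install plans in the MTC namespace and note any with APPROVED set to false
oc -n openshift-migration get installplan
# Approve a specific install plan so that the OLM can proceed
oc -n openshift-migration patch installplan <installplan_name> --type merge --patch '{"spec":{"approved":true}}'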
[ "oc -n openshift-migration get sub", "NAME PACKAGE SOURCE CHANNEL mtc-operator mtc-operator mtc-operator-catalog release-v1.7 redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace redhat-oadp-operator mtc-operator-catalog stable-1.0", "oc -n openshift-migration get sub -o json | jq -r '.items[] | { name: .metadata.name, package: .spec.name, channel: .spec.channel }'", "{ \"name\": \"mtc-operator\", \"package\": \"mtc-operator\", \"channel\": \"release-v1.7\" } { \"name\": \"redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace\", \"package\": \"redhat-oadp-operator\", \"channel\": \"stable-1.0\" }", "oc -n openshift-migration patch subscription mtc-operator --type merge --patch '{\"spec\": {\"channel\": \"release-v1.8\"}}'", "subscription.operators.coreos.com/mtc-operator patched", "oc -n openshift-migration patch subscription redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace --type merge --patch '{\"spec\": {\"channel\":\"stable-1.2\"}}'", "subscription.operators.coreos.com/redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace patched", "oc -n openshift-migration get subscriptions.operators.coreos.com mtc-operator -o json | jq '.status | (.\"state\"==\"AtLatestKnown\")'", "oc -n openshift-migration get sub -o json | jq -r '.items[] | {name: .metadata.name, channel: .spec.channel }'", "{ \"name\": \"mtc-operator\", \"channel\": \"release-v1.8\" } { \"name\": \"redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace\", \"channel\": \"stable-1.2\" }", "Confirm that the `mtc-operator.v1.8.0` and `oadp-operator.v1.2.x` packages are installed by running the following command:", "oc -n openshift-migration get csv", "NAME DISPLAY VERSION REPLACES PHASE mtc-operator.v1.8.0 Migration Toolkit for Containers Operator 1.8.0 mtc-operator.v1.7.13 Succeeded oadp-operator.v1.2.2 OADP Operator 1.2.2 oadp-operator.v1.0.13 Succeeded", "podman login registry.redhat.io", "podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7:/operator.yml ./", "oc replace --force -f operator.yml", "oc scale -n openshift-migration --replicas=0 deployment/migration-operator", "oc scale -n openshift-migration --replicas=1 deployment/migration-operator", "oc -o yaml -n openshift-migration get deployment/migration-operator | grep image: | awk -F \":\" '{ print USDNF }'", "podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./", "oc create -f controller.yml", "oc get pods -n openshift-migration", "oc get migplan <migplan> -o yaml -n openshift-migration", "spec: indirectImageMigration: true indirectVolumeMigration: true", "oc replace -f migplan.yaml -n openshift-migration", "oc get migplan <migplan> -o yaml -n openshift-migration" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/migration_toolkit_for_containers/upgrading-mtc
probe::stap.pass1a
probe::stap.pass1a Name probe::stap.pass1a - Starting stap pass1 (parsing user script) Synopsis stap.pass1a Values session the systemtap_session variable s Description pass1a fires just after the call to gettimeofday , before the user script is parsed.
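A minimal sketch of a script that uses this probe point is shown below. It is illustrative rather than taken from the tapset reference, and it assumes SystemTap is installed with its static markers tapset so that probes on the stap binary can be resolved.
# stap-pass1-monitor.stp: report whenever another stap session starts pass 1
probe begin { printf("monitoring stap pass1 events, press Ctrl-C to stop\n") }
probe stap.pass1a { printf("a stap session is starting pass 1 (parse)\n") }
Run it with stap stap-pass1-monitor.stp while launching another stap session in a second terminal.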
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-stap-pass1a
5.3. Remote Query
5.3. Remote Query When executing remote queries, the cacheManager must be an instance of RemoteCacheManager , and an example configuration utilizing a RemoteCacheManager is found below for both Java and blueprint.xml: Using only Java Using Blueprint and Java Java RemoteCacheManagerFactory class: Java InfinispanQueryExample class: blueprint.xml: The remote_query_cache is an arbitrary name for a cache that holds the data, and the results of the query will be a list of domain objects stored as a CamelInfinispanOperationResult header. In addition, there are the following requirements: The RemoteCacheManager must be configured to use ProtoStreamMarshaller . The ProtoStreamMarshaller must be registered with the RemoteCacheManager 's serialization context. The .proto descriptors for domain objects must be registered with the remote JBoss Data Grid server. For more details on how to set up a RemoteCacheManager , see the Remote Querying section of the Red Hat JBoss Data Grid Infinispan Query Guide .
[ "from(\"direct:start\") .setHeader(InfinispanConstants.OPERATION, InfinispanConstants.QUERY) .setHeader(InfinispanConstants.QUERY_BUILDER, new InfinispanQueryBuilder() { public Query build(QueryFactory<Query> queryFactory) { return queryFactory.from(User.class).having(\"name\").like(\"%abc%\") .toBuilder().build(); } }) .to(\"infinispan://localhost?cacheContainer=#cacheManager&cacheName=remote_query_cache\") ;", "public class RemoteCacheManagerFactory { ConfigurationBuilder clientBuilder; public RemoteCacheManagerFactory(String hostname, int port) { clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host(hostname).port(port); } public RemoteCacheManager newRemoteCacheManager() { return new RemoteCacheManager(clientBuilder.build()); } }", "public class InfinispanQueryExample { public InfinispanQueryBuilder getBuilder() { return new InfinispanQueryBuilder() { public Query build(QueryFactory<Query> queryFactory) { return queryFactory.from(User.class) .having(\"name\") .like(\"%abc%\") .toBuilder().build(); } } } }", "<bean id=\"remoteCacheManagerFactory\" class=\"com.jboss.datagrid.RemoteCacheManagerFactory\"> <argument value=\"localhost\"/> <argument value=\"11222\"/> </bean> <bean id=\"cacheManager\" factory-ref=\"remoteCacheManagerFactory\" factory-method=\"newRemoteCacheManager\"> </bean> <bean id=\"queryBuilder\" class=\"org.example.com.InfinispanQueryExample\"/> <camelContext id=\"route\" xmlns=\"http://camel.apache.org/schema/blueprint\"> <route> <from uri=\"direct:start\"/> <setHeader headerName=\"CamelInfinispanOperation\"> <constant>CamelInfinispanOperationQuery</constant> </setHeader> <setHeader headerName=\"CamelInfinispanQueryBuilder\"> <method ref=\"queryBuilder\" method=\"getBuilder\"/> </setHeader> <to uri=\"infinispan://localhost?cacheContainer=#cacheManager&cacheName=remote_query_cache\"/> </route> </camelContext>" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/remote_query
Chapter 4. API index
Chapter 4. API index API API group AdminNetworkPolicy policy.networking.k8s.io/v1alpha1 AdminPolicyBasedExternalRoute k8s.ovn.org/v1 AlertingRule monitoring.openshift.io/v1 Alertmanager monitoring.coreos.com/v1 AlertmanagerConfig monitoring.coreos.com/v1beta1 AlertRelabelConfig monitoring.openshift.io/v1 APIRequestCount apiserver.openshift.io/v1 APIServer config.openshift.io/v1 APIService apiregistration.k8s.io/v1 AppliedClusterResourceQuota quota.openshift.io/v1 Authentication config.openshift.io/v1 Authentication operator.openshift.io/v1 BareMetalHost metal3.io/v1alpha1 BaselineAdminNetworkPolicy policy.networking.k8s.io/v1alpha1 Binding v1 BMCEventSubscription metal3.io/v1alpha1 BrokerTemplateInstance template.openshift.io/v1 Build build.openshift.io/v1 Build config.openshift.io/v1 BuildConfig build.openshift.io/v1 BuildLog build.openshift.io/v1 BuildRequest build.openshift.io/v1 CatalogSource operators.coreos.com/v1alpha1 CertificateSigningRequest certificates.k8s.io/v1 CloudCredential operator.openshift.io/v1 CloudPrivateIPConfig cloud.network.openshift.io/v1 ClusterAutoscaler autoscaling.openshift.io/v1 ClusterCSIDriver operator.openshift.io/v1 ClusterOperator config.openshift.io/v1 ClusterResourceQuota quota.openshift.io/v1 ClusterRole authorization.openshift.io/v1 ClusterRole rbac.authorization.k8s.io/v1 ClusterRoleBinding authorization.openshift.io/v1 ClusterRoleBinding rbac.authorization.k8s.io/v1 ClusterServiceVersion operators.coreos.com/v1alpha1 ClusterVersion config.openshift.io/v1 ComponentStatus v1 Config imageregistry.operator.openshift.io/v1 Config operator.openshift.io/v1 Config samples.operator.openshift.io/v1 ConfigMap v1 Console config.openshift.io/v1 Console operator.openshift.io/v1 ConsoleCLIDownload console.openshift.io/v1 ConsoleExternalLogLink console.openshift.io/v1 ConsoleLink console.openshift.io/v1 ConsoleNotification console.openshift.io/v1 ConsolePlugin console.openshift.io/v1 ConsoleQuickStart console.openshift.io/v1 ConsoleSample console.openshift.io/v1 ConsoleYAMLSample console.openshift.io/v1 ContainerRuntimeConfig machineconfiguration.openshift.io/v1 ControllerConfig machineconfiguration.openshift.io/v1 ControllerRevision apps/v1 ControlPlaneMachineSet machine.openshift.io/v1 CredentialsRequest cloudcredential.openshift.io/v1 CronJob batch/v1 CSIDriver storage.k8s.io/v1 CSINode storage.k8s.io/v1 CSISnapshotController operator.openshift.io/v1 CSIStorageCapacity storage.k8s.io/v1 CustomResourceDefinition apiextensions.k8s.io/v1 DaemonSet apps/v1 DataImage metal3.io/v1alpha1 Deployment apps/v1 DeploymentConfig apps.openshift.io/v1 DeploymentConfigRollback apps.openshift.io/v1 DeploymentLog apps.openshift.io/v1 DeploymentRequest apps.openshift.io/v1 DNS config.openshift.io/v1 DNS operator.openshift.io/v1 DNSRecord ingress.operator.openshift.io/v1 EgressFirewall k8s.ovn.org/v1 EgressIP k8s.ovn.org/v1 EgressQoS k8s.ovn.org/v1 EgressRouter network.operator.openshift.io/v1 EgressService k8s.ovn.org/v1 Endpoints v1 EndpointSlice discovery.k8s.io/v1 Etcd operator.openshift.io/v1 Event v1 Event events.k8s.io/v1 Eviction policy/v1 FeatureGate config.openshift.io/v1 FirmwareSchema metal3.io/v1alpha1 FlowSchema flowcontrol.apiserver.k8s.io/v1 Group user.openshift.io/v1 HardwareData metal3.io/v1alpha1 HelmChartRepository helm.openshift.io/v1beta1 HorizontalPodAutoscaler autoscaling/v2 HostFirmwareComponents metal3.io/v1alpha1 HostFirmwareSettings metal3.io/v1alpha1 Identity user.openshift.io/v1 Image config.openshift.io/v1 Image image.openshift.io/v1 
ImageContentPolicy config.openshift.io/v1 ImageContentSourcePolicy operator.openshift.io/v1alpha1 ImageDigestMirrorSet config.openshift.io/v1 ImagePruner imageregistry.operator.openshift.io/v1 ImageSignature image.openshift.io/v1 ImageStream image.openshift.io/v1 ImageStreamImage image.openshift.io/v1 ImageStreamImport image.openshift.io/v1 ImageStreamLayers image.openshift.io/v1 ImageStreamMapping image.openshift.io/v1 ImageStreamTag image.openshift.io/v1 ImageTag image.openshift.io/v1 ImageTagMirrorSet config.openshift.io/v1 Infrastructure config.openshift.io/v1 Ingress config.openshift.io/v1 Ingress networking.k8s.io/v1 IngressClass networking.k8s.io/v1 IngressController operator.openshift.io/v1 InsightsOperator operator.openshift.io/v1 InstallPlan operators.coreos.com/v1alpha1 IPAddress ipam.cluster.x-k8s.io/v1beta1 IPAddressClaim ipam.cluster.x-k8s.io/v1beta1 IPPool whereabouts.cni.cncf.io/v1alpha1 Job batch/v1 KubeAPIServer operator.openshift.io/v1 KubeControllerManager operator.openshift.io/v1 KubeletConfig machineconfiguration.openshift.io/v1 KubeScheduler operator.openshift.io/v1 KubeStorageVersionMigrator operator.openshift.io/v1 Lease coordination.k8s.io/v1 LimitRange v1 LocalResourceAccessReview authorization.openshift.io/v1 LocalSubjectAccessReview authorization.k8s.io/v1 LocalSubjectAccessReview authorization.openshift.io/v1 Machine machine.openshift.io/v1beta1 MachineAutoscaler autoscaling.openshift.io/v1beta1 MachineConfig machineconfiguration.openshift.io/v1 MachineConfigPool machineconfiguration.openshift.io/v1 MachineConfiguration operator.openshift.io/v1 MachineHealthCheck machine.openshift.io/v1beta1 MachineSet machine.openshift.io/v1beta1 Metal3Remediation infrastructure.cluster.x-k8s.io/v1beta1 Metal3RemediationTemplate infrastructure.cluster.x-k8s.io/v1beta1 MultiNetworkPolicy k8s.cni.cncf.io/v1beta1 MutatingWebhookConfiguration admissionregistration.k8s.io/v1 Namespace v1 Network config.openshift.io/v1 Network operator.openshift.io/v1 NetworkAttachmentDefinition k8s.cni.cncf.io/v1 NetworkPolicy networking.k8s.io/v1 Node v1 Node config.openshift.io/v1 NodeMetrics metrics.k8s.io/v1beta1 OAuth config.openshift.io/v1 OAuthAccessToken oauth.openshift.io/v1 OAuthAuthorizeToken oauth.openshift.io/v1 OAuthClient oauth.openshift.io/v1 OAuthClientAuthorization oauth.openshift.io/v1 OLMConfig operators.coreos.com/v1 OpenShiftAPIServer operator.openshift.io/v1 OpenShiftControllerManager operator.openshift.io/v1 Operator operators.coreos.com/v1 OperatorCondition operators.coreos.com/v2 OperatorGroup operators.coreos.com/v1 OperatorHub config.openshift.io/v1 OperatorPKI network.operator.openshift.io/v1 OverlappingRangeIPReservation whereabouts.cni.cncf.io/v1alpha1 PackageManifest packages.operators.coreos.com/v1 PerformanceProfile performance.openshift.io/v2 PersistentVolume v1 PersistentVolumeClaim v1 Pod v1 PodDisruptionBudget policy/v1 PodMetrics metrics.k8s.io/v1beta1 PodMonitor monitoring.coreos.com/v1 PodNetworkConnectivityCheck controlplane.operator.openshift.io/v1alpha1 PodSecurityPolicyReview security.openshift.io/v1 PodSecurityPolicySelfSubjectReview security.openshift.io/v1 PodSecurityPolicySubjectReview security.openshift.io/v1 PodTemplate v1 PreprovisioningImage metal3.io/v1alpha1 PriorityClass scheduling.k8s.io/v1 PriorityLevelConfiguration flowcontrol.apiserver.k8s.io/v1 Probe monitoring.coreos.com/v1 Profile tuned.openshift.io/v1 Project config.openshift.io/v1 Project project.openshift.io/v1 ProjectHelmChartRepository helm.openshift.io/v1beta1 ProjectRequest 
project.openshift.io/v1 Prometheus monitoring.coreos.com/v1 PrometheusRule monitoring.coreos.com/v1 Provisioning metal3.io/v1alpha1 Proxy config.openshift.io/v1 RangeAllocation security.openshift.io/v1 ReplicaSet apps/v1 ReplicationController v1 ResourceAccessReview authorization.openshift.io/v1 ResourceQuota v1 Role authorization.openshift.io/v1 Role rbac.authorization.k8s.io/v1 RoleBinding authorization.openshift.io/v1 RoleBinding rbac.authorization.k8s.io/v1 RoleBindingRestriction authorization.openshift.io/v1 Route route.openshift.io/v1 RuntimeClass node.k8s.io/v1 Scale autoscaling/v1 Scheduler config.openshift.io/v1 Secret v1 SecretList image.openshift.io/v1 SecurityContextConstraints security.openshift.io/v1 SelfSubjectAccessReview authorization.k8s.io/v1 SelfSubjectReview authentication.k8s.io/v1 SelfSubjectRulesReview authorization.k8s.io/v1 SelfSubjectRulesReview authorization.openshift.io/v1 Service v1 ServiceAccount v1 ServiceCA operator.openshift.io/v1 ServiceMonitor monitoring.coreos.com/v1 StatefulSet apps/v1 Storage operator.openshift.io/v1 StorageClass storage.k8s.io/v1 StorageState migration.k8s.io/v1alpha1 StorageVersionMigration migration.k8s.io/v1alpha1 SubjectAccessReview authorization.k8s.io/v1 SubjectAccessReview authorization.openshift.io/v1 SubjectRulesReview authorization.openshift.io/v1 Subscription operators.coreos.com/v1alpha1 Template template.openshift.io/v1 TemplateInstance template.openshift.io/v1 ThanosRuler monitoring.coreos.com/v1 TokenRequest authentication.k8s.io/v1 TokenReview authentication.k8s.io/v1 Tuned tuned.openshift.io/v1 User user.openshift.io/v1 UserIdentityMapping user.openshift.io/v1 UserOAuthAccessToken oauth.openshift.io/v1 ValidatingAdmissionPolicy admissionregistration.k8s.io/v1 ValidatingAdmissionPolicyBinding admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration admissionregistration.k8s.io/v1 VolumeAttachment storage.k8s.io/v1 VolumeSnapshot snapshot.storage.k8s.io/v1 VolumeSnapshotClass snapshot.storage.k8s.io/v1 VolumeSnapshotContent snapshot.storage.k8s.io/v1
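This index can be cross-referenced against a live cluster with the OpenShift CLI. The sketch below uses the console.openshift.io group from the table purely as an example; any group or kind listed above can be substituted.
# List every API resource the cluster serves, with its group, version, and kind
oc api-resources
# Narrow the listing to a single API group from the index
oc api-resources --api-group=console.openshift.io
# Show the schema documentation for one of the listed kinds
oc explain consoleplugin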
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/api_overview/api-index
Web console
Web console OpenShift Container Platform 4.17 Getting started with the web console in OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided>", "oc edit console.config.openshift.io cluster", "apiVersion: config.openshift.io/v1 kind: Console metadata: name: cluster spec: authentication: logoutRedirect: \"\" 1 status: consoleURL: \"\" 2", "oc create configmap console-custom-logo --from-file /path/to/console-custom-logo.png -n openshift-config", "apiVersion: v1 kind: ConfigMap metadata: name: console-custom-logo namespace: openshift-config binaryData: console-custom-logo.png: <base64-encoded_logo> ... 1", "oc edit consoles.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: customLogoFile: key: console-custom-logo.png name: console-custom-logo customProductName: My Console", "oc get clusteroperator console -o yaml", "oc get consoles.operator.openshift.io -o yaml", "apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: example spec: href: 'https://www.example.com' location: HelpMenu 1 text: Link 1", "apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: namespaced-dashboard-link-for-all-namespaces spec: href: 'https://www.example.com' location: NamespaceDashboard text: This appears in all namespaces", "apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: namespaced-dashboard-for-some-namespaces spec: href: 'https://www.example.com' location: NamespaceDashboard # This text will appear in a box called \"Launcher\" under \"namespace\" or \"project\" in the web console text: Custom Link Text namespaceDashboard: namespaces: # for these specific namespaces - my-namespace - your-namespace - other-namespace", "apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: application-menu-link-1 spec: href: 'https://www.example.com' location: ApplicationMenu text: Link 1 applicationMenu: section: My New Section # image that is 24x24 in size imageURL: https://via.placeholder.com/24", "oc edit ingress.config.openshift.io cluster", "apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: console namespace: openshift-console hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2", "oc edit ingress.config.openshift.io cluster", "apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: downloads namespace: openshift-console hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2", "oc adm create-login-template > login.html", "oc adm create-provider-selection-template > providers.html", "oc adm create-error-template > errors.html", "oc create secret generic login-template --from-file=login.html -n openshift-config", "oc create secret generic providers-template --from-file=providers.html -n openshift-config", "oc create secret generic error-template --from-file=errors.html -n openshift-config", "oc edit oauths cluster", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: templates: error: name: error-template login: name: login-template providerSelection: name: providers-template", "apiVersion: console.openshift.io/v1 kind: 
ConsoleExternalLogLink metadata: name: example spec: hrefTemplate: >- https://example.com/logs?resourceName=USD{resourceName}&containerName=USD{containerName}&resourceNamespace=USD{resourceNamespace}&podLabels=USD{podLabels} text: Example Logs", "apiVersion: console.openshift.io/v1 kind: ConsoleNotification metadata: name: example spec: text: This is an example notification message with an optional link. location: BannerTop 1 link: href: 'https://www.example.com' text: Optional link text color: '#fff' backgroundColor: '#0088ce'", "apiVersion: console.openshift.io/v1 kind: ConsoleCLIDownload metadata: name: example-cli-download-links spec: description: | This is an example of download links displayName: example links: - href: 'https://www.example.com/public/example.tar' text: example for linux - href: 'https://www.example.com/public/example.mac.zip' text: example for mac - href: 'https://www.example.com/public/example.win.zip' text: example for windows", "apiVersion: console.openshift.io/v1 kind: ConsoleYAMLSample metadata: name: example spec: targetResource: apiVersion: batch/v1 kind: Job title: Example Job description: An example Job YAML sample yaml: | apiVersion: batch/v1 kind: Job metadata: name: countdown spec: template: metadata: name: countdown spec: containers: - name: counter image: centos:7 command: - \"bin/bash\" - \"-c\" - \"for i in 9 8 7 6 5 4 3 2 1 ; do echo USDi ; done\" restartPolicy: Never", "apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: perspectives: - id: admin visibility: state: Enabled - id: dev visibility: state: Enabled", "apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: perspectives: - id: admin requiresAccessReview: - group: rbac.authorization.k8s.io resource: clusterroles verb: list - id: dev state: Enabled", "apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: perspectives: - id: admin visibility: state: AccessReview accessReview: missing: - resource: deployment verb: list required: - resource: namespaces verb: list - id: dev visibility: state: Enabled", "apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: developerCatalog: categories: types: state: Disabled disabled: - BuilderImage - Devfile - HelmChart", "apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: developerCatalog: categories: types: state: Enabled", "apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: developerCatalog: categories: types: state: Disabled", "apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: developerCatalog: categories: types: state: Enabled enabled: - BuilderImage - Devfile - HelmChart -", "conster Header: React.FC = () => { const { t } = useTranslation('plugin__console-demo-plugin'); return <h1>{t('Hello, World!')}</h1>; };", "conster Header: React.FC = () => { const { t } = useTranslation('plugin__console-demo-plugin'); return <h1>{t('Hello, World!')}</h1>; };", "spec: backend: service: basePath: / name: console-demo-plugin namespace: console-demo-plugin port: 9001 type: Service displayName: OpenShift Console Demo Plugin i18n: loadType: Preload 1", "{ \"type\": \"console.navigation/section\", \"properties\": { \"id\": \"admin-demo-section\", \"perspective\": \"admin\", \"name\": \"%plugin__console-plugin-template~Plugin Template%\" } }", "// 
t('plugin__console-demo-plugin~Demo Plugin')", "yarn i18n", "yarn install", "yarn run start", "oc login", "yarn run start-console", "podman machine ssh sudo -i rpm-ostree install qemu-user-static systemctl reboot", "docker build -t quay.io/my-repositroy/my-plugin:latest .", "docker run -it --rm -d -p 9001:80 quay.io/my-repository/my-plugin:latest", "docker push quay.io/my-repository/my-plugin:latest", "helm upgrade -i my-plugin charts/openshift-console-plugin -n my-plugin-namespace --create-namespace --set plugin.image=my-plugin-image-location", "plugin: name: \"\" description: \"\" image: \"\" imagePullPolicy: IfNotPresent replicas: 2 port: 9443 securityContext: enabled: true podSecurityContext: enabled: true runAsNonRoot: true seccompProfile: type: RuntimeDefault containerSecurityContext: enabled: true allowPrivilegeEscalation: false capabilities: drop: - ALL resources: requests: cpu: 10m memory: 50Mi basePath: / certificateSecretName: \"\" serviceAccount: create: true annotations: {} name: \"\" patcherServiceAccount: create: true annotations: {} name: \"\" jobs: patchConsoles: enabled: true image: \"registry.redhat.io/openshift4/ose-tools-rhel8@sha256:e44074f21e0cca6464e50cb6ff934747e0bd11162ea01d522433a1a1ae116103\" podSecurityContext: enabled: true runAsNonRoot: true seccompProfile: type: RuntimeDefault containerSecurityContext: enabled: true allowPrivilegeEscalation: false capabilities: drop: - ALL resources: requests: cpu: 10m memory: 50Mi", "apiVersion: console.openshift.io/v1 kind: ConsolePlugin metadata: name:<plugin-name> spec: proxy: - alias: helm-charts 1 authorization: UserToken 2 caCertificate: '-----BEGIN CERTIFICATE-----\\nMIID....'en 3 endpoint: 4 service: name: <service-name> namespace: <service-namespace> port: <service-port> type: Service", "\"consolePlugin\": { \"name\": \"my-plugin\", 1 \"version\": \"0.0.1\", 2 \"displayName\": \"My Plugin\", 3 \"description\": \"Enjoy this shiny, new console plugin!\", 4 \"exposedModules\": { \"ExamplePage\": \"./components/ExamplePage\" }, \"dependencies\": { \"@console/pluginAPI\": \"/*\" } }", "{ \"type\": \"console.tab/horizontalNav\", \"properties\": { \"page\": { \"name\": \"Example Tab\", \"href\": \"example\" }, \"model\": { \"group\": \"core\", \"version\": \"v1\", \"kind\": \"Pod\" }, \"component\": { \"USDcodeRef\": \"ExampleTab\" } } }", "\"exposedModules\": { \"ExamplePage\": \"./components/ExamplePage\", \"ExampleTab\": \"./components/ExampleTab\" }", "import * as React from 'react'; export default function ExampleTab() { return ( <p>This is a custom tab added to a resource using a dynamic plugin.</p> ); }", "helm upgrade -i my-plugin charts/openshift-console-plugin -n my-plugin-namespace --create-namespace --set plugin.image=my-plugin-image-location", "const Component: React.FC = (props) => { const [activePerspective, setActivePerspective] = useActivePerspective(); return <select value={activePerspective} onChange={(e) => setActivePerspective(e.target.value)} > { // ...perspective options } </select> }", "<GreenCheckCircleIcon title=\"Healthy\" />", "<RedExclamationCircleIcon title=\"Failed\" />", "<YellowExclamationTriangleIcon title=\"Warning\" />", "<BlueInfoCircleIcon title=\"Info\" />", "<ErrorStatus title={errorMsg} />", "<InfoStatus title={infoMsg} />", "<ProgressStatus title={progressMsg} />", "<SuccessStatus title={successMsg} />", "const [navItemExtensions, navItemsResolved] = useResolvedExtensions<NavItem>(isNavItem); // process adapted extensions and render your component", "const HomePage: React.FC = 
(props) => { const page = { href: '/home', name: 'Home', component: () => <>Home</> } return <HorizontalNav match={props.match} pages={[page]} /> }", "const MachineList: React.FC<MachineListProps> = (props) => { return ( <VirtualizedTable<MachineKind> {...props} aria-label='Machines' columns={getMachineColumns} Row={getMachineTableRow} /> ); }", "const PodRow: React.FC<RowProps<K8sResourceCommon>> = ({ obj, activeColumnIDs }) => { return ( <> <TableData id={columns[0].id} activeColumnIDs={activeColumnIDs}> <ResourceLink kind=\"Pod\" name={obj.metadata.name} namespace={obj.metadata.namespace} /> </TableData> <TableData id={columns[1].id} activeColumnIDs={activeColumnIDs}> <ResourceLink kind=\"Namespace\" name={obj.metadata.namespace} /> </TableData> </> ); };", "// See implementation for more details on TableColumn type const [activeColumns, userSettingsLoaded] = useActiveColumns({ columns, showNamespaceOverride: false, columnManagementID, }); return userSettingsAreLoaded ? <VirtualizedTable columns={activeColumns} {...otherProps} /> : null", "const exampleList: React.FC = () => { return ( <> <ListPageHeader title=\"Example List Page\"/> </> ); };", "const exampleList: React.FC<MyProps> = () => { return ( <> <ListPageHeader title=\"Example Pod List Page\"/> <ListPageCreate groupVersionKind=\"Pod\">Create Pod</ListPageCreate> </ListPageHeader> </> ); };", "const exampleList: React.FC<MyProps> = () => { return ( <> <ListPageHeader title=\"Example Pod List Page\"/> <ListPageCreateLink to={'/link/to/my/page'}>Create Item</ListPageCreateLink> </ListPageHeader> </> ); };", "const exampleList: React.FC<MyProps> = () => { return ( <> <ListPageHeader title=\"Example Pod List Page\"/> <ListPageCreateButton createAccessReview={access}>Create Pod</ListPageCreateButton> </ListPageHeader> </> ); };", "const exampleList: React.FC<MyProps> = () => { const items = { SAVE: 'Save', DELETE: 'Delete', } return ( <> <ListPageHeader title=\"Example Pod List Page\"/> <ListPageCreateDropdown createAccessReview={access} items={items}>Actions</ListPageCreateDropdown> </ListPageHeader> </> ); };", "// See implementation for more details on RowFilter and FilterValue types const [staticData, filteredData, onFilterChange] = useListPageFilter( data, rowFilters, staticFilters, ); // ListPageFilter updates filter state based on user interaction and resulting filtered data can be rendered in an independent component. return ( <> <ListPageHeader .../> <ListPagBody> <ListPageFilter data={staticData} onFilterChange={onFilterChange} /> <List data={filteredData} /> </ListPageBody> </> )", "// See implementation for more details on RowFilter and FilterValue types const [staticData, filteredData, onFilterChange] = useListPageFilter( data, rowFilters, staticFilters, ); // ListPageFilter updates filter state based on user interaction and resulting filtered data can be rendered in an independent component. 
return ( <> <ListPageHeader .../> <ListPagBody> <ListPageFilter data={staticData} onFilterChange={onFilterChange} /> <List data={filteredData} /> </ListPageBody> </> )", "<ResourceLink kind=\"Pod\" name=\"testPod\" title={metadata.uid} />", "<ResourceIcon kind=\"Pod\"/>", "const Component: React.FC = () => { const [model, inFlight] = useK8sModel({ group: 'app'; version: 'v1'; kind: 'Deployment' }); return }", "const Component: React.FC = () => { const [models, inFlight] = UseK8sModels(); return }", "const Component: React.FC = () => { const watchRes = { } const [data, loaded, error] = useK8sWatchResource(watchRes) return }", "const Component: React.FC = () => { const watchResources = { 'deployment': {...}, 'pod': {...} } const {deployment, pod} = useK8sWatchResources(watchResources) return }", "<StatusPopupSection firstColumn={ <> <span>{title}</span> <span className=\"text-secondary\"> My Example Item </span> </> } secondColumn='Status' >", "<StatusPopupSection firstColumn='Example' secondColumn='Status' > <StatusPopupItem icon={healthStateMapping[MCGMetrics.state]?.icon}> Complete </StatusPopupItem> <StatusPopupItem icon={healthStateMapping[RGWMetrics.state]?.icon}> Pending </StatusPopupItem> </StatusPopupSection>", "<Overview> <OverviewGrid mainCards={mainCards} leftCards={leftCards} rightCards={rightCards} /> </Overview>", "<Overview> <OverviewGrid mainCards={mainCards} leftCards={leftCards} rightCards={rightCards} /> </Overview>", "return ( <InventoryItem> <InventoryItemTitle>{title}</InventoryItemTitle> <InventoryItemBody error={loadError}> {loaded && <InventoryItemStatus count={workerNodes.length} icon={<MonitoringIcon />} />} </InventoryItemBody> </InventoryItem> )", "return ( <InventoryItem> <InventoryItemTitle>{title}</InventoryItemTitle> <InventoryItemBody error={loadError}> {loaded && <InventoryItemStatus count={workerNodes.length} icon={<MonitoringIcon />} />} </InventoryItemBody> </InventoryItem> )", "return ( <InventoryItem> <InventoryItemTitle>{title}</InventoryItemTitle> <InventoryItemBody error={loadError}> {loaded && <InventoryItemStatus count={workerNodes.length} icon={<MonitoringIcon />} />} </InventoryItemBody> </InventoryItem> )", "return ( <InventoryItem> <InventoryItemTitle>{title}</InventoryItemTitle> <InventoryItemBody error={loadError}> {loaded && <InventoryItemStatus count={workerNodes.length} icon={<MonitoringIcon />} />} </InventoryItemBody> </InventoryItem> )", "if (loadError) { title = <Link to={workerNodesLink}>{t('Worker Nodes')}</Link>; } else if (!loaded) { title = <><InventoryItemLoading /><Link to={workerNodesLink}>{t('Worker Nodes')}</Link></>; } return ( <InventoryItem> <InventoryItemTitle>{title}</InventoryItemTitle> </InventoryItem> )", "<React.Suspense fallback={<LoadingBox />}> <CodeEditor value={code} language=\"yaml\" /> </React.Suspense>", "<React.Suspense fallback={<LoadingBox />}> <ResourceYAMLEditor initialResource={resource} header=\"Create resource\" onSave={(content) => updateResource(content)} /> </React.Suspense>", "const [resource, loaded, loadError] = useK8sWatchResource(clusterResource); return <ResourceEventStream resource={resource} />", "const context: AppPage: React.FC = () => {<br/> const [launchModal] = useModal();<br/> const onClick = () => launchModal(ModalComponent);<br/> return (<br/> <Button onClick={onClick}>Launch a Modal</Button><br/> )<br/>}<br/>`", "const context: ActionContext = { 'a-context-id': { dataFromDynamicPlugin } }; <ActionServiceProvider context={context}> {({ actions, options, loaded }) => loaded && ( 
<ActionMenu actions={actions} options={options} variant={ActionMenuVariant.DROPDOWN} /> ) } </ActionServiceProvider>", "const logNamespaceChange = (namespace) => console.log(`New namespace: USD{namespace}`); <NamespaceBar onNamespaceChange={logNamespaceChange}> <NamespaceBarApplicationSelector /> </NamespaceBar> <Page>", "//in ErrorBoundary component return ( if (this.state.hasError) { return <ErrorBoundaryFallbackPage errorMessage={errorString} componentStack={componentStackString} stack={stackTraceString} title={errorString}/>; } return this.props.children; )", "<QueryBrowser defaultTimespan={15 * 60 * 1000} namespace={namespace} pollInterval={30 * 1000} queries={[ 'process_resident_memory_bytes{job=\"console\"}', 'sum(irate(container_network_receive_bytes_total[6h:5m])) by (pod)', ]} />", "const PodAnnotationsButton = ({ pod }) => { const { t } = useTranslation(); const launchAnnotationsModal = useAnnotationsModal<PodKind>(pod); return <button onClick={launchAnnotationsModal}>{t('Edit Pod Annotations')}</button> }", "const DeletePodButton = ({ pod }) => { const { t } = useTranslation(); const launchDeleteModal = useDeleteModal<PodKind>(pod); return <button onClick={launchDeleteModal}>{t('Delete Pod')}</button> }", "const PodLabelsButton = ({ pod }) => { const { t } = useTranslation(); const launchLabelsModal = useLabelsModal<PodKind>(pod); return <button onClick={launchLabelsModal}>{t('Edit Pod Labels')}</button> }", "const Component: React.FC = (props) => { const [activeNamespace, setActiveNamespace] = useActiveNamespace(); return <select value={activeNamespace} onChange={(e) => setActiveNamespace(e.target.value)} > { // ...namespace options } </select> }", "const Component: React.FC = (props) => { const [state, setState, loaded] = useUserSettings( 'devconsole.addPage.showDetails', true, true, ); return loaded ? 
( <WrappedComponent {...props} userSettingState={state} setUserSettingState={setState} /> ) : null; };", "const OpenQuickStartButton = ({ quickStartId }) => { const { setActiveQuickStart } = useQuickStartContext(); const onClick = React.useCallback(() => { setActiveQuickStart(quickStartId); }, [quickStartId]); return <button onClick={onClick}>{t('Open Quick Start')}</button> };", "<React.Suspense fallback={<LoadingBox />}> <YAMLEditor value={code} /> </React.Suspense>", "oc get console.operator.openshift.io cluster -o jsonpath='{.spec.plugins}'", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-console spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-console podSelector: {} policyTypes: - Ingress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-operators spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-operators podSelector: {} policyTypes: - Ingress", "oc delete devworkspaces.workspace.devfile.io --all-namespaces --all --wait", "oc delete devworkspaceroutings.controller.devfile.io --all-namespaces --all --wait", "oc delete customresourcedefinitions.apiextensions.k8s.io devworkspaceroutings.controller.devfile.io", "oc delete customresourcedefinitions.apiextensions.k8s.io devworkspaces.workspace.devfile.io", "oc delete customresourcedefinitions.apiextensions.k8s.io devworkspacetemplates.workspace.devfile.io", "oc delete customresourcedefinitions.apiextensions.k8s.io devworkspaceoperatorconfigs.controller.devfile.io", "oc get customresourcedefinitions.apiextensions.k8s.io | grep \"devfile.io\"", "oc delete deployment/devworkspace-webhook-server -n openshift-operators", "oc delete mutatingwebhookconfigurations controller.devfile.io", "oc delete validatingwebhookconfigurations controller.devfile.io", "oc delete all --selector app.kubernetes.io/part-of=devworkspace-operator,app.kubernetes.io/name=devworkspace-webhook-server -n openshift-operators", "oc delete serviceaccounts devworkspace-webhook-server -n openshift-operators", "oc delete clusterrole devworkspace-webhook-server", "oc delete clusterrolebinding devworkspace-webhook-server", "oc edit consoles.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: managementState: Removed 1", "oc get -o yaml consolequickstart spring-with-s2i > my-quick-start.yaml", "oc create -f my-quick-start.yaml", "oc explain consolequickstarts", "summary: failed: Try the steps again. success: Your Spring application is running. title: Run the Spring application conclusion: >- Your Spring application is deployed and ready. 1", "apiVersion: console.openshift.io/v1 kind: ConsoleQuickStart metadata: name: spring-with-s2i spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' 1", "apiVersion: console.openshift.io/v1 kind: ConsoleQuickStart metadata: name: spring-with-s2i spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' displayName: Get started with Spring 1 durationMinutes: 10", "apiVersion: console.openshift.io/v1 kind: ConsoleQuickStart metadata: name: spring-with-s2i spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' displayName: Get started with Spring durationMinutes: 10 1", "spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' 
displayName: Get started with Spring durationMinutes: 10 icon: >- 1 data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIGlkPSJMYXllcl8xIiBkYXRhLW5hbWU9IkxheWVyIDEiIHZpZXdCb3g9IjAgMCAxMDI0IDEwMjQiPjxkZWZzPjxzdHlsZT4uY2xzLTF7ZmlsbDojMTUzZDNjO30uY2xzLTJ7ZmlsbDojZDhkYTlkO30uY2xzLTN7ZmlsbDojNThjMGE4O30uY2xzLTR7ZmlsbDojZmZmO30uY2xzLTV7ZmlsbDojM2Q5MTkxO308L3N0eWxlPjwvZGVmcz48dGl0bGU+c25vd2Ryb3BfaWNvbl9yZ2JfZGVmYXVsdDwvdGl0bGU+PHBhdGggY2xhc3M9ImNscy0xIiBkPSJNMTAxMi42OSw1OTNjLTExLjEyLTM4LjA3LTMxLTczLTU5LjIxLTEwMy44LTkuNS0xMS4zLTIzLjIxLTI4LjI5LTM5LjA2LTQ3Ljk0QzgzMy41MywzNDEsNzQ1LjM3LDIzNC4xOCw2NzQsMTY4Ljk0Yy01LTUuMjYtMTAuMjYtMTAuMzEtMTUuNjUtMTUuMDdhMjQ2LjQ5LDI0Ni40OSwwLDAsMC0zNi41NS0yNi44LDE4Mi41LDE4Mi41LDAsMCwwLTIwLjMtMTEuNzcsMjAxLjUzLDIwMS41MywwLDAsMC00My4xOS0xNUExNTUuMjQsMTU1LjI0LDAsMCwwLDUyOCw5NS4yYy02Ljc2LS42OC0xMS43NC0uODEtMTQuMzktLjgxaDBsLTEuNjIsMC0xLjYyLDBhMTc3LjMsMTc3LjMsMCwwLDAtMzEuNzcsMy4zNSwyMDguMjMsMjA4LjIzLDAsMCwwLTU2LjEyLDE3LjU2LDE4MSwxODEsMCwwLDAtMjAuMjcsMTEuNzUsMjQ3LjQzLDI0Ny40MywwLDAsMC0zNi41NywyNi44MUMzNjAuMjUsMTU4LjYyLDM1NSwxNjMuNjgsMzUwLDE2OWMtNzEuMzUsNjUuMjUtMTU5LjUsMTcyLTI0MC4zOSwyNzIuMjhDOTMuNzMsNDYwLjg4LDgwLDQ3Ny44Nyw3MC41Miw0ODkuMTcsNDIuMzUsNTIwLDIyLjQzLDU1NC45LDExLjMxLDU5MywuNzIsNjI5LjIyLTEuNzMsNjY3LjY5LDQsNzA3LjMxLDE1LDc4Mi40OSw1NS43OCw4NTkuMTIsMTE4LjkzLDkyMy4wOWEyMiwyMiwwLDAsMCwxNS41OSw2LjUyaDEuODNsMS44Ny0uMzJjODEuMDYtMTMuOTEsMTEwLTc5LjU3LDE0My40OC0xNTUuNiwzLjkxLTguODgsNy45NS0xOC4wNSwxMi4yLTI3LjQzcTUuNDIsOC41NCwxMS4zOSwxNi4yM2MzMS44NSw0MC45MSw3NS4xMiw2NC42NywxMzIuMzIsNzIuNjNsMTguOCwyLjYyLDQuOTUtMTguMzNjMTMuMjYtNDkuMDcsMzUuMy05MC44NSw1MC42NC0xMTYuMTksMTUuMzQsMjUuMzQsMzcuMzgsNjcuMTIsNTAuNjQsMTE2LjE5bDUsMTguMzMsMTguOC0yLjYyYzU3LjItOCwxMDAuNDctMzEuNzIsMTMyLjMyLTcyLjYzcTYtNy42OCwxMS4zOS0xNi4yM2M0LjI1LDkuMzgsOC4yOSwxOC41NSwxMi4yLDI3LjQzLDMzLjQ5LDc2LDYyLjQyLDE0MS42OSwxNDMuNDgsMTU1LjZsMS44MS4zMWgxLjg5YTIyLDIyLDAsMCwwLDE1LjU5LTYuNTJjNjMuMTUtNjQsMTAzLjk1LTE0MC42LDExNC44OS0yMTUuNzhDMTAyNS43Myw2NjcuNjksMTAyMy4yOCw2MjkuMjIsMTAxMi42OSw1OTNaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNMzY0LjE1LDE4NS4yM2MxNy44OS0xNi40LDM0LjctMzAuMTUsNDkuNzctNDAuMTFhMjEyLDIxMiwwLDAsMSw2NS45My0yNS43M0ExOTgsMTk4LDAsMCwxLDUxMiwxMTYuMjdhMTk2LjExLDE5Ni4xMSwwLDAsMSwzMiwzLjFjNC41LjkxLDkuMzYsMi4wNiwxNC41MywzLjUyLDYwLjQxLDIwLjQ4LDg0LjkyLDkxLjA1LTQ3LjQ0LDI0OC4wNi0yOC43NSwzNC4xMi0xNDAuNywxOTQuODQtMTg0LjY2LDI2OC40MmE2MzAuODYsNjMwLjg2LDAsMCwwLTMzLjIyLDU4LjMyQzI3Niw2NTUuMzQsMjY1LjQsNTk4LDI2NS40LDUyMC4yOSwyNjUuNCwzNDAuNjEsMzExLjY5LDI0MC43NCwzNjQuMTUsMTg1LjIzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTUyNy41NCwzODQuODNjODQuMDYtOTkuNywxMTYuMDYtMTc3LjI4LDk1LjIyLTIzMC43NCwxMS42Miw4LjY5LDI0LDE5LjIsMzcuMDYsMzEuMTMsNTIuNDgsNTUuNSw5OC43OCwxNTUuMzgsOTguNzgsMzM1LjA3LDAsNzcuNzEtMTAuNiwxMzUuMDUtMjcuNzcsMTc3LjRhNjI4LjczLDYyOC43MywwLDAsMC0zMy4yMy01OC4zMmMtMzktNjUuMjYtMTMxLjQ1LTE5OS0xNzEuOTMtMjUyLjI3QzUyNi4zMywzODYuMjksNTI3LDM4NS41Miw1MjcuNTQsMzg0LjgzWiIvPjxwYXRoIGNsYXNzPSJjbHMtNCIgZD0iTTEzNC41OCw5MDguMDdoLS4wNmEuMzkuMzksMCwwLDEtLjI3LS4xMWMtMTE5LjUyLTEyMS4wNy0xNTUtMjg3LjQtNDcuNTQtNDA0LjU4LDM0LjYzLTQxLjE0LDEyMC0xNTEuNiwyMDIuNzUtMjQyLjE5LTMuMTMsNy02LjEyLDE0LjI1LTguOTIsMjEuNjktMjQuMzQsNjQuNDUtMzYuNjcsMTQ0LjMyLTM2LjY3LDIzNy40MSwwLDU2LjUzLDUuNTgsMTA2LDE2LjU5LDE0Ny4xNEEzMDcuNDksMzA3LjQ5LDAsMCwwLDI4MC45MSw3MjNDMjM3LDgxNi44OCwyMTYuOTMsODkzLjkzLDEzNC41OCw5MDguMDdaIi8+PHBhdGggY2xhc3M9ImNscy01IiBkPSJNNTgzLjQzLDgxMy43OUM1NjAuMTgsNzI3LjcyLDUxMiw2NjQuMTUsNTEyLDY2NC4xNXMtNDguMTcsNjMuNTctNzEuNDMsMTQ5LjY0Yy00OC40NS02Ljc0LTEwMC45MS0yNy41Mi0xMzUuNjYtOTEuMThhNjQ1LjY4LDY0NS42OCwwLDAsMSwzOS41Ny03MS41NGwuMjEtLjMyLjE5LS4zM2MzOC02My42MywxMjYuNC0xO
TEuMzcsMTY3LjEyLTI0NS42Niw0MC43MSw1NC4yOCwxMjkuMSwxODIsMTY3LjEyLDI0NS42NmwuMTkuMzMuMjEuMzJhNjQ1LjY4LDY0NS42OCwwLDAsMSwzOS41Nyw3MS41NEM2ODQuMzQsNzg2LjI3LDYzMS44OCw4MDcuMDUsNTgzLjQzLDgxMy43OVoiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik04ODkuNzUsOTA4YS4zOS4zOSwwLDAsMS0uMjcuMTFoLS4wNkM4MDcuMDcsODkzLjkzLDc4Nyw4MTYuODgsNzQzLjA5LDcyM2EzMDcuNDksMzA3LjQ5LDAsMCwwLDIwLjQ1LTU1LjU0YzExLTQxLjExLDE2LjU5LTkwLjYxLDE2LjU5LTE0Ny4xNCwwLTkzLjA4LTEyLjMzLTE3My0zNi42Ni0yMzcuNHEtNC4yMi0xMS4xNi04LjkzLTIxLjdjODIuNzUsOTAuNTksMTY4LjEyLDIwMS4wNSwyMDIuNzUsMjQyLjE5QzEwNDQuNzksNjIwLjU2LDEwMDkuMjcsNzg2Ljg5LDg4OS43NSw5MDhaIi8+PC9zdmc+Cg==", "introduction: >- 1 **Spring** is a Java framework for building applications based on a distributed microservices architecture. - Spring enables easy packaging and configuration of Spring applications into a self-contained executable application which can be easily deployed as a container to OpenShift. - Spring applications can integrate OpenShift capabilities to provide a natural \"Spring on OpenShift\" developer experience for both existing and net-new Spring applications. For example: - Externalized configuration using Kubernetes ConfigMaps and integration with Spring Cloud Kubernetes - Service discovery using Kubernetes Services - Load balancing with Replication Controllers - Kubernetes health probes and integration with Spring Actuator - Metrics: Prometheus, Grafana, and integration with Spring Cloud Sleuth - Distributed tracing with Istio & Jaeger tracing - Developer tooling through Red Hat OpenShift and Red Hat CodeReady developer tooling to quickly scaffold new Spring projects, gain access to familiar Spring APIs in your favorite IDE, and deploy to Red Hat OpenShift", "icon: >- data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHJvbGU9ImltZyIgdmlld.", "accessReviewResources: - group: helm.openshift.io resource: helmchartrepositories verb: create", "accessReviewResources: - group: operators.coreos.com resource: operatorgroups verb: list - group: packages.operators.coreos.com resource: packagemanifests verb: list", "nextQuickStart: - add-healthchecks", "[Perspective switcher]{{highlight qs-perspective-switcher}}", "[Home]{{highlight qs-nav-home}} [Operators]{{highlight qs-nav-operators}} [Workloads]{{highlight qs-nav-workloads}} [Serverless]{{highlight qs-nav-serverless}} [Networking]{{highlight qs-nav-networking}} [Storage]{{highlight qs-nav-storage}} [Service catalog]{{highlight qs-nav-servicecatalog}} [Compute]{{highlight qs-nav-compute}} [User management]{{highlight qs-nav-usermanagement}} [Administration]{{highlight qs-nav-administration}}", "[Add]{{highlight qs-nav-add}} [Topology]{{highlight qs-nav-topology}} [Search]{{highlight qs-nav-search}} [Project]{{highlight qs-nav-project}} [Helm]{{highlight qs-nav-helm}}", "[Builds]{{highlight qs-nav-builds}} [Pipelines]{{highlight qs-nav-pipelines}} [Monitoring]{{highlight qs-nav-monitoring}}", "[CloudShell]{{highlight qs-masthead-cloudshell}} [Utility Menu]{{highlight qs-masthead-utilitymenu}} [User Menu]{{highlight qs-masthead-usermenu}} [Applications]{{highlight qs-masthead-applications}} [Import]{{highlight qs-masthead-import}} [Help]{{highlight qs-masthead-help}} [Notifications]{{highlight qs-masthead-notifications}}", "`code block`{{copy}} `code block`{{execute}}", "``` multi line code block ```{{copy}} ``` multi line code block ```{{execute}}", "Create a serverless application.", "In this quick start, you will deploy a sample application to {product-title}.", "This quick start shows you how to deploy a 
sample application to {product-title}.", "Tasks to complete: Create a serverless application; Connect an event source; Force a new revision", "You will complete these 3 tasks: Creating a serverless application; Connecting an event source; Forcing a new revision", "Click OK.", "Click on the OK button.", "Enter the Developer perspective: In the main navigation, click the dropdown menu and select Developer. Enter the Administrator perspective: In the main navigation, click the dropdown menu and select Admin.", "In the node.js deployment, hover over the icon.", "Hover over the icon in the node.js deployment.", "Change the time range of the dashboard by clicking the dropdown menu and selecting time range.", "To look at data in a specific time frame, you can change the time range of the dashboard.", "In the navigation menu, click Settings.", "In the left-hand menu, click Settings.", "The success message indicates a connection.", "The message with a green icon indicates a connection.", "Set up your environment.", "Let's set up our environment." ]
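A note on the command listing above: one of the listed commands (oc get console.operator.openshift.io cluster -o jsonpath='{.spec.plugins}') reads the list of enabled dynamic plugins from the cluster Console operator configuration. As a minimal sketch, enabling a plugin means adding its name to that spec.plugins list; the plugin name console-demo-plugin below is a placeholder for your own ConsolePlugin resource name, not a plugin defined in this document:

apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  plugins:
    # Placeholder entry; each item must match the name of a ConsolePlugin resource on the cluster.
    - console-demo-plugin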
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/web_console/index
Chapter 2. Performing a minor update
Chapter 2. Performing a minor update To update your Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 environment to the latest maintenance release, perform the following tasks: Update OVN services on the control plane. Update OVN services on the data plane. Wait for the OpenStack Operator to complete the automatic update of the remaining control plane packages, services, and container images. Update the remaining services on the data plane. 2.1. Updating OVN services on the control plane Update the target version in the OpenStackVersion custom resource (CR) to point to the version that you want to install. After you update the target version, the OVN service update on the control plane begins automatically. Prerequisites You have the name and version of your OpenStackControlPlane CR: Procedure Create a patch file for the OpenStackVersion CR on your workstation, for example, openstackversionpatch.yaml. Set the targetVersion to the release that you want to install: Replace <openstack_version> with the target version you want to install, for example, 1.0.1. Replace <custom_image> with the location of the latest custom image for the service. You must update the image location for any custom images and the target version at the same time to ensure that the correct custom image is used after the minor update is complete. Patch the OpenStackVersion CR: Replace <openstack_version_CR_name> with the name of your OpenStackVersion resource, for example, openstack-control-plane. Verify that the OVN services are updated on the control plane: The following example output shows that the OVN services are updated: 2.2. Updating OVN services on the data plane Update the OVN services on the data plane. Prerequisites Create the openstack-edpm-update-ovn.yaml file. For more information, see Creating the files for the data plane update. Procedure To update OVN services on the data plane, create an OpenStackDataPlaneDeployment custom resource (CR) with the openstack-edpm-update-ovn.yaml file: Verify that the data plane update deployment succeeded: Replace <openstack_version_CR_name> with the name of your OpenStackVersion resource, for example, openstackversion/openstack. If the deployment fails, see Troubleshooting data plane creation and deployment in the Deploying Red Hat OpenStack Services on OpenShift guide. Important If the update fails, you can re-run the procedure. Before you re-run the procedure, you must edit the name: parameter in the openstack-edpm-update-ovn.yaml file to avoid conflicts in the CR name. For example: Replace <ovn-update-new-name> with a unique name for the CR. 2.3. Updating the remaining services on the data plane When the OVN service is updated on the control plane and data plane, and the OpenStack Operator has completed the automatic update of the remaining control plane packages, services, and container images, you must update the remaining services on the data plane. Prerequisites Create the openstack-edpm-update-services.yaml file. For more information, see Creating the files for the data plane update. The OVN service is updated on the control plane. For more information, see Updating OVN services on the control plane. The OVN service is updated on the data plane. For more information, see Updating OVN services on the data plane. Procedure Wait until all control plane services are updated: Replace <openstack_version_CR_name> with the name of the OpenStackVersion resource, for example, openstackversion/openstack.
The command returns the following output when all the control plane services are updated: To update the remaining services on the data plane, create an OpenStackDataPlaneDeployment custom resource (CR) with the openstack-edpm-update-services.yaml file: Verify that the data plane update deployment succeeded: If the deployment fails, see Troubleshooting data plane creation and deployment in the Deploying Red Hat OpenStack Services on OpenShift guide. Important If the update fails, you can re-run the procedure. Before you re-run the procedure, you must edit the name: parameter in the openstack-edpm-update-services.yaml file to avoid conflicts in the CR name. For example: Replace <services-update-new-name> with a unique name for the CR.
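For orientation only: the openstack-edpm-update-ovn.yaml and openstack-edpm-update-services.yaml files referenced in this chapter are defined in "Creating the files for the data plane update". The sketch below shows the general shape such a file can take for the OVN-only step, assuming the nodeSets and servicesOverride fields of the OpenStackDataPlaneDeployment API; the node set name openstack-edpm-ipam is a placeholder, and only the apiVersion, kind, and metadata lines mirror the snippet shown in the accompanying command listing:

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: edpm-deployment-ipam-ovn-update   # must be unique; change it if you re-run the update
spec:
  # Assumed fields, not defined in this chapter:
  nodeSets:              # names of the OpenStackDataPlaneNodeSet resources to update
    - openstack-edpm-ipam
  servicesOverride:      # restrict this deployment run to the OVN service only
    - ovn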
[ "oc get openstackversion NAME TARGET VERSION AVAILABLE VERSION DEPLOYED VERSION openstack-control-plane 18.0.0-20240828.1 18.0.2-20240923.2 18.0.0-20240828.1", "cat <<EOF >openstackversionpatch.yaml \"spec\": { \"targetVersion\": <openstack_version> customContainerImages: cinderApiImage: <custom_image> cinderVolumeImages: netapp: <custom_image> dell: <custom_image> } EOF", "oc patch openstackversion <openstack_version_CR_name> --type=merge --patch-file openstackversionpatch.yaml", "oc wait openstackversion <openstack_version_CR_name> --for=condition=MinorUpdateOVNControlplane --timeout=20m", "openstackversion.core.openstack.org/<openstack_version_CR_name> condition met", "oc create -f openstack-edpm-update-ovn.yaml", "oc wait openstackversion <openstack_version_CR_name> --for=condition=MinorUpdateOVNDataplane --timeout=20m", "oc get openstackdataplanedeployment NAME STATUS MESSAGE edpm-deployment-ipam True Setup Complete edpm-deployment-ipam-ovn-update True Setup Complete", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: <ovn-update-new-name>", "oc wait openstackversion <openstack_version_CR_name> --for=condition=MinorUpdateControlplane --timeout=20m", "openstackversion.core.openstack.org/<openstack_version_CR_name> condition met", "oc create -f openstack-edpm-update-services.yaml", "oc wait openstackversion <openstack_version_CR_name> --for=condition=MinorUpdateDataplane --timeout=20m", "oc get openstackdataplanedeployment NAME STATUS MESSAGE edpm-deployment-ipam True Setup Complete edpm-deployment-ipam-update True Setup Complete edpm-deployment-ipam-update-dataplane-services True Setup Complete", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: <services-update-new-name>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/updating_your_environment_to_the_latest_maintenance_release/assembly_performing-a-minor-update_preparing-minor-update
Appendix B. Contact information
Appendix B. Contact information Red Hat Process Automation Manager documentation team: [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_solvers_with_red_hat_build_of_optaplanner_in_red_hat_process_automation_manager/author-group
Chapter 3. Internal storage services
Chapter 3. Internal storage services The Red Hat OpenShift Data Foundation service is available for internal consumption by Red Hat OpenShift Container Platform running on the following infrastructure: Amazon Web Services (AWS) Bare metal VMware vSphere Microsoft Azure Google Cloud Red Hat OpenStack 13 or higher (installer-provisioned infrastructure) [Technology Preview] IBM Power IBM Z and IBM® LinuxONE ROSA with hosted control planes (HCP) Creating an internal cluster resource provisions the OpenShift Data Foundation base services internally and makes additional storage classes available to applications.
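To illustrate the last sentence: after the internal cluster resource is created, applications consume the provisioned storage through the newly available storage classes with an ordinary persistent volume claim. The claim below is a generic sketch; the namespace and claim name are placeholders, and the storage class name ocs-storagecluster-ceph-rbd is only the commonly used default for internal-mode block storage, so verify the class names available in your cluster:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data        # placeholder claim name
  namespace: my-app     # placeholder namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  # Placeholder class name; list the classes created by your deployment with `oc get storageclass`.
  storageClassName: ocs-storagecluster-ceph-rbd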
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/planning_your_deployment/internal-storage-services_rhodf
function::user_string_n_warn
function::user_string_n_warn Name function::user_string_n_warn - Retrieves a string from user space Synopsis Arguments addr the user space address to retrieve the string from n the maximum length of the string (if not null terminated) Description Returns up to n characters of a C string from a given user space memory address. Reports "<unknown>" in the rare cases when user space data is not accessible, and warns (but does not abort) about the failure.
[ "user_string_n_warn:string(addr:long,n:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-user-string-n-warn
36.3. Downloading the Upgraded Kernel
36.3. Downloading the Upgraded Kernel There are several ways to determine whether an updated kernel is available for the system. Security Errata - Go to the following location for information on security errata, including kernel upgrades that fix security issues: Via Quarterly Updates - Refer to the following location for details: Via Red Hat Network - Download and install the kernel RPM packages. Red Hat Network can download the latest kernel, upgrade the kernel on the system, create an initial RAM disk image if needed, and configure the boot loader to boot the new kernel. For more information, refer to http://www.redhat.com/docs/manuals/RHNetwork/. If Red Hat Network was used to download and install the updated kernel, follow the instructions in Section 36.5, "Verifying the Initial RAM Disk Image" and Section 36.6, "Verifying the Boot Loader", but do not change the default boot kernel. Red Hat Network automatically changes the default kernel to the latest version. To install the kernel manually, continue to Section 36.4, "Performing the Upgrade".
[ "http://www.redhat.com/apps/support/errata/", "http://www.redhat.com/apps/support/errata/rhlas_errata_policy.html" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Manually_Upgrading_the_Kernel-Downloading_the_Upgraded_Kernel
Chapter 3. AlertmanagerConfig [monitoring.coreos.com/v1beta1]
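The field-by-field reference that follows is easier to read with a concrete manifest in mind. The example below is a minimal, hypothetical sketch that routes critical alerts from the resource's namespace to a single webhook receiver; the names, matcher values, and URL are placeholders and are not taken from this reference, so treat it as an illustration of the spec shape rather than a recommended configuration:

apiVersion: monitoring.coreos.com/v1beta1
kind: AlertmanagerConfig
metadata:
  name: example-config      # placeholder name
  namespace: example-ns     # alerts are only matched when their namespace label equals this namespace
spec:
  route:
    receiver: example-webhook     # must reference a receiver defined below
    groupBy:
      - alertname
    matchers:
      - name: severity
        matchType: "="
        value: critical
  receivers:
    - name: example-webhook
      webhookConfigs:
        - url: "https://example.com/alert-hook"   # placeholder endpoint
          sendResolved: true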
Chapter 3. AlertmanagerConfig [monitoring.coreos.com/v1beta1] Description AlertmanagerConfig configures the Prometheus Alertmanager, specifying how alerts should be grouped, inhibited and notified to external systems. Type object Required spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object AlertmanagerConfigSpec is a specification of the desired behavior of the Alertmanager configuration. By definition, the Alertmanager configuration only applies to alerts for which the namespace label is equal to the namespace of the AlertmanagerConfig resource. 3.1.1. .spec Description AlertmanagerConfigSpec is a specification of the desired behavior of the Alertmanager configuration. By definition, the Alertmanager configuration only applies to alerts for which the namespace label is equal to the namespace of the AlertmanagerConfig resource. Type object Property Type Description inhibitRules array List of inhibition rules. The rules will only apply to alerts matching the resource's namespace. inhibitRules[] object InhibitRule defines an inhibition rule that allows to mute alerts when other alerts are already firing. See https://prometheus.io/docs/alerting/latest/configuration/#inhibit_rule receivers array List of receivers. receivers[] object Receiver defines one or more notification integrations. route object The Alertmanager route definition for alerts matching the resource's namespace. If present, it will be added to the generated Alertmanager configuration as a first-level route. timeIntervals array List of TimeInterval specifying when the routes should be muted or active. timeIntervals[] object TimeInterval specifies the periods in time when notifications will be muted or active. 3.1.2. .spec.inhibitRules Description List of inhibition rules. The rules will only apply to alerts matching the resource's namespace. Type array 3.1.3. .spec.inhibitRules[] Description InhibitRule defines an inhibition rule that allows to mute alerts when other alerts are already firing. See https://prometheus.io/docs/alerting/latest/configuration/#inhibit_rule Type object Property Type Description equal array (string) Labels that must have an equal value in the source and target alert for the inhibition to take effect. sourceMatch array Matchers for which one or more alerts have to exist for the inhibition to take effect. The operator enforces that the alert matches the resource's namespace. sourceMatch[] object Matcher defines how to match on alert's labels. targetMatch array Matchers that have to be fulfilled in the alerts to be muted. The operator enforces that the alert matches the resource's namespace. targetMatch[] object Matcher defines how to match on alert's labels. 3.1.4. 
.spec.inhibitRules[].sourceMatch Description Matchers for which one or more alerts have to exist for the inhibition to take effect. The operator enforces that the alert matches the resource's namespace. Type array 3.1.5. .spec.inhibitRules[].sourceMatch[] Description Matcher defines how to match on alert's labels. Type object Required name Property Type Description matchType string Match operator, one of = (equal to), != (not equal to), =~ (regex match) or !~ (not regex match). Negative operators ( != and !~ ) require Alertmanager >= v0.22.0. name string Label to match. value string Label value to match. 3.1.6. .spec.inhibitRules[].targetMatch Description Matchers that have to be fulfilled in the alerts to be muted. The operator enforces that the alert matches the resource's namespace. Type array 3.1.7. .spec.inhibitRules[].targetMatch[] Description Matcher defines how to match on alert's labels. Type object Required name Property Type Description matchType string Match operator, one of = (equal to), != (not equal to), =~ (regex match) or !~ (not regex match). Negative operators ( != and !~ ) require Alertmanager >= v0.22.0. name string Label to match. value string Label value to match. 3.1.8. .spec.receivers Description List of receivers. Type array 3.1.9. .spec.receivers[] Description Receiver defines one or more notification integrations. Type object Required name Property Type Description discordConfigs array List of Slack configurations. discordConfigs[] object DiscordConfig configures notifications via Discord. See https://prometheus.io/docs/alerting/latest/configuration/#discord_config emailConfigs array List of Email configurations. emailConfigs[] object EmailConfig configures notifications via Email. msteamsConfigs array List of MSTeams configurations. It requires Alertmanager >= 0.26.0. msteamsConfigs[] object MSTeamsConfig configures notifications via Microsoft Teams. It requires Alertmanager >= 0.26.0. name string Name of the receiver. Must be unique across all items from the list. opsgenieConfigs array List of OpsGenie configurations. opsgenieConfigs[] object OpsGenieConfig configures notifications via OpsGenie. See https://prometheus.io/docs/alerting/latest/configuration/#opsgenie_config pagerdutyConfigs array List of PagerDuty configurations. pagerdutyConfigs[] object PagerDutyConfig configures notifications via PagerDuty. See https://prometheus.io/docs/alerting/latest/configuration/#pagerduty_config pushoverConfigs array List of Pushover configurations. pushoverConfigs[] object PushoverConfig configures notifications via Pushover. See https://prometheus.io/docs/alerting/latest/configuration/#pushover_config slackConfigs array List of Slack configurations. slackConfigs[] object SlackConfig configures notifications via Slack. See https://prometheus.io/docs/alerting/latest/configuration/#slack_config snsConfigs array List of SNS configurations snsConfigs[] object SNSConfig configures notifications via AWS SNS. See https://prometheus.io/docs/alerting/latest/configuration/#sns_configs telegramConfigs array List of Telegram configurations. telegramConfigs[] object TelegramConfig configures notifications via Telegram. See https://prometheus.io/docs/alerting/latest/configuration/#telegram_config victoropsConfigs array List of VictorOps configurations. victoropsConfigs[] object VictorOpsConfig configures notifications via VictorOps. See https://prometheus.io/docs/alerting/latest/configuration/#victorops_config webexConfigs array List of Webex configurations. 
webexConfigs[] object WebexConfig configures notification via Cisco Webex See https://prometheus.io/docs/alerting/latest/configuration/#webex_config webhookConfigs array List of webhook configurations. webhookConfigs[] object WebhookConfig configures notifications via a generic receiver supporting the webhook payload. See https://prometheus.io/docs/alerting/latest/configuration/#webhook_config wechatConfigs array List of WeChat configurations. wechatConfigs[] object WeChatConfig configures notifications via WeChat. See https://prometheus.io/docs/alerting/latest/configuration/#wechat_config 3.1.10. .spec.receivers[].discordConfigs Description List of Slack configurations. Type array 3.1.11. .spec.receivers[].discordConfigs[] Description DiscordConfig configures notifications via Discord. See https://prometheus.io/docs/alerting/latest/configuration/#discord_config Type object Property Type Description apiURL object The secret's key that contains the Discord webhook URL. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. httpConfig object HTTP client configuration. message string The template of the message's body. sendResolved boolean Whether or not to notify about resolved alerts. title string The template of the message's title. 3.1.12. .spec.receivers[].discordConfigs[].apiURL Description The secret's key that contains the Discord webhook URL. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.13. .spec.receivers[].discordConfigs[].httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyURL string Optional proxy URL. tlsConfig object TLS configuration for the client. 3.1.14. .spec.receivers[].discordConfigs[].httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. 
Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.15. .spec.receivers[].discordConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.16. .spec.receivers[].discordConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.17. .spec.receivers[].discordConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.18. .spec.receivers[].discordConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.19. .spec.receivers[].discordConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. 
Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.20. .spec.receivers[].discordConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.21. .spec.receivers[].discordConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.22. .spec.receivers[].discordConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.23. .spec.receivers[].discordConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.24. .spec.receivers[].discordConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.25. .spec.receivers[].discordConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 3.1.26. .spec.receivers[].discordConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.27. .spec.receivers[].discordConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.28. .spec.receivers[].discordConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.29. .spec.receivers[].discordConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.30. .spec.receivers[].discordConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.31. .spec.receivers[].discordConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.32. .spec.receivers[].discordConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.33. .spec.receivers[].emailConfigs Description List of Email configurations. Type array 3.1.34. .spec.receivers[].emailConfigs[] Description EmailConfig configures notifications via Email. Type object Property Type Description authIdentity string The identity to use for authentication. authPassword object The secret's key that contains the password to use for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. authSecret object The secret's key that contains the CRAM-MD5 secret. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. authUsername string The username to use for authentication. from string The sender address. headers array Further headers email header key/value pairs. Overrides any headers previously set by the notification implementation. headers[] object KeyValue defines a (key, value) tuple. hello string The hostname to identify to the SMTP server. html string The HTML body of the email notification. requireTLS boolean The SMTP TLS requirement. Note that Go does not support unencrypted connections to remote SMTP endpoints. sendResolved boolean Whether or not to notify about resolved alerts. smarthost string The SMTP host and port through which emails are sent. E.g. example.com:25 text string The text body of the email notification. tlsConfig object TLS configuration to string The email address to send notifications to. 3.1.35. 
.spec.receivers[].emailConfigs[].authPassword Description The secret's key that contains the password to use for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.36. .spec.receivers[].emailConfigs[].authSecret Description The secret's key that contains the CRAM-MD5 secret. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.37. .spec.receivers[].emailConfigs[].headers Description Further headers email header key/value pairs. Overrides any headers previously set by the notification implementation. Type array 3.1.38. .spec.receivers[].emailConfigs[].headers[] Description KeyValue defines a (key, value) tuple. Type object Required key value Property Type Description key string Key of the tuple. value string Value of the tuple. 3.1.39. .spec.receivers[].emailConfigs[].tlsConfig Description TLS configuration Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 3.1.40. .spec.receivers[].emailConfigs[].tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.41. .spec.receivers[].emailConfigs[].tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.42. .spec.receivers[].emailConfigs[].tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.43. .spec.receivers[].emailConfigs[].tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.44. .spec.receivers[].emailConfigs[].tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.45. .spec.receivers[].emailConfigs[].tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.46. .spec.receivers[].emailConfigs[].tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.47. .spec.receivers[].msteamsConfigs Description List of MSTeams configurations. It requires Alertmanager >= 0.26.0. Type array 3.1.48. .spec.receivers[].msteamsConfigs[] Description MSTeamsConfig configures notifications via Microsoft Teams. It requires Alertmanager >= 0.26.0. Type object Required webhookUrl Property Type Description httpConfig object HTTP client configuration. sendResolved boolean Whether to notify about resolved alerts. summary string Message summary template. It requires Alertmanager >= 0.27.0. text string Message body template. title string Message title template. 
webhookUrl object MSTeams webhook URL. 3.1.49. .spec.receivers[].msteamsConfigs[].httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyURL string Optional proxy URL. tlsConfig object TLS configuration for the client. 3.1.50. .spec.receivers[].msteamsConfigs[].httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.51. .spec.receivers[].msteamsConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.52. .spec.receivers[].msteamsConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.53. .spec.receivers[].msteamsConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.54. .spec.receivers[].msteamsConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.55. .spec.receivers[].msteamsConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.56. .spec.receivers[].msteamsConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.57. .spec.receivers[].msteamsConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.58. .spec.receivers[].msteamsConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.59. 
.spec.receivers[].msteamsConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.60. .spec.receivers[].msteamsConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.61. .spec.receivers[].msteamsConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 3.1.62. .spec.receivers[].msteamsConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.63. .spec.receivers[].msteamsConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.64. .spec.receivers[].msteamsConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. 
This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.65. .spec.receivers[].msteamsConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.66. .spec.receivers[].msteamsConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.67. .spec.receivers[].msteamsConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.68. .spec.receivers[].msteamsConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.69. .spec.receivers[].msteamsConfigs[].webhookUrl Description MSTeams webhook URL. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. 
This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.70. .spec.receivers[].opsgenieConfigs Description List of OpsGenie configurations. Type array 3.1.71. .spec.receivers[].opsgenieConfigs[] Description OpsGenieConfig configures notifications via OpsGenie. See https://prometheus.io/docs/alerting/latest/configuration/#opsgenie_config Type object Property Type Description actions string Comma separated list of actions that will be available for the alert. apiKey object The secret's key that contains the OpsGenie API key. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. apiURL string The URL to send OpsGenie API requests to. description string Description of the incident. details array A set of arbitrary key/value pairs that provide further detail about the incident. details[] object KeyValue defines a (key, value) tuple. entity string Optional field that can be used to specify which domain alert is related to. httpConfig object HTTP client configuration. message string Alert text limited to 130 characters. note string Additional alert note. priority string Priority level of alert. Possible values are P1, P2, P3, P4, and P5. responders array List of responders responsible for notifications. responders[] object OpsGenieConfigResponder defines a responder to an incident. One of id , name or username has to be defined. sendResolved boolean Whether or not to notify about resolved alerts. source string Backlink to the sender of the notification. tags string Comma separated list of tags attached to the notifications. 3.1.72. .spec.receivers[].opsgenieConfigs[].apiKey Description The secret's key that contains the OpsGenie API key. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.73. .spec.receivers[].opsgenieConfigs[].details Description A set of arbitrary key/value pairs that provide further detail about the incident. Type array 3.1.74. .spec.receivers[].opsgenieConfigs[].details[] Description KeyValue defines a (key, value) tuple. Type object Required key value Property Type Description key string Key of the tuple. value string Value of the tuple. 3.1.75. .spec.receivers[].opsgenieConfigs[].httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. 
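Putting the OpsGenie-specific properties above together, a hedged sketch of a single opsgenieConfigs entry nested under a receiver; the Secret name opsgenie-credentials and its key are assumptions:

```yaml
opsgenieConfigs:
- sendResolved: true
  apiKey:                        # Secret in the same namespace as the AlertmanagerConfig
    name: opsgenie-credentials   # hypothetical Secret name
    key: api-key                 # hypothetical key
  message: '{{ .CommonLabels.alertname }}'   # limited to 130 characters
  priority: P3
  tags: openshift,alertmanager
  details:
  - key: namespace
    value: '{{ .CommonLabels.namespace }}'
```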
The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyURL string Optional proxy URL. tlsConfig object TLS configuration for the client. 3.1.76. .spec.receivers[].opsgenieConfigs[].httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.77. .spec.receivers[].opsgenieConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.78. .spec.receivers[].opsgenieConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.79. .spec.receivers[].opsgenieConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.80. .spec.receivers[].opsgenieConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. 
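The authorization block and its credentials selector have the same shape in every receiver's httpConfig. For illustration, a sketch that reads a bearer token from a hypothetical Secret named proxy-token:

```yaml
httpConfig:
  authorization:
    type: Bearer          # default; "Basic" is not a supported value here
    credentials:
      name: proxy-token   # hypothetical Secret name
      key: token          # hypothetical key
  followRedirects: true
```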
TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.81. .spec.receivers[].opsgenieConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.82. .spec.receivers[].opsgenieConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.83. .spec.receivers[].opsgenieConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.84. .spec.receivers[].opsgenieConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.85. .spec.receivers[].opsgenieConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.86. 
.spec.receivers[].opsgenieConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.87. .spec.receivers[].opsgenieConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 3.1.88. .spec.receivers[].opsgenieConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.89. .spec.receivers[].opsgenieConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.90. .spec.receivers[].opsgenieConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.91. .spec.receivers[].opsgenieConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.92. 
.spec.receivers[].opsgenieConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.93. .spec.receivers[].opsgenieConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.94. .spec.receivers[].opsgenieConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.95. .spec.receivers[].opsgenieConfigs[].responders Description List of responders responsible for notifications. Type array 3.1.96. .spec.receivers[].opsgenieConfigs[].responders[] Description OpsGenieConfigResponder defines a responder to an incident. One of id , name or username has to be defined. Type object Required type Property Type Description id string ID of the responder. name string Name of the responder. type string Type of responder. username string Username of the responder. 3.1.97. .spec.receivers[].pagerdutyConfigs Description List of PagerDuty configurations. Type array 3.1.98. .spec.receivers[].pagerdutyConfigs[] Description PagerDutyConfig configures notifications via PagerDuty. See https://prometheus.io/docs/alerting/latest/configuration/#pagerduty_config Type object Property Type Description class string The class/type of the event. client string Client identification. clientURL string Backlink to the sender of notification. component string The part or component of the affected system that is broken. description string Description of the incident. 
details array Arbitrary key/value pairs that provide further detail about the incident. details[] object KeyValue defines a (key, value) tuple. group string A cluster or grouping of sources. httpConfig object HTTP client configuration. pagerDutyImageConfigs array A list of image details to attach that provide further detail about an incident. pagerDutyImageConfigs[] object PagerDutyImageConfig attaches images to an incident pagerDutyLinkConfigs array A list of link details to attach that provide further detail about an incident. pagerDutyLinkConfigs[] object PagerDutyLinkConfig attaches text links to an incident routingKey object The secret's key that contains the PagerDuty integration key (when using Events API v2). Either this field or serviceKey needs to be defined. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. sendResolved boolean Whether or not to notify about resolved alerts. serviceKey object The secret's key that contains the PagerDuty service key (when using integration type "Prometheus"). Either this field or routingKey needs to be defined. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. severity string Severity of the incident. source string Unique location of the affected system. url string The URL to send requests to. 3.1.99. .spec.receivers[].pagerdutyConfigs[].details Description Arbitrary key/value pairs that provide further detail about the incident. Type array 3.1.100. .spec.receivers[].pagerdutyConfigs[].details[] Description KeyValue defines a (key, value) tuple. Type object Required key value Property Type Description key string Key of the tuple. value string Value of the tuple. 3.1.101. .spec.receivers[].pagerdutyConfigs[].httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyURL string Optional proxy URL. tlsConfig object TLS configuration for the client. 3.1.102. .spec.receivers[].pagerdutyConfigs[].httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.103. .spec.receivers[].pagerdutyConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. 
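Stepping back from the HTTP client sub-fields for a moment, the PagerDuty-specific properties listed in the pagerdutyConfigs table above combine as in the following hedged sketch; the Secret name pagerduty-key and its key are assumptions, and either routingKey or serviceKey must be set:

```yaml
pagerdutyConfigs:
- sendResolved: true
  routingKey:              # Events API v2 integration key
    name: pagerduty-key    # hypothetical Secret name
    key: routing-key       # hypothetical key
  severity: critical
  description: '{{ .CommonAnnotations.summary }}'
  details:
  - key: severity
    value: '{{ .CommonLabels.severity }}'
```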
name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.104. .spec.receivers[].pagerdutyConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.105. .spec.receivers[].pagerdutyConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.106. .spec.receivers[].pagerdutyConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.107. .spec.receivers[].pagerdutyConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.108. .spec.receivers[].pagerdutyConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. 
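As a sketch of how these OAuth2 fields nest inside an httpConfig block; clientSecret, endpointParams, scopes, and tokenUrl are listed in the rows that continue below, and the Secret name oauth-client, the token URL, and the scope are assumptions:

```yaml
httpConfig:
  followRedirects: true
  oauth2:
    clientId:
      secret:                 # a configMap selector could be used instead
        name: oauth-client    # hypothetical Secret name
        key: client-id
    clientSecret:             # plain Secret key selector, no secret/configMap wrapper
      name: oauth-client      # hypothetical Secret name
      key: client-secret
    tokenUrl: https://idp.example.com/oauth2/token   # hypothetical token endpoint
    scopes:
    - alerts.write            # hypothetical scope
```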
clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.109. .spec.receivers[].pagerdutyConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.110. .spec.receivers[].pagerdutyConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.111. .spec.receivers[].pagerdutyConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.112. .spec.receivers[].pagerdutyConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.113. .spec.receivers[].pagerdutyConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. 
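The tlsConfig block follows the same shape for every receiver; keySecret and serverName are covered immediately below. A hedged sketch that takes the CA bundle from a ConfigMap and the client certificate and key from a Secret, with all object names being assumptions:

```yaml
tlsConfig:
  serverName: pagerduty-proxy.example.com   # hypothetical hostname to verify
  insecureSkipVerify: false
  ca:
    configMap:
      name: ca-bundle          # hypothetical ConfigMap name
      key: ca.crt
  cert:
    secret:
      name: client-tls         # hypothetical Secret name
      key: tls.crt
  keySecret:                   # plain Secret key selector
    name: client-tls           # hypothetical Secret name
    key: tls.key
```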
keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 3.1.114. .spec.receivers[].pagerdutyConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.115. .spec.receivers[].pagerdutyConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.116. .spec.receivers[].pagerdutyConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.117. .spec.receivers[].pagerdutyConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.118. .spec.receivers[].pagerdutyConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.119. .spec.receivers[].pagerdutyConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. 
Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.120. .spec.receivers[].pagerdutyConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.121. .spec.receivers[].pagerdutyConfigs[].pagerDutyImageConfigs Description A list of image details to attach that provide further detail about an incident. Type array 3.1.122. .spec.receivers[].pagerdutyConfigs[].pagerDutyImageConfigs[] Description PagerDutyImageConfig attaches images to an incident Type object Property Type Description alt string Alt is the optional alternative text for the image. href string Optional URL; makes the image a clickable link. src string Src of the image being attached to the incident 3.1.123. .spec.receivers[].pagerdutyConfigs[].pagerDutyLinkConfigs Description A list of link details to attach that provide further detail about an incident. Type array 3.1.124. .spec.receivers[].pagerdutyConfigs[].pagerDutyLinkConfigs[] Description PagerDutyLinkConfig attaches text links to an incident Type object Property Type Description alt string Text that describes the purpose of the link, and can be used as the link's text. href string Href is the URL of the link to be attached 3.1.125. .spec.receivers[].pagerdutyConfigs[].routingKey Description The secret's key that contains the PagerDuty integration key (when using Events API v2). Either this field or serviceKey needs to be defined. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.126. .spec.receivers[].pagerdutyConfigs[].serviceKey Description The secret's key that contains the PagerDuty service key (when using integration type "Prometheus"). Either this field or routingKey needs to be defined. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.127. .spec.receivers[].pushoverConfigs Description List of Pushover configurations. Type array 3.1.128. 
.spec.receivers[].pushoverConfigs[] Description PushoverConfig configures notifications via Pushover. See https://prometheus.io/docs/alerting/latest/configuration/#pushover_config Type object Property Type Description device string The name of a device to send the notification to. expire string How long your notification will continue to be retried for, unless the user acknowledges the notification. html boolean Whether notification message is HTML or plain text. httpConfig object HTTP client configuration. message string Notification message. priority string Priority, see https://pushover.net/api#priority . retry string How often the Pushover servers will send the same notification to the user. Must be at least 30 seconds. sendResolved boolean Whether or not to notify about resolved alerts. sound string The name of one of the sounds supported by device clients to override the user's default sound choice. title string Notification title. token object The secret's key that contains the registered application's API token, see https://pushover.net/apps . The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Either token or tokenFile is required. tokenFile string The token file that contains the registered application's API token, see https://pushover.net/apps . Either token or tokenFile is required. It requires Alertmanager >= v0.26.0. ttl string The time to live definition for the alert notification. url string A supplementary URL shown alongside the message. urlTitle string A title for supplementary URL, otherwise just the URL is shown. userKey object The secret's key that contains the recipient user's user key. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Either userKey or userKeyFile is required. userKeyFile string The user key file that contains the recipient user's user key. Either userKey or userKeyFile is required. It requires Alertmanager >= v0.26.0. 3.1.129. .spec.receivers[].pushoverConfigs[].httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyURL string Optional proxy URL. tlsConfig object TLS configuration for the client. 3.1.130. .spec.receivers[].pushoverConfigs[].httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.131. 
.spec.receivers[].pushoverConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.132. .spec.receivers[].pushoverConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.133. .spec.receivers[].pushoverConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.134. .spec.receivers[].pushoverConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.135. .spec.receivers[].pushoverConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.136. 
.spec.receivers[].pushoverConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.137. .spec.receivers[].pushoverConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.138. .spec.receivers[].pushoverConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.139. .spec.receivers[].pushoverConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.140. .spec.receivers[].pushoverConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.141. .spec.receivers[].pushoverConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. 
Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 3.1.142. .spec.receivers[].pushoverConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.143. .spec.receivers[].pushoverConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.144. .spec.receivers[].pushoverConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.145. .spec.receivers[].pushoverConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.146. .spec.receivers[].pushoverConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.147. .spec.receivers[].pushoverConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. 
Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined
3.1.148. .spec.receivers[].pushoverConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined
3.1.149. .spec.receivers[].pushoverConfigs[].token Description The secret's key that contains the registered application's API token, see https://pushover.net/apps . The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Either token or tokenFile is required. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from.
3.1.150. .spec.receivers[].pushoverConfigs[].userKey Description The secret's key that contains the recipient user's user key. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Either userKey or userKeyFile is required. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from.
3.1.151. .spec.receivers[].slackConfigs Description List of Slack configurations. Type array
3.1.152. .spec.receivers[].slackConfigs[] Description SlackConfig configures notifications via Slack. See https://prometheus.io/docs/alerting/latest/configuration/#slack_config Type object Property Type Description actions array A list of Slack actions that are sent with each notification. actions[] object SlackAction configures a single Slack action that is sent with each notification. See https://api.slack.com/docs/message-attachments#action_fields and https://api.slack.com/docs/message-buttons for more information. apiURL object The secret's key that contains the Slack webhook URL. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. callbackId string channel string The channel or user to send notifications to. color string fallback string fields array A list of Slack fields that are sent with each notification. fields[] object SlackField configures a single Slack field that is sent with each notification. Each field must contain a title, value, and optionally, a boolean value to indicate if the field is short enough to be displayed next to other fields designated as short. See https://api.slack.com/docs/message-attachments#fields for more information. footer string httpConfig object HTTP client configuration. iconEmoji string iconURL string imageURL string linkNames boolean mrkdwnIn array (string) pretext string sendResolved boolean Whether or not to notify about resolved alerts. shortFields boolean text string thumbURL string title string titleLink string username string
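The following sketch shows how the slackConfigs properties above fit together in a complete AlertmanagerConfig manifest. The apiVersion, object name, namespace, channel, and the Secret name and key (slack-webhook / url) are illustrative assumptions rather than values required by this API; the referenced Secret must exist in the same namespace as the AlertmanagerConfig object.
apiVersion: monitoring.coreos.com/v1beta1   # assumed API version; match the version documented in this reference
kind: AlertmanagerConfig
metadata:
  name: example-slack-routing      # assumed name
  namespace: example-namespace     # assumed namespace
spec:
  route:
    receiver: slack-notifications
  receivers:
  - name: slack-notifications
    slackConfigs:
    - channel: '#alerts'           # assumed channel
      sendResolved: true
      apiURL:                      # Slack webhook URL read from a Secret; name and key are assumptions
        name: slack-webhook
        key: url
      fields:
      - title: severity
        value: '{{ .CommonLabels.severity }}'
        short: true
Keeping the webhook URL in a Secret, as the apiURL property requires, keeps the credential out of the AlertmanagerConfig object itself.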
3.1.153. .spec.receivers[].slackConfigs[].actions Description A list of Slack actions that are sent with each notification. Type array
3.1.154. .spec.receivers[].slackConfigs[].actions[] Description SlackAction configures a single Slack action that is sent with each notification. See https://api.slack.com/docs/message-attachments#action_fields and https://api.slack.com/docs/message-buttons for more information. Type object Required text type Property Type Description confirm object SlackConfirmationField protects users from destructive actions or particularly distinguished decisions by asking them to confirm their button click one more time. See https://api.slack.com/docs/interactive-message-field-guide#confirmation_fields for more information. name string style string text string type string url string value string
3.1.155. .spec.receivers[].slackConfigs[].actions[].confirm Description SlackConfirmationField protects users from destructive actions or particularly distinguished decisions by asking them to confirm their button click one more time. See https://api.slack.com/docs/interactive-message-field-guide#confirmation_fields for more information. Type object Required text Property Type Description dismissText string okText string text string title string
3.1.156. .spec.receivers[].slackConfigs[].apiURL Description The secret's key that contains the Slack webhook URL. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from.
3.1.157. .spec.receivers[].slackConfigs[].fields Description A list of Slack fields that are sent with each notification. Type array
3.1.158. .spec.receivers[].slackConfigs[].fields[] Description SlackField configures a single Slack field that is sent with each notification. Each field must contain a title, value, and optionally, a boolean value to indicate if the field is short enough to be displayed next to other fields designated as short. See https://api.slack.com/docs/message-attachments#fields for more information. Type object Required title value Property Type Description short boolean title string value string
3.1.159. .spec.receivers[].slackConfigs[].httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence.
bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyURL string Optional proxy URL. tlsConfig object TLS configuration for the client. 3.1.160. .spec.receivers[].slackConfigs[].httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.161. .spec.receivers[].slackConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.162. .spec.receivers[].slackConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.163. .spec.receivers[].slackConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.164. .spec.receivers[].slackConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. 
This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.165. .spec.receivers[].slackConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.166. .spec.receivers[].slackConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.167. .spec.receivers[].slackConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.168. .spec.receivers[].slackConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.169. .spec.receivers[].slackConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . 
optional boolean Specify whether the Secret or its key must be defined 3.1.170. .spec.receivers[].slackConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.171. .spec.receivers[].slackConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 3.1.172. .spec.receivers[].slackConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.173. .spec.receivers[].slackConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.174. .spec.receivers[].slackConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.175. .spec.receivers[].slackConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. 
secret object Secret containing data to use for the targets. 3.1.176. .spec.receivers[].slackConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.177. .spec.receivers[].slackConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.178. .spec.receivers[].slackConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.179. .spec.receivers[].snsConfigs Description List of SNS configurations Type array 3.1.180. .spec.receivers[].snsConfigs[] Description SNSConfig configures notifications via AWS SNS. See https://prometheus.io/docs/alerting/latest/configuration/#sns_configs Type object Property Type Description apiURL string The SNS API URL i.e. https://sns.us-east-2.amazonaws.com . If not specified, the SNS API URL from the SNS SDK will be used. attributes object (string) SNS message attributes. httpConfig object HTTP client configuration. message string The message content of the SNS notification. phoneNumber string Phone number if message is delivered via SMS in E.164 format. If you don't specify this value, you must specify a value for the TopicARN or TargetARN. sendResolved boolean Whether or not to notify about resolved alerts. sigv4 object Configures AWS's Signature Verification 4 signing process to sign requests. subject string Subject line when the message is delivered to email endpoints. 
targetARN string The mobile platform endpoint ARN if message is delivered via mobile notifications. If you don't specify this value, you must specify a value for the topic_arn or PhoneNumber. topicARN string SNS topic ARN, i.e. arn:aws:sns:us-east-2:698519295917:My-Topic If you don't specify this value, you must specify a value for the PhoneNumber or TargetARN. 3.1.181. .spec.receivers[].snsConfigs[].httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyURL string Optional proxy URL. tlsConfig object TLS configuration for the client. 3.1.182. .spec.receivers[].snsConfigs[].httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.183. .spec.receivers[].snsConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.184. .spec.receivers[].snsConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.185. .spec.receivers[].snsConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. 
Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.186. .spec.receivers[].snsConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.187. .spec.receivers[].snsConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.188. .spec.receivers[].snsConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.189. .spec.receivers[].snsConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.190. .spec.receivers[].snsConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.191. 
.spec.receivers[].snsConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.192. .spec.receivers[].snsConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.193. .spec.receivers[].snsConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 3.1.194. .spec.receivers[].snsConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.195. .spec.receivers[].snsConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.196. .spec.receivers[].snsConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. 
This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined
3.1.197. .spec.receivers[].snsConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets.
3.1.198. .spec.receivers[].snsConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined
3.1.199. .spec.receivers[].snsConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined
3.1.200. .spec.receivers[].snsConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined
3.1.201. .spec.receivers[].snsConfigs[].sigv4 Description Configures AWS's Signature Verification 4 signing process to sign requests. Type object Property Type Description accessKey object AccessKey is the AWS API key. If not specified, the environment variable AWS_ACCESS_KEY_ID is used. profile string Profile is the named AWS profile used to authenticate. region string Region is the AWS region. If blank, the region from the default credentials chain is used. roleArn string RoleArn is the ARN of the AWS role to assume for authentication. secretKey object SecretKey is the AWS API secret. If not specified, the environment variable AWS_SECRET_ACCESS_KEY is used.
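As a sketch of how the sigv4 block combines with the snsConfigs properties documented earlier, the following receiver publishes notifications to an SNS topic using static credentials read from a Secret. The apiVersion, topic ARN, region, Secret name, and keys are placeholders and assumptions, not values prescribed by the API.
apiVersion: monitoring.coreos.com/v1beta1   # assumed API version
kind: AlertmanagerConfig
metadata:
  name: example-sns-routing        # assumed name
  namespace: example-namespace     # assumed namespace
spec:
  route:
    receiver: sns-notifications
  receivers:
  - name: sns-notifications
    snsConfigs:
    - topicARN: 'arn:aws:sns:us-east-2:000000000000:example-topic'   # placeholder topic ARN
      subject: 'Alertmanager notification'
      sendResolved: true
      sigv4:
        region: us-east-2           # assumed region
        accessKey:                  # AWS access key ID stored in a Secret; name and key are assumptions
          name: aws-credentials
          key: access-key
        secretKey:                  # AWS secret access key read from the same Secret
          name: aws-credentials
          key: secret-key
If accessKey and secretKey are omitted, the environment variables noted in the property descriptions are used instead.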
3.1.202. .spec.receivers[].snsConfigs[].sigv4.accessKey Description AccessKey is the AWS API key. If not specified, the environment variable AWS_ACCESS_KEY_ID is used. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined
3.1.203. .spec.receivers[].snsConfigs[].sigv4.secretKey Description SecretKey is the AWS API secret. If not specified, the environment variable AWS_SECRET_ACCESS_KEY is used. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined
3.1.204. .spec.receivers[].telegramConfigs Description List of Telegram configurations. Type array
3.1.205. .spec.receivers[].telegramConfigs[] Description TelegramConfig configures notifications via Telegram. See https://prometheus.io/docs/alerting/latest/configuration/#telegram_config Type object Property Type Description apiURL string The Telegram API URL, i.e. https://api.telegram.org . If not specified, the default API URL will be used. botToken object Telegram bot token. It is mutually exclusive with botTokenFile . The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Either botToken or botTokenFile is required. botTokenFile string File to read the Telegram bot token from. It is mutually exclusive with botToken . Either botToken or botTokenFile is required. It requires Alertmanager >= v0.26.0. chatID integer The Telegram chat ID. disableNotifications boolean Disable Telegram notifications. httpConfig object HTTP client configuration. message string Message template. parseMode string Parse mode for the Telegram message. sendResolved boolean Whether to notify about resolved alerts.
3.1.206. .spec.receivers[].telegramConfigs[].botToken Description Telegram bot token. It is mutually exclusive with botTokenFile . The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Either botToken or botTokenFile is required. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from.
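For orientation, a telegramConfigs receiver that uses the botToken Secret reference described in the section above might look like the following sketch. The apiVersion, chat ID, Secret name, and key are placeholders and assumptions.
apiVersion: monitoring.coreos.com/v1beta1   # assumed API version
kind: AlertmanagerConfig
metadata:
  name: example-telegram-routing   # assumed name
  namespace: example-namespace     # assumed namespace
spec:
  route:
    receiver: telegram-notifications
  receivers:
  - name: telegram-notifications
    telegramConfigs:
    - chatID: -1001234567890        # placeholder chat ID; an integer, negative for group chats
      parseMode: HTML
      sendResolved: true
      botToken:                     # bot token read from a Secret; name and key are assumptions
        name: telegram-credentials
        key: token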
3.1.207. .spec.receivers[].telegramConfigs[].httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyURL string Optional proxy URL. tlsConfig object TLS configuration for the client.
3.1.208. .spec.receivers[].telegramConfigs[].httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer"
3.1.209. .spec.receivers[].telegramConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined
3.1.210. .spec.receivers[].telegramConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication.
3.1.211. .spec.receivers[].telegramConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid?
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.212. .spec.receivers[].telegramConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.213. .spec.receivers[].telegramConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.214. .spec.receivers[].telegramConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.215. .spec.receivers[].telegramConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.216. .spec.receivers[].telegramConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.217. 
.spec.receivers[].telegramConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.218. .spec.receivers[].telegramConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.219. .spec.receivers[].telegramConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 3.1.220. .spec.receivers[].telegramConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.221. .spec.receivers[].telegramConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.222. .spec.receivers[].telegramConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. 
This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.223. .spec.receivers[].telegramConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.224. .spec.receivers[].telegramConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.225. .spec.receivers[].telegramConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.226. .spec.receivers[].telegramConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.227. .spec.receivers[].victoropsConfigs Description List of VictorOps configurations. Type array 3.1.228. .spec.receivers[].victoropsConfigs[] Description VictorOpsConfig configures notifications via VictorOps. 
See https://prometheus.io/docs/alerting/latest/configuration/#victorops_config Type object Property Type Description apiKey object The secret's key that contains the API key to use when talking to the VictorOps API. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. apiUrl string The VictorOps API URL. customFields array Additional custom fields for notification. customFields[] object KeyValue defines a (key, value) tuple. entityDisplayName string Contains summary of the alerted problem. httpConfig object The HTTP client's configuration. messageType string Describes the behavior of the alert (CRITICAL, WARNING, INFO). monitoringTool string The monitoring tool the state message is from. routingKey string A key used to map the alert to a team. sendResolved boolean Whether or not to notify about resolved alerts. stateMessage string Contains long explanation of the alerted problem. 3.1.229. .spec.receivers[].victoropsConfigs[].apiKey Description The secret's key that contains the API key to use when talking to the VictorOps API. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.230. .spec.receivers[].victoropsConfigs[].customFields Description Additional custom fields for notification. Type array 3.1.231. .spec.receivers[].victoropsConfigs[].customFields[] Description KeyValue defines a (key, value) tuple. Type object Required key value Property Type Description key string Key of the tuple. value string Value of the tuple. 3.1.232. .spec.receivers[].victoropsConfigs[].httpConfig Description The HTTP client's configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyURL string Optional proxy URL. tlsConfig object TLS configuration for the client. 3.1.233. .spec.receivers[].victoropsConfigs[].httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.234. .spec.receivers[].victoropsConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. 
Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.235. .spec.receivers[].victoropsConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.236. .spec.receivers[].victoropsConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.237. .spec.receivers[].victoropsConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.238. .spec.receivers[].victoropsConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.239. .spec.receivers[].victoropsConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. 
clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.240. .spec.receivers[].victoropsConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.241. .spec.receivers[].victoropsConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.242. .spec.receivers[].victoropsConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.243. .spec.receivers[].victoropsConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.244. .spec.receivers[].victoropsConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. 
keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 3.1.245. .spec.receivers[].victoropsConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.246. .spec.receivers[].victoropsConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.247. .spec.receivers[].victoropsConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.248. .spec.receivers[].victoropsConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.249. .spec.receivers[].victoropsConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.250. .spec.receivers[].victoropsConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. 
Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.251. .spec.receivers[].victoropsConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.252. .spec.receivers[].webexConfigs Description List of Webex configurations. Type array 3.1.253. .spec.receivers[].webexConfigs[] Description WebexConfig configures notification via Cisco Webex See https://prometheus.io/docs/alerting/latest/configuration/#webex_config Type object Required roomID Property Type Description apiURL string The Webex Teams API URL i.e. https://webexapis.com/v1/messages httpConfig object The HTTP client's configuration. You must use this configuration to supply the bot token as part of the HTTP Authorization header. message string Message template roomID string ID of the Webex Teams room where to send the messages. sendResolved boolean Whether to notify about resolved alerts. 3.1.254. .spec.receivers[].webexConfigs[].httpConfig Description The HTTP client's configuration. You must use this configuration to supply the bot token as part of the HTTP Authorization header. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyURL string Optional proxy URL. tlsConfig object TLS configuration for the client. 3.1.255. .spec.receivers[].webexConfigs[].httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.256. 
.spec.receivers[].webexConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.257. .spec.receivers[].webexConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.258. .spec.receivers[].webexConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.259. .spec.receivers[].webexConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.260. .spec.receivers[].webexConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.261. 
.spec.receivers[].webexConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.262. .spec.receivers[].webexConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.263. .spec.receivers[].webexConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.264. .spec.receivers[].webexConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.265. .spec.receivers[].webexConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.266. .spec.receivers[].webexConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. 
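For orientation, the following fragment is a minimal sketch of how this tlsConfig block could be populated inside a receiver's httpConfig, using the fields documented in the table that follows. The ConfigMap and Secret names (webex-ca, webex-client-tls) and the serverName value are illustrative assumptions, not objects that exist by default:

httpConfig:
  tlsConfig:
    ca:
      configMap:
        name: webex-ca            # assumed ConfigMap holding the CA bundle
        key: ca.crt
    cert:
      secret:
        name: webex-client-tls    # assumed Secret holding the client certificate
        key: tls.crt
    keySecret:
      name: webex-client-tls      # assumed Secret holding the client key
      key: tls.key
    serverName: webexapis.com     # illustrative hostname to verify
    insecureSkipVerify: false

The same SecretKeySelector and ConfigMap key pattern applies to the other tlsConfig blocks in this schema.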
Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 3.1.267. .spec.receivers[].webexConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.268. .spec.receivers[].webexConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.269. .spec.receivers[].webexConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.270. .spec.receivers[].webexConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.271. .spec.receivers[].webexConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.272. .spec.receivers[].webexConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. 
Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.273. .spec.receivers[].webexConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.274. .spec.receivers[].webhookConfigs Description List of webhook configurations. Type array 3.1.275. .spec.receivers[].webhookConfigs[] Description WebhookConfig configures notifications via a generic receiver supporting the webhook payload. See https://prometheus.io/docs/alerting/latest/configuration/#webhook_config Type object Property Type Description httpConfig object HTTP client configuration. maxAlerts integer Maximum number of alerts to be sent per webhook message. When 0, all alerts are included. sendResolved boolean Whether or not to notify about resolved alerts. url string The URL to send HTTP POST requests to. urlSecret takes precedence over url . One of urlSecret and url should be defined. urlSecret object The secret's key that contains the webhook URL to send HTTP requests to. urlSecret takes precedence over url . One of urlSecret and url should be defined. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. 3.1.276. .spec.receivers[].webhookConfigs[].httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyURL string Optional proxy URL. tlsConfig object TLS configuration for the client. 3.1.277. 
.spec.receivers[].webhookConfigs[].httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.278. .spec.receivers[].webhookConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.279. .spec.receivers[].webhookConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.280. .spec.receivers[].webhookConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.281. .spec.receivers[].webhookConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.282. 
.spec.receivers[].webhookConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.283. .spec.receivers[].webhookConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.284. .spec.receivers[].webhookConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.285. .spec.receivers[].webhookConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.286. .spec.receivers[].webhookConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.287. .spec.receivers[].webhookConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. 
Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.288. .spec.receivers[].webhookConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 3.1.289. .spec.receivers[].webhookConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.290. .spec.receivers[].webhookConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.291. .spec.receivers[].webhookConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.292. .spec.receivers[].webhookConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.293. .spec.receivers[].webhookConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. 
TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.294. .spec.receivers[].webhookConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.295. .spec.receivers[].webhookConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.296. .spec.receivers[].webhookConfigs[].urlSecret Description The secret's key that contains the webhook URL to send HTTP requests to. urlSecret takes precedence over url . One of urlSecret and url should be defined. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.297. .spec.receivers[].wechatConfigs Description List of WeChat configurations. Type array 3.1.298. .spec.receivers[].wechatConfigs[] Description WeChatConfig configures notifications via WeChat. See https://prometheus.io/docs/alerting/latest/configuration/#wechat_config Type object Property Type Description agentID string apiSecret object The secret's key that contains the WeChat API key. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. apiURL string The WeChat API URL. corpID string The corp id for authentication. httpConfig object HTTP client configuration. message string API request data as defined by the WeChat API. messageType string sendResolved boolean Whether or not to notify about resolved alerts. toParty string toTag string toUser string 3.1.299. .spec.receivers[].wechatConfigs[].apiSecret Description The secret's key that contains the WeChat API key. 
The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.300. .spec.receivers[].wechatConfigs[].httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyURL string Optional proxy URL. tlsConfig object TLS configuration for the client. 3.1.301. .spec.receivers[].wechatConfigs[].httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 3.1.302. .spec.receivers[].wechatConfigs[].httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.303. .spec.receivers[].wechatConfigs[].httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 3.1.304. .spec.receivers[].wechatConfigs[].httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. 
Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.305. .spec.receivers[].wechatConfigs[].httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.306. .spec.receivers[].wechatConfigs[].httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the AlertmanagerConfig object and accessible by the Prometheus Operator. Type object Required key name Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string The name of the secret in the object's namespace to select from. 3.1.307. .spec.receivers[].wechatConfigs[].httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tokenUrl string tokenURL configures the URL to fetch the token from. 3.1.308. .spec.receivers[].wechatConfigs[].httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.309. .spec.receivers[].wechatConfigs[].httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . 
optional boolean Specify whether the ConfigMap or its key must be defined 3.1.310. .spec.receivers[].wechatConfigs[].httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.311. .spec.receivers[].wechatConfigs[].httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.312. .spec.receivers[].wechatConfigs[].httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 3.1.313. .spec.receivers[].wechatConfigs[].httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.314. .spec.receivers[].wechatConfigs[].httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.315. .spec.receivers[].wechatConfigs[].httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. 
Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.316. .spec.receivers[].wechatConfigs[].httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 3.1.317. .spec.receivers[].wechatConfigs[].httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the ConfigMap or its key must be defined 3.1.318. .spec.receivers[].wechatConfigs[].httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.319. .spec.receivers[].wechatConfigs[].httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. TODO: Add other useful fields. apiVersion, kind, uid? More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Drop kubebuilder:default when controller-gen doesn't need it https://github.com/kubernetes-sigs/kubebuilder/issues/3896 . optional boolean Specify whether the Secret or its key must be defined 3.1.320. .spec.route Description The Alertmanager route definition for alerts matching the resource's namespace. If present, it will be added to the generated Alertmanager configuration as a first-level route. 
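As a point of reference, a first-level route using the fields documented below might look like the following sketch; the receiver name team-webhook and the label values are illustrative assumptions:

route:
  receiver: team-webhook           # must be listed under spec.receivers
  groupBy:
    - alertname
    - job
  groupWait: 30s
  groupInterval: 5m
  repeatInterval: 4h
  matchers:
    - name: severity
      value: critical
      matchType: "="
  routes:                          # child routes reuse the same shape
    - receiver: team-webhook
      matchers:
        - name: team
          value: frontend
          matchType: "="

As noted in the matchers description below, the operator adds a namespace matcher to the first-level route automatically, so alerts from other namespaces are not matched.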
Type object Property Type Description activeTimeIntervals array (string) ActiveTimeIntervals is a list of TimeInterval names when this route should be active. continue boolean Boolean indicating whether an alert should continue matching subsequent sibling nodes. It will always be overridden to true for the first-level route by the Prometheus operator. groupBy array (string) List of labels to group by. Labels must not be repeated (unique list). Special label "..." (aggregate by all possible labels), if provided, must be the only element in the list. groupInterval string How long to wait before sending an updated notification. Must match the regular expression `^(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?$` Example: "5m" groupWait string How long to wait before sending the initial notification. Must match the regular expression `^(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?$` Example: "30s" matchers array List of matchers that the alert's labels should match. For the first level route, the operator removes any existing equality and regexp matcher on the namespace label and adds a namespace: <object namespace> matcher. matchers[] object Matcher defines how to match on alert's labels. muteTimeIntervals array (string) Note: this comment applies to the field definition above but appears below otherwise it gets included in the generated manifest. CRD schema doesn't support self-referential types for now (see https://github.com/kubernetes/kubernetes/issues/62872 ). We have to use an alternative type to circumvent the limitation. The downside is that the Kube API can't validate the data beyond the fact that it is a valid JSON representation. MuteTimeIntervals is a list of TimeInterval names that will mute this route when matched. receiver string Name of the receiver for this route. If not empty, it should be listed in the receivers field. repeatInterval string How long to wait before repeating the last notification. Must match the regular expression `^(([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?$` Example: "4h" routes array (undefined) Child routes. 3.1.321. .spec.route.matchers Description List of matchers that the alert's labels should match. For the first level route, the operator removes any existing equality and regexp matcher on the namespace label and adds a namespace: <object namespace> matcher. Type array 3.1.322. .spec.route.matchers[] Description Matcher defines how to match on alert's labels. Type object Required name Property Type Description matchType string Match operator, one of = (equal to), != (not equal to), =~ (regex match) or !~ (not regex match). Negative operators ( != and !~ ) require Alertmanager >= v0.22.0. name string Label to match. value string Label value to match. 3.1.323. .spec.timeIntervals Description List of TimeInterval specifying when the routes should be muted or active. Type array 3.1.324. .spec.timeIntervals[] Description TimeInterval specifies the periods in time when notifications will be muted or active. Type object Property Type Description name string Name of the time interval. timeIntervals array TimeIntervals is a list of TimePeriod. timeIntervals[] object TimePeriod describes periods of time. 3.1.325. .spec.timeIntervals[].timeIntervals Description TimeIntervals is a list of TimePeriod. Type array 3.1.326. .spec.timeIntervals[].timeIntervals[] Description TimePeriod describes periods of time.
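To make the TimePeriod fields listed below concrete, the following sketch defines one time interval covering weekday business hours during the first week of the month; the interval name office-hours and the specific values are illustrative assumptions:

timeIntervals:
  - name: office-hours
    timeIntervals:
      - weekdays:
          - "monday:friday"        # a range in the colon form
        times:
          - startTime: "09:00"     # 24hr format
            endTime: "17:00"
        daysOfMonth:
          - start: 1
            end: 7

A route can then reference office-hours by name in its activeTimeIntervals or muteTimeIntervals list.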
Type object Property Type Description daysOfMonth array DaysOfMonth is a list of DayOfMonthRange daysOfMonth[] object DayOfMonthRange is an inclusive range of days of the month beginning at 1 months array (string) Months is a list of MonthRange times array Times is a list of TimeRange times[] object TimeRange defines a start and end time in 24hr format weekdays array (string) Weekdays is a list of WeekdayRange years array (string) Years is a list of YearRange 3.1.327. .spec.timeIntervals[].timeIntervals[].daysOfMonth Description DaysOfMonth is a list of DayOfMonthRange Type array 3.1.328. .spec.timeIntervals[].timeIntervals[].daysOfMonth[] Description DayOfMonthRange is an inclusive range of days of the month beginning at 1 Type object Property Type Description end integer End of the inclusive range start integer Start of the inclusive range 3.1.329. .spec.timeIntervals[].timeIntervals[].times Description Times is a list of TimeRange Type array 3.1.330. .spec.timeIntervals[].timeIntervals[].times[] Description TimeRange defines a start and end time in 24hr format Type object Property Type Description endTime string EndTime is the end time in 24hr format. startTime string StartTime is the start time in 24hr format. 3.2. API endpoints The following API endpoints are available: /apis/monitoring.coreos.com/v1beta1/alertmanagerconfigs GET : list objects of kind AlertmanagerConfig /apis/monitoring.coreos.com/v1beta1/namespaces/{namespace}/alertmanagerconfigs DELETE : delete collection of AlertmanagerConfig GET : list objects of kind AlertmanagerConfig POST : create an AlertmanagerConfig /apis/monitoring.coreos.com/v1beta1/namespaces/{namespace}/alertmanagerconfigs/{name} DELETE : delete an AlertmanagerConfig GET : read the specified AlertmanagerConfig PATCH : partially update the specified AlertmanagerConfig PUT : replace the specified AlertmanagerConfig 3.2.1. /apis/monitoring.coreos.com/v1beta1/alertmanagerconfigs HTTP method GET Description list objects of kind AlertmanagerConfig Table 3.1. HTTP responses HTTP code Reponse body 200 - OK AlertmanagerConfigList schema 401 - Unauthorized Empty 3.2.2. /apis/monitoring.coreos.com/v1beta1/namespaces/{namespace}/alertmanagerconfigs HTTP method DELETE Description delete collection of AlertmanagerConfig Table 3.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind AlertmanagerConfig Table 3.3. HTTP responses HTTP code Reponse body 200 - OK AlertmanagerConfigList schema 401 - Unauthorized Empty HTTP method POST Description create an AlertmanagerConfig Table 3.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.5. Body parameters Parameter Type Description body AlertmanagerConfig schema Table 3.6. HTTP responses HTTP code Reponse body 200 - OK AlertmanagerConfig schema 201 - Created AlertmanagerConfig schema 202 - Accepted AlertmanagerConfig schema 401 - Unauthorized Empty 3.2.3. /apis/monitoring.coreos.com/v1beta1/namespaces/{namespace}/alertmanagerconfigs/{name} Table 3.7. Global path parameters Parameter Type Description name string name of the AlertmanagerConfig HTTP method DELETE Description delete an AlertmanagerConfig Table 3.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified AlertmanagerConfig Table 3.10. HTTP responses HTTP code Reponse body 200 - OK AlertmanagerConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified AlertmanagerConfig Table 3.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.12. HTTP responses HTTP code Reponse body 200 - OK AlertmanagerConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified AlertmanagerConfig Table 3.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.14. Body parameters Parameter Type Description body AlertmanagerConfig schema Table 3.15. HTTP responses HTTP code Reponse body 200 - OK AlertmanagerConfig schema 201 - Created AlertmanagerConfig schema 401 - Unauthorized Empty
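The following is a minimal sketch of an AlertmanagerConfig custom resource that exercises the route, matcher, and time interval fields documented above, created by piping the manifest to oc, which calls the namespaced POST endpoint. The namespace (example-namespace), resource name (example-routing), receiver name (default), and time interval name (weekends) are illustrative assumptions rather than values required by the schema, and the sketch assumes an Alertmanager instance is already configured to select AlertmanagerConfig resources from this namespace.

$ cat <<'EOF' | oc apply -f -
apiVersion: monitoring.coreos.com/v1beta1
kind: AlertmanagerConfig
metadata:
  name: example-routing            # assumed name
  namespace: example-namespace     # assumed namespace; the operator adds a namespace matcher to this first-level route
spec:
  receivers:
  - name: default                  # receiver with no integrations configured; alerts routed here are dropped silently
  route:
    receiver: default              # must be listed in the receivers field
    groupBy:
    - alertname
    groupWait: 30s                 # must match the duration pattern, for example "30s"
    groupInterval: 5m
    repeatInterval: 4h
    matchers:
    - name: severity
      value: warning
      matchType: "="               # one of =, !=, =~ or !~
    muteTimeIntervals:
    - weekends                     # references spec.timeIntervals[].name
  timeIntervals:
  - name: weekends
    timeIntervals:
    - weekdays:
      - saturday
      - sunday
      times:
      - startTime: "00:00"         # 24hr format
        endTime: "23:59"
EOF

With this configuration, alerts from the object's namespace that carry severity=warning are grouped by alertname, sent to the default receiver, and muted on Saturdays and Sundays.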
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/monitoring_apis/alertmanagerconfig-monitoring-coreos-com-v1beta1
Chapter 4. Viewing the Ceph overview with Datadog After installing and configuring the Datadog integration with Ceph, return to the Datadog App . The user interface will present navigation on the left side of the screen. Prerequisites Internet access. Procedure Hover over Dashboards to expose the submenu and then click Ceph Overview . Datadog presents an overview of the Ceph Storage Cluster. Click Dashboards->New Dashboard to create a custom Ceph dashboard.
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/monitoring_ceph_with_datadog_guide/viewing-the-ceph-overview-with-datadog_datadog
Appendix A. Revision History Revision 6.4.0-2 Wed Jul 19 2017 David Le Sage Updates for 6.4
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_2_modeshape_tools/appe-revision_history
Chapter 3. Installing a cluster on IBM Power in a restricted network In OpenShift Container Platform version 4.13, you can install a cluster on IBM Power infrastructure that you provision in a restricted network. Important Additional considerations exist for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you install an OpenShift Container Platform cluster. 3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a mirror registry for installation in a restricted network and obtained the imageContentSources data for your version of OpenShift Container Platform. Before you begin the installation process, you must move or remove any existing installation files. This ensures that the required installation files are created and updated during the installation process. Important Ensure that installation steps are performed on a machine with access to the installation media. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 3.2. About installations in restricted networks In OpenShift Container Platform 4.13, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 3.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 3.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. 
Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 3.4. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 3.4.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 3.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap, control plane, and compute machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 3.4.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 3.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 2 16 GB 100 GB 300 Control plane RHCOS 2 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 3.4.3. Minimum IBM Power requirements You can install OpenShift Container Platform version 4.13 on the following IBM hardware: IBM Power9 or Power10 processor-based systems Note Support for RHCOS functionality for all IBM Power8 models, IBM Power AC922, IBM Power IC922, and IBM Power LC922 is deprecated in OpenShift Container Platform 4.13. 
Red Hat recommends that you use later hardware models. Hardware requirements Six IBM Power bare metal servers or six LPARs across multiple PowerVM servers Operating system requirements One instance of an IBM Power9 or Power10 processor-based system On your IBM Power instance, set up: Three guest virtual machines for OpenShift Container Platform control plane machines Two guest virtual machines for OpenShift Container Platform compute machines One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine Disk storage for the IBM Power guest virtual machines Local storage, or storage provisioned by the Virtual I/O Server using vSCSI, NPIV (N-Port ID Virtualization) or SSP (shared storage pools) Network for the PowerVM guest virtual machines Dedicated physical adapter, or SR-IOV virtual function Available by the Virtual I/O Server using Shared Ethernet Adapter Virtualized by the Virtual I/O Server using IBM vNIC Storage / main memory 100 GB / 16 GB for OpenShift Container Platform control plane machines 100 GB / 8 GB for OpenShift Container Platform compute machines 100 GB / 16 GB for the temporary OpenShift Container Platform bootstrap machine 3.4.4. Recommended IBM Power system requirements Hardware requirements Six IBM Power bare metal servers or six LPARs across multiple PowerVM servers Operating system requirements One instance of an IBM Power9 or Power10 processor-based system On your IBM Power instance, set up: Three guest virtual machines for OpenShift Container Platform control plane machines Two guest virtual machines for OpenShift Container Platform compute machines One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine Disk storage for the IBM Power guest virtual machines Local storage, or storage provisioned by the Virtual I/O Server using vSCSI, NPIV (N-Port ID Virtualization) or SSP (shared storage pools) Network for the PowerVM guest virtual machines Dedicated physical adapter, or SR-IOV virtual function Available by the Virtual I/O Server using Shared Ethernet Adapter Virtualized by the Virtual I/O Server using IBM vNIC Storage / main memory 120 GB / 32 GB for OpenShift Container Platform control plane machines 120 GB / 32 GB for OpenShift Container Platform compute machines 120 GB / 16 GB for the temporary OpenShift Container Platform bootstrap machine 3.4.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 3.4.6. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. 
The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 3.4.6.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 3.4.6.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Table 3.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 3.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 3.5. 
Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 3.4.7. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 3.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. 
For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 3.4.7.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 3.1. Sample DNS zone database $TTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines.
8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 3.2. Sample DNS zone database for reverse records $TTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 3.4.8. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 3.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane.
X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 3.8. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 3.4.8.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 3.3. 
Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 3.5. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. 
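As one concrete illustration of the firewall preparation described in this section, the following sketch opens the front-end ports listed in the API and application Ingress load balancer tables above. It assumes a RHEL-based load balancer or helper node that uses firewalld and its default zone; adapt the port list, zone, and any node-to-node ports from the earlier networking tables to your environment.

# Open the front-end ports for the API load balancer (6443, 22623) and the
# application Ingress load balancer (443, 80) on the node that runs HAProxy.
$ sudo firewall-cmd --permanent --add-port=6443/tcp --add-port=22623/tcp --add-port=443/tcp --add-port=80/tcp
$ sudo firewall-cmd --reload

# Confirm that the ports are now listed in the active zone.
$ sudo firewall-cmd --list-ports

If SELinux is set to enforcing on the same node, the setsebool -P haproxy_connect_any=1 step shown earlier is still required so that HAProxy can bind to these ports.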
Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. 
See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 3.6. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: $ dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: $ dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: $ dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: $ dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: $ dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: $ dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com.
1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: $ dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 3.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. View the public SSH key: $ cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: $ cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task: $ eval "$(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : $ ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.8. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: $ mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. 3.8.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide a customized install-config.yaml installation configuration file that describes the details for your environment. Note After installation, you cannot modify these parameters in the install-config.yaml file. 3.8.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 3.9. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} .
For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 3.8.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. If you use the Red Hat OpenShift Networking OVN-Kubernetes network plugin, both IPv4 and IPv6 address families are supported. If you use the Red Hat OpenShift Networking OpenShift SDN network plugin, only the IPv4 address family is supported. If you configure your cluster to use both IP address families, review the following requirements: Both IP families must use the same network interface for the default gateway. Both IP families must have the default gateway. You must specify IPv4 and IPv6 addresses in the same order for all network configuration parameters. For example, in the following configuration IPv4 addresses are listed before IPv6 addresses. networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112 Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 3.10. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. 
For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. If you specify multiple IP kernel arguments, the machineNetwork.cidr value must be the CIDR of the primary network. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 3.8.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 3.11. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array cpuPartitioningMode Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heteregeneous clusters are not supported, so all pools must specify the same architecture. Valid values are ppc64le (the default). String compute: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. 
alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are ppc64le (the default). String controlPlane: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key to authenticate access to your cluster machines. 
Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. 3.8.2. Sample install-config.yaml file for IBM Power You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture : ppc64le controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: ppc64le metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not enabled in your BIOS settings, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether in the BIOS or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 
8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Power infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. Important OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 cryptographic modules have not yet been submitted for FIPS validation. For more information, see "About this release" in the 4.13 OpenShift Container Platform Release Notes . 15 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 17 Provide the contents of the certificate file that you used for your mirror registry. 18 Provide the imageContentSources section from the output of the command to mirror the repository. 3.8.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. 
You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.8.4. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a bare metal cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. 
In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 3.9. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 3.9.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 3.12. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. 
For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 3.13. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN network plugin. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OpenShift SDN network plugin The following table describes the configuration fields for the OpenShift SDN network plugin: Table 3.14. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . 
Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 3.15. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. v4InternalSubnet If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . This field cannot be changed after installation. The default value is 100.64.0.0/16 . v6InternalSubnet If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . Table 3.16. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. 
Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 3.17. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 3.18. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 3.10. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 
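After you generate the Ignition config files later in this section, you can record when they were created so that you know whether you are still inside the recommended 12-hour window. The following check is a minimal sketch that assumes the files are in your <installation_directory>:
$ stat -c '%n generated at %y' <installation_directory>/*.ign
Compare the reported timestamps against the current time before you boot the cluster machines.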
Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program (without an architecture postfix) runs on ppc64le only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 3.11. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Power infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on the machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. Follow either the steps to use an ISO image or network PXE booting to install RHCOS on the machines. 3.11.1. Installing RHCOS by using an ISO image You can use an ISO image to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Obtain the SHA512 digest for each of your Ignition config files. 
For example, you can use the following on a system running Linux to get the SHA512 digest for your bootstrap.ign Ignition config file: USD sha512sum <installation_directory>/bootstrap.ign The digests are provided to the coreos-installer in a later step to validate the authenticity of the Ignition config files on the cluster nodes. Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. Although it is possible to obtain the RHCOS images that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS images is from the output of the openshift-install command: USD openshift-install coreos print-stream-json | grep '\.iso[^.]' Example output "location": "<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso", "location": "<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso", "location": "<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso", "location": "<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live.x86_64.iso", Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Use only ISO images for this procedure. RHCOS qcow2 images are not supported for this installation type. ISO file names resemble the following example: rhcos-<version>-live.<architecture>.iso Use the ISO to start the RHCOS installation. Use one of the following installation options: Burn the ISO image to a disk and boot it directly. Use ISO redirection by using a lights-out management (LOM) interface. Boot the RHCOS ISO image without specifying any options or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note It is possible to interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you should use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements.
At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, you must reboot the system. During the system reboot, it applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the other machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install OpenShift Container Platform. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install_config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 3.11.1.1. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 
3.11.1.1.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. 
Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=<name>[:<network_interfaces>][:options] <name> is the bonding device name ( bond0 ), <network_interfaces> represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Bonding multiple SR-IOV network interfaces to a dual port NIC interface Important Support for Day 1 operations associated with enabling NIC partitioning for SR-IOV devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Optional: You can bond multiple SR-IOV network interfaces to a dual port NIC interface by using the bond= option. On each node, you must perform the following tasks: Create the SR-IOV virtual functions (VFs) following the guidance in Managing SR-IOV devices . 
Follow the procedure in the "Attaching SR-IOV networking devices to virtual machines" section. Create the bond, attach the desired VFs to the bond and set the bond link state up following the guidance in Configuring network bonding . Follow any of the described procedures to create the bond. The following examples illustrate the syntax you must use: The syntax for configuring a bonded interface is bond=<name>[:<network_interfaces>][:options] . <name> is the bonding device name ( bond0 ), <network_interfaces> represents the virtual functions (VFs) by their known name in the kernel and shown in the output of the ip link command( eno1f0 , eno2f0 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none 3.11.2. Installing RHCOS by using PXE booting You can use PXE booting to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have configured suitable PXE infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. 
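Optionally, you can check all three Ignition config files in one pass. The following loop is a sketch that assumes the same <HTTP_server> placeholder used in the preceding example and only confirms that each file is served with an HTTP 200 status code:
$ for node_type in bootstrap master worker; do curl -k -s -o /dev/null -w "%{http_code} ${node_type}.ign\n" "http://<HTTP_server>/${node_type}.ign"; done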
Although it is possible to obtain the RHCOS kernel , initramfs and rootfs files that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS files is from the output of the openshift-install command: USD openshift-install coreos print-stream-json | grep -Eo '"https.*(kernel-|initramfs.|rootfs.)\w+(\.img)?"' Example output "<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64" "<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.13-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le" "<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x" "<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img" "<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img" "<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-kernel-x86_64" "<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img" "<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img" Important The RHCOS artifacts might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel , initramfs , and rootfs artifacts described below for this procedure. RHCOS QCOW2 images are not supported for this installation type. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel : rhcos-<version>-live-kernel-<architecture> initramfs : rhcos-<version>-live-initramfs.<architecture>.img rootfs : rhcos-<version>-live-rootfs.<architecture>.img Upload the rootfs , kernel , and initramfs files to your HTTP server. Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Configure the network boot infrastructure so that the machines boot from their local disks after RHCOS is installed on them. Configure PXE installation for the RHCOS images and begin the installation. Modify the example menu entry shown after the following callout descriptions for your environment, and verify that the image and Ignition files are properly accessible: 1 Specify the location of the live kernel file that you uploaded to your HTTP server. The URL must be HTTP, TFTP, or FTP; HTTPS and NFS are not supported. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the initramfs file, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file.
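The following menu entry is representative of the configuration that the callout descriptions above refer to; it is a sketch, not a copy of your boot configuration. The server address, RHCOS file names, and install device are placeholders that you must adapt to your environment. The coreos.inst.install_dev argument specifies the disk that RHCOS is installed to; /dev/sda is an example value.
DEFAULT pxeboot
TIMEOUT 20
PROMPT 0
LABEL pxeboot
KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1
APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3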
You can also add more kernel arguments to the APPEND line to configure networking or other boot options. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, the system reboots. During reboot, the system applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install the cluster. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install_config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 3.11.3. Enabling multipathing with kernel arguments on RHCOS In OpenShift Container Platform 4.9 or later, during installation, you can enable multipathing for provisioned nodes. RHCOS supports multipathing on the primary disk. Multipathing provides added benefits of stronger resilience to hardware failure to achieve higher host availability. During the initial cluster creation, you might want to add kernel arguments to all master or worker nodes. To add kernel arguments to master or worker nodes, you can create a MachineConfig object and inject that object into the set of manifest files used by Ignition during cluster setup. Procedure Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> Decide if you want to add kernel arguments to worker or control plane nodes. Create a machine config file. 
For example, create a 99-master-kargs-mpath.yaml that instructs the cluster to add the master label and identify the multipath kernel argument: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: "master" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root' To enable multipathing on worker nodes: Create a machine config file. For example, create a 99-worker-kargs-mpath.yaml that instructs the cluster to add the worker label and identify the multipath kernel argument: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: "worker" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root' You can now continue on to create the cluster. Important Additional postinstallation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on RHCOS" in Postinstallation machine configuration tasks . In case of MPIO failure, use the bootlist command to update the boot device list with alternate logical device names. The command displays a boot list and it designates the possible boot devices for when the system is booted in normal mode. To display a boot list and specify the possible boot devices if the system is booted in normal mode, enter the following command: USD bootlist -m normal -o sda To update the boot list for normal mode and add alternate device names, enter the following command: USD bootlist -m normal -o /dev/sdc /dev/sdd /dev/sde sdc sdd sde If the original boot disk path is down, the node reboots from the alternate device registered in the normal boot device list. 3.12. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.26.0 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. 
Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 3.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.14. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. 
Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 3.15. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m Configure the Operators that are not available. 3.15.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 3.15.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 3.15.2.1. Changing the image registry's management state To start the image registry, you must change the Image Registry Operator configuration's managementState from Removed to Managed . Procedure Change managementState Image Registry Operator configuration from Removed to Managed . For example: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}' 3.15.2.2. 
Configuring registry storage for IBM Power As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Power. You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.13 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: USD oc edit configs.imageregistry/cluster Then, change the line managementState: Removed to managementState: Managed . 3.15.2.3. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 3.16. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. 
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. Additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation machine configuration tasks documentation for more information. Register your cluster on the Cluster registration page. 3.17. steps Enabling multipathing with kernel arguments on RHCOS . Customize your cluster . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster
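As a convenience, the verification commands from this section can be gathered into a small shell script and rerun after the installation finishes. The following is only an illustrative sketch and is not part of the documented procedure; the script name, the <installation_directory> placeholder, and the grep filter are assumptions that you can adapt to your environment.

#!/bin/bash
# post-install-check.sh: repeat the basic verification steps from this section.
# Assumes the kubeconfig created by the installation program is still available.
export KUBECONFIG=<installation_directory>/auth/kubeconfig

# Every node should report the Ready status.
oc get nodes

# Every cluster Operator should be Available and not Progressing or Degraded.
oc get clusteroperators

# List any pod that is not Running or Completed so that it can be investigated.
oc get pods --all-namespaces | grep -Ev 'Running|Completed' || true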
[ "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 
604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture : ppc64le controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: ppc64le metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - name: worker platform: {} replicas: 0", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "sha512sum <installation_directory>/bootstrap.ign", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep '\\.iso[^.]'", "\"location\": \"<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'", "\"<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.13-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" 
\"<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "./openshift-install create manifests --dir <installation_directory>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"master\" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'", "bootlist -m normal -o sda", "bootlist -m normal -o /dev/sdc /dev/sdd /dev/sde sdc sdd sde", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.26.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 
worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.13 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 
4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_ibm_power/installing-restricted-networks-ibm-power
Chapter 4. Insights for Red Hat Enterprise Linux advisor service executive report
Chapter 4. Insights for Red Hat Enterprise Linux advisor service executive report You can download a high-level report summarizing the status of your infrastructure and designed for an executive audience. Executive reports are one to two-page PDF files showing the following information: Identified recommendations by severity Recently identified recommendations by category Top three recommendations in your infrastructure based on the greatest total risk and the greatest number of systems exposed 4.1. Downloading an advisor service executive report Use the following procedure to download an executive report from the advisor service. Procedure Navigate to the Operations > Advisor > Recommendations page and log in if necessary. Located in the upper-right corner of the Recommendations page, click the Download executive report link. Select to open or save the file and click OK . If downloaded, check your download location for the PDF file.
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/generating_advisor_service_reports/insights-report-service-exec-report
4.158. lohit-bengali-fonts
4.158. lohit-bengali-fonts 4.158.1. RHEA-2011:1141 - lohit-bengali-fonts enhancement update An updated lohit-bengali-fonts package which adds one enhancement is now available for Red Hat Enterprise Linux 6. The lohit-bengali-fonts package provides a free Bengali TrueType/OpenType font. Enhancement BZ# 691285 Unicode 6.0, the most recent major version of the Unicode standard, introduces the Indian Rupee Sign (U+20B9), the new official Indian currency symbol. With this update, the lohit-bengali-fonts package now includes a glyph for this new character. All users requiring the Indian rupee sign should install this updated package, which adds this enhancement.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/lohit-bengali-fonts
Chapter 3. Creating and building an application using the web console
Chapter 3. Creating and building an application using the web console 3.1. Before you begin Review Accessing the web console . You must be able to access a running instance of OpenShift Container Platform. If you do not have access, contact your cluster administrator. 3.2. Logging in to the web console You can log in to the OpenShift Container Platform web console to access and manage your cluster. Prerequisites You must have access to an OpenShift Container Platform cluster. Procedure Log in to the OpenShift Container Platform web console using your login credentials. You are redirected to the Projects page. For non-administrative users, the default view is the Developer perspective. For cluster administrators, the default view is the Administrator perspective. If you do not have cluster-admin privileges, you will not see the Administrator perspective in your web console. The web console provides two perspectives: the Administrator perspective and Developer perspective. The Developer perspective provides workflows specific to the developer use cases. Figure 3.1. Perspective switcher Use the perspective switcher to switch to the Developer perspective. The Topology view with options to create an application is displayed. 3.3. Creating a new project A project enables a community of users to organize and manage their content in isolation. Projects are OpenShift Container Platform extensions to Kubernetes namespaces. Projects have additional features that enable user self-provisioning. Users must receive access to projects from administrators. Cluster administrators can allow developers to create their own projects. In most cases, users automatically have access to their own projects. Each project has its own set of objects, policies, constraints, and service accounts. Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have the appropriate roles and permissions in a project to create applications and other workloads in OpenShift Container Platform. Procedure In the +Add view, select Project Create Project . In the Name field, enter user-getting-started . Optional: In the Display name field, enter Getting Started with OpenShift . Note Display name and Description fields are optional. Click Create . You have created your first project on OpenShift Container Platform. Additional resources Default cluster roles Viewing a project using the web console Providing access permissions to your project using the Developer perspective Deleting a project using the web console 3.4. Granting view permissions OpenShift Container Platform automatically creates a few special service accounts in every project. The default service account takes responsibility for running the pods. OpenShift Container Platform uses and injects this service account into every pod that launches. The following procedure creates a RoleBinding object for the default ServiceAccount object. The service account communicates with the OpenShift Container Platform API to learn about pods, services, and resources within the project. Prerequisites You are logged in to the OpenShift Container Platform web console. You have a deployed image. You are in the Administrator perspective. Procedure Navigate to User Management and then click RoleBindings . Click Create binding . Select Namespace role binding (RoleBinding) . In the Name field, enter sa-user-account . In the Namespace field, search for and select user-getting-started . 
In the Role name field, search for view and select view . In the Subject field, select ServiceAccount . In the Subject namespace field, search for and select user-getting-started . In the Subject name field, enter default . Click Create . Additional resources Understanding authentication RBAC overview 3.5. Deploying your first image The simplest way to deploy an application in OpenShift Container Platform is to run an existing container image. The following procedure deploys a front end component of an application called national-parks-app . The web application displays an interactive map. The map displays the location of major national parks across the world. Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have the appropriate roles and permissions in a project to create applications and other workloads in OpenShift Container Platform. Procedure From the +Add view in the Developer perspective, click Container images to open a dialog. In the Image Name field, enter the following: quay.io/openshiftroadshow/parksmap:latest Ensure that you have the current values for the following: Application: national-parks-app Name: parksmap Select Deployment as the Resource . Select Create route to the application . In the Advanced Options section, click Labels and add labels to better identify this deployment later. Labels help identify and filter components in the web console and in the command line. Add the following labels: app=national-parks-app component=parksmap role=frontend Click Create . You are redirected to the Topology page where you can see the parksmap deployment in the national-parks-app application. Additional resources Creating applications using the Developer perspective Viewing a project using the web console Viewing the topology of your application Deleting a project using the web console 3.5.1. Examining the pod OpenShift Container Platform leverages the Kubernetes concept of a pod, which is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, and managed. Pods are the rough equivalent of a machine instance, physical or virtual, to a container. The Overview panel enables you to access many features of the parksmap deployment. The Details and Resources tabs enable you to scale application pods, check build status, services, and routes. Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have a deployed image. Procedure Click D parksmap in the Topology view to open the Overview panel. Figure 3.2. Parksmap deployment The Overview panel includes tabs for Details , Resources , and Observe . The Details tab might be displayed by default. Table 3.1. Overview panel tab definitions Tab Definition Details Enables you to scale your application and view pod configuration such as labels, annotations, and the status of the application. Resources Displays the resources that are associated with the deployment. Pods are the basic units of OpenShift Container Platform applications. You can see how many pods are being used, what their status is, and you can view the logs. Services that are created for your pod and assigned ports are listed under the Services heading. Routes enable external access to the pods and a URL is used to access them. Observe View various Events and Metrics information as it relates to your pod. 
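If you also have the OpenShift CLI ( oc ) installed and are logged in to the same cluster, you can optionally confirm the same pod details from a terminal. This aside is an illustrative sketch rather than part of the web console flow; it assumes the user-getting-started project and the labels added earlier in this tutorial.

# List the parksmap pods by using the labels that were added at deployment time.
oc get pods -n user-getting-started -l app=national-parks-app,component=parksmap

# Show the configuration, status, and events for one pod. Substitute a pod name from the previous output.
oc describe pod <pod_name> -n user-getting-started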
Additional resources Interacting with applications and components Scaling application pods and checking builds and routes Labels and annotations used for the Topology view 3.5.2. Scaling the application In Kubernetes, a Deployment object defines how an application deploys. In most cases, users use Pod , Service , ReplicaSets , and Deployment resources together. In most cases, OpenShift Container Platform creates the resources for you. When you deploy the national-parks-app image, a deployment resource is created. In this example, only one Pod is deployed. The following procedure scales the national-parks-app image to use two instances. Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have a deployed image. Procedure In the Topology view, click the national-parks-app application. Click the Details tab. Use the up arrow to scale the pod to two instances. Figure 3.3. Scaling application Note Application scaling can happen quickly because OpenShift Container Platform is launching a new instance of an existing image. Use the down arrow to scale the pod down to one instance. Additional resources Recommended practices for scaling the cluster Understanding horizontal pod autoscalers About the Vertical Pod Autoscaler Operator 3.6. Deploying a Python application The following procedure deploys a back-end service for the parksmap application. The Python application performs 2D geo-spatial queries against a MongoDB database to locate and return map coordinates of all national parks in the world. The deployed back-end service is named nationalparks . Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have a deployed image. Procedure From the +Add view in the Developer perspective, click Import from Git to open a dialog. Enter the following URL in the Git Repo URL field: https://github.com/openshift-roadshow/nationalparks-py.git A builder image is automatically detected. Note If the detected builder image is Dockerfile, select Edit Import Strategy . Select Builder Image and then click Python . Scroll to the General section. Ensure that you have the current values for the following: Application: national-parks-app Name: nationalparks Select Deployment as the Resource . Select Create route to the application . In the Advanced Options section, click Labels and add labels to better identify this deployment later. Labels help identify and filter components in the web console and in the command line. Add the following labels: app=national-parks-app component=nationalparks role=backend type=parksmap-backend Click Create . From the Topology view, select the nationalparks application. Note Click the Resources tab. In the Builds section, you can see your build running. Additional resources Adding services to your application Importing a codebase from Git to create an application Viewing the topology of your application Providing access permissions to your project using the Developer perspective Deleting a project using the web console 3.7. Connecting to a database Deploy and connect a MongoDB database where the national-parks-app application stores location information. Once you mark the national-parks-app application as a backend for the map visualization tool, parksmap deployment uses the OpenShift Container Platform discovery mechanism to display the map automatically. Prerequisites You are logged in to the OpenShift Container Platform web console. 
You are in the Developer perspective. You have a deployed image. Procedure From the +Add view in the Developer perspective, click Container images to open a dialog. In the Image Name field, enter quay.io/centos7/mongodb-36-centos7 . In the Runtime icon field, search for mongodb . Scroll down to the General section. Ensure that you have the current values for the following: Application: national-parks-app Name: mongodb-nationalparks Select Deployment as the Resource . Unselect the checkbox to Create route to the application . In the Advanced Options section, click Deployment to add environment variables to add the following environment variables: Table 3.2. Environment variable names and values Name Value MONGODB_USER mongodb MONGODB_PASSWORD mongodb MONGODB_DATABASE mongodb MONGODB_ADMIN_PASSWORD mongodb Click Create . Additional resources Adding services to your application Viewing a project using the web console Viewing the topology of your application Providing access permissions to your project using the Developer perspective Deleting a project using the web console 3.7.1. Creating a secret The Secret object provides a mechanism to hold sensitive information such as passwords, OpenShift Container Platform client configuration files, private source repository credentials, and so on. Secrets decouple sensitive content from the pods. You can mount secrets into containers using a volume plugin or the system can use secrets to perform actions on behalf of a pod. The following procedure adds the secret nationalparks-mongodb-parameters and mounts it to the nationalparks workload. Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have a deployed image. Procedure From the Developer perspective, navigate to Secrets on the left hand navigation and click Secrets . Click Create Key/value secret . In the Secret name field, enter nationalparks-mongodb-parameters . Enter the following values for Key and Value : Table 3.3. Secret keys and values Key Value MONGODB_USER mongodb DATABASE_SERVICE_NAME mongodb-nationalparks MONGODB_PASSWORD mongodb MONGODB_DATABASE mongodb MONGODB_ADMIN_PASSWORD mongodb Click Create . Click Add Secret to workload . From the drop down menu, select nationalparks as the workload to add. Click Save . This change in configuration triggers a new rollout of the nationalparks deployment with the environment variables properly injected. Additional resources Understanding secrets 3.7.2. Loading data and displaying the national parks map You deployed the parksmap and nationalparks applications and then deployed the mongodb-nationalparks database. However, no data has been loaded into the database. Before loading the data, add the proper labels to the mongodb-nationalparks and nationalparks deployment. Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have a deployed image. Procedure From the Topology view, navigate to nationalparks deployment and click Resources and retrieve your route information. Copy and paste the URL into your web browser and add the following at the end of the URL: /ws/data/load Example output Items inserted in database: 2893 From the Topology view, navigate to parksmap deployment and click Resources and retrieve your route information. Copy and paste the URL into your web browser to view your national parks across the world map. Figure 3.4. 
National parks across the world Additional resources Providing access permissions to your project using the Developer perspective Labels and annotations used for the Topology view
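As a closing aside for readers who also work with the oc CLI, the nationalparks-mongodb-parameters secret from Section 3.7.1 could be created and attached from a terminal instead of the web console. This is an illustrative sketch only; it reuses the key/value pairs from Table 3.3 and assumes the user-getting-started project created earlier in this tutorial.

# Create the secret with the same keys and values as Table 3.3.
oc create secret generic nationalparks-mongodb-parameters -n user-getting-started \
  --from-literal=MONGODB_USER=mongodb \
  --from-literal=DATABASE_SERVICE_NAME=mongodb-nationalparks \
  --from-literal=MONGODB_PASSWORD=mongodb \
  --from-literal=MONGODB_DATABASE=mongodb \
  --from-literal=MONGODB_ADMIN_PASSWORD=mongodb

# Inject the secret into the nationalparks deployment as environment variables.
# This triggers a new rollout, just as adding the secret to the workload in the web console does.
oc set env deployment/nationalparks -n user-getting-started --from=secret/nationalparks-mongodb-parameters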
[ "/ws/data/load", "Items inserted in database: 2893" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/getting_started/openshift-web-console
Chapter 2. Container security
Chapter 2. Container security 2.1. Understanding container security Securing a containerized application relies on multiple levels of security: Container security begins with a trusted base container image and continues through the container build process as it moves through your CI/CD pipeline. Important Image streams by default do not automatically update. This default behavior might create a security issue because security updates to images referenced by an image stream do not automatically occur. For information about how to override this default behavior, see Configuring periodic importing of imagestreamtags . When a container is deployed, its security depends on it running on secure operating systems and networks, and establishing firm boundaries between the container itself and the users and hosts that interact with it. Continued security relies on being able to scan container images for vulnerabilities and having an efficient way to correct and replace vulnerable images. Beyond what a platform such as OpenShift Container Platform offers out of the box, your organization will likely have its own security demands. Some level of compliance verification might be needed before you can even bring OpenShift Container Platform into your data center. Likewise, you may need to add your own agents, specialized hardware drivers, or encryption features to OpenShift Container Platform, before it can meet your organization's security standards. This guide provides a high-level walkthrough of the container security measures available in OpenShift Container Platform, including solutions for the host layer, the container and orchestration layer, and the build and application layer. It then points you to specific OpenShift Container Platform documentation to help you achieve those security measures. This guide contains the following information: Why container security is important and how it compares with existing security standards. Which container security measures are provided by the host (RHCOS and RHEL) layer and which are provided by OpenShift Container Platform. How to evaluate your container content and sources for vulnerabilities. How to design your build and deployment process to proactively check container content. How to control access to containers through authentication and authorization. How networking and attached storage are secured in OpenShift Container Platform. Containerized solutions for API management and SSO. The goal of this guide is to understand the incredible security benefits of using OpenShift Container Platform for your containerized workloads and how the entire Red Hat ecosystem plays a part in making and keeping containers secure. It will also help you understand how you can engage with the OpenShift Container Platform to achieve your organization's security goals. 2.1.1. What are containers? Containers package an application and all its dependencies into a single image that can be promoted from development, to test, to production, without change. A container might be part of a larger application that works closely with other containers. Containers provide consistency across environments and multiple deployment targets: physical servers, virtual machines (VMs), and private or public cloud. 
Some of the benefits of using containers include: Infrastructure Applications Sandboxed application processes on a shared Linux operating system kernel Package my application and all of its dependencies Simpler, lighter, and denser than virtual machines Deploy to any environment in seconds and enable CI/CD Portable across different environments Easily access and share containerized components See Understanding Linux containers from the Red Hat Customer Portal to find out more about Linux containers. To learn about RHEL container tools, see Building, running, and managing containers in the RHEL product documentation. 2.1.2. What is OpenShift Container Platform? Automating how containerized applications are deployed, run, and managed is the job of a platform such as OpenShift Container Platform. At its core, OpenShift Container Platform relies on the Kubernetes project to provide the engine for orchestrating containers across many nodes in scalable data centers. Kubernetes is a project, which can run using different operating systems and add-on components that offer no guarantees of supportability from the project. As a result, the security of different Kubernetes platforms can vary. OpenShift Container Platform is designed to lock down Kubernetes security and integrate the platform with a variety of extended components. To do this, OpenShift Container Platform draws on the extensive Red Hat ecosystem of open source technologies that include the operating systems, authentication, storage, networking, development tools, base container images, and many other components. OpenShift Container Platform can leverage Red Hat's experience in uncovering and rapidly deploying fixes for vulnerabilities in the platform itself as well as the containerized applications running on the platform. Red Hat's experience also extends to efficiently integrating new components with OpenShift Container Platform as they become available and adapting technologies to individual customer needs. Additional resources OpenShift Container Platform architecture OpenShift Security Guide 2.2. Understanding host and VM security Both containers and virtual machines provide ways of separating applications running on a host from the operating system itself. Understanding RHCOS, which is the operating system used by OpenShift Container Platform, will help you see how the host systems protect containers and hosts from each other. 2.2.1. Securing containers on Red Hat Enterprise Linux CoreOS (RHCOS) Containers simplify the act of deploying many applications to run on the same host, using the same kernel and container runtime to spin up each container. The applications can be owned by many users and, because they are kept separate, can run different, and even incompatible, versions of those applications at the same time without issue. In Linux, containers are just a special type of process, so securing containers is similar in many ways to securing any other running process. An environment for running containers starts with an operating system that can secure the host kernel from containers and other processes running on the host, as well as secure containers from each other. Because OpenShift Container Platform 4.18 runs on RHCOS hosts, with the option of using Red Hat Enterprise Linux (RHEL) as worker nodes, the following concepts apply by default to any deployed OpenShift Container Platform cluster. 
These RHEL security features are at the core of what makes running containers in OpenShift Container Platform more secure: Linux namespaces enable creating an abstraction of a particular global system resource to make it appear as a separate instance to processes within a namespace. Consequently, several containers can use the same computing resource simultaneously without creating a conflict. Container namespaces that are separate from the host by default include mount table, process table, network interface, user, control group, UTS, and IPC namespaces. Those containers that need direct access to host namespaces need to have elevated permissions to request that access. See Building, running, and managing containers from the RHEL 9 container documentation for details on the types of namespaces. SELinux provides an additional layer of security to keep containers isolated from each other and from the host. SELinux allows administrators to enforce mandatory access controls (MAC) for every user, application, process, and file. Warning Disabling SELinux on RHCOS is not supported. CGroups (control groups) limit, account for, and isolate the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes. CGroups are used to ensure that containers on the same host are not impacted by each other. Secure computing mode (seccomp) profiles can be associated with a container to restrict available system calls. See page 94 of the Red Hat OpenShift security guide for details about seccomp. Deploying containers using RHCOS reduces the attack surface by minimizing the host environment and tuning it for containers. The CRI-O container engine further reduces that attack surface by implementing only those features required by Kubernetes and OpenShift Container Platform to run and manage containers, as opposed to other container engines that implement desktop-oriented standalone features. RHCOS is a version of Red Hat Enterprise Linux (RHEL) that is specially configured to work as control plane (master) and worker nodes on OpenShift Container Platform clusters. So RHCOS is tuned to efficiently run container workloads, along with Kubernetes and OpenShift Container Platform services. To further protect RHCOS systems in OpenShift Container Platform clusters, most containers, except those managing or monitoring the host system itself, should run as a non-root user. Dropping the privilege level or creating containers with the least amount of privileges possible is recommended best practice for protecting your own OpenShift Container Platform clusters. Additional resources How nodes enforce resource constraints Managing security context constraints Supported platforms for OpenShift clusters Requirements for a cluster with user-provisioned infrastructure Choosing how to configure RHCOS Ignition Kernel arguments Kernel modules Disk encryption Chrony time service About the OpenShift Update Service FIPS cryptography 2.2.2. Comparing virtualization and containers Traditional virtualization provides another way to keep application environments separate on the same physical host. However, virtual machines work in a different way than containers. Virtualization relies on a hypervisor spinning up guest virtual machines (VMs), each of which has its own operating system (OS), represented by a running kernel, as well as the running application and its dependencies. With VMs, the hypervisor isolates the guests from each other and from the host kernel. 
Fewer individuals and processes have access to the hypervisor, reducing the attack surface on the physical server. That said, security must still be monitored: one guest VM might be able to use hypervisor bugs to gain access to another VM or the host kernel. And, when the OS needs to be patched, it must be patched on all guest VMs using that OS. Containers can be run inside guest VMs, and there might be use cases where this is desirable. For example, you might be deploying a traditional application in a container, perhaps to lift-and-shift an application to the cloud. Container separation on a single host, however, provides a more lightweight, flexible, and easier-to-scale deployment solution. This deployment model is particularly appropriate for cloud-native applications. Containers are generally much smaller than VMs and consume less memory and CPU. See Linux Containers Compared to KVM Virtualization in the RHEL 7 container documentation to learn about the differences between container and VMs. 2.2.3. Securing OpenShift Container Platform When you deploy OpenShift Container Platform, you have the choice of an installer-provisioned infrastructure (there are several available platforms) or your own user-provisioned infrastructure. Some low-level security-related configuration, such as enabling FIPS mode or adding kernel modules required at first boot, might benefit from a user-provisioned infrastructure. Likewise, user-provisioned infrastructure is appropriate for disconnected OpenShift Container Platform deployments. Keep in mind that, when it comes to making security enhancements and other configuration changes to OpenShift Container Platform, the goals should include: Keeping the underlying nodes as generic as possible. You want to be able to easily throw away and spin up similar nodes quickly and in prescriptive ways. Managing modifications to nodes through OpenShift Container Platform as much as possible, rather than making direct, one-off changes to the nodes. In pursuit of those goals, most node changes should be done during installation through Ignition or later using MachineConfigs that are applied to sets of nodes by the Machine Config Operator. Examples of security-related configuration changes you can do in this way include: Adding kernel arguments Adding kernel modules Enabling support for FIPS cryptography Configuring disk encryption Configuring the chrony time service Besides the Machine Config Operator, there are several other Operators available to configure OpenShift Container Platform infrastructure that are managed by the Cluster Version Operator (CVO). The CVO is able to automate many aspects of OpenShift Container Platform cluster updates. Additional resources FIPS cryptography 2.3. Hardening RHCOS RHCOS was created and tuned to be deployed in OpenShift Container Platform with few if any changes needed to RHCOS nodes. Every organization adopting OpenShift Container Platform has its own requirements for system hardening. As a RHEL system with OpenShift-specific modifications and features added (such as Ignition, ostree, and a read-only /usr to provide limited immutability), RHCOS can be hardened just as you would any RHEL system. Differences lie in the ways you manage the hardening. A key feature of OpenShift Container Platform and its Kubernetes engine is to be able to quickly scale applications and infrastructure up and down as needed. 
Unless it is unavoidable, you do not want to make direct changes to RHCOS by logging into a host and adding software or changing settings. You want to have the OpenShift Container Platform installer and control plane manage changes to RHCOS so new nodes can be spun up without manual intervention. So, if you are setting out to harden RHCOS nodes in OpenShift Container Platform to meet your security needs, you should consider both what to harden and how to go about doing that hardening. 2.3.1. Choosing what to harden in RHCOS The RHEL 9 Security Hardening guide describes how you should approach security for any RHEL system. Use this guide to learn how to approach cryptography, evaluate vulnerabilities, and assess threats to various services. Likewise, you can learn how to scan for compliance standards, check file integrity, perform auditing, and encrypt storage devices. With the knowledge of what features you want to harden, you can then decide how to harden them in RHCOS. 2.3.2. Choosing how to harden RHCOS Direct modification of RHCOS systems in OpenShift Container Platform is discouraged. Instead, you should think of modifying systems in pools of nodes, such as worker nodes and control plane nodes. When a new node is needed, in non-bare metal installs, you can request a new node of the type you want and it will be created from an RHCOS image plus the modifications you created earlier. There are opportunities for modifying RHCOS before installation, during installation, and after the cluster is up and running. 2.3.2.1. Hardening before installation For bare metal installations, you can add hardening features to RHCOS before beginning the OpenShift Container Platform installation. For example, you can add kernel options when you boot the RHCOS installer to turn security features on or off, such as various SELinux booleans or low-level settings, such as symmetric multithreading. Warning Disabling SELinux on RHCOS nodes is not supported. Although bare metal RHCOS installations are more difficult, they offer the opportunity of getting operating system changes in place before starting the OpenShift Container Platform installation. This can be important when you need to ensure that certain features, such as disk encryption or special networking settings, be set up at the earliest possible moment. 2.3.2.2. Hardening during installation You can interrupt the OpenShift Container Platform installation process and change Ignition configs. Through Ignition configs, you can add your own files and systemd services to the RHCOS nodes. You can also make some basic security-related changes to the install-config.yaml file used for installation. Contents added in this way are available at each node's first boot. 2.3.2.3. Hardening after the cluster is running After the OpenShift Container Platform cluster is up and running, there are several ways to apply hardening features to RHCOS: Daemon set: If you need a service to run on every node, you can add that service with a Kubernetes DaemonSet object . Machine config: MachineConfig objects contain a subset of Ignition configs in the same format. By applying machine configs to all worker or control plane nodes, you can ensure that the node of the same type that is added to the cluster has the same changes applied. All of the features noted here are described in the OpenShift Container Platform product documentation. 
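To make the machine config approach concrete, the following is a minimal sketch of a MachineConfig object that adds a kernel argument to every node in the worker pool. The object name and the audit=1 argument are illustrative assumptions, not settings that OpenShift Container Platform requires; substitute the arguments that your own hardening policy calls for.

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-hardening-kargs   # illustrative name
  labels:
    machineconfiguration.openshift.io/role: worker   # targets the worker machine config pool
spec:
  kernelArguments:
    - audit=1   # example hardening argument; replace with the arguments your policy requires

Applying an object like this with oc apply causes the Machine Config Operator to roll the change out across the worker pool, so the hardening is reproduced automatically on any node that joins the cluster later, instead of being applied by hand to individual hosts.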
Additional resources OpenShift Security Guide Choosing how to configure RHCOS Modifying Nodes Manually creating the installation configuration file Creating the Kubernetes manifest and Ignition config files Installing RHCOS by using an ISO image Customizing nodes Adding kernel arguments to nodes Optional configuration parameters Support for FIPS cryptography RHEL core crypto components 2.4. Container image signatures Red Hat delivers signatures for the images in the Red Hat Container Registries. Those signatures can be automatically verified when being pulled to OpenShift Container Platform 4 clusters by using the Machine Config Operator (MCO). Quay.io serves most of the images that make up OpenShift Container Platform, and only the release image is signed. Release images refer to the approved OpenShift Container Platform images, offering a degree of protection against supply chain attacks. However, some extensions to OpenShift Container Platform, such as logging, monitoring, and service mesh, are shipped as Operators from the Operator Lifecycle Manager (OLM). Those images ship from the Red Hat Ecosystem Catalog Container images registry. To verify the integrity of those images between Red Hat registries and your infrastructure, enable signature verification. 2.4.1. Enabling signature verification for Red Hat Container Registries Enabling container signature validation for Red Hat Container Registries requires writing a signature verification policy file specifying the keys to verify images from these registries. For RHEL8 nodes, the registries are already defined in /etc/containers/registries.d by default. Procedure Create a Butane config file, 51-worker-rh-registry-trust.bu , containing the necessary configuration for the worker nodes. Note See "Creating machine configs with Butane" for information about Butane. 
variant: openshift version: 4.18.0 metadata: name: 51-worker-rh-registry-trust labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/containers/policy.json mode: 0644 overwrite: true contents: inline: | { "default": [ { "type": "insecureAcceptAnything" } ], "transports": { "docker": { "registry.access.redhat.com": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ], "registry.redhat.io": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ] }, "docker-daemon": { "": [ { "type": "insecureAcceptAnything" } ] } } } Use Butane to generate a machine config YAML file, 51-worker-rh-registry-trust.yaml , containing the file to be written to disk on the worker nodes: USD butane 51-worker-rh-registry-trust.bu -o 51-worker-rh-registry-trust.yaml Apply the created machine config: USD oc apply -f 51-worker-rh-registry-trust.yaml Check that the worker machine config pool has rolled out with the new machine config: Check that the new machine config was created: USD oc get mc Sample output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 00-worker a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 51-master-rh-registry-trust 3.2.0 13s 51-worker-rh-registry-trust 3.2.0 53s 1 99-master-generated-crio-seccomp-use-default 3.2.0 25m 99-master-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-master-ssh 3.2.0 28m 99-worker-generated-crio-seccomp-use-default 3.2.0 25m 99-worker-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-worker-ssh 3.2.0 28m rendered-master-af1e7ff78da0a9c851bab4be2777773b a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 8s rendered-master-cd51fd0c47e91812bfef2765c52ec7e6 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-2b52f75684fbc711bd1652dd86fd0b82 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-be3b3bce4f4aa52a62902304bac9da3c a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 48s 2 1 New machine config 2 New rendered machine config Check that the worker machine config pool is updating with the new machine config: USD oc get mcp Sample output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-af1e7ff78da0a9c851bab4be2777773b True False False 3 3 3 0 30m worker rendered-worker-be3b3bce4f4aa52a62902304bac9da3c False True False 3 0 0 0 30m 1 1 When the UPDATING field is True , the machine config pool is updating with the new machine config. When the field becomes False , the worker machine config pool has rolled out to the new machine config. If your cluster uses any RHEL7 worker nodes, when the worker machine config pool is updated, create YAML files on those nodes in the /etc/containers/registries.d directory, which specify the location of the detached signatures for a given registry server. The following example works only for images hosted in registry.access.redhat.com and registry.redhat.io . 
Start a debug session to each RHEL7 worker node: USD oc debug node/<node_name> Change your root directory to /host : sh-4.2# chroot /host Create a /etc/containers/registries.d/registry.redhat.io.yaml file that contains the following: docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore Create a /etc/containers/registries.d/registry.access.redhat.com.yaml file that contains the following: docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore Exit the debug session. 2.4.2. Verifying the signature verification configuration After you apply the machine configs to the cluster, the Machine Config Controller detects the new MachineConfig object and generates a new rendered-worker-<hash> version. Prerequisites You enabled signature verification by using a machine config file. Procedure On the command line, run the following command to display information about a desired worker: USD oc describe machineconfigpool/worker Example output of initial worker monitoring Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Metadata: Creation Timestamp: 2019-12-19T02:02:12Z Generation: 3 Resource Version: 16229 Self Link: /apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker UID: 92697796-2203-11ea-b48c-fa163e3940e5 Spec: Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Machine Config Selector: Match Labels: machineconfiguration.openshift.io/role: worker Node Selector: Match Labels: node-role.kubernetes.io/worker: Paused: false Status: Conditions: Last Transition Time: 2019-12-19T02:03:27Z Message: Reason: Status: False Type: RenderDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: NodeDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: Degraded Last Transition Time: 2019-12-19T02:28:23Z Message: Reason: Status: False Type: Updated Last Transition Time: 2019-12-19T02:28:23Z Message: All nodes are updating to rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updating Configuration: Name: rendered-worker-d9b3f4ffcfd65c30dcf591a0e8cf9b2e Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 1 Observed Generation: 3 Ready Machine Count: 0 Unavailable Machine Count: 1 Updated 
Machine Count: 0 Events: <none> Run the oc describe command again: USD oc describe machineconfigpool/worker Example output after the worker is updated ... Last Transition Time: 2019-12-19T04:53:09Z Message: All nodes are updated with rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updated Last Transition Time: 2019-12-19T04:53:09Z Message: Reason: Status: False Type: Updating Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 4 Ready Machine Count: 3 Unavailable Machine Count: 0 Updated Machine Count: 3 ... Note The Observed Generation parameter shows an increased count based on the generation of the controller-produced configuration. This controller updates this value even if it fails to process the specification and generate a revision. The Configuration Source value points to the 51-worker-rh-registry-trust configuration. Confirm that the policy.json file exists with the following command: USD oc debug node/<node> -- chroot /host cat /etc/containers/policy.json Example output Starting pod/<node>-debug ... To use host binaries, run `chroot /host` { "default": [ { "type": "insecureAcceptAnything" } ], "transports": { "docker": { "registry.access.redhat.com": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ], "registry.redhat.io": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ] }, "docker-daemon": { "": [ { "type": "insecureAcceptAnything" } ] } } } Confirm that the registry.redhat.io.yaml file exists with the following command: USD oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.redhat.io.yaml Example output Starting pod/<node>-debug ... To use host binaries, run `chroot /host` docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore Confirm that the registry.access.redhat.com.yaml file exists with the following command: USD oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.access.redhat.com.yaml Example output Starting pod/<node>-debug ... To use host binaries, run `chroot /host` docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore 2.4.3. Understanding the verification of container images lacking verifiable signatures Each OpenShift Container Platform release image is immutable and signed with a Red Hat production key. During an OpenShift Container Platform update or installation, a release image might deploy container images that do not have verifiable signatures. Each signed release image digest is immutable. Each reference in the release image is to the immutable digest of another image, so the contents can be trusted transitively. In other words, the signature on the release image validates all release contents. 
For example, the image references lacking a verifiable signature are contained in the signed OpenShift Container Platform release image: Example release info output USD oc adm release info quay.io/openshift-release-dev/ocp-release@sha256:2309578b68c5666dad62aed696f1f9d778ae1a089ee461060ba7b9514b7ca417 -o pullspec 1 quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9aafb914d5d7d0dec4edd800d02f811d7383a7d49e500af548eab5d00c1bffdb 2 1 Signed release image SHA. 2 Container image lacking a verifiable signature included in the release. 2.4.3.1. Automated verification during updates Verification of signatures is automatic. The OpenShift Cluster Version Operator (CVO) verifies signatures on the release images during an OpenShift Container Platform update. This is an internal process. An OpenShift Container Platform installation or update fails if the automated verification fails. Verification of signatures can also be done manually using the skopeo command-line utility. Additional resources Introduction to OpenShift Updates 2.4.3.2. Using skopeo to verify signatures of Red Hat container images You can verify the signatures for container images included in an OpenShift Container Platform release image by pulling those signatures from the OCP release mirror site. Because the signatures on the mirror site are not in a format readily understood by Podman or CRI-O, you can use the skopeo standalone-verify command to verify that your release images are signed by Red Hat. Prerequisites You have installed the skopeo command-line utility. Procedure Get the full SHA for your release by running the following command: USD oc adm release info <release_version> \ 1 1 Substitute <release_version> with your release number, for example, 4.14.3 . Example output snippet --- Pull From: quay.io/openshift-release-dev/ocp-release@sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 --- Pull down the Red Hat release key by running the following command: USD curl -o pub.key https://access.redhat.com/security/data/fd431d51.txt Get the signature file for the specific release that you want to verify by running the following command: USD curl -o signature-1 https://mirror.openshift.com/pub/openshift-v4/signatures/openshift-release-dev/ocp-release/sha256%<sha_from_version>/signature-1 \ 1 1 Replace <sha_from_version> with the SHA value from the full link to the mirror site that matches the SHA of your release. For example, the link to the signature for the 4.12.23 release is https://mirror.openshift.com/pub/openshift-v4/signatures/openshift-release-dev/ocp-release/sha256%e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55/signature-1 , and the SHA value is e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 . Get the manifest for the release image by running the following command: USD skopeo inspect --raw docker://<quay_link_to_release> > manifest.json \ 1 1 Replace <quay_link_to_release> with the output of the oc adm release info command. For example, quay.io/openshift-release-dev/ocp-release@sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 . Use skopeo to verify the signature: USD skopeo standalone-verify manifest.json quay.io/openshift-release-dev/ocp-release:<release_number>-<arch> any signature-1 --public-key-file pub.key where: <release_number> Specifies the release number, for example 4.14.3 . <arch> Specifies the architecture, for example x86_64 .
Example output Signature verified using fingerprint 567E347AD0044ADE55BA8A5F199E2F91FD431D51, digest sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 2.4.4. Additional resources Machine Config Overview 2.5. Understanding compliance For many OpenShift Container Platform customers, regulatory readiness, or compliance, on some level is required before any systems can be put into production. That regulatory readiness can be imposed by national standards, industry standards or the organization's corporate governance framework. 2.5.1. Understanding compliance and risk management FIPS compliance is one of the most critical components required in highly secure environments, to ensure that only supported cryptographic technologies are allowed on nodes. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. To understand Red Hat's view of OpenShift Container Platform compliance frameworks, refer to the Risk Management and Regulatory Readiness chapter of the OpenShift Security Guide Book . Additional resources Installing a cluster in FIPS mode 2.6. Securing container content To ensure the security of the content inside your containers you need to start with trusted base images, such as Red Hat Universal Base Images, and add trusted software. To check the ongoing security of your container images, there are both Red Hat and third-party tools for scanning images. 2.6.1. Securing inside the container Applications and infrastructures are composed of readily available components, many of which are open source packages such as, the Linux operating system, JBoss Web Server, PostgreSQL, and Node.js. Containerized versions of these packages are also available. However, you need to know where the packages originally came from, what versions are used, who built them, and whether there is any malicious code inside them. Some questions to answer include: Will what is inside the containers compromise your infrastructure? Are there known vulnerabilities in the application layer? Are the runtime and operating system layers current? By building your containers from Red Hat Universal Base Images (UBI) you are assured of a foundation for your container images that consists of the same RPM-packaged software that is included in Red Hat Enterprise Linux. No subscriptions are required to either use or redistribute UBI images. To assure ongoing security of the containers themselves, security scanning features, used directly from RHEL or added to OpenShift Container Platform, can alert you when an image you are using has vulnerabilities. OpenSCAP image scanning is available in RHEL and the Red Hat Quay Container Security Operator can be added to check container images used in OpenShift Container Platform. 2.6.2. Creating redistributable images with UBI To create containerized applications, you typically start with a trusted base image that offers the components that are usually provided by the operating system. 
These include the libraries, utilities, and other features the application expects to see in the operating system's file system. Red Hat Universal Base Images (UBI) were created to encourage anyone building their own containers to start with one that is made entirely from Red Hat Enterprise Linux rpm packages and other content. These UBI images are updated regularly to keep up with security patches and are free to use and redistribute with container images built to include your own software. Search the Red Hat Ecosystem Catalog to both find and check the health of different UBI images. As someone creating secure container images, you might be interested in these two general types of UBI images: UBI : There are standard UBI images for RHEL 7, 8, and 9 ( ubi7/ubi , ubi8/ubi , and ubi9/ubi ), as well as minimal images based on those systems ( ubi7/ubi-minimal , ubi8/ubi-minimal , and ubi9/ubi-minimal ). All of these images are preconfigured to point to free repositories of RHEL software that you can add to the container images you build, using standard yum and dnf commands. Red Hat encourages people to use these images on other distributions, such as Fedora and Ubuntu. Red Hat Software Collections : Search the Red Hat Ecosystem Catalog for rhscl/ to find images created to use as base images for specific types of applications. For example, there are Apache httpd ( rhscl/httpd-* ), Python ( rhscl/python-* ), Ruby ( rhscl/ruby-* ), Node.js ( rhscl/nodejs-* ), and Perl ( rhscl/perl-* ) rhscl images. Keep in mind that while UBI images are freely available and redistributable, Red Hat support for these images is only available through Red Hat product subscriptions. See Using Red Hat Universal Base Images in the Red Hat Enterprise Linux documentation for information on how to use and build on standard, minimal, and init UBI images. 2.6.3. Security scanning in RHEL For Red Hat Enterprise Linux (RHEL) systems, OpenSCAP scanning is available from the openscap-utils package. In RHEL, you can use the openscap-podman command to scan images for vulnerabilities. See Scanning containers and container images for vulnerabilities in the Red Hat Enterprise Linux documentation. OpenShift Container Platform enables you to leverage RHEL scanners with your CI/CD process. For example, you can integrate static code analysis tools that test for security flaws in your source code and software composition analysis tools that identify open source libraries to provide metadata on those libraries such as known vulnerabilities. 2.6.3.1. Scanning OpenShift images For the container images that are running in OpenShift Container Platform and are pulled from Red Hat Quay registries, you can use an Operator to list the vulnerabilities of those images. The Red Hat Quay Container Security Operator can be added to OpenShift Container Platform to provide vulnerability reporting for images added to selected namespaces. Container image scanning for Red Hat Quay is performed by Clair. In Red Hat Quay, Clair can search for and report vulnerabilities in images built from RHEL, CentOS, Oracle, Alpine, Debian, and Ubuntu operating system software. 2.6.4. Integrating external scanning OpenShift Container Platform makes use of object annotations to extend functionality. External tools, such as vulnerability scanners, can annotate image objects with metadata to summarize results and control pod execution. This section describes the recognized format of this annotation so it can be reliably used in consoles to display useful data to users.
2.6.4.1. Image metadata There are different types of image quality data, including package vulnerabilities and open source software (OSS) license compliance. Additionally, there may be more than one provider of this metadata. To that end, the following annotation format has been reserved: Table 2.1. Annotation key format Component Description Acceptable values qualityType Metadata type vulnerability license operations policy providerId Provider ID string openscap redhatcatalog redhatinsights blackduck jfrog 2.6.4.1.1. Example annotation keys The value of the image quality annotation is structured data that must adhere to the following format: Table 2.2. Annotation value format Field Required? Description Type name Yes Provider display name String timestamp Yes Scan timestamp String description No Short description String reference Yes URL of information source or more details. Required so user may validate the data. String scannerVersion No Scanner version String compliant No Compliance pass or fail Boolean summary No Summary of issues found List (see table below) The summary field must adhere to the following format: Table 2.3. Summary field value format Field Description Type label Display label for component (for example, "critical," "important," "moderate," "low," or "health") String data Data for this component (for example, count of vulnerabilities found or score) String severityIndex Component index allowing for ordering and assigning graphical representation. The value is range 0..3 where 0 = low. Integer reference URL of information source or more details. Optional. String 2.6.4.1.2. Example annotation values This example shows an OpenSCAP annotation for an image with vulnerability summary data and a compliance boolean: OpenSCAP annotation { "name": "OpenSCAP", "description": "OpenSCAP vulnerability score", "timestamp": "2016-09-08T05:04:46Z", "reference": "https://www.open-scap.org/930492", "compliant": true, "scannerVersion": "1.2", "summary": [ { "label": "critical", "data": "4", "severityIndex": 3, "reference": null }, { "label": "important", "data": "12", "severityIndex": 2, "reference": null }, { "label": "moderate", "data": "8", "severityIndex": 1, "reference": null }, { "label": "low", "data": "26", "severityIndex": 0, "reference": null } ] } This example shows the Container images section of the Red Hat Ecosystem Catalog annotation for an image with health index data with an external URL for additional details: Red Hat Ecosystem Catalog annotation { "name": "Red Hat Ecosystem Catalog", "description": "Container health index", "timestamp": "2016-09-08T05:04:46Z", "reference": "https://access.redhat.com/errata/RHBA-2016:1566", "compliant": null, "scannerVersion": "1.2", "summary": [ { "label": "Health index", "data": "B", "severityIndex": 1, "reference": null } ] } 2.6.4.2. Annotating image objects While image stream objects are what an end user of OpenShift Container Platform operates against, image objects are annotated with security metadata. Image objects are cluster-scoped, pointing to a single image that may be referenced by many image streams and tags. 2.6.4.2.1. 
Example annotate CLI command Replace <image> with an image digest, for example sha256:401e359e0f45bfdcf004e258b72e253fd07fba8cc5c6f2ed4f4608fb119ecc2 : USD oc annotate image <image> \ quality.images.openshift.io/vulnerability.redhatcatalog='{ \ "name": "Red Hat Ecosystem Catalog", \ "description": "Container health index", \ "timestamp": "2020-06-01T05:04:46Z", \ "compliant": null, \ "scannerVersion": "1.2", \ "reference": "https://access.redhat.com/errata/RHBA-2020:2347", \ "summary": "[ \ { "label": "Health index", "data": "B", "severityIndex": 1, "reference": null } ]" }' 2.6.4.3. Controlling pod execution Use the images.openshift.io/deny-execution image policy to programmatically control if an image can be run. 2.6.4.3.1. Example annotation annotations: images.openshift.io/deny-execution: true 2.6.4.4. Integration reference In most cases, external tools such as vulnerability scanners develop a script or plugin that watches for image updates, performs scanning, and annotates the associated image object with the results. Typically this automation calls the OpenShift Container Platform 4.18 REST APIs to write the annotation. See OpenShift Container Platform REST APIs for general information on the REST APIs. 2.6.4.4.1. Example REST API call The following example call using curl overrides the value of the annotation. Be sure to replace the values for <token> , <openshift_server> , <image_id> , and <image_annotation> . Patch API call USD curl -X PATCH \ -H "Authorization: Bearer <token>" \ -H "Content-Type: application/merge-patch+json" \ https://<openshift_server>:6443/apis/image.openshift.io/v1/images/<image_id> \ --data '{ <image_annotation> }' The following is an example of PATCH payload data: Patch call data { "metadata": { "annotations": { "quality.images.openshift.io/vulnerability.redhatcatalog": "{ 'name': 'Red Hat Ecosystem Catalog', 'description': 'Container health index', 'timestamp': '2020-06-01T05:04:46Z', 'compliant': null, 'reference': 'https://access.redhat.com/errata/RHBA-2020:2347', 'summary': [{'label': 'Health index', 'data': '4', 'severityIndex': 1, 'reference': null}] }" } } } Additional resources Image stream objects 2.7. Using container registries securely Container registries store container images to: Make images accessible to others Organize images into repositories that can include multiple versions of an image Optionally limit access to images, based on different authentication methods, or make them publicly available There are public container registries, such as Quay.io and Docker Hub where many people and organizations share their images. The Red Hat Registry offers supported Red Hat and partner images, while the Red Hat Ecosystem Catalog offers detailed descriptions and health checks for those images. To manage your own registry, you could purchase a container registry such as Red Hat Quay . From a security standpoint, some registries provide special features to check and improve the health of your containers. For example, Red Hat Quay offers container vulnerability scanning with Clair security scanner, build triggers to automatically rebuild images when source code changes in GitHub and other locations, and the ability to use role-based access control (RBAC) to secure access to images. 2.7.1. Knowing where containers come from? There are tools you can use to scan and track the contents of your downloaded and deployed container images. However, there are many public sources of container images. 
When using public container registries, you can add a layer of protection by using trusted sources. 2.7.2. Immutable and certified containers Consuming security updates is particularly important when managing immutable containers . Immutable containers are containers that will never be changed while running. When you deploy immutable containers, you do not step into the running container to replace one or more binaries. From an operational standpoint, you rebuild and redeploy an updated container image to replace a container instead of changing it. Red Hat certified images are: Free of known vulnerabilities in the platform components or layers Compatible across the RHEL platforms, from bare metal to cloud Supported by Red Hat The list of known vulnerabilities is constantly evolving, so you must track the contents of your deployed container images, as well as newly downloaded images, over time. You can use Red Hat Security Advisories (RHSAs) to alert you to any newly discovered issues in Red Hat certified container images, and direct you to the updated image. Alternatively, you can go to the Red Hat Ecosystem Catalog to look up that and other security-related issues for each Red Hat image. 2.7.3. Getting containers from Red Hat Registry and Ecosystem Catalog Red Hat lists certified container images for Red Hat products and partner offerings from the Container Images section of the Red Hat Ecosystem Catalog. From that catalog, you can see details of each image, including CVE, software packages listings, and health scores. Red Hat images are actually stored in what is referred to as the Red Hat Registry , which is represented by a public container registry ( registry.access.redhat.com ) and an authenticated registry ( registry.redhat.io ). Both include basically the same set of container images, with registry.redhat.io including some additional images that require authentication with Red Hat subscription credentials. Container content is monitored for vulnerabilities by Red Hat and updated regularly. When Red Hat releases security updates, such as fixes to glibc , DROWN , or Dirty Cow , any affected container images are also rebuilt and pushed to the Red Hat Registry. Red Hat uses a health index to reflect the security risk for each container provided through the Red Hat Ecosystem Catalog. Because containers consume software provided by Red Hat and the errata process, old, stale containers are insecure whereas new, fresh containers are more secure. To illustrate the age of containers, the Red Hat Ecosystem Catalog uses a grading system. A freshness grade is a measure of the oldest and most severe security errata available for an image. "A" is more up to date than "F". See Container Health Index grades as used inside the Red Hat Ecosystem Catalog for more details on this grading system. See the Red Hat Product Security Center for details on security updates and vulnerabilities related to Red Hat software. Check out Red Hat Security Advisories to search for specific advisories and CVEs. 2.7.4. OpenShift Container Registry OpenShift Container Platform includes the OpenShift Container Registry , a private registry running as an integrated component of the platform that you can use to manage your container images. The OpenShift Container Registry provides role-based access controls that allow you to manage who can pull and push which container images. OpenShift Container Platform also supports integration with other private registries that you might already be using, such as Red Hat Quay. 
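As a small, hedged illustration of those role-based access controls, the following RoleBinding sketch grants the default service account in one project permission to pull images that another project has pushed to the integrated registry. The project names are placeholders; only the system:image-puller cluster role is a real, built-in role.

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: allow-image-pull            # illustrative name
  namespace: image-project          # placeholder: project that owns the images
subjects:
- kind: ServiceAccount
  name: default
  namespace: app-project            # placeholder: project whose pods need to pull the images
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:image-puller         # built-in role that allows pulling from the integrated registry

The same binding is commonly created with the oc policy add-role-to-user command; the YAML form is shown here because it can be stored and reviewed alongside the rest of your cluster configuration.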
Additional resources Integrated OpenShift image registry 2.7.5. Storing containers using Red Hat Quay Red Hat Quay is an enterprise-quality container registry product from Red Hat. Development for Red Hat Quay is done through the upstream Project Quay . Red Hat Quay is available to deploy on-premises or through the hosted version of Red Hat Quay at Quay.io . Security-related features of Red Hat Quay include: Time machine : Allows images with older tags to expire after a set period of time or based on a user-selected expiration time. Repository mirroring : Lets you mirror other registries for security reasons, such as hosting a public repository on Red Hat Quay behind a company firewall, or for performance reasons, to keep registries closer to where they are used. Action log storage : Save Red Hat Quay logging output to Elasticsearch storage or Splunk to allow for later search and analysis. Clair : Scan images against a variety of Linux vulnerability databases, based on the origins of each container image. Internal authentication : Use the default local database to handle RBAC authentication to Red Hat Quay or choose from LDAP, Keystone (OpenStack), JWT Custom Authentication, or External Application Token authentication. External authorization (OAuth) : Allow authorization to Red Hat Quay from GitHub, GitHub Enterprise, or Google Authentication. Access settings : Generate tokens to allow access to Red Hat Quay from docker, rkt, anonymous access, user-created accounts, encrypted client passwords, or prefix username autocompletion. Ongoing integration of Red Hat Quay with OpenShift Container Platform continues, with several OpenShift Container Platform Operators of particular interest. The Quay Bridge Operator lets you replace the internal OpenShift image registry with Red Hat Quay. The Red Hat Quay Container Security Operator lets you check vulnerabilities of images running in OpenShift Container Platform that were pulled from Red Hat Quay registries. 2.8. Securing the build process In a container environment, the software build process is the stage in the life cycle where application code is integrated with the required runtime libraries. Managing this build process is key to securing the software stack. 2.8.1. Building once, deploying everywhere Using OpenShift Container Platform as the standard platform for container builds enables you to guarantee the security of the build environment. Adhering to a "build once, deploy everywhere" philosophy ensures that the product of the build process is exactly what is deployed in production. It is also important to maintain the immutability of your containers. You should not patch running containers, but rebuild and redeploy them. As your software moves through the stages of building, testing, and production, it is important that the tools making up your software supply chain be trusted. A trusted software supply chain for containerized software can incorporate the following process and tools: OpenShift Container Platform can be integrated with trusted code repositories (such as GitHub) and development platforms (such as Che) for creating and managing secure code. Unit testing could rely on Cucumber and JUnit . You could inspect your containers for vulnerabilities and compliance issues with Anchore or Twistlock, and use image scanning tools such as AtomicScan or Clair. Tools such as Sysdig could provide ongoing monitoring of your containerized applications. 2.8.2.
Managing builds You can use Source-to-Image (S2I) to combine source code and base images. Builder images make use of S2I to enable your development and operations teams to collaborate on a reproducible build environment. With Red Hat S2I images available as Universal Base Image (UBI) images, you can now freely redistribute your software with base images built from real RHEL RPM packages. Red Hat has removed subscription restrictions to allow this. When developers commit code with Git for an application using build images, OpenShift Container Platform can perform the following functions: Trigger, either by using webhooks on the code repository or other automated continuous integration (CI) process, to automatically assemble a new image from available artifacts, the S2I builder image, and the newly committed code. Automatically deploy the newly built image for testing. Promote the tested image to production where it can be automatically deployed using a CI process. You can use the integrated OpenShift Container Registry to manage access to final images. Both S2I and native build images are automatically pushed to your OpenShift Container Registry. In addition to the included Jenkins for CI, you can also integrate your own build and CI environment with OpenShift Container Platform using RESTful APIs, as well as use any API-compliant image registry. 2.8.3. Securing inputs during builds In some scenarios, build operations require credentials to access dependent resources, but it is undesirable for those credentials to be available in the final application image produced by the build. You can define input secrets for this purpose. For example, when building a Node.js application, you can set up your private mirror for Node.js modules. To download modules from that private mirror, you must supply a custom .npmrc file for the build that contains a URL, user name, and password. For security reasons, you do not want to expose your credentials in the application image. Using this example scenario, you can add an input secret to a new BuildConfig object: Create the secret, if it does not exist: USD oc create secret generic secret-npmrc --from-file=.npmrc=~/.npmrc This creates a new secret named secret-npmrc , which contains the base64 encoded content of the ~/.npmrc file. Add the secret to the source section in the existing BuildConfig object: source: git: uri: https://github.com/sclorg/nodejs-ex.git secrets: - destinationDir: . secret: name: secret-npmrc To include the secret in a new BuildConfig object, run the following command: USD oc new-build \ openshift/nodejs-010-centos7~https://github.com/sclorg/nodejs-ex.git \ --build-secret secret-npmrc 2.8.4. Designing your build process You can design your container image management and build process to use container layers so that you can separate control. For example, an operations team manages base images, while architects manage middleware, runtimes, databases, and other solutions. Developers can then focus on application layers and focus on writing code. Because new vulnerabilities are identified daily, you need to proactively check container content over time. To do this, you should integrate automated security testing into your build or CI process. For example: SAST / DAST - Static and Dynamic security testing tools. Scanners for real-time checking against known vulnerabilities. 
Tools like these catalog the open source packages in your container, notify you of any known vulnerabilities, and update you when new vulnerabilities are discovered in previously scanned packages. Your CI process should include policies that flag builds with issues discovered by security scans so that your team can take appropriate action to address those issues. You should sign your custom built containers to ensure that nothing is tampered with between build and deployment. Using GitOps methodology, you can use the same CI/CD mechanisms to manage not only your application configurations, but also your OpenShift Container Platform infrastructure. 2.8.5. Building Knative serverless applications Relying on Kubernetes and Kourier, you can build, deploy, and manage serverless applications by using OpenShift Serverless in OpenShift Container Platform. As with other builds, you can use S2I images to build your containers, then serve them using Knative services. View Knative application builds through the Topology view of the OpenShift Container Platform web console. 2.8.6. Additional resources Understanding image builds Triggering and modifying builds Creating build inputs Input secrets and config maps OpenShift Serverless overview Viewing application composition using the Topology view 2.9. Deploying containers You can use a variety of techniques to make sure that the containers you deploy hold the latest production-quality content and that they have not been tampered with. These techniques include setting up build triggers to incorporate the latest code and using signatures to ensure that the container comes from a trusted source and has not been modified. 2.9.1. Controlling container deployments with triggers If something happens during the build process, or if a vulnerability is discovered after an image has been deployed, you can use tooling for automated, policy-based deployment to remediate. You can use triggers to rebuild and replace images, ensuring the immutable containers process, instead of patching running containers, which is not recommended. For example, you build an application using three container image layers: core, middleware, and applications. An issue is discovered in the core image and that image is rebuilt. After the build is complete, the image is pushed to your OpenShift Container Registry. OpenShift Container Platform detects that the image has changed and automatically rebuilds and deploys the application image, based on the defined triggers. This change incorporates the fixed libraries and ensures that the production code is identical to the most current image. You can use the oc set triggers command to set a deployment trigger. For example, to set a trigger for a deployment called deployment-example: USD oc set triggers deploy/deployment-example \ --from-image=example:latest \ --containers=web 2.9.2. Controlling what image sources can be deployed It is important that the intended images are actually being deployed, that the images including the contained content are from trusted sources, and they have not been altered. Cryptographic signing provides this assurance. OpenShift Container Platform enables cluster administrators to apply security policy that is broad or narrow, reflecting deployment environment and security requirements. 
Two parameters define this policy: one or more registries, with optional project namespace trust type, such as accept, reject, or require public key(s) You can use these policy parameters to allow, deny, or require a trust relationship for entire registries, parts of registries, or individual images. Using trusted public keys, you can ensure that the source is cryptographically verified. The policy rules apply to nodes. Policy may be applied uniformly across all nodes or targeted for different node workloads (for example, build, zone, or environment). Example image signature policy file { "default": [{"type": "reject"}], "transports": { "docker": { "access.redhat.com": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ] }, "atomic": { "172.30.1.1:5000/openshift": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ], "172.30.1.1:5000/production": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/example.com/pubkey" } ], "172.30.1.1:5000": [{"type": "reject"}] } } } The policy can be saved onto a node as /etc/containers/policy.json . Saving this file to a node is best accomplished using a new MachineConfig object. This example enforces the following rules: Require images from the Red Hat Registry ( registry.access.redhat.com ) to be signed by the Red Hat public key. Require images from your OpenShift Container Registry in the openshift namespace to be signed by the Red Hat public key. Require images from your OpenShift Container Registry in the production namespace to be signed by the public key for example.com . Reject all other registries not specified by the global default definition. 2.9.3. Using signature transports A signature transport is a way to store and retrieve the binary signature blob. There are two types of signature transports. atomic : Managed by the OpenShift Container Platform API. docker : Served as a local file or by a web server. The OpenShift Container Platform API manages signatures that use the atomic transport type. You must store the images that use this signature type in your OpenShift Container Registry. Because the docker/distribution extensions API auto-discovers the image signature endpoint, no additional configuration is required. Signatures that use the docker transport type are served by local file or web server. These signatures are more flexible; you can serve images from any container image registry and use an independent server to deliver binary signatures. However, the docker transport type requires additional configuration. You must configure the nodes with the URI of the signature server by placing arbitrarily-named YAML files into a directory on the host system, /etc/containers/registries.d by default. The YAML configuration files contain a registry URI and a signature server URI, or sigstore : Example registries.d file docker: access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore In this example, the Red Hat Registry, access.redhat.com , is the signature server that provides signatures for the docker transport type. Its URI is defined in the sigstore parameter. You might name this file /etc/containers/registries.d/redhat.com.yaml and use the Machine Config Operator to automatically place the file on each node in your cluster. No service restart is required since policy and registries.d files are dynamically loaded by the container runtime. 2.9.4. 
Creating secrets and config maps The Secret object type provides a mechanism to hold sensitive information such as passwords, OpenShift Container Platform client configuration files, dockercfg files, and private source repository credentials. Secrets decouple sensitive content from pods. You can mount secrets into containers using a volume plugin or the system can use secrets to perform actions on behalf of a pod. For example, to add a secret to your deployment configuration so that it can access a private image repository, do the following: Procedure Log in to the OpenShift Container Platform web console. Create a new project. Navigate to Resources Secrets and create a new secret. Set Secret Type to Image Secret and Authentication Type to Image Registry Credentials to enter credentials for accessing a private image repository. When creating a deployment configuration (for example, from the Add to Project Deploy Image page), set the Pull Secret to your new secret. Config maps are similar to secrets, but are designed to support working with strings that do not contain sensitive information. The ConfigMap object holds key-value pairs of configuration data that can be consumed in pods or used to store configuration data for system components such as controllers. 2.9.5. Automating continuous deployment You can integrate your own continuous deployment (CD) tooling with OpenShift Container Platform. By leveraging CI/CD and OpenShift Container Platform, you can automate the process of rebuilding the application to incorporate the latest fixes, testing, and ensuring that it is deployed everywhere within the environment. Additional resources Input secrets and config maps 2.10. Securing the container platform OpenShift Container Platform and Kubernetes APIs are key to automating container management at scale. APIs are used to: Validate and configure the data for pods, services, and replication controllers. Perform project validation on incoming requests and invoke triggers on other major system components. Security-related features in OpenShift Container Platform that are based on Kubernetes include: Multitenancy, which combines Role-Based Access Controls and network policies to isolate containers at multiple levels. Admission plugins, which form boundaries between an API and those making requests to the API. OpenShift Container Platform uses Operators to automate and simplify the management of Kubernetes-level security features. 2.10.1. Isolating containers with multitenancy Multitenancy allows applications on an OpenShift Container Platform cluster that are owned by multiple users, and run across multiple hosts and namespaces, to remain isolated from each other and from outside attacks. You obtain multitenancy by applying role-based access control (RBAC) to Kubernetes namespaces. In Kubernetes, namespaces are areas where applications can run in ways that are separate from other applications. OpenShift Container Platform uses and extends namespaces by adding extra annotations, including MCS labeling in SELinux, and identifying these extended namespaces as projects . Within the scope of a project, users can maintain their own cluster resources, including service accounts, policies, constraints, and various other objects. RBAC objects are assigned to projects to authorize selected users to have access to those projects. That authorization takes the form of rules, roles, and bindings: Rules define what a user can create or access in a project. 
Roles are collections of rules that you can bind to selected users or groups. Bindings define the association between users or groups and roles. Local RBAC roles and bindings attach a user or group to a particular project. Cluster RBAC can attach cluster-wide roles and bindings to all projects in a cluster. There are default cluster roles that can be assigned to provide admin , basic-user , cluster-admin , and cluster-status access. 2.10.2. Protecting control plane with admission plugins While RBAC controls access rules between users and groups and available projects, admission plugins define access to the OpenShift Container Platform master API. Admission plugins form a chain of rules that consist of: Default admissions plugins: These implement a default set of policies and resources limits that are applied to components of the OpenShift Container Platform control plane. Mutating admission plugins: These plugins dynamically extend the admission chain. They call out to a webhook server and can both authenticate a request and modify the selected resource. Validating admission plugins: These validate requests for a selected resource and can both validate the request and ensure that the resource does not change again. API requests go through admissions plugins in a chain, with any failure along the way causing the request to be rejected. Each admission plugin is associated with particular resources and only responds to requests for those resources. 2.10.2.1. Security context constraints (SCCs) You can use security context constraints (SCCs) to define a set of conditions that a pod must run with to be accepted into the system. Some aspects that can be managed by SCCs include: Running of privileged containers Capabilities a container can request to be added Use of host directories as volumes SELinux context of the container Container user ID If you have the required permissions, you can adjust the default SCC policies to be more permissive, if required. 2.10.2.2. Granting roles to service accounts You can assign roles to service accounts, in the same way that users are assigned role-based access. There are three default service accounts created for each project. A service account: is limited in scope to a particular project derives its name from its project is automatically assigned an API token and credentials to access the OpenShift Container Registry Service accounts associated with platform components automatically have their keys rotated. 2.10.3. Authentication and authorization 2.10.3.1. Controlling access using OAuth You can use API access control via authentication and authorization for securing your container platform. The OpenShift Container Platform master includes a built-in OAuth server. Users can obtain OAuth access tokens to authenticate themselves to the API. As an administrator, you can configure OAuth to authenticate using an identity provider , such as LDAP, GitHub, or Google. The identity provider is used by default for new OpenShift Container Platform deployments, but you can configure this at initial installation time or postinstallation. 2.10.3.2. API access control and management Applications can have multiple, independent API services which have different endpoints that require management. OpenShift Container Platform includes a containerized version of the 3scale API gateway so that you can manage your APIs and control access. 
3scale gives you a variety of standard options for API authentication and security, which can be used alone or in combination to issue credentials and control access: standard API keys, application ID and key pair, and OAuth 2.0. You can restrict access to specific endpoints, methods, and services and apply access policy for groups of users. Application plans allow you to set rate limits for API usage and control traffic flow for groups of developers. For a tutorial on using APIcast v2, the containerized 3scale API Gateway, see Running APIcast on Red Hat OpenShift in the 3scale documentation. 2.10.3.3. Red Hat Single Sign-On The Red Hat Single Sign-On server enables you to secure your applications by providing web single sign-on capabilities based on standards, including SAML 2.0, OpenID Connect, and OAuth 2.0. The server can act as a SAML or OpenID Connect-based identity provider (IdP), mediating with your enterprise user directory or third-party identity provider for identity information and your applications using standards-based tokens. You can integrate Red Hat Single Sign-On with LDAP-based directory services including Microsoft Active Directory and Red Hat Enterprise Linux Identity Management. 2.10.3.4. Secure self-service web console OpenShift Container Platform provides a self-service web console to ensure that teams do not access other environments without authorization. OpenShift Container Platform ensures a secure multitenant master by providing the following: Access to the master uses Transport Layer Security (TLS) Access to the API Server uses X.509 certificates or OAuth access tokens Project quota limits the damage that a rogue token could do The etcd service is not exposed directly to the cluster 2.10.4. Managing certificates for the platform OpenShift Container Platform has multiple components within its framework that use REST-based HTTPS communication leveraging encryption via TLS certificates. OpenShift Container Platform's installer configures these certificates during installation. There are some primary components that generate this traffic: masters (API server and controllers) etcd nodes registry router 2.10.4.1. Configuring custom certificates You can configure custom serving certificates for the public hostnames of the API server and web console during initial installation or when redeploying certificates. You can also use a custom CA. Additional resources Introduction to OpenShift Container Platform Using RBAC to define and apply permissions About admission plugins Managing security context constraints SCC reference commands Examples of granting roles to service accounts Configuring the internal OAuth server Understanding identity provider configuration Certificate types and descriptions Proxy certificates 2.11. Securing networks Network security can be managed at several levels. At the pod level, network namespaces can prevent containers from seeing other pods or the host system by restricting network access. Network policies give you control over allowing and rejecting connections. You can manage ingress and egress traffic to and from your containerized applications. 2.11.1. Using network namespaces OpenShift Container Platform uses software-defined networking (SDN) to provide a unified cluster network that enables communication between containers across the cluster. Network policy mode, by default, makes all pods in a project accessible from other pods and network endpoints. 
To isolate one or more pods in a project, you can create NetworkPolicy objects in that project to indicate the allowed incoming connections. Using multitenant mode, you can provide project-level isolation for pods and services. 2.11.2. Isolating pods with network policies Using network policies , you can isolate pods from each other in the same project. Network policies can deny all network access to a pod, only allow connections for the Ingress Controller, reject connections from pods in other projects, or set similar rules for how networks behave. Additional resources About network policy 2.11.3. Using multiple pod networks Each running container has only one network interface by default. The Multus CNI plugin lets you create multiple CNI networks, and then attach any of those networks to your pods. In that way, you can do things like separate private data onto a more restricted network and have multiple network interfaces on each node. Additional resources Using multiple networks 2.11.4. Isolating applications OpenShift Container Platform enables you to segment network traffic on a single cluster to make multitenant clusters that isolate users, teams, applications, and environments from non-global resources. 2.11.5. Securing ingress traffic There are many security implications related to how you configure access to your Kubernetes services from outside of your OpenShift Container Platform cluster. Besides exposing HTTP and HTTPS routes, ingress routing allows you to set up NodePort or LoadBalancer ingress types. NodePort exposes an application's service API object from each cluster worker. LoadBalancer lets you assign an external load balancer to an associated service API object in your OpenShift Container Platform cluster. Additional resources Configuring ingress cluster traffic 2.11.6. Securing egress traffic OpenShift Container Platform provides the ability to control egress traffic using either a router or firewall method. For example, you can use the IP allow list to control database access. A cluster administrator can assign one or more egress IP addresses to a project by configuring an egress IP address . Likewise, a cluster administrator can prevent egress traffic from going outside of an OpenShift Container Platform cluster using an egress firewall. By assigning a fixed egress IP address, you can have all outgoing traffic assigned to that IP address for a particular project. With the egress firewall, you can prevent a pod from connecting to an external network, prevent a pod from connecting to an internal network, or limit a pod's access to specific internal subnets. Additional resources Configuring an egress firewall for a project Configuring IPsec encryption 2.12. Securing attached storage OpenShift Container Platform supports multiple types of storage, both for on-premise and cloud providers. In particular, OpenShift Container Platform can use storage types that support the Container Storage Interface. 2.12.1. Persistent volume plugins Containers are useful for both stateless and stateful applications. Protecting attached storage is a key element of securing stateful services. Using the Container Storage Interface (CSI), OpenShift Container Platform can incorporate storage from any storage back end that supports the CSI interface. 
OpenShift Container Platform provides plugins for multiple types of storage, including: Red Hat OpenShift Data Foundation*, AWS Elastic Block Stores (EBS)*, AWS Elastic File System (EFS)*, Azure Disk*, Azure File*, OpenStack Cinder*, GCE Persistent Disks*, VMware vSphere*, Network File System (NFS), FlexVolume, Fibre Channel, and iSCSI. Plugins for those storage types with dynamic provisioning are marked with an asterisk (*). Data in transit is encrypted via HTTPS for all OpenShift Container Platform components communicating with each other. You can mount a persistent volume (PV) on a host in any way supported by your storage type. Different types of storage have different capabilities and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read-only. Each PV has its own set of access modes describing that specific PV's capabilities, such as ReadWriteOnce , ReadOnlyMany , and ReadWriteMany . 2.12.2. Shared storage For shared storage providers like NFS, the PV registers its group ID (GID) as an annotation on the PV resource. Then, when the PV is claimed by the pod, the annotated GID is added to the supplemental groups of the pod, giving that pod access to the contents of the shared storage. 2.12.3. Block storage For block storage providers like AWS Elastic Block Store (EBS), GCE Persistent Disks, and iSCSI, OpenShift Container Platform uses SELinux capabilities to secure the root of the mounted volume for non-privileged pods, making the mounted volume owned by and only visible to the container with which it is associated. Additional resources Understanding persistent storage Configuring CSI volumes Dynamic provisioning Persistent storage using NFS Persistent storage using AWS Elastic Block Store Persistent storage using GCE Persistent Disk 2.13. Monitoring cluster events and logs The ability to monitor and audit an OpenShift Container Platform cluster is an important part of safeguarding the cluster and its users against inappropriate usage. There are two main sources of cluster-level information that are useful for this purpose: events and logging. 2.13.1. Watching cluster events Cluster administrators are encouraged to familiarize themselves with the Event resource type and review the list of system events to determine which events are of interest. Events are associated with a namespace, either the namespace of the resource they are related to or, for cluster events, the default namespace. The default namespace holds relevant events for monitoring or auditing a cluster, such as node events and resource events related to infrastructure components. The master API and oc command do not provide parameters to scope a listing of events to only those related to nodes. A simple approach would be to use grep: $ oc get event -n default | grep Node Example output 1h 20h 3 origin-node-1.example.local Node Normal NodeHasDiskPressure ... A more flexible approach is to output the events in a form that other tools can process.
For example, the following command uses the jq tool against JSON output to extract only NodeHasDiskPressure events: $ oc get events -n default -o json \ | jq '.items[] | select(.involvedObject.kind == "Node" and .reason == "NodeHasDiskPressure")' Example output { "apiVersion": "v1", "count": 3, "involvedObject": { "kind": "Node", "name": "origin-node-1.example.local", "uid": "origin-node-1.example.local" }, "kind": "Event", "reason": "NodeHasDiskPressure", ... } Events related to resource creation, modification, or deletion can also be good candidates for detecting misuse of the cluster. The following query, for example, can be used to look for excessive pulling of images: $ oc get events --all-namespaces -o json \ | jq '[.items[] | select(.involvedObject.kind == "Pod" and .reason == "Pulling")] | length' Example output 4 Note When a namespace is deleted, its events are deleted as well. Events can also expire and are deleted to prevent filling up etcd storage. Events are not stored as a permanent record and frequent polling is necessary to capture statistics over time. 2.13.2. Logging Using the oc logs command, you can view container logs, build configs and deployments in real time. Different users have different levels of access to logs: Users who have access to a project are able to see the logs for that project by default. Users with admin roles can access all container logs. To save your logs for further audit and analysis, you can enable the cluster-logging add-on feature to collect, manage, and view system, container, and audit logs. You can deploy, manage, and upgrade OpenShift Logging through the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator. 2.13.3. Audit logs With audit logs , you can follow a sequence of activities associated with how a user, administrator, or other OpenShift Container Platform component is behaving. API audit logging is done on each server. Additional resources List of system events Viewing audit logs
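As a quick illustration of the audit log discussion above, the API server audit logs can be pulled from control plane nodes with oc adm node-logs. This is only a sketch; the log path shown is an assumption that can vary by release and by which API server you are interested in, so see the Viewing audit logs resource for the supported procedure.

$ oc adm node-logs --role=master --path=openshift-apiserver/
$ oc adm node-logs <node_name> --path=openshift-apiserver/audit.log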
[ "variant: openshift version: 4.18.0 metadata: name: 51-worker-rh-registry-trust labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/containers/policy.json mode: 0644 overwrite: true contents: inline: | { \"default\": [ { \"type\": \"insecureAcceptAnything\" } ], \"transports\": { \"docker\": { \"registry.access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"registry.redhat.io\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"docker-daemon\": { \"\": [ { \"type\": \"insecureAcceptAnything\" } ] } } }", "butane 51-worker-rh-registry-trust.bu -o 51-worker-rh-registry-trust.yaml", "oc apply -f 51-worker-rh-registry-trust.yaml", "oc get mc", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 00-worker a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 51-master-rh-registry-trust 3.2.0 13s 51-worker-rh-registry-trust 3.2.0 53s 1 99-master-generated-crio-seccomp-use-default 3.2.0 25m 99-master-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-master-ssh 3.2.0 28m 99-worker-generated-crio-seccomp-use-default 3.2.0 25m 99-worker-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-worker-ssh 3.2.0 28m rendered-master-af1e7ff78da0a9c851bab4be2777773b a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 8s rendered-master-cd51fd0c47e91812bfef2765c52ec7e6 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-2b52f75684fbc711bd1652dd86fd0b82 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-be3b3bce4f4aa52a62902304bac9da3c a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 48s 2", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-af1e7ff78da0a9c851bab4be2777773b True False False 3 3 3 0 30m worker rendered-worker-be3b3bce4f4aa52a62902304bac9da3c False True False 3 0 0 0 30m 1", "oc debug node/<node_name>", "sh-4.2# chroot /host", "docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore", "docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore", "oc describe machineconfigpool/worker", "Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Metadata: Creation Timestamp: 2019-12-19T02:02:12Z Generation: 3 Resource Version: 16229 Self Link: /apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker UID: 92697796-2203-11ea-b48c-fa163e3940e5 Spec: Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 
51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Machine Config Selector: Match Labels: machineconfiguration.openshift.io/role: worker Node Selector: Match Labels: node-role.kubernetes.io/worker: Paused: false Status: Conditions: Last Transition Time: 2019-12-19T02:03:27Z Message: Reason: Status: False Type: RenderDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: NodeDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: Degraded Last Transition Time: 2019-12-19T02:28:23Z Message: Reason: Status: False Type: Updated Last Transition Time: 2019-12-19T02:28:23Z Message: All nodes are updating to rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updating Configuration: Name: rendered-worker-d9b3f4ffcfd65c30dcf591a0e8cf9b2e Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 1 Observed Generation: 3 Ready Machine Count: 0 Unavailable Machine Count: 1 Updated Machine Count: 0 Events: <none>", "oc describe machineconfigpool/worker", "Last Transition Time: 2019-12-19T04:53:09Z Message: All nodes are updated with rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updated Last Transition Time: 2019-12-19T04:53:09Z Message: Reason: Status: False Type: Updating Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 4 Ready Machine Count: 3 Unavailable Machine Count: 0 Updated Machine Count: 3", "oc debug node/<node> -- chroot /host cat /etc/containers/policy.json", "Starting pod/<node>-debug To use host binaries, run `chroot /host` { \"default\": [ { \"type\": \"insecureAcceptAnything\" } ], \"transports\": { \"docker\": { \"registry.access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"registry.redhat.io\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"docker-daemon\": { \"\": [ { \"type\": \"insecureAcceptAnything\" } ] } } }", "oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.redhat.io.yaml", "Starting 
pod/<node>-debug To use host binaries, run `chroot /host` docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore", "oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.access.redhat.com.yaml", "Starting pod/<node>-debug To use host binaries, run `chroot /host` docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore", "oc adm release info quay.io/openshift-release-dev/ocp-release@sha256:2309578b68c5666dad62aed696f1f9d778ae1a089ee461060ba7b9514b7ca417 -o pullspec 1 quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9aafb914d5d7d0dec4edd800d02f811d7383a7d49e500af548eab5d00c1bffdb 2", "oc adm release info <release_version> \\ 1", "--- Pull From: quay.io/openshift-release-dev/ocp-release@sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 ---", "curl -o pub.key https://access.redhat.com/security/data/fd431d51.txt", "curl -o signature-1 https://mirror.openshift.com/pub/openshift-v4/signatures/openshift-release-dev/ocp-release/sha256%<sha_from_version>/signature-1 \\ 1", "skopeo inspect --raw docker://<quay_link_to_release> > manifest.json \\ 1", "skopeo standalone-verify manifest.json quay.io/openshift-release-dev/ocp-release:<release_number>-<arch> any signature-1 --public-key-file pub.key", "Signature verified using fingerprint 567E347AD0044ADE55BA8A5F199E2F91FD431D51, digest sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55", "quality.images.openshift.io/<qualityType>.<providerId>: {}", "quality.images.openshift.io/vulnerability.blackduck: {} quality.images.openshift.io/vulnerability.jfrog: {} quality.images.openshift.io/license.blackduck: {} quality.images.openshift.io/vulnerability.openscap: {}", "{ \"name\": \"OpenSCAP\", \"description\": \"OpenSCAP vulnerability score\", \"timestamp\": \"2016-09-08T05:04:46Z\", \"reference\": \"https://www.open-scap.org/930492\", \"compliant\": true, \"scannerVersion\": \"1.2\", \"summary\": [ { \"label\": \"critical\", \"data\": \"4\", \"severityIndex\": 3, \"reference\": null }, { \"label\": \"important\", \"data\": \"12\", \"severityIndex\": 2, \"reference\": null }, { \"label\": \"moderate\", \"data\": \"8\", \"severityIndex\": 1, \"reference\": null }, { \"label\": \"low\", \"data\": \"26\", \"severityIndex\": 0, \"reference\": null } ] }", "{ \"name\": \"Red Hat Ecosystem Catalog\", \"description\": \"Container health index\", \"timestamp\": \"2016-09-08T05:04:46Z\", \"reference\": \"https://access.redhat.com/errata/RHBA-2016:1566\", \"compliant\": null, \"scannerVersion\": \"1.2\", \"summary\": [ { \"label\": \"Health index\", \"data\": \"B\", \"severityIndex\": 1, \"reference\": null } ] }", "oc annotate image <image> quality.images.openshift.io/vulnerability.redhatcatalog='{ \"name\": \"Red Hat Ecosystem Catalog\", \"description\": \"Container health index\", \"timestamp\": \"2020-06-01T05:04:46Z\", \"compliant\": null, \"scannerVersion\": \"1.2\", \"reference\": \"https://access.redhat.com/errata/RHBA-2020:2347\", \"summary\": \"[ { \"label\": \"Health index\", \"data\": \"B\", \"severityIndex\": 1, \"reference\": null } ]\" }'", "annotations: images.openshift.io/deny-execution: true", "curl -X PATCH -H \"Authorization: Bearer <token>\" -H \"Content-Type: application/merge-patch+json\" https://<openshift_server>:6443/apis/image.openshift.io/v1/images/<image_id> --data '{ <image_annotation> }'", "{ \"metadata\": { \"annotations\": { \"quality.images.openshift.io/vulnerability.redhatcatalog\": \"{ 
'name': 'Red Hat Ecosystem Catalog', 'description': 'Container health index', 'timestamp': '2020-06-01T05:04:46Z', 'compliant': null, 'reference': 'https://access.redhat.com/errata/RHBA-2020:2347', 'summary': [{'label': 'Health index', 'data': '4', 'severityIndex': 1, 'reference': null}] }\" } } }", "oc create secret generic secret-npmrc --from-file=.npmrc=~/.npmrc", "source: git: uri: https://github.com/sclorg/nodejs-ex.git secrets: - destinationDir: . secret: name: secret-npmrc", "oc new-build openshift/nodejs-010-centos7~https://github.com/sclorg/nodejs-ex.git --build-secret secret-npmrc", "oc set triggers deploy/deployment-example --from-image=example:latest --containers=web", "{ \"default\": [{\"type\": \"reject\"}], \"transports\": { \"docker\": { \"access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"atomic\": { \"172.30.1.1:5000/openshift\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"172.30.1.1:5000/production\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/example.com/pubkey\" } ], \"172.30.1.1:5000\": [{\"type\": \"reject\"}] } } }", "docker: access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore", "oc get event -n default | grep Node", "1h 20h 3 origin-node-1.example.local Node Normal NodeHasDiskPressure", "oc get events -n default -o json | jq '.items[] | select(.involvedObject.kind == \"Node\" and .reason == \"NodeHasDiskPressure\")'", "{ \"apiVersion\": \"v1\", \"count\": 3, \"involvedObject\": { \"kind\": \"Node\", \"name\": \"origin-node-1.example.local\", \"uid\": \"origin-node-1.example.local\" }, \"kind\": \"Event\", \"reason\": \"NodeHasDiskPressure\", }", "oc get events --all-namespaces -o json | jq '[.items[] | select(.involvedObject.kind == \"Pod\" and .reason == \"Pulling\")] | length'", "4" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/security_and_compliance/container-security-1
Managing hybrid and multicloud resources
Managing hybrid and multicloud resources Red Hat OpenShift Data Foundation 4.15 Instructions for how to manage storage resources across a hybrid cloud or multicloud environment using the Multicloud Object Gateway (NooBaa). Red Hat Storage Documentation Team Abstract This document explains how to manage storage resources across a hybrid cloud or multicloud environment. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . Chapter 1. About the Multicloud Object Gateway The Multicloud Object Gateway (MCG) is a lightweight object storage service for OpenShift, allowing users to start small and then scale as needed on-premise, in multiple clusters, and with cloud-native storage. Chapter 2. Accessing the Multicloud Object Gateway with your applications You can access the object service with any application targeting AWS S3 or code that uses AWS S3 Software Development Kit (SDK). Applications need to specify the Multicloud Object Gateway (MCG) endpoint, an access key, and a secret access key. You can use your terminal or the MCG CLI to retrieve this information. For information on accessing the RADOS Object Gateway (RGW) S3 endpoint, see Accessing the RADOS Object Gateway S3 endpoint . Prerequisites A running OpenShift Data Foundation Platform. Download the MCG command-line interface for easier management. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found at Download RedHat OpenShift Data Foundation page . Note Choose the correct Product Variant according to your architecture. You can access the relevant endpoint, access key, and secret access key in two ways: Accessing the Multicloud Object Gateway from the terminal Accessing the Multicloud Object Gateway from the MCG command-line interface For example: Accessing the MCG bucket(s) using the virtual-hosted style If the client application tries to access https:// <bucket-name> .s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com <bucket-name> is the name of the MCG bucket For example, https://mcg-test-bucket.s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com A DNS entry is needed for mcg-test-bucket.s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com to point to the S3 Service. Important Ensure that you have a DNS entry in order to point the client application to the MCG buckets using the virtual-hosted style. 2.1. Accessing the Multicloud Object Gateway from the terminal Procedure Run the describe command to view information about the Multicloud Object Gateway (MCG) endpoint, including its access key ( AWS_ACCESS_KEY_ID value) and secret access key ( AWS_SECRET_ACCESS_KEY value). 
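A minimal sketch of that command, assuming the default openshift-storage namespace used elsewhere in this guide:

$ oc describe noobaa -n openshift-storage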
The output will look similar to the following: 1 access key ( AWS_ACCESS_KEY_ID value) 2 secret access key ( AWS_SECRET_ACCESS_KEY value) 3 MCG endpoint Note The output from the oc describe noobaa command lists the internal and external DNS names that are available. When using the internal DNS, the traffic is free. The external DNS uses Load Balancing to process the traffic, and therefore has a cost per hour. 2.2. Accessing the Multicloud Object Gateway from the MCG command-line interface Prerequisites Download the MCG command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Procedure Run the status command to access the endpoint, access key, and secret access key: The output will look similar to the following: 1 endpoint 2 access key 3 secret access key You have the relevant endpoint, access key, and secret access key in order to connect to your applications. For example: If AWS S3 CLI is the application, the following command will list the buckets in OpenShift Data Foundation: 2.3. Support of Multicloud Object Gateway data bucket APIs The following table lists the Multicloud Object Gateway (MCG) data bucket APIs and their support levels. Data buckets Support List buckets Supported Delete bucket Supported Replication configuration is part of MCG bucket class configuration Create bucket Supported A different set of canned ACLs Post bucket Not supported Put bucket Partially supported Replication configuration is part of MCG bucket class configuration Bucket lifecycle Partially supported Object expiration only Policy (Buckets, Objects) Partially supported Bucket policies are supported Bucket Website Supported Bucket ACLs (Get, Put) Supported A different set of canned ACLs Bucket Location Partially supported Returns a default value only Bucket Notification Not supported Bucket Object Versions Supported Get Bucket Info (HEAD) Supported Bucket Request Payment Partially supported Returns the bucket owner Put Object Supported Delete Object Supported Get Object Supported Object ACLs (Get, Put) Supported Get Object Info (HEAD) Supported POST Object Supported Copy Object Supported Multipart Uploads Supported Object Tagging Supported Storage Class Not supported Note No support for the cors, metrics, inventory, analytics, logging, notifications, accelerate, replication, request payment, or locks verbs Chapter 3. Adding storage resources for hybrid or Multicloud 3.1. Creating a new backing store Use this procedure to create a new backing store in OpenShift Data Foundation. Prerequisites Administrator access to OpenShift Data Foundation. Procedure In the OpenShift Web Console, click Storage -> Object Storage . Click the Backing Store tab. Click Create Backing Store . On the Create New Backing Store page, perform the following: Enter a Backing Store Name . Select a Provider . Select a Region . Optional: Enter an Endpoint . Select a Secret from the drop-down list, or create your own secret. Optionally, you can Switch to Credentials view which lets you fill in the required secrets. For more information on creating an OCP secret, see the section Creating the secret in the OpenShift Container Platform documentation. Each backingstore requires a different secret.
For more information on creating the secret for a particular backingstore, see the Section 3.3, "Adding storage resources for hybrid or Multicloud using the MCG command line interface" and follow the procedure for the addition of storage resources using a YAML. Note This menu is relevant for all providers except Google Cloud and local PVC. Enter the Target bucket . The target bucket is a container storage that is hosted on the remote cloud service. It allows you to create a connection that tells the MCG that it can use this bucket for the system. Click Create Backing Store . Verification steps In the OpenShift Web Console, click Storage -> Object Storage . Click the Backing Store tab to view all the backing stores. 3.2. Overriding the default backing store You can use the manualDefaultBackingStore flag to override the default NooBaa backing store and remove it if you do not want to use the default backing store configuration. This provides flexibility to customize your backing store configuration and tailor it to your specific needs. By leveraging this feature, you can further optimize your system and enhance its performance. Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Download the Multicloud Object Gateway (MCG) command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure Check if noobaa-default-backing-store is present: Patch the NooBaa CR to enable manualDefaultBackingStore : Important Use the Multicloud Object Gateway CLI to create a new backing store and update accounts. Create a new default backing store to override the default backing store. For example: Replace NEW-DEFAULT-BACKING-STORE with the name you want for your new default backing store. Update the admin account to use the new default backing store as its default resource: Replace NEW-DEFAULT-BACKING-STORE with the name of the backing store from the step. Updating the default resource for admin accounts ensures that the new configuration is used throughout your system. Configure the default-bucketclass to use the new default backingstore: Optional: Delete the noobaa-default-backing-store. Delete all instances of and buckets associated with noobaa-default-backing-store and update the accounts using it as resource. Delete the noobaa-default-backing-store: You must enable the manualDefaultBackingStore flag before proceeding. Additionally, it is crucial to update all accounts that use the default resource and delete all instances of and buckets associated with the default backing store to ensure a smooth transition. 3.3. Adding storage resources for hybrid or Multicloud using the MCG command line interface The Multicloud Object Gateway (MCG) simplifies the process of spanning data across the cloud provider and clusters. Add a backing storage that can be used by the MCG. 
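Before adding a new backing store, it can help to check which ones already exist. A quick sketch that is not part of the official procedure, assuming the MCG CLI is installed and the default openshift-storage namespace is used:

$ noobaa backingstore list -n openshift-storage
$ oc get backingstores.noobaa.io -n openshift-storage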
Depending on the type of your deployment, you can choose one of the following procedures to create a backing storage: For creating an AWS-backed backingstore, see Section 3.3.1, "Creating an AWS-backed backingstore" For creating an AWS-STS-backed backingstore, see Section 3.3.2, "Creating an AWS-STS-backed backingstore" For creating an IBM COS-backed backingstore, see Section 3.3.3, "Creating an IBM COS-backed backingstore" For creating an Azure-backed backingstore, see Section 3.3.4, "Creating an Azure-backed backingstore" For creating a GCP-backed backingstore, see Section 3.3.5, "Creating a GCP-backed backingstore" For creating a local Persistent Volume-backed backingstore, see Section 3.3.6, "Creating a local Persistent Volume-backed backingstore" For VMware deployments, skip to Section 3.4, "Creating an s3 compatible Multicloud Object Gateway backingstore" for further instructions. 3.3.1. Creating an AWS-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in case of IBM Z use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure Using MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> The name of the backingstore. <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> The AWS access key ID and secret access key you created for this purpose. <bucket-name> The existing AWS bucket name. This argument indicates to the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> Supply and encode your own AWS access key ID and secret access key using Base64, and use the results for <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . <backingstore-secret-name> The name of the backingstore secret created in the step. Apply the following YAML for a specific backing store: <bucket-name> The existing AWS bucket name. <backingstore-secret-name> The name of the backingstore secret created in the step. 3.3.2. Creating an AWS-STS-backed backingstore Amazon Web Services Security Token Service (AWS STS) is an AWS feature and it is a way to authenticate using short-lived credentials. Creating an AWS-STS-backed backingstore involves the following: Creating an AWS role using a script, which helps to get the temporary security credentials for the role session Installing OpenShift Data Foundation operator in AWS STS OpenShift cluster Creating backingstore in AWS STS OpenShift cluster 3.3.2.1. Creating an AWS role using a script You need to create a role and pass the role Amazon resource name (ARN) while installing the OpenShift Data Foundation operator. Prerequisites Configure Red Hat OpenShift Container Platform cluster with AWS STS. For more information, see Configuring an AWS cluster to use short-term credentials . Procedure Create an AWS role using a script that matches OpenID Connect (OIDC) configuration for Multicloud Object Gateway (MCG) on OpenShift Data Foundation. 
The following example shows the details that are required to create the role: where 123456789123 Is the AWS account ID mybucket Is the bucket name (using public bucket configuration) us-east-2 Is the AWS region openshift-storage Is the namespace name Sample script 3.3.2.2. Installing OpenShift Data Foundation operator in AWS STS OpenShift cluster Prerequisites Configure Red Hat OpenShift Container Platform cluster with AWS STS. For more information, see Configuring an AWS cluster to use short-term credentials . Create an AWS role using a script that matches OpenID Connect (OIDC) configuration. For more information, see Creating an AWS role using a script . Procedure Install OpenShift Data Foundation Operator from the Operator Hub. During the installation add the role ARN in the ARN Details field. Make sure that the Update approval field is set to Manual . 3.3.2.3. Creating a new AWS STS backingstore Prerequisites Configure Red Hat OpenShift Container Platform cluster with AWS STS. For more information, see Configuring an AWS cluster to use short-term credentials . Create an AWS role using a script that matches OpenID Connect (OIDC) configuration. For more information, see Creating an AWS role using a script . Install OpenShift Data Foundation Operator. For more information, see Installing OpenShift Data Foundation operator in AWS STS OpenShift cluster . Procedure Install Multicloud Object Gateway (MCG). It is installed with the default backingstore by using the short-lived credentials. After the MCG system is ready, you can create more backingstores of the type aws-sts-s3 using the following MCG command line interface command: where backingstore-name Name of the backingstore aws-sts-role-arn The AWS STS role ARN which will assume role region The AWS bucket region target-bucket The target bucket name on the cloud 3.3.3. Creating an IBM COS-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For example, For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure Using command-line interface From the MCG command-line interface, run the following command: <backingstore_name> The name of the backingstore. <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , and <IBM COS ENDPOINT> An IBM access key ID, secret access key and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. To generate the above keys on IBM cloud, you must include HMAC credentials while creating the service credentials for your target bucket. <bucket-name> An existing IBM bucket name. This argument indicates MCG about the bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using an YAML Create a secret with the credentials: <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> Provide and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of these attributes respectively. <backingstore-secret-name> The name of the backingstore secret. 
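A rough sketch of the shape such a secret can take; the data key names IBM_COS_ACCESS_KEY_ID and IBM_COS_SECRET_ACCESS_KEY are an assumption based on common MCG backingstore secrets, not a verbatim copy of the official example:

apiVersion: v1
kind: Secret
metadata:
  name: <backingstore-secret-name>
  namespace: openshift-storage
type: Opaque
data:
  IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64>
  IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>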
Apply the following YAML for a specific backing store: <bucket-name> an existing IBM COS bucket name. This argument indicates to MCG about the bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration. <endpoint> A regional endpoint that corresponds to the location of the existing IBM bucket name. This argument indicates to MCG about the endpoint to use for its backingstore, and subsequently, data storage and administration. <backingstore-secret-name> The name of the secret created in the step. 3.3.4. Creating an Azure-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in case of IBM Z use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure Using the MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> The name of the backingstore. <AZURE ACCOUNT KEY> and <AZURE ACCOUNT NAME> An AZURE account key and account name you created for this purpose. <blob container name> An existing Azure blob container name. This argument indicates to MCG about the bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration. The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <AZURE ACCOUNT NAME ENCODED IN BASE64> and <AZURE ACCOUNT KEY ENCODED IN BASE64> Supply and encode your own Azure Account Name and Account Key using Base64, and use the results in place of these attributes respectively. <backingstore-secret-name> A unique name of backingstore secret. Apply the following YAML for a specific backing store: <blob-container-name> An existing Azure blob container name. This argument indicates to the MCG about the bucket to use as a target bucket for its backingstore, and subsequently, data storage and administration. <backingstore-secret-name> with the name of the secret created in the step. 3.3.5. Creating a GCP-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in case of IBM Z use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure Using the MCG command-line interface From the MCG command-line interface, run the following command: <backingstore_name> Name of the backingstore. <PATH TO GCP PRIVATE KEY JSON FILE> A path to your GCP private key created for this purpose. <GCP bucket name> An existing GCP object storage bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. 
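A hedged sketch of that command follows; the exact flag names are an assumption, so check noobaa backingstore create google-cloud-storage --help on your installation:

$ noobaa backingstore create google-cloud-storage <backingstore_name> \
    --private-key-json-file=<PATH TO GCP PRIVATE KEY JSON FILE> \
    --target-bucket=<GCP bucket name> -n openshift-storage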
The output will be similar to the following: Adding storage resources using a YAML Create a secret with the credentials: <GCP PRIVATE KEY ENCODED IN BASE64> Provide and encode your own GCP service account private key using Base64, and use the results for this attribute. <backingstore-secret-name> A unique name of the backingstore secret. Apply the following YAML for a specific backing store: <target bucket> An existing Google storage bucket. This argument indicates to the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. <backingstore-secret-name> The name of the secret created in the step. 3.3.6. Creating a local Persistent Volume-backed backingstore Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Note Choose the correct Product Variant according to your architecture. Procedure Adding storage resources using the MCG command-line interface From the MCG command-line interface, run the following command: Note This command must be run from within the openshift-storage namespace. Adding storage resources using YAML Apply the following YAML for a specific backing store: <backingstore_name> The name of the backingstore. <NUMBER OF VOLUMES> The number of volumes you would like to create. Note that increasing the number of volumes scales up the storage. <VOLUME SIZE> Required size in GB of each volume. <CPU REQUEST> Guaranteed amount of CPU requested in CPU unit m . <MEMORY REQUEST> Guaranteed amount of memory requested. <CPU LIMIT> Maximum amount of CPU that can be consumed in CPU unit m . <MEMORY LIMIT> Maximum amount of memory that can be consumed. <LOCAL STORAGE CLASS> The name of the local storage class; ocs-storagecluster-ceph-rbd is recommended. The output will be similar to the following: 3.4. Creating an s3 compatible Multicloud Object Gateway backingstore The Multicloud Object Gateway (MCG) can use any S3 compatible object storage as a backing store, for example, Red Hat Ceph Storage's RADOS Object Gateway (RGW). The following procedure shows how to create an S3 compatible MCG backing store for Red Hat Ceph Storage's RGW. Note that when the RGW is deployed, OpenShift Data Foundation operator creates an S3 compatible backingstore for MCG automatically. Procedure From the MCG command-line interface, run the following command: Note This command must be run from within the openshift-storage namespace. To get the <RGW ACCESS KEY> and <RGW SECRET KEY> , run the following command using your RGW user secret name: Decode the access key ID and the access key from Base64 and keep them. Replace <RGW USER ACCESS KEY> and <RGW USER SECRET ACCESS KEY> with the appropriate, decoded data from the step. Replace <bucket-name> with an existing RGW bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. To get the <RGW endpoint> , see Accessing the RADOS Object Gateway S3 endpoint . The output will be similar to the following: You can also create the backingstore using a YAML: Create a CephObjectStore user.
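A minimal sketch of such a user resource; the store name ocs-storagecluster-cephobjectstore is an assumption based on the default name that OpenShift Data Foundation gives the RGW object store:

apiVersion: ceph.rook.io/v1
kind: CephObjectStoreUser
metadata:
  name: <RGW-Username>
  namespace: openshift-storage
spec:
  store: ocs-storagecluster-cephobjectstore
  displayName: "<Display-name>"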
This also creates a secret containing the RGW credentials: Replace <RGW-Username> and <Display-name> with a unique username and display name. Apply the following YAML for an S3-Compatible backing store: Replace <backingstore-secret-name> with the name of the secret that was created with CephObjectStore in the step. Replace <bucket-name> with an existing RGW bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. To get the <RGW endpoint> , see Accessing the RADOS Object Gateway S3 endpoint . 3.5. Creating a new bucket class Bucket class is a CRD representing a class of buckets that defines tiering policies and data placements for an Object Bucket Class. Use this procedure to create a bucket class in OpenShift Data Foundation. Procedure In the OpenShift Web Console, click Storage -> Object Storage . Click the Bucket Class tab. Click Create Bucket Class . On the Create new Bucket Class page, perform the following: Select the bucket class type and enter a bucket class name. Select the BucketClass type . Choose one of the following options: Standard : data will be consumed by a Multicloud Object Gateway (MCG), deduped, compressed and encrypted. Namespace : data is stored on the NamespaceStores without performing de-duplication, compression or encryption. By default, Standard is selected. Enter a Bucket Class Name . Click . In Placement Policy , select Tier 1 - Policy Type and click . You can choose either one of the options as per your requirements. Spread allows spreading of the data across the chosen resources. Mirror allows full duplication of the data across the chosen resources. Click Add Tier to add another policy tier. Select at least one Backing Store resource from the available list if you have selected Tier 1 - Policy Type as Spread and click . Alternatively, you can also create a new backing store . Note You need to select at least 2 backing stores when you select Policy Type as Mirror in step. Review and confirm Bucket Class settings. Click Create Bucket Class . Verification steps In the OpenShift Web Console, click Storage -> Object Storage . Click the Bucket Class tab and search the new Bucket Class. 3.6. Editing a bucket class Use the following procedure to edit the bucket class components through the YAML file by clicking the edit button on the Openshift web console. Prerequisites Administrator access to OpenShift Web Console. Procedure In the OpenShift Web Console, click Storage -> Object Storage . Click the Bucket Class tab. Click the Action Menu (...) to the Bucket class you want to edit. Click Edit Bucket Class . You are redirected to the YAML file, make the required changes in this file and click Save . 3.7. Editing backing stores for bucket class Use the following procedure to edit an existing Multicloud Object Gateway (MCG) bucket class to change the underlying backing stores used in a bucket class. Prerequisites Administrator access to OpenShift Web Console. A bucket class. Backing stores. Procedure In the OpenShift Web Console, click Storage -> Object Storage . Click the Bucket Class tab. Click the Action Menu (...) to the Bucket class you want to edit. Click Edit Bucket Class Resources . On the Edit Bucket Class Resources page, edit the bucket class resources either by adding a backing store to the bucket class or by removing a backing store from the bucket class. You can also edit bucket class resources created with one or two tiers and different placement policies. 
To add a backing store to the bucket class, select the name of the backing store. To remove a backing store from the bucket class, uncheck the name of the backing store. Click Save . Chapter 4. Managing namespace buckets Namespace buckets let you connect data repositories on different providers together, so that you can interact with all of your data through a single unified view. Add the object bucket associated with each provider to the namespace bucket, and access your data through the namespace bucket to see all of your object buckets at once. This lets you write to your preferred storage provider while reading from multiple other storage providers, greatly reducing the cost of migrating to a new storage provider. Note A namespace bucket can only be used if its write target is available and functional. 4.1. Amazon S3 API endpoints for objects in namespace buckets You can interact with objects in the namespace buckets using the Amazon Simple Storage Service (S3) API. Ensure that the credentials provided for the Multicloud Object Gateway (MCG) enables you to perform the AWS S3 namespace bucket operations. You can use the AWS tool, aws-cli to verify that all the operations can be performed on the target bucket. Also, the list bucket which is using this MCG account shows the target bucket. Red Hat OpenShift Data Foundation supports the following namespace bucket operations: ListBuckets ListObjects ListMultipartUploads ListObjectVersions GetObject HeadObject CopyObject PutObject CreateMultipartUpload UploadPartCopy UploadPart ListParts AbortMultipartUpload PubObjectTagging DeleteObjectTagging GetObjectTagging GetObjectAcl PutObjectAcl DeleteObject DeleteObjects See the Amazon S3 API reference documentation for the most up-to-date information about these operations and how to use them. Additional resources Amazon S3 REST API Reference Amazon S3 CLI Reference 4.2. Adding a namespace bucket using the Multicloud Object Gateway CLI and YAML For more information about namespace buckets, see Managing namespace buckets . Depending on the type of your deployment and whether you want to use YAML or the Multicloud Object Gateway (MCG) CLI, choose one of the following procedures to add a namespace bucket: Adding an AWS S3 namespace bucket using YAML Adding an IBM COS namespace bucket using YAML Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI 4.2.1. Adding an AWS S3 namespace bucket using YAML Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). For information, see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: where <namespacestore-secret-name> is a unique NamespaceStore name. You must provide and encode your own AWS access key ID and secret access key using Base64 , and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: <resource-name> The name you want to give to the resource. <namespacestore-secret-name> The secret created in the step. <namespace-secret> The namespace where the secret can be found. 
<target-bucket> The target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . A namespace policy of type single requires the following configuration: <my-bucket-class> The unique namespace bucket class name. <resource> The name of a single NamespaceStore that defines the read and write target of the namespace bucket. A namespace policy of type multi requires the following configuration: <my-bucket-class> A unique bucket class name. <write-resource> The name of a single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A list of the names of the NamespaceStores that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the earlier step using the following YAML: <resource-name> The name you want to give to the resource. <my-bucket> The name you want to give to the bucket. <my-bucket-class> The bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 4.2.2. Adding an IBM COS namespace bucket using YAML Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: <namespacestore-secret-name> A unique NamespaceStore name. You must provide and encode your own IBM COS access key ID and secret access key using Base64 , and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: <IBM COS ENDPOINT> The appropriate IBM COS endpoint. <namespacestore-secret-name> The secret created in the step. <namespace-secret> The namespace where the secret can be found. <target-bucket> The target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . The namespace policy of type single requires the following configuration: <my-bucket-class> The unique namespace bucket class name. <resource> The name of a single NamespaceStore that defines the read and write target of the namespace bucket. The namespace policy of type multi requires the following configuration: <my-bucket-class> The unique bucket class name. <write-resource> The name of a single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A list of the NamespaceStores names that defines the read targets of the namespace bucket. To create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the step, apply the following YAML: <resource-name> The name you want to give to the resource. <my-bucket> The name you want to give to the bucket. <my-bucket-class> The bucket class created in the step. 
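For illustration only, an Object Bucket Claim that consumes such a bucket class typically looks like the following sketch; the storage class name assumes the default MCG object bucket provisioner, openshift-storage.noobaa.io:

apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: <resource-name>
  namespace: openshift-storage
spec:
  generateBucketName: <my-bucket>
  storageClassName: openshift-storage.noobaa.io
  additionalConfig:
    bucketclass: <my-bucket-class>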
After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 4.2.3. Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Download the MCG command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. For instance, in case of IBM Z use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure In the MCG command-line interface, create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in MCG namespace buckets. <namespacestore> The name of the NamespaceStore. <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> The AWS access key ID and secret access key you created for this purpose. <bucket-name> The existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy can be either single or multi . To create a namespace bucket class with a namespace policy of type single : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <resource> A single namespace-store that defines the read and write target of the namespace bucket. To create a namespace bucket class with a namespace policy of type multi : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <write-resource> A single namespace-store that defines the write target of the namespace bucket. <read-resources>s A list of namespace-stores separated by commas that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the step. <bucket-name> A bucket name of your choice. <custom-bucket-class> The name of the bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and a ConfigMap with the same name and in the same namespace as that of the OBC. 4.2.4. Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Download the MCG command-line interface: Note Specify the appropriate architecture for enabling the repositories using subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. 
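As a rough sketch of the repository-based route mentioned in the prerequisites, the commands usually follow the pattern below; the repository name depends on your OpenShift Data Foundation version and architecture, so treat it as an assumption and substitute the one that matches your subscription:

# Enable the OpenShift Data Foundation repository for your architecture (example name shown)
subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms
# Install the MCG command-line interface package
yum install mcg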
Procedure In the MCG command-line interface, create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. <namespacestore> The name of the NamespaceStore. <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> An IBM access key ID, secret access key, and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. <bucket-name> An existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . To create a namespace bucket class with a namespace policy of type single : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <resource> A single NamespaceStore that defines the read and write target of the namespace bucket. To create a namespace bucket class with a namespace policy of type multi : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <write-resource> A single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A comma-separated list of NamespaceStores that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the earlier step. <bucket-name> A bucket name of your choice. <custom-bucket-class> The name of the bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 4.3. Adding a namespace bucket using the OpenShift Container Platform user interface You can add namespace buckets using the OpenShift Container Platform user interface. For information about namespace buckets, see Managing namespace buckets . Prerequisites Ensure that Openshift Container Platform with OpenShift Data Foundation operator is already installed. Access to the Multicloud Object Gateway (MCG). Procedure On the OpenShift Web Console, navigate to Storage -> Object Storage -> Namespace Store tab. Click Create namespace store to create a namespacestore resources to be used in the namespace bucket. Enter a namespacestore name. Choose a provider and region. Either select an existing secret, or click Switch to credentials to create a secret by entering a secret key and secret access key. Enter a target bucket. Click Create . On the Namespace Store tab, verify that the newly created namespacestore is in the Ready state. Repeat steps 2 and 3 until you have created all the desired amount of resources. Navigate to Bucket Class tab and click Create Bucket Class . Choose Namespace BucketClass type radio button. Enter a BucketClass name and click . Choose a Namespace Policy Type for your namespace bucket, and then click . If your namespace policy type is Single , you need to choose a read resource. If your namespace policy type is Multi , you need to choose read resources and a write resource. If your namespace policy type is Cache , you need to choose a Hub namespace store that defines the read and write target of the namespace bucket. 
Select one Read and Write NamespaceStore which defines the read and write targets of the namespace bucket and click . Review your new bucket class details, and then click Create Bucket Class . Navigate to Bucket Class tab and verify that your newly created resource is in the Ready phase. Navigate to Object Bucket Claims tab and click Create Object Bucket Claim . Enter ObjectBucketClaim Name for the namespace bucket. Select StorageClass as openshift-storage.noobaa.io . Select the BucketClass that you created earlier for your namespacestore from the list. By default, noobaa-default-bucket-class gets selected. Click Create . The namespace bucket is created along with Object Bucket Claim for your namespace. Navigate to Object Bucket Claims tab and verify that the Object Bucket Claim created is in Bound state. Navigate to Object Buckets tab and verify that the your namespace bucket is present in the list and is in Bound state. 4.4. Sharing legacy application data with cloud native application using S3 protocol Many legacy applications use file systems to share data sets. You can access and share the legacy data in the file system by using the S3 operations. To share data you need to do the following: Export the pre-existing file system datasets, that is, RWX volume such as Ceph FileSystem (CephFS) or create a new file system datasets using the S3 protocol. Access file system datasets from both file system and S3 protocol. Configure S3 accounts and map them to the existing or a new file system unique identifiers (UIDs) and group identifiers (GIDs). 4.4.1. Creating a NamespaceStore to use a file system Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). Procedure Log into the OpenShift Web Console. Click Storage -> Object Storage . Click the NamespaceStore tab to create NamespaceStore resources to be used in the namespace bucket. Click Create namespacestore . Enter a name for the NamespaceStore. Choose Filesystem as the provider. Choose the Persistent volume claim. Enter a folder name. If the folder name exists, then that folder is used to create the NamespaceStore or else a folder with that name is created. Click Create . Verify the NamespaceStore is in the Ready state. 4.4.2. Creating accounts with NamespaceStore filesystem configuration You can either create a new account with NamespaceStore filesystem configuration or convert an existing normal account into a NamespaceStore filesystem account by editing the YAML. Note You cannot remove a NamespaceStore filesystem configuration from an account. Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface: Procedure Create a new account with NamespaceStore filesystem configuration using the MCG command-line interface. For example: allow_bucket_create Indicates whether the account is allowed to create new buckets. Supported values are true or false . Default value is true . allowed_buckets A comma separated list of bucket names to which the user is allowed to have access and management rights. default_resource The NamespaceStore resource on which the new buckets will be created when using the S3 CreateBucket operation. The NamespaceStore must be backed by an RWX (ReadWriteMany) persistent volume claim (PVC). full_permission Indicates whether the account should be allowed full permission or not. Supported values are true or false . Default value is false . 
new_buckets_path The filesystem path where directories corresponding to new buckets will be created. The path is inside the filesystem of NamespaceStore filesystem PVCs where new directories are created to act as the filesystem mapping of newly created object bucket classes. nsfs_account_config A mandatory field that indicates if the account is used for NamespaceStore filesystem. nsfs_only Indicates whether the account is used only for NamespaceStore filesystem or not. Supported values are true or false . Default value is false . If it is set to 'true', it limits you from accessing other types of buckets. uid The user ID of the filesystem to which the MCG account will be mapped and it is used to access and manage data on the filesystem gid The group ID of the filesystem to which the MCG account will be mapped and it is used to access and manage data on the filesystem The MCG system sends a response with the account configuration and its S3 credentials: You can list all the custom resource definition (CRD) based accounts by using the following command: If you are interested in a particular account, you can read its custom resource definition (CRD) directly by the account name: 4.4.3. Accessing legacy application data from the openshift-storage namespace When using the Multicloud Object Gateway (MCG) NamespaceStore filesystem (NSFS) feature, you need to have the Persistent Volume Claim (PVC) where the data resides in the openshift-storage namespace. In almost all cases, the data you need to access is not in the openshift-storage namespace, but in the namespace that the legacy application uses. In order to access data stored in another namespace, you need to create a PVC in the openshift-storage namespace that points to the same CephFS volume that the legacy application uses. Procedure Display the application namespace with scc : <application_namespace> Specify the name of the application namespace. For example: Navigate into the application namespace: For example: Ensure that a ReadWriteMany (RWX) PVC is mounted on the pod that you want to consume from the noobaa S3 endpoint using the MCG NSFS feature: Check the mount point of the Persistent Volume (PV) inside your pod. Get the volume name of the PV from the pod: <pod_name> Specify the name of the pod. For example: In this example, the name of the volume for the PVC is cephfs-write-workload-generator-no-cache-pv-claim . List all the mounts in the pod, and check for the mount point of the volume that you identified in the step: For example: Confirm the mount point of the RWX PV in your pod: <mount_path> Specify the path to the mount point that you identified in the step. For example: Ensure that the UID and SELinux labels are the same as the ones that the legacy namespace uses: For example: Get the information of the legacy application RWX PV that you want to make accessible from the openshift-storage namespace: <pv_name> Specify the name of the PV. For example: Ensure that the PVC from the legacy application is accessible from the openshift-storage namespace so that one or more noobaa-endpoint pods can access the PVC. Find the values of the subvolumePath and volumeHandle from the volumeAttributes . 
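One quick way to inspect those attributes is to dump the PV as YAML; the PV name shown is a placeholder:

oc get pv <pv_name> -o yaml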
You can get these values from the YAML description of the legacy application PV: For example: Use the subvolumePath and volumeHandle values that you identified in the step to create a new PV and PVC object in the openshift-storage namespace that points to the same CephFS volume as the legacy application PV: Example YAML file : 1 The storage capacity of the PV that you are creating in the openshift-storage namespace must be the same as the original PV. 2 The volume handle for the target PV that you create in openshift-storage needs to have a different handle than the original application PV, for example, add -clone at the end of the volume handle. 3 The storage capacity of the PVC that you are creating in the openshift-storage namespace must be the same as the original PVC. Create the PV and PVC in the openshift-storage namespace using the YAML file specified in the step: <YAML_file> Specify the name of the YAML file. For example: Ensure that the PVC is available in the openshift-storage namespace: Navigate into the openshift-storage project: Create the NSFS namespacestore: <nsfs_namespacestore> Specify the name of the NSFS namespacestore. <cephfs_pvc_name> Specify the name of the CephFS PVC in the openshift-storage namespace. For example: Ensure that the noobaa-endpoint pod restarts and that it successfully mounts the PVC at the NSFS namespacestore, for example, /nsfs/legacy-namespace mountpoint: <noobaa_endpoint_pod_name> Specify the name of the noobaa-endpoint pod. For example: Create a MCG user account: <user_account> Specify the name of the MCG user account. <gid_number> Specify the GID number. <uid_number> Specify the UID number. Important Use the same UID and GID as that of the legacy application. You can find it from the output. For example: Create a MCG bucket. Create a dedicated folder for S3 inside the NSFS share on the CephFS PV and PVC of the legacy application pod: For example: Create the MCG bucket using the nsfs/ path: For example: Check the SELinux labels of the folders residing in the PVCs in the legacy application and openshift-storage namespaces: For example: For example: In these examples, you can see that the SELinux labels are not the same which results in permission denied or access issues. Ensure that the legacy application and openshift-storage pods use the same SELinux labels on the files. You can do this in one of the following ways: Section 4.4.3.1, "Changing the default SELinux label on the legacy application project to match the one in the openshift-storage project" . Section 4.4.3.2, "Modifying the SELinux label only for the deployment config that has the pod which mounts the legacy application PVC" . Delete the NSFS namespacestore: Delete the MCG bucket: For example: Delete the MCG user account: For example: Delete the NSFS namespacestore: For example: Delete the PV and PVC: Important Before you delete the PV and PVC, ensure that the PV has a retain policy configured. <cephfs_pv_name> Specify the CephFS PV name of the legacy application. <cephfs_pvc_name> Specify the CephFS PVC name of the legacy application. For example: 4.4.3.1. Changing the default SELinux label on the legacy application project to match the one in the openshift-storage project Display the current openshift-storage namespace with sa.scc.mcs : Edit the legacy application namespace, and modify the sa.scc.mcs with the value from the sa.scc.mcs of the openshift-storage namespace: For example: For example: Restart the legacy application pod. 
A relabel of all the files take place and now the SELinux labels match with the openshift-storage deployment. 4.4.3.2. Modifying the SELinux label only for the deployment config that has the pod which mounts the legacy application PVC Create a new scc with the MustRunAs and seLinuxOptions options, with the Multi Category Security (MCS) that the openshift-storage project uses. Example YAML file: Create a service account for the deployment and add it to the newly created scc . Create a service account: <service_account_name>` Specify the name of the service account. For example: Add the service account to the newly created scc : For example: Patch the legacy application deployment so that it uses the newly created service account. This allows you to specify the SELinux label in the deployment: For example: Edit the deployment to specify the security context to use at the SELinux label in the deployment configuration: Add the following lines: <security_context_value> You can find this value when you execute the command to create a dedicated folder for S3 inside the NSFS share, on the CephFS PV and PVC of the legacy application pod. For example: Ensure that the security context to be used at the SELinux label in the deployment configuration is specified correctly: For example" The legacy application is restarted and begins using the same SELinux labels as the openshift-storage namespace. Chapter 5. Securing Multicloud Object Gateway 5.1. Changing the default account credentials to ensure better security in the Multicloud Object Gateway Change and rotate your Multicloud Object Gateway (MCG) account credentials using the command-line interface to prevent issues with applications, and to ensure better account security. 5.1.1. Resetting the noobaa account password Prerequisites A running OpenShift Data Foundation cluster. Download the Multicloud Object Gateway (MCG) command-line interface for easier management. For instructions, see Accessing the Multicloud Object Gateway with your applications . Procedure To reset the noobaa account password, run the following command: Example: Example output: Important To access the admin account credentials run the noobaa status command from the terminal: 5.1.2. Regenerating the S3 credentials for the accounts Prerequisites A running OpenShift Data Foundation cluster. Download the Multicloud Object Gateway (MCG) command-line interface for easier management. For instructions, see Accessing the Multicloud Object Gateway with your applications . Procedure Get the account name. For listing the accounts, run the following command: Example output: Alternatively, run the oc get noobaaaccount command from the terminal: Example output: To regenerate the noobaa account S3 credentials, run the following command: Once you run the noobaa account regenerate command it will prompt a warning that says "This will invalidate all connections between S3 clients and NooBaa which are connected using the current credentials." , and ask for confirmation: Example: Example output: On approving, it will regenerate the credentials and eventually print them: 5.1.3. Regenerating the S3 credentials for the OBC Prerequisites A running OpenShift Data Foundation cluster. Download the Multicloud Object Gateway (MCG) command-line interface for easier management. For instructions, see Accessing the Multicloud Object Gateway with your applications . 
Procedure To get the OBC name, run the following command: Example output: Alternatively, run the oc get obc command from the terminal: Example output: To regenerate the noobaa OBC S3 credentials, run the following command: Once you run the noobaa obc regenerate command it will prompt a warning that says "This will invalidate all connections between the S3 clients and noobaa which are connected using the current credentials." , and ask for confirmation: Example: Example output: On approving, it will regenerate the credentials and eventually print them: 5.2. Enabling secured mode deployment for Multicloud Object Gateway You can specify a range of IP addresses that should be allowed to reach the Multicloud Object Gateway (MCG) load balancer services to enable secure mode deployment. This helps to control the IP addresses that can access the MCG services. Note You can disable the MCG load balancer usage by setting the disableLoadBalancerService variable in the storagecluster custom resource definition (CRD) while deploying OpenShift Data Foundation using the command line interface. This helps to restrict MCG from creating any public resources for private clusters and to disable the MCG service EXTERNAL-IP . For more information, see the Red Hat Knowledgebase article Install Red Hat OpenShift Data Foundation 4.X in internal mode using command line interface . For information about disabling MCG load balancer service after deploying OpenShift Data Foundation, see Disabling Multicloud Object Gateway external service after deploying OpenShift Data Foundation . Prerequisites A running OpenShift Data Foundation cluster. In case of a bare metal deployment, ensure that the load balancer controller supports setting the loadBalancerSourceRanges attribute in the Kubernetes services. Procedure Edit the NooBaa custom resource (CR) to specify the range of IP addresses that can access the MCG services after deploying OpenShift Data Foundation. noobaa The NooBaa CR type that controls the NooBaa system deployment. noobaa The name of the NooBaa CR. For example: loadBalancerSourceSubnets A new field that can be added under spec in the NooBaa CR to specify the IP addresses that should have access to the NooBaa services. In this example, all the IP addresses that are in the subnet 10.0.0.0/16 or 192.168.10.0/32 will be able to access MCG S3 and security token service (STS) while the other IP addresses are not allowed to access. Verification steps To verify if the specified IP addresses are set, in the OpenShift Web Console, run the following command and check if the output matches with the IP addresses provided to MCG: Chapter 6. Mirroring data for hybrid and Multicloud buckets You can use the simplified process of the Multicloud Object Gateway (MCG) to span data across cloud providers and clusters. Before you create a bucket class that reflects the data management policy and mirroring, you must add a backing storage that can be used by the MCG. For information, see Chapter 4, Chapter 3, Adding storage resources for hybrid or Multicloud . You can set up mirroring data by using the OpenShift UI, YAML or MCG command-line interface. See the following sections: Section 6.1, "Creating bucket classes to mirror data using the MCG command-line-interface" Section 6.2, "Creating bucket classes to mirror data using a YAML" 6.1. Creating bucket classes to mirror data using the MCG command-line-interface Prerequisites Ensure to download Multicloud Object Gateway (MCG) command-line interface. 
Procedure From the Multicloud Object Gateway (MCG) command-line interface, run the following command to create a bucket class with a mirroring policy: Set the newly created bucket class to a new bucket claim to generate a new bucket that will be mirrored between two locations: 6.2. Creating bucket classes to mirror data using a YAML Apply the following YAML. This YAML is a hybrid example that mirrors data between local Ceph storage and AWS: Add the following lines to your standard Object Bucket Claim (OBC): For more information about OBCs, see Chapter 9, Object Bucket Claim . Chapter 7. Bucket policies in the Multicloud Object Gateway OpenShift Data Foundation supports AWS S3 bucket policies. Bucket policies allow you to grant users access permissions for buckets and the objects in them. 7.1. Introduction to bucket policies Bucket policies are an access policy option available for you to grant permission to your AWS S3 buckets and objects. Bucket policies use JSON-based access policy language. For more information about access policy language, see AWS Access Policy Language Overview . 7.2. Using bucket policies in Multicloud Object Gateway Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . A valid Multicloud Object Gateway user account. See Creating a user in the Multicloud Object Gateway for instructions to create a user account. Procedure To use bucket policies in the MCG: Create the bucket policy in JSON format. For example: Replace the example account in the policy with a valid Multicloud Object Gateway user account. Using the AWS S3 client, use the put-bucket-policy command to apply the bucket policy to your S3 bucket: Replace ENDPOINT with the S3 endpoint. Replace MyBucket with the bucket to set the policy on. Replace BucketPolicy with the bucket policy JSON file. Add --no-verify-ssl if you are using the default self-signed certificates. For example: For more information on the put-bucket-policy command, see the AWS CLI Command Reference for put-bucket-policy . Note The principal element specifies the user that is allowed or denied access to a resource, such as a bucket. Currently, only NooBaa accounts can be used as principals. In the case of object bucket claims, NooBaa automatically creates an account obc-account.<generated bucket name>@noobaa.io . Note Bucket policy conditions are not supported. Additional resources There are many available elements for bucket policies with regard to access permissions. For details on these elements and examples of how they can be used to control the access permissions, see AWS Access Policy Language Overview . For more examples of bucket policies, see AWS Bucket Policy Examples . 7.3. Creating a user in the Multicloud Object Gateway Prerequisites A running OpenShift Data Foundation Platform. Download the MCG command-line interface for easier management. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found at Download Red Hat OpenShift Data Foundation page . Note Choose the correct Product Variant according to your architecture. Procedure Execute the following command to create an MCG user account: <noobaa-account-name> Specify the name of the new MCG user account.
--allow_bucket_create Allows the user to create new buckets. --allowed_buckets Sets the user's allowed bucket list (use commas or multiple flags). --default_resource Sets the default resource. The new buckets are created on this default resource (including the future ones). --full_permission Allows this account to access all existing and future buckets. Important You need to provide permission to access at least one bucket or full permission to access all the buckets. Chapter 8. Multicloud Object Gateway bucket replication Data replication from one Multicloud Object Gateway (MCG) bucket to another MCG bucket provides higher resiliency and better collaboration options. These buckets can be either data buckets or namespace buckets backed by any supported storage solution (AWS S3, Azure, and so on). A replication policy is composed of a list of replication rules. Each rule defines the destination bucket, and can specify a filter based on an object key prefix. Configuring a complementing replication policy on the second bucket results in bidirectional replication. Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway, see Accessing the Multicloud Object Gateway with your applications. Download the Multicloud Object Gateway (MCG) command-line interface: Important Specify the appropriate architecture for enabling the repositories using the subscription manager. For instance, in case of IBM Power use the following command: Alternatively, you can install the mcg package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/packages Important Choose the correct Product Variant according to your architecture. Note Certain MCG features are only available in certain MCG versions, and the appropriate MCG CLI tool version must be used to fully utilize MCG's features. To replicate a bucket, see Replicating a bucket to another bucket . To set a bucket class replication policy, see Setting a bucket class replication policy . 8.1. Replicating a bucket to another bucket You can set the bucket replication policy in two ways: Replicating a bucket to another bucket using the MCG command-line interface . Replicating a bucket to another bucket using a YAML . 8.1.1. Replicating a bucket to another bucket using the MCG command-line interface You can set a replication policy for a Multicloud Object Gateway (MCG) data bucket at the time of creation of an object bucket claim (OBC). You must define the replication policy parameter in a JSON file. Procedure From the MCG command-line interface, run the following command to create an OBC with a specific replication policy: <bucket-claim-name> Specify the name of the bucket claim. /path/to/json-file.json Is the path to a JSON file which defines the replication policy. Example JSON file: "prefix" Is optional. It is the prefix of the object keys that should be replicated, and you can even leave it empty, for example, {"prefix": ""} . For example: 8.1.2. Replicating a bucket to another bucket using a YAML You can set a replication policy for a Multicloud Object Gateway (MCG) data bucket at the time of creation of an object bucket claim (OBC) or you can edit the YAML later. You must provide the policy as a JSON-compliant string that adheres to the format shown in the following procedure. Procedure Apply the following YAML: <desired-bucket-claim> Specify the name of the bucket claim. <desired-namespace> Specify the namespace.
<desired-bucket-name> Specify the prefix of the bucket name. "rule_id" Specify the ID number of the rule, for example, {"rule_id": "rule-1"} . "destination_bucket" Specify the name of the destination bucket, for example, {"destination_bucket": "first.bucket"} . "prefix" Is optional. It is the prefix of the object keys that should be replicated, and you can even leave it empty, for example, {"prefix": ""} . Additional information For more information about OBCs, see Object Bucket Claim . 8.2. Setting a bucket class replication policy It is possible to set up a replication policy that automatically applies to all the buckets created under a certain bucket class. You can do this in two ways: Setting a bucket class replication policy using the MCG command-line interface . Setting a bucket class replication policy using a YAML . 8.2.1. Setting a bucket class replication policy using the MCG command-line interface You can set a replication policy for a Multicloud Object Gateway (MCG) data bucket at the time of creation of a bucket class. You must define the replication-policy parameter in a JSON file. You can set a bucket class replication policy for the Placement and Namespace bucket classes. Procedure From the MCG command-line interface, run the following command: <bucketclass-name> Specify the name of the bucket class. <backingstores> Specify the name of a backingstore. You can pass many backingstores separated by commas. /path/to/json-file.json Is the path to a JSON file which defines the replication policy. Example JSON file: "prefix" Is optional. The prefix of the object keys gets replicated. You can leave it empty, for example, {"prefix": ""} . For example: This example creates a placement bucket class with a specific replication policy defined in the JSON file. 8.2.2. Setting a bucket class replication policy using a YAML You can set a replication policy for a Multicloud Object Gateway (MCG) data bucket at the time of creation of a bucket class or you can edit its YAML later. You must provide the policy as a JSON-compliant string that adheres to the format shown in the following procedure. Procedure Apply the following YAML: This YAML is an example that creates a placement bucket class. Each object bucket claim (OBC) object that is uploaded to the bucket is filtered based on the prefix and is replicated to first.bucket . <desired-app-label> Specify a label for the app. <desired-bucketclass-name> Specify the bucket class name. <desired-namespace> Specify the namespace in which the bucket class gets created. <backingstore> Specify the name of a backingstore. You can pass many backingstores. "rule_id" Specify the ID number of the rule, for example, {"rule_id": "rule-1"} . "destination_bucket" Specify the name of the destination bucket, for example, {"destination_bucket": "first.bucket"} . "prefix" Is optional. The prefix of the object keys gets replicated. You can leave it empty, for example, {"prefix": ""} . 8.3. Enabling log based bucket replication When creating a bucket replication policy, you can use logs so that recent data is replicated more quickly, while the default scan-based replication works on replicating the rest of the data. Important This feature requires setting up bucket logs on AWS or Azure. For more information about setting up AWS logs, see Enabling Amazon S3 server access logging . The AWS logs bucket needs to be created in the same region as the source NamespaceStore AWS bucket.
Note This feature is only supported in buckets that are backed by a NamespaceStore. Buckets backed by BackingStores cannot utilize log-based replication. 8.3.1. Enabling log based bucket replication for new namespace buckets using OpenShift Web Console in Amazon Web Services environment You can optimize replication by using the event logs of the Amazon Web Services (AWS) cloud environment. You enable log based bucket replication for new namespace buckets using the web console during the creation of namespace buckets. Prerequisites Ensure that object logging is enabled in AWS. For more information, see the "Using the S3 console" section in Enabling Amazon S3 server access logging . Administrator access to OpenShift Web Console. Procedure In the OpenShift Web Console, navigate to Storage -> Object Storage -> Object Bucket Claims . Click Create ObjectBucketClaim . Enter the name of ObjectBucketName and select StorageClass and BucketClass. Select the Enable replication check box to enable replication. In the Replication policy section, select the Optimize replication using event logs checkbox. Enter the name of the bucket that will contain the logs under Event log Bucket . If the logs are not stored in the root of the bucket, provide the full path without s3:// Enter a prefix to replicate only the objects whose name begins with the given prefix. 8.3.2. Enabling log based bucket replication for existing namespace buckets using YAML You can enable log based bucket replication for the existing buckets that are created using the command line interface or by applying a YAML, and not the buckets that are created using AWS S3 commands. Procedure Edit the YAML of the bucket's OBC to enable log based bucket replication. Add the following under spec : Note It is also possible to add this to the YAML of an OBC before it is created. rule_id Specify an ID of your choice for identifying the rule. destination_bucket Specify the name of the target MCG bucket that the objects are copied to. (optional) {"filter": {"prefix": <>}} Specify a prefix string that you can set to filter the objects that are replicated. log_replication_info Specify an object that contains data related to log-based replication optimization. {"logs_location": {"logs_bucket": <>}} is set to the location of the AWS S3 server access logs. 8.3.3. Enabling log based bucket replication in Microsoft Azure Prerequisites Refer to Microsoft Azure documentation and ensure that you have completed the following tasks in the Microsoft Azure portal: Ensure that you have created a new application and noted down the name, application (client) ID, and directory (tenant) ID. For information, see Register an application . Ensure that a new client secret is created and the application secret is noted down. Ensure that a new Log Analytics workspace is created and its name and workspace ID are noted down. For information, see Create a Log Analytics workspace . Ensure that the Reader role is assigned under Access control and members are selected and the name of the application that you registered in the previous step is provided. For more information, see Assign Azure roles using the Azure portal . Ensure that a new storage account is created and the Access keys are noted down. In the Monitoring section of the storage account created, select a blob and in the Diagnostic settings screen, select only StorageWrite and StorageDelete , and in the destination details add the Log Analytics workspace that you created earlier.
Ensure that a blob is selected in the Diagnostic settings screen of the Monitoring section of the storage account created. Also, ensure that only StorageWrite and StorageDelete is selected and in the destination details, the Log Analytics workspace that you created earlier is added. For more information, see Diagnostic settings in Azure Monitor . Ensure that two new containers for object source and object destination are created. Administrator access to OpenShift Web Console. Procedure Create a secret with credentials to be used by the namespacestores . Create a NamespaceStore backed by a container created in Azure. For more information, see Adding a namespace bucket using the OpenShift Container Platform user interface . Create a new Namespace-Bucketclass and OBC that utilizes it. Check the object bucket name by looking in the YAML of target OBC, or by listing all S3 buckets, for example, - s3 ls . Use the following template to apply an Azure replication policy on your source OBC by adding the following in its YAML, under .spec : sync_deletion Specify a boolean value, true or false . destination_bucket Make sure to use the name of the object bucket, and not the claim. The name can be retrieved using the s3 ls command, or by looking for the value in an OBC's YAML. Verification steps Write objects to the source bucket. Wait until MCG replicates them. Delete the objects from the source bucket. Verify the objects were removed from the target bucket. 8.3.4. Enabling log-based bucket replication deletion Prerequisites Administrator access to OpenShift Web Console. AWS Server Access Logging configured for the desired bucket. Procedure In the OpenShift Web Console, navigate to Storage -> Object Storage -> Object Bucket Claims . Click Create new Object bucket claim . (Optional) In the Replication rules section, select the Sync deletion checkbox for each rule separately. Enter the name of the bucket that will contain the logs under Event log Bucket . If the logs are not stored in the root of the bucket, provide the full path without s3:// Enter a prefix to replicate only the objects whose name begins with the given prefix. Chapter 9. Object Bucket Claim An Object Bucket Claim can be used to request an S3 compatible bucket backend for your workloads. You can create an Object Bucket Claim in three ways: Section 9.1, "Dynamic Object Bucket Claim" Section 9.2, "Creating an Object Bucket Claim using the command line interface" Section 9.3, "Creating an Object Bucket Claim using the OpenShift Web Console" An object bucket claim creates a new bucket and an application account in NooBaa with permissions to the bucket, including a new access key and secret access key. The application account is allowed to access only a single bucket and can't create new buckets by default. 9.1. Dynamic Object Bucket Claim Similar to Persistent Volumes, you can add the details of the Object Bucket claim (OBC) to your application's YAML, and get the object service endpoint, access key, and secret access key available in a configuration map and secret. It is easy to read this information dynamically into environment variables of your application. Note The Multicloud Object Gateway endpoints uses self-signed certificates only if OpenShift uses self-signed certificates. Using signed certificates in OpenShift automatically replaces the Multicloud Object Gateway endpoints certificates with signed certificates. Get the certificate currently used by Multicloud Object Gateway by accessing the endpoint via the browser. 
See Accessing the Multicloud Object Gateway with your applications for more information. Procedure Add the following lines to your application YAML: These lines are the OBC itself. Replace <obc-name> with the a unique OBC name. Replace <obc-bucket-name> with a unique bucket name for your OBC. To automate the use of the OBC add more lines to the YAML file. For example: The example is the mapping between the bucket claim result, which is a configuration map with data and a secret with the credentials. This specific job claims the Object Bucket from NooBaa, which creates a bucket and an account. Replace all instances of <obc-name> with your OBC name. Replace <your application image> with your application image. Apply the updated YAML file: Replace <yaml.file> with the name of your YAML file. To view the new configuration map, run the following: Replace obc-name with the name of your OBC. You can expect the following environment variables in the output: BUCKET_HOST - Endpoint to use in the application. BUCKET_PORT - The port available for the application. The port is related to the BUCKET_HOST . For example, if the BUCKET_HOST is https://my.example.com , and the BUCKET_PORT is 443, the endpoint for the object service would be https://my.example.com:443 . BUCKET_NAME - Requested or generated bucket name. AWS_ACCESS_KEY_ID - Access key that is part of the credentials. AWS_SECRET_ACCESS_KEY - Secret access key that is part of the credentials. Important Retrieve the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY . The names are used so that it is compatible with the AWS S3 API. You need to specify the keys while performing S3 operations, especially when you read, write or list from the Multicloud Object Gateway (MCG) bucket. The keys are encoded in Base64. Decode the keys before using them. <obc_name> Specify the name of the object bucket claim. 9.2. Creating an Object Bucket Claim using the command line interface When creating an Object Bucket Claim (OBC) using the command-line interface, you get a configuration map and a Secret that together contain all the information your application needs to use the object storage service. Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Procedure Use the command-line interface to generate the details of a new bucket and credentials. Run the following command: Replace <obc-name> with a unique OBC name, for example, myappobc . Additionally, you can use the --app-namespace option to specify the namespace where the OBC configuration map and secret will be created, for example, myapp-namespace . For example: The MCG command-line-interface has created the necessary configuration and has informed OpenShift about the new OBC. Run the following command to view the OBC: For example: Run the following command to view the YAML file for the new OBC: For example: Inside of your openshift-storage namespace, you can find the configuration map and the secret to use this OBC. The CM and the secret have the same name as the OBC. Run the following command to view the secret: For example: The secret gives you the S3 access credentials. Run the following command to view the configuration map: For example: The configuration map contains the S3 endpoint information for your application. 9.3. 
Creating an Object Bucket Claim using the OpenShift Web Console You can create an Object Bucket Claim (OBC) using the OpenShift Web Console. Prerequisites Administrative access to the OpenShift Web Console. In order for your applications to communicate with the OBC, you need to use the configmap and secret. For more information about this, see Section 9.1, "Dynamic Object Bucket Claim" . Procedure Log into the OpenShift Web Console. On the left navigation bar, click Storage -> Object Storage -> Object Bucket Claims -> Create Object Bucket Claim . Enter a name for your object bucket claim and select the appropriate storage class based on your deployment, internal or external, from the dropdown menu: Internal mode The following storage classes, which were created after deployment, are available for use: ocs-storagecluster-ceph-rgw uses the Ceph Object Gateway (RGW) openshift-storage.noobaa.io uses the Multicloud Object Gateway (MCG) External mode The following storage classes, which were created after deployment, are available for use: ocs-external-storagecluster-ceph-rgw uses the RGW openshift-storage.noobaa.io uses the MCG Note The RGW OBC storage class is only available with fresh installations of OpenShift Data Foundation version 4.5. It does not apply to clusters upgraded from OpenShift Data Foundation releases. Click Create . Once you create the OBC, you are redirected to its detail page. 9.4. Attaching an Object Bucket Claim to a deployment Once created, Object Bucket Claims (OBCs) can be attached to specific deployments. Prerequisites Administrative access to the OpenShift Web Console. Procedure On the left navigation bar, click Storage -> Object Storage -> Object Bucket Claims . Click the Action menu (...) to the OBC you created. From the drop-down menu, select Attach to Deployment . Select the desired deployment from the Deployment Name list, then click Attach . 9.5. Viewing object buckets using the OpenShift Web Console You can view the details of object buckets created for Object Bucket Claims (OBCs) using the OpenShift Web Console. Prerequisites Administrative access to the OpenShift Web Console. Procedure Log into the OpenShift Web Console. On the left navigation bar, click Storage -> Object Storage -> Object Buckets . Optonal: You can also navigate to the details page of a specific OBC, and click the Resource link to view the object buckets for that OBC. Select the object bucket of which you want to see the details. Once selected you are navigated to the Object Bucket Details page. 9.6. Deleting Object Bucket Claims Prerequisites Administrative access to the OpenShift Web Console. Procedure On the left navigation bar, click Storage -> Object Storage -> Object Bucket Claims . Click the Action menu (...) to the Object Bucket Claim (OBC) you want to delete. Select Delete Object Bucket Claim . Click Delete . Chapter 10. Caching policy for object buckets A cache bucket is a namespace bucket with a hub target and a cache target. The hub target is an S3 compatible large object storage bucket. The cache bucket is the local Multicloud Object Gateway (MCG) bucket. You can create a cache bucket that caches an AWS bucket or an IBM COS bucket. AWS S3 IBM COS 10.1. Creating an AWS cache bucket Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. 
In case of IBM Z use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the namespacestore. Replace <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> with an AWS access key ID and secret access key you created for this purpose. Replace <bucket-name> with an existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. You can also add storage resources by applying a YAML. First create a secret with credentials: You must supply and encode your own AWS access key ID and secret access key using Base64, and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <namespacestore-secret-name> with a unique name. Then apply the following YAML: Replace <namespacestore> with a unique name. Replace <namespacestore-secret-name> with the secret created in the step. Replace <namespace-secret> with the namespace used to create the secret in the step. Replace <target-bucket> with the AWS S3 bucket you created for the namespacestore. Run the following command to create a bucket class: Replace <my-cache-bucket-class> with a unique bucket class name. Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field. Replace <namespacestore> with the namespacestore created in the step. Run the following command to create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in step 2. Replace <my-bucket-claim> with a unique name. Replace <custom-bucket-class> with the name of the bucket class created in step 2. 10.2. Creating an IBM COS cache bucket Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found here https://access.redhat.com/downloads/content/547/ver=4/rhel---8/4/x86_64/package . Note Choose the correct Product Variant according to your architecture. Procedure Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the NamespaceStore. Replace <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> with an IBM access key ID, secret access key and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. Replace <bucket-name> with an existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. You can also add storage resources by applying a YAML. 
First, create a secret with the credentials: You must supply and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <namespacestore-secret-name> with a unique name. Then apply the following YAML: Replace <namespacestore> with a unique name. Replace <IBM COS ENDPOINT> with the appropriate IBM COS endpoint. Replace <backingstore-secret-name> with the secret created in the step. Replace <namespace-secret> with the namespace used to create the secret in the step. Replace <target-bucket> with the IBM COS bucket you created for the namespacestore. Run the following command to create a bucket class: Replace <my-bucket-class> with a unique bucket class name. Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field. Replace <namespacestore> with the namespacestore created in the step. Run the following command to create a bucket using an Object Bucket Claim resource that uses the bucket class defined in step 2. Replace <my-bucket-claim> with a unique name. Replace <custom-bucket-class> with the name of the bucket class created in step 2. Chapter 11. Lifecycle bucket configuration in Multicloud Object Gateway Multicloud Object Gateway (MCG) lifecycle provides a way to reduce storage costs due to accumulated data objects. Deletion of expired objects is a simplified way that enables handling of unused data. Data expiration is a part of Amazon Web Services (AWS) lifecycle management and sets an expiration date for automatic deletion. The minimal time resolution of the lifecycle expiration is one day. For more information, see Expiring objects . The AWS S3 API is used to configure lifecycle buckets in MCG. For information about the data bucket APIs and their support level, see Support of Multicloud Object Gateway data bucket APIs . There are a few limitations with the expiration rule API for MCG in comparison with AWS: ExpiredObjectDeleteMarker is accepted but it is not processed. There is no option to define specific expiration conditions for non-current versions. Chapter 12. Scaling Multicloud Object Gateway performance The Multicloud Object Gateway (MCG) performance may vary from one environment to another. In some cases, specific applications require faster performance, which can be easily addressed by scaling S3 endpoints. The MCG resource pool is a group of NooBaa daemon containers that provide two types of services enabled by default: Storage service S3 endpoint service S3 endpoint service The S3 endpoint is a service that every Multicloud Object Gateway (MCG) provides by default that handles the heavy lifting data digestion in the MCG. The endpoint service handles the inline data chunking, deduplication, compression, and encryption, and it accepts data placement instructions from the MCG. 12.1. Automatic scaling of Multicloud Object Gateway endpoints The number of Multicloud Object Gateway (MCG) endpoints scales automatically when the load on the MCG S3 service increases or decreases. OpenShift Data Foundation clusters are deployed with one active MCG endpoint. Each MCG endpoint pod is configured by default with 1 CPU and 2Gi memory request, with limits matching the request. When the CPU load on the endpoint crosses over an 80% usage threshold for a consistent period of time, a second endpoint is deployed, lowering the load on the first endpoint.
When the average CPU load on both endpoints falls below the 80% threshold for a consistent period of time, one of the endpoints is deleted. This feature improves performance and serviceability of the MCG. You can scale the Horizontal Pod Autoscaler (HPA) for noobaa-endpoint using the following oc patch command, for example: The example above sets the minCount to 3 and the maxCount to 10. 12.2. Increasing CPU and memory for PV pool resources The MCG default configuration supports low resource consumption. However, when you need to increase CPU and memory to accommodate specific workloads and to increase MCG performance for the workloads, you can configure the required values for CPU and memory in the OpenShift Web Console. Procedure In the OpenShift Web Console, navigate to Storage -> Object Storage -> Backing Store . Select the relevant backing store and click on YAML. Scroll down until you find spec: and update pvPool with CPU and memory. Add a new property of limits and then add cpu and memory. Example reference: Click Save . Verification steps To verify, you can check the resource values of the PV pool pods. Chapter 13. Accessing the RADOS Object Gateway S3 endpoint Users can access the RADOS Object Gateway (RGW) endpoint directly. In previous versions of Red Hat OpenShift Data Foundation, the RGW service needed to be manually exposed to create an RGW public route. As of OpenShift Data Foundation version 4.7, the RGW route is created by default and is named rook-ceph-rgw-ocs-storagecluster-cephobjectstore . Chapter 14. Using TLS certificates for applications accessing RGW Most S3 applications require a TLS certificate in forms such as an option included in the Deployment configuration file, passed as a file in the request, or stored in /etc/pki paths. TLS certificates for RADOS Object Gateway (RGW) are stored as a Kubernetes secret, and you need to fetch the certificate details from that secret. Prerequisites A running OpenShift Data Foundation cluster. Procedure For internal RGW server Get the TLS certificate and key from the Kubernetes secret: <secret_name> The default Kubernetes secret name is <objectstore_name>-cos-ceph-rgw-tls-cert . Specify the name of the object store. For external RGW server Get the TLS certificate from the Kubernetes secret: <secret_name> The default Kubernetes secret name is ceph-rgw-tls-cert and it is an opaque type of secret. The key value for storing the TLS certificates is cert .
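As an illustrative sketch only, the following shows one way an S3 client could consume the extracted certificate; the output file name rgw-ca.crt, the route host, and the access keys are placeholder assumptions, not values from this document:
# Extract the internal RGW TLS certificate from its secret into a local CA bundle file
# (<secret_name> is the secret for your object store; rgw-ca.crt is an assumed file name)
oc get secrets/<secret_name> -n openshift-storage -o jsonpath='{.data..tls\.crt}' | base64 -d > rgw-ca.crt
# Point an S3 client, for example the AWS CLI, at the RGW route and trust the extracted certificate
# (<rgw_route_host>, <access_key>, and <secret_key> are placeholders)
AWS_ACCESS_KEY_ID=<access_key> AWS_SECRET_ACCESS_KEY=<secret_key> aws s3 ls --endpoint-url https://<rgw_route_host> --ca-bundle ./rgw-ca.crt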
[ "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "oc describe noobaa -n openshift-storage", "Name: noobaa Namespace: openshift-storage Labels: <none> Annotations: <none> API Version: noobaa.io/v1alpha1 Kind: NooBaa Metadata: Creation Timestamp: 2019-07-29T16:22:06Z Generation: 1 Resource Version: 6718822 Self Link: /apis/noobaa.io/v1alpha1/namespaces/openshift-storage/noobaas/noobaa UID: 019cfb4a-b21d-11e9-9a02-06c8de012f9e Spec: Status: Accounts: Admin: Secret Ref: Name: noobaa-admin Namespace: openshift-storage Actual Image: noobaa/noobaa-core:4.0 Observed Generation: 1 Phase: Ready Readme: Welcome to NooBaa! ----------------- Welcome to NooBaa! ----------------- NooBaa Core Version: NooBaa Operator Version: Lets get started: 1. Connect to Management console: Read your mgmt console login information (email & password) from secret: \"noobaa-admin\". kubectl get secret noobaa-admin -n openshift-storage -o json | jq '.data|map_values(@base64d)' Open the management console service - take External IP/DNS or Node Port or use port forwarding: kubectl port-forward -n openshift-storage service/noobaa-mgmt 11443:443 & open https://localhost:11443 2. Test S3 client: kubectl port-forward -n openshift-storage service/s3 10443:443 & 1 NOOBAA_ACCESS_KEY=USD(kubectl get secret noobaa-admin -n openshift-storage -o json | jq -r '.data.AWS_ACCESS_KEY_ID|@base64d') 2 NOOBAA_SECRET_KEY=USD(kubectl get secret noobaa-admin -n openshift-storage -o json | jq -r '.data.AWS_SECRET_ACCESS_KEY|@base64d') alias s3='AWS_ACCESS_KEY_ID=USDNOOBAA_ACCESS_KEY AWS_SECRET_ACCESS_KEY=USDNOOBAA_SECRET_KEY aws --endpoint https://localhost:10443 --no-verify-ssl s3' s3 ls Services: Service Mgmt: External DNS: https://noobaa-mgmt-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a3406079515be11eaa3b70683061451e-1194613580.us-east-2.elb.amazonaws.com:443 Internal DNS: https://noobaa-mgmt.openshift-storage.svc:443 Internal IP: https://172.30.235.12:443 Node Ports: https://10.0.142.103:31385 Pod Ports: https://10.131.0.19:8443 serviceS3: External DNS: 3 https://s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a340f4e1315be11eaa3b70683061451e-943168195.us-east-2.elb.amazonaws.com:443 Internal DNS: https://s3.openshift-storage.svc:443 Internal IP: https://172.30.86.41:443 Node Ports: https://10.0.142.103:31011 Pod Ports: https://10.131.0.19:6443", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa status -n openshift-storage", "INFO[0000] Namespace: openshift-storage INFO[0000] INFO[0000] CRD Status: INFO[0003] ✅ Exists: CustomResourceDefinition \"noobaas.noobaa.io\" INFO[0003] ✅ Exists: CustomResourceDefinition \"backingstores.noobaa.io\" INFO[0003] ✅ Exists: CustomResourceDefinition \"bucketclasses.noobaa.io\" INFO[0004] ✅ Exists: CustomResourceDefinition \"objectbucketclaims.objectbucket.io\" INFO[0004] ✅ Exists: CustomResourceDefinition \"objectbuckets.objectbucket.io\" INFO[0004] INFO[0004] Operator Status: INFO[0004] ✅ Exists: Namespace \"openshift-storage\" INFO[0004] ✅ Exists: ServiceAccount \"noobaa\" INFO[0005] ✅ Exists: Role \"ocs-operator.v0.0.271-6g45f\" INFO[0005] ✅ Exists: RoleBinding \"ocs-operator.v0.0.271-6g45f-noobaa-f9vpj\" 
INFO[0006] ✅ Exists: ClusterRole \"ocs-operator.v0.0.271-fjhgh\" INFO[0006] ✅ Exists: ClusterRoleBinding \"ocs-operator.v0.0.271-fjhgh-noobaa-pdxn5\" INFO[0006] ✅ Exists: Deployment \"noobaa-operator\" INFO[0006] INFO[0006] System Status: INFO[0007] ✅ Exists: NooBaa \"noobaa\" INFO[0007] ✅ Exists: StatefulSet \"noobaa-core\" INFO[0007] ✅ Exists: Service \"noobaa-mgmt\" INFO[0008] ✅ Exists: Service \"s3\" INFO[0008] ✅ Exists: Secret \"noobaa-server\" INFO[0008] ✅ Exists: Secret \"noobaa-operator\" INFO[0008] ✅ Exists: Secret \"noobaa-admin\" INFO[0009] ✅ Exists: StorageClass \"openshift-storage.noobaa.io\" INFO[0009] ✅ Exists: BucketClass \"noobaa-default-bucket-class\" INFO[0009] ✅ (Optional) Exists: BackingStore \"noobaa-default-backing-store\" INFO[0010] ✅ (Optional) Exists: CredentialsRequest \"noobaa-cloud-creds\" INFO[0010] ✅ (Optional) Exists: PrometheusRule \"noobaa-prometheus-rules\" INFO[0010] ✅ (Optional) Exists: ServiceMonitor \"noobaa-service-monitor\" INFO[0011] ✅ (Optional) Exists: Route \"noobaa-mgmt\" INFO[0011] ✅ (Optional) Exists: Route \"s3\" INFO[0011] ✅ Exists: PersistentVolumeClaim \"db-noobaa-core-0\" INFO[0011] ✅ System Phase is \"Ready\" INFO[0011] ✅ Exists: \"noobaa-admin\" #------------------# #- Mgmt Addresses -# #------------------# ExternalDNS : [https://noobaa-mgmt-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a3406079515be11eaa3b70683061451e-1194613580.us-east-2.elb.amazonaws.com:443] ExternalIP : [] NodePorts : [https://10.0.142.103:31385] InternalDNS : [https://noobaa-mgmt.openshift-storage.svc:443] InternalIP : [https://172.30.235.12:443] PodPorts : [https://10.131.0.19:8443] #--------------------# #- Mgmt Credentials -# #--------------------# email : [email protected] password : HKLbH1rSuVU0I/souIkSiA== #----------------# #- S3 Addresses -# #----------------# 1 ExternalDNS : [https://s3-openshift-storage.apps.mycluster-cluster.qe.rh-ocs.com https://a340f4e1315be11eaa3b70683061451e-943168195.us-east-2.elb.amazonaws.com:443] ExternalIP : [] NodePorts : [https://10.0.142.103:31011] InternalDNS : [https://s3.openshift-storage.svc:443] InternalIP : [https://172.30.86.41:443] PodPorts : [https://10.131.0.19:6443] #------------------# #- S3 Credentials -# #------------------# 2 AWS_ACCESS_KEY_ID : jVmAsu9FsvRHYmfjTiHV 3 AWS_SECRET_ACCESS_KEY : E//420VNedJfATvVSmDz6FMtsSAzuBv6z180PT5c #------------------# #- Backing Stores -# #------------------# NAME TYPE TARGET-BUCKET PHASE AGE noobaa-default-backing-store aws-s3 noobaa-backing-store-15dc896d-7fe0-4bed-9349-5942211b93c9 Ready 141h35m32s #------------------# #- Bucket Classes -# #------------------# NAME PLACEMENT PHASE AGE noobaa-default-bucket-class {Tiers:[{Placement: BackingStores:[noobaa-default-backing-store]}]} Ready 141h35m33s #-----------------# #- Bucket Claims -# #-----------------# No OBC's found.", "AWS_ACCESS_KEY_ID=<AWS_ACCESS_KEY_ID> AWS_SECRET_ACCESS_KEY=<AWS_SECRET_ACCESS_KEY> aws --endpoint <ENDPOINT> --no-verify-ssl s3 ls", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "oc get backingstore NAME TYPE PHASE AGE noobaa-default-backing-store pv-pool Creating 102s", "oc patch noobaa/noobaa --type json --patch='[{\"op\":\"add\",\"path\":\"/spec/manualDefaultBackingStore\",\"value\":true}]'", "noobaa backingstore create pv-pool _NEW-DEFAULT-BACKING-STORE_ --num-volumes 1 --pv-size-gb 16", "noobaa 
account update [email protected] --new_default_resource=_NEW-DEFAULT-BACKING-STORE_", "oc patch Bucketclass noobaa-default-bucket-class -n openshift-storage --type=json --patch='[{\"op\": \"replace\", \"path\": \"/spec/placementPolicy/tiers/0/backingStores/0\", \"value\": \"NEW-DEFAULT-BACKING-STORE\"}]'", "oc delete backingstore noobaa-default-backing-store -n openshift-storage | oc patch -n openshift-storage backingstore/noobaa-default-backing-store --type json --patch='[ { \"op\": \"remove\", \"path\": \"/metadata/finalizers\" } ]'", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa backingstore create aws-s3 <backingstore_name> --access-key=<AWS ACCESS KEY> --secret-key=<AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage", "INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"aws-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-aws-resource\"", "apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> namespace: openshift-storage type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: awsS3: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <bucket-name> type: aws-s3", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::123456789123:oidc-provider/mybucket-oidc.s3.us-east-2.amazonaws.com\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"mybucket-oidc.s3.us-east-2.amazonaws.com:sub\": [ \"system:serviceaccount:openshift-storage:noobaa\", \"system:serviceaccount:openshift-storage:noobaa-endpoint\" ] } } } ] }", "#!/bin/bash set -x This is a sample script to help you deploy MCG on AWS STS cluster. This script shows how to create role-policy and then create the role in AWS. For more information see: https://docs.openshift.com/rosa/authentication/assuming-an-aws-iam-role-for-a-service-account.html WARNING: This is a sample script. You need to adjust the variables based on your requirement. Variables : user variables - REPLACE these variables with your values: ROLE_NAME=\"<role-name>\" # role name that you pick in your AWS account NAMESPACE=\"<namespace>\" # namespace name where MCG is running. For OpenShift Data Foundation, it is openshift-storage. MCG variables SERVICE_ACCOUNT_NAME_1=\"<service-account-name-1>\" # The service account name of statefulset core and deployment operator (MCG operator) SERVICE_ACCOUNT_NAME_2=\"<service-account-name-2>\" # The service account name of deployment endpoint (MCG endpoint) AWS variables Make sure these values are not empty (AWS_ACCOUNT_ID, OIDC_PROVIDER) AWS_ACCOUNT_ID is your AWS account number AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query \"Account\" --output text) If you want to create the role before using the cluster, replace this field too. The OIDC provider is in the structure: 1) <OIDC-bucket>.s3.<aws-region>.amazonaws.com. 
for OIDC bucket configurations are in an S3 public bucket 2) `<characters>.cloudfront.net` for OIDC bucket configurations in an S3 private bucket with a public CloudFront distribution URL OIDC_PROVIDER=USD(oc get authentication cluster -ojson | jq -r .spec.serviceAccountIssuer | sed -e \"s/^https:\\/\\///\") the permission (S3 full access) POLICY_ARN_STRINGS=\"arn:aws:iam::aws:policy/AmazonS3FullAccess\" Creating the role (with AWS command line interface) read -r -d '' TRUST_RELATIONSHIP <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_PROVIDER}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_PROVIDER}:sub\": [ \"system:serviceaccount:USD{NAMESPACE}:USD{SERVICE_ACCOUNT_NAME_1}\", \"system:serviceaccount:USD{NAMESPACE}:USD{SERVICE_ACCOUNT_NAME_2}\" ] } } } ] } EOF echo \"USD{TRUST_RELATIONSHIP}\" > trust.json aws iam create-role --role-name \"USDROLE_NAME\" --assume-role-policy-document file://trust.json --description \"role for demo\" while IFS= read -r POLICY_ARN; do echo -n \"Attaching USDPOLICY_ARN ... \" aws iam attach-role-policy --role-name \"USDROLE_NAME\" --policy-arn \"USD{POLICY_ARN}\" echo \"ok.\" done <<< \"USDPOLICY_ARN_STRINGS\"", "noobaa backingstore create aws-sts-s3 <backingstore-name> --aws-sts-arn=<aws-sts-role-arn> --region=<region> --target-bucket=<target-bucket>", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa backingstore create ibm-cos <backingstore_name> --access-key=<IBM ACCESS KEY> --secret-key=<IBM SECRET ACCESS KEY> --endpoint=<IBM COS ENDPOINT> --target-bucket <bucket-name> -n openshift-storage", "INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"ibm-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-ibm-resource\"", "apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> namespace: openshift-storage type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: ibmCos: endpoint: <endpoint> secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <bucket-name> type: ibm-cos", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa backingstore create azure-blob <backingstore_name> --account-key=<AZURE ACCOUNT KEY> --account-name=<AZURE ACCOUNT NAME> --target-blob-container <blob container name> -n openshift-storage", "INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"azure-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-azure-resource\"", "apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> type: Opaque data: AccountName: <AZURE ACCOUNT NAME ENCODED IN BASE64> AccountKey: <AZURE ACCOUNT KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: azureBlob: secret: name: 
<backingstore-secret-name> namespace: openshift-storage targetBlobContainer: <blob-container-name> type: azure-blob", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa backingstore create google-cloud-storage <backingstore_name> --private-key-json-file=<PATH TO GCP PRIVATE KEY JSON FILE> --target-bucket <GCP bucket name> -n openshift-storage", "INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"google-gcp\" INFO[0002] ✅ Created: Secret \"backing-store-google-cloud-storage-gcp\"", "apiVersion: v1 kind: Secret metadata: name: <backingstore-secret-name> type: Opaque data: GoogleServiceAccountPrivateKeyJson: <GCP PRIVATE KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: googleCloudStorage: secret: name: <backingstore-secret-name> namespace: openshift-storage targetBucket: <target bucket> type: google-cloud-storage", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa -n openshift-storage backingstore create pv-pool <backingstore_name> --num-volumes <NUMBER OF VOLUMES> --pv-size-gb <VOLUME SIZE> --request-cpu <CPU REQUEST> --request-memory <MEMORY REQUEST> --limit-cpu <CPU LIMIT> --limit-memory <MEMORY LIMIT> --storage-class <LOCAL STORAGE CLASS>", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <backingstore_name> namespace: openshift-storage spec: pvPool: numVolumes: <NUMBER OF VOLUMES> resources: requests: storage: <VOLUME SIZE> cpu: <CPU REQUEST> memory: <MEMORY REQUEST> limits: cpu: <CPU LIMIT> memory: <MEMORY LIMIT> storageClass: <LOCAL STORAGE CLASS> type: pv-pool", "INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Exists: BackingStore \"local-mcg-storage\"", "noobaa backingstore create s3-compatible rgw-resource --access-key=<RGW ACCESS KEY> --secret-key=<RGW SECRET KEY> --target-bucket=<bucket-name> --endpoint=<RGW endpoint> -n openshift-storage", "get secret <RGW USER SECRET NAME> -o yaml -n openshift-storage", "INFO[0001] ✅ Exists: NooBaa \"noobaa\" INFO[0002] ✅ Created: BackingStore \"rgw-resource\" INFO[0002] ✅ Created: Secret \"backing-store-secret-rgw-resource\"", "apiVersion: ceph.rook.io/v1 kind: CephObjectStoreUser metadata: name: <RGW-Username> namespace: openshift-storage spec: store: ocs-storagecluster-cephobjectstore displayName: \"<Display-name>\"", "apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <backingstore-name> namespace: openshift-storage spec: s3Compatible: endpoint: <RGW endpoint> secret: name: <backingstore-secret-name> namespace: openshift-storage signatureVersion: v4 targetBucket: <RGW-bucket-name> type: s3-compatible", "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <resource-name> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: 
<namespace-secret> targetBucket: <target-bucket> type: aws-s3", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>", "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <namespacestore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage", "noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage", "noobaa obc create my-bucket-claim -n 
openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "noobaa account create <noobaa-account-name> [flags]", "noobaa account create testaccount --full_permission --nsfs_account_config --gid 10001 --uid 10001 -default_resource fs_namespacestore", "NooBaaAccount spec: allow_bucket_creation: true Allowed_buckets: full_permission: true permission_list: [] default_resource: noobaa-default-namespace-store Nsfs_account_config: gid: 10001 new_buckets_path: / nsfs_only: true uid: 10001 INFO[0006] ✅ Exists: Secret \"noobaa-account-testaccount\" Connection info: AWS_ACCESS_KEY_ID : <aws-access-key-id> AWS_SECRET_ACCESS_KEY : <aws-secret-access-key>", "noobaa account list NAME ALLOWED_BUCKETS DEFAULT_RESOURCE PHASE AGE testaccount [*] noobaa-default-backing-store Ready 1m17s", "oc get noobaaaccount/testaccount -o yaml spec: allow_bucket_creation: true allowed_buckets: full_permission: true permission_list: [] default_resource: noobaa-default-namespace-store nsfs_account_config: gid: 10001 new_buckets_path: / nsfs_only: true uid: 10001", "oc get ns <application_namespace> -o yaml | grep scc", "oc get ns testnamespace -o yaml | grep scc openshift.io/sa.scc.mcs: s0:c26,c5 openshift.io/sa.scc.supplemental-groups: 1000660000/10000 openshift.io/sa.scc.uid-range: 1000660000/10000", "oc project <application_namespace>", "oc project testnamespace", "oc get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE cephfs-write-workload-generator-no-cache-pv-claim Bound pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a 10Gi RWX ocs-storagecluster-cephfs 12s", "oc get pod NAME READY STATUS RESTARTS AGE cephfs-write-workload-generator-no-cache-1-cv892 1/1 Running 0 11s", "oc get pods <pod_name> -o jsonpath='{.spec.volumes[]}'", "oc get pods cephfs-write-workload-generator-no-cache-1-cv892 -o jsonpath='{.spec.volumes[]}' {\"name\":\"app-persistent-storage\",\"persistentVolumeClaim\":{\"claimName\":\"cephfs-write-workload-generator-no-cache-pv-claim\"}}", "oc get pods <pod_name> -o jsonpath='{.spec.containers[].volumeMounts}'", "oc get pods cephfs-write-workload-generator-no-cache-1-cv892 -o jsonpath='{.spec.containers[].volumeMounts}' [{\"mountPath\":\"/mnt/pv\",\"name\":\"app-persistent-storage\"},{\"mountPath\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"name\":\"kube-api-access-8tnc5\",\"readOnly\":true}]", "oc exec -it <pod_name> -- df <mount_path>", "oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- df /mnt/pv main Filesystem 1K-blocks Used Available Use% Mounted on 172.30.202.87:6789,172.30.120.254:6789,172.30.77.247:6789:/volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c 10485760 0 10485760 0% /mnt/pv", "oc exec -it <pod_name> -- ls -latrZ <mount_path>", "oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- ls -latrZ /mnt/pv/ total 567 drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c26,c5 2 May 25 06:35 . -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c26,c5 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log drwxrwxrwx. 
3 root root system_u:object_r:container_file_t:s0:c26,c5 30 May 25 06:35 ..", "oc get pv | grep <pv_name>", "oc get pv | grep pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a 10Gi RWX Delete Bound testnamespace/cephfs-write-workload-generator-no-cache-pv-claim ocs-storagecluster-cephfs 47s", "oc get pv <pv_name> -o yaml", "oc get pv pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a -o yaml apiVersion: v1 kind: PersistentVolume metadata: annotations: pv.kubernetes.io/provisioned-by: openshift-storage.cephfs.csi.ceph.com creationTimestamp: \"2022-05-25T06:27:49Z\" finalizers: - kubernetes.io/pv-protection name: pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a resourceVersion: \"177458\" uid: 683fa87b-5192-4ccf-af2f-68c6bcf8f500 spec: accessModes: - ReadWriteMany capacity: storage: 10Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: cephfs-write-workload-generator-no-cache-pv-claim namespace: testnamespace resourceVersion: \"177453\" uid: aa58fb91-c3d2-475b-bbee-68452a613e1a csi: controllerExpandSecretRef: name: rook-csi-cephfs-provisioner namespace: openshift-storage driver: openshift-storage.cephfs.csi.ceph.com nodeStageSecretRef: name: rook-csi-cephfs-node namespace: openshift-storage volumeAttributes: clusterID: openshift-storage fsName: ocs-storagecluster-cephfilesystem storage.kubernetes.io/csiProvisionerIdentity: 1653458225664-8081-openshift-storage.cephfs.csi.ceph.com subvolumeName: csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213 subvolumePath: /volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c volumeHandle: 0001-0011-openshift-storage-0000000000000001-cc416d9e-dbf3-11ec-b286-0a580a810213 persistentVolumeReclaimPolicy: Delete storageClassName: ocs-storagecluster-cephfs volumeMode: Filesystem status: phase: Bound", "cat << EOF >> pv-openshift-storage.yaml apiVersion: v1 kind: PersistentVolume metadata: name: cephfs-pv-legacy-openshift-storage spec: storageClassName: \"\" accessModes: - ReadWriteMany capacity: storage: 10Gi 1 csi: driver: openshift-storage.cephfs.csi.ceph.com nodeStageSecretRef: name: rook-csi-cephfs-node namespace: openshift-storage volumeAttributes: # Volume Attributes can be copied from the Source testnamespace PV \"clusterID\": \"openshift-storage\" \"fsName\": \"ocs-storagecluster-cephfilesystem\" \"staticVolume\": \"true\" # rootpath is the subvolumePath: you copied from the Source testnamespace PV \"rootPath\": /volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c volumeHandle: 0001-0011-openshift-storage-0000000000000001-cc416d9e-dbf3-11ec-b286-0a580a810213-clone 2 persistentVolumeReclaimPolicy: Retain volumeMode: Filesystem --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cephfs-pvc-legacy namespace: openshift-storage spec: storageClassName: \"\" accessModes: - ReadWriteMany resources: requests: storage: 10Gi 3 volumeMode: Filesystem # volumeName should be same as PV name volumeName: cephfs-pv-legacy-openshift-storage EOF", "oc create -f <YAML_file>", "oc create -f pv-openshift-storage.yaml persistentvolume/cephfs-pv-legacy-openshift-storage created persistentvolumeclaim/cephfs-pvc-legacy created", "oc get pvc -n openshift-storage NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE cephfs-pvc-legacy Bound cephfs-pv-legacy-openshift-storage 10Gi RWX 14s", "oc project openshift-storage Now using project \"openshift-storage\" on server \"https://api.cluster-5f6ng.5f6ng.sandbox65.opentlc.com:6443\".", "noobaa namespacestore create nsfs 
<nsfs_namespacestore> --pvc-name=' <cephfs_pvc_name> ' --fs-backend='CEPH_FS'", "noobaa namespacestore create nsfs legacy-namespace --pvc-name='cephfs-pvc-legacy' --fs-backend='CEPH_FS'", "oc exec -it <noobaa_endpoint_pod_name> -- df -h /nsfs/ <nsfs_namespacestore>", "oc exec -it noobaa-endpoint-5875f467f5-546c6 -- df -h /nsfs/legacy-namespace Filesystem Size Used Avail Use% Mounted on 172.30.202.87:6789,172.30.120.254:6789,172.30.77.247:6789:/volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c 10G 0 10G 0% /nsfs/legacy-namespace", "noobaa account create <user_account> --full_permission --allow_bucket_create=true --new_buckets_path='/' --nsfs_only=true --nsfs_account_config=true --gid <gid_number> --uid <uid_number> --default_resource='legacy-namespace'", "noobaa account create leguser --full_permission --allow_bucket_create=true --new_buckets_path='/' --nsfs_only=true --nsfs_account_config=true --gid 0 --uid 1000660000 --default_resource='legacy-namespace'", "oc exec -it <pod_name> -- mkdir <mount_path> /nsfs", "oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- mkdir /mnt/pv/nsfs", "noobaa api bucket_api create_bucket '{ \"name\": \" <bucket_name> \", \"namespace\":{ \"write_resource\": { \"resource\": \" <nsfs_namespacestore> \", \"path\": \"nsfs/\" }, \"read_resources\": [ { \"resource\": \" <nsfs_namespacestore> \", \"path\": \"nsfs/\" }] } }'", "noobaa api bucket_api create_bucket '{ \"name\": \"legacy-bucket\", \"namespace\":{ \"write_resource\": { \"resource\": \"legacy-namespace\", \"path\": \"nsfs/\" }, \"read_resources\": [ { \"resource\": \"legacy-namespace\", \"path\": \"nsfs/\" }] } }'", "oc exec -it <noobaa_endpoint_pod_name> -n openshift-storage -- ls -ltraZ /nsfs/ <nsfs_namespacstore>", "oc exec -it noobaa-endpoint-5875f467f5-546c6 -n openshift-storage -- ls -ltraZ /nsfs/legacy-namespace total 567 drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c0,c26 2 May 25 06:35 . -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c0,c26 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c0,c26 30 May 25 06:35 ..", "oc exec -it <pod_name> -- ls -latrZ <mount_path>", "oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- ls -latrZ /mnt/pv/ total 567 drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c26,c5 2 May 25 06:35 . -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c26,c5 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log drwxrwxrwx. 
3 root root system_u:object_r:container_file_t:s0:c26,c5 30 May 25 06:35 ..", "noobaa bucket delete <bucket_name>", "noobaa bucket delete legacy-bucket", "noobaa account delete <user_account>", "noobaa account delete leguser", "noobaa namespacestore delete <nsfs_namespacestore>", "noobaa namespacestore delete legacy-namespace", "oc delete pv <cephfs_pv_name>", "oc delete pvc <cephfs_pvc_name>", "oc delete pv cephfs-pv-legacy-openshift-storage", "oc delete pvc cephfs-pvc-legacy", "oc get ns openshift-storage -o yaml | grep sa.scc.mcs openshift.io/sa.scc.mcs: s0:c26,c0", "oc edit ns <appplication_namespace>", "oc edit ns testnamespace", "oc get ns <application_namespace> -o yaml | grep sa.scc.mcs", "oc get ns testnamespace -o yaml | grep sa.scc.mcs openshift.io/sa.scc.mcs: s0:c26,c0", "cat << EOF >> scc.yaml allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: false allowedCapabilities: null apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs groups: - system:authenticated kind: SecurityContextConstraints metadata: annotations: name: restricted-pvselinux priority: null readOnlyRootFilesystem: false requiredDropCapabilities: - KILL - MKNOD - SETUID - SETGID runAsUser: type: MustRunAsRange seLinuxContext: seLinuxOptions: level: s0:c26,c0 type: MustRunAs supplementalGroups: type: RunAsAny users: [] volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret EOF", "oc create -f scc.yaml", "oc create serviceaccount <service_account_name>", "oc create serviceaccount testnamespacesa", "oc adm policy add-scc-to-user restricted-pvselinux -z <service_account_name>", "oc adm policy add-scc-to-user restricted-pvselinux -z testnamespacesa", "oc patch dc/ <pod_name> '{\"spec\":{\"template\":{\"spec\":{\"serviceAccountName\": \" <service_account_name> \"}}}}'", "oc patch dc/cephfs-write-workload-generator-no-cache --patch '{\"spec\":{\"template\":{\"spec\":{\"serviceAccountName\": \"testnamespacesa\"}}}}'", "oc edit dc <pod_name> -n <application_namespace>", "spec: template: metadata: securityContext: seLinuxOptions: Level: <security_context_value>", "oc edit dc cephfs-write-workload-generator-no-cache -n testnamespace", "spec: template: metadata: securityContext: seLinuxOptions: level: s0:c26,c0", "oc get dc <pod_name> -n <application_namespace> -o yaml | grep -A 2 securityContext", "oc get dc cephfs-write-workload-generator-no-cache -n testnamespace -o yaml | grep -A 2 securityContext securityContext: seLinuxOptions: level: s0:c26,c0", "noobaa account passwd <noobaa_account_name> [options]", "noobaa account passwd FATA[0000] ❌ Missing expected arguments: <noobaa_account_name> Options: --new-password='': New Password for authentication - the best practice is to omit this flag , in that case the CLI will prompt to prompt and read it securely from the terminal to avoid leaking secrets in t he shell history --old-password='': Old Password for authentication - the best practice is to omit this flag , in that case the CLI will prompt to prompt and read it securely from the terminal to avoid leaking secrets in the shell history --retype-new-password='': Retype new Password for authentication - the best practice is to omit this flag , in that case the CLI will prompt to prompt and read it securely from the terminal to avoid leaking secrets in the shell history Usage: noobaa account passwd <noobaa-account-name> [flags] 
[options] Use \"noobaa options\" for a list of global command-line options (applies to all commands).", "noobaa account passwd [email protected]", "Enter old-password: [got 24 characters] Enter new-password: [got 7 characters] Enter retype-new-password: [got 7 characters] INFO[0017] ✅ Exists: Secret \"noobaa-admin\" INFO[0017] ✅ Exists: NooBaa \"noobaa\" INFO[0017] ✅ Exists: Service \"noobaa-mgmt\" INFO[0017] ✅ Exists: Secret \"noobaa-operator\" INFO[0017] ✅ Exists: Secret \"noobaa-admin\" INFO[0017] ✈\\ufe0f RPC: account.reset_password() Request: {Email:[email protected] VerificationPassword: * Password: *} WARN[0017] RPC: GetConnection creating connection to wss://localhost:58460/rpc/ 0xc000402ae0 INFO[0017] RPC: Connecting websocket (0xc000402ae0) &{RPC:0xc000501a40 Address:wss://localhost:58460/rpc/ State:init WS:<nil> PendingRequests:map[] NextRequestID:0 Lock:{state:1 sema:0} ReconnectDelay:0s cancelPings:<nil>} INFO[0017] RPC: Connected websocket (0xc000402ae0) &{RPC:0xc000501a40 Address:wss://localhost:58460/rpc/ State:init WS:<nil> PendingRequests:map[] NextRequestID:0 Lock:{state:1 sema:0} ReconnectDelay:0s cancelPings:<nil>} INFO[0020] ✅ RPC: account.reset_password() Response OK: took 2907.1ms INFO[0020] ✅ Updated: \"noobaa-admin\" INFO[0020] ✅ Successfully reset the password for the account \"[email protected]\"", "-------------------- - Mgmt Credentials - -------------------- email : [email protected] password : ***", "noobaa account list", "NAME ALLOWED_BUCKETS DEFAULT_RESOURCE PHASE AGE account-test [*] noobaa-default-backing-store Ready 14m17s test2 [first.bucket] noobaa-default-backing-store Ready 3m12s", "oc get noobaaaccount", "NAME PHASE AGE account-test Ready 15m test2 Ready 3m59s", "noobaa account regenerate <noobaa_account_name> [options]", "noobaa account regenerate FATA[0000] ❌ Missing expected arguments: <noobaa-account-name> Usage: noobaa account regenerate <noobaa-account-name> [flags] [options] Use \"noobaa options\" for a list of global command-line options (applies to all commands).", "noobaa account regenerate account-test", "INFO[0000] You are about to regenerate an account's security credentials. INFO[0000] This will invalidate all connections between S3 clients and NooBaa which are connected using the current credentials. INFO[0000] are you sure? y/n", "INFO[0015] ✅ Exists: Secret \"noobaa-account-account-test\" Connection info: AWS_ACCESS_KEY_ID : *** AWS_SECRET_ACCESS_KEY : ***", "noobaa obc list", "NAMESPACE NAME BUCKET-NAME STORAGE-CLASS BUCKET-CLASS PHASE default obc-test obc-test-35800e50-8978-461f-b7e0-7793080e26ba default.noobaa.io noobaa-default-bucket-class Bound", "oc get obc", "NAME STORAGE-CLASS PHASE AGE obc-test default.noobaa.io Bound 38s", "noobaa obc regenerate <bucket_claim_name> [options]", "noobaa obc regenerate FATA[0000] ❌ Missing expected arguments: <bucket-claim-name> Usage: noobaa obc regenerate <bucket-claim-name> [flags] [options] Use \"noobaa options\" for a list of global command-line options (applies to all commands).", "noobaa obc regenerate obc-test", "INFO[0000] You are about to regenerate an OBC's security credentials. INFO[0000] This will invalidate all connections between S3 clients and NooBaa which are connected using the current credentials. INFO[0000] are you sure? 
y/n", "INFO[0022] ✅ RPC: bucket.read_bucket() Response OK: took 95.4ms ObjectBucketClaim info: Phase : Bound ObjectBucketClaim : kubectl get -n default objectbucketclaim obc-test ConfigMap : kubectl get -n default configmap obc-test Secret : kubectl get -n default secret obc-test ObjectBucket : kubectl get objectbucket obc-default-obc-test StorageClass : kubectl get storageclass default.noobaa.io BucketClass : kubectl get -n default bucketclass noobaa-default-bucket-class Connection info: BUCKET_HOST : s3.default.svc BUCKET_NAME : obc-test-35800e50-8978-461f-b7e0-7793080e26ba BUCKET_PORT : 443 AWS_ACCESS_KEY_ID : *** AWS_SECRET_ACCESS_KEY : *** Shell commands: AWS S3 Alias : alias s3='AWS_ACCESS_KEY_ID=*** AWS_SECRET_ACCESS_KEY =*** aws s3 --no-verify-ssl --endpoint-url ***' Bucket status: Name : obc-test-35800e50-8978-461f-b7e0-7793080e26ba Type : REGULAR Mode : OPTIMAL ResiliencyStatus : OPTIMAL QuotaStatus : QUOTA_NOT_SET Num Objects : 0 Data Size : 0.000 B Data Size Reduced : 0.000 B Data Space Avail : 13.261 GB Num Objects Avail : 9007199254740991", "oc edit noobaa -n openshift-storage noobaa", "spec: loadBalancerSourceSubnets: s3: [\"10.0.0.0/16\", \"192.168.10.0/32\"] sts: - \"10.0.0.0/16\" - \"192.168.10.0/32\"", "oc get svc -n openshift-storage <s3 | sts> -o=go-template='{{ .spec.loadBalancerSourceRanges }}'", "noobaa bucketclass create placement-bucketclass mirror-to-aws --backingstores=azure-resource,aws-resource --placement Mirror", "noobaa obc create mirrored-bucket --bucketclass=mirror-to-aws", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <bucket-class-name> namespace: openshift-storage spec: placementPolicy: tiers: - backingStores: - <backing-store-1> - <backing-store-2> placement: Mirror", "additionalConfig: bucketclass: mirror-to-aws", "{ \"Version\": \"NewVersion\", \"Statement\": [ { \"Sid\": \"Example\", \"Effect\": \"Allow\", \"Principal\": [ \"[email protected]\" ], \"Action\": [ \"s3:GetObject\" ], \"Resource\": [ \"arn:aws:s3:::john_bucket\" ] } ] }", "aws --endpoint ENDPOINT --no-verify-ssl s3api put-bucket-policy --bucket MyBucket --policy file:// BucketPolicy", "aws --endpoint https://s3-openshift-storage.apps.gogo44.noobaa.org --no-verify-ssl s3api put-bucket-policy -bucket MyBucket --policy file://BucketPolicy", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa account create <noobaa-account-name> [--allow_bucket_create=true] [--allowed_buckets=[]] [--default_resource=''] [--full_permission=false]", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "noobaa obc create <bucket-claim-name> -n openshift-storage --replication-policy /path/to/json-file.json", "[{ \"rule_id\": \"rule-1\", \"destination_bucket\": \"first.bucket\", \"filter\": {\"prefix\": \"repl\"}}]", "noobaa obc create my-bucket-claim -n openshift-storage --replication-policy /path/to/json-file.json", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <desired-bucket-claim> namespace: <desired-namespace> spec: generateBucketName: <desired-bucket-name> storageClassName: openshift-storage.noobaa.io additionalConfig: replicationPolicy: {\"rules\": [{ \"rule_id\": \"\", \"destination_bucket\": \"\", \"filter\": {\"prefix\": \"\"}}]}", "noobaa -n 
openshift-storage bucketclass create placement-bucketclass <bucketclass-name> --backingstores <backingstores> --replication-policy=/path/to/json-file.json", "[{ \"rule_id\": \"rule-1\", \"destination_bucket\": \"first.bucket\", \"filter\": {\"prefix\": \"repl\"}}]", "noobaa -n openshift-storage bucketclass create placement-bucketclass bc --backingstores azure-blob-ns --replication-policy=/path/to/json-file.json", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: <desired-app-label> name: <desired-bucketclass-name> namespace: <desired-namespace> spec: placementPolicy: tiers: - backingstores: - <backingstore> placement: Spread replicationPolicy: [{ \"rule_id\": \" <rule id> \", \"destination_bucket\": \"first.bucket\", \"filter\": {\"prefix\": \" <object name prefix> \"}}]", "replicationPolicy: '{\"rules\":[{\"rule_id\":\"<RULE ID>\", \"destination_bucket\":\"<DEST>\", \"filter\": {\"prefix\": \"<PREFIX>\"}}], \"log_replication_info\": {\"logs_location\": {\"logs_bucket\": \"<LOGS_BUCKET>\"}}}'", "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: TenantID: <AZURE TENANT ID ENCODED IN BASE64> ApplicationID: <AZURE APPLICATIOM ID ENCODED IN BASE64> ApplicationSecret: <AZURE APPLICATION SECRET ENCODED IN BASE64> LogsAnalyticsWorkspaceID: <AZURE LOG ANALYTICS WORKSPACE ID ENCODED IN BASE64> AccountName: <AZURE ACCOUNT NAME ENCODED IN BASE64> AccountKey: <AZURE ACCOUNT KEY ENCODED IN BASE64>", "replicationPolicy:'{\"rules\":[ {\"rule_id\":\"ID goes here\", \"sync_deletions\": \"<true or false>\"\", \"destination_bucket\":object bucket name\"} ], \"log_replication_info\":{\"endpoint_type\":\"AZURE\"}}'", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <obc-name> spec: generateBucketName: <obc-bucket-name> storageClassName: openshift-storage.noobaa.io", "apiVersion: batch/v1 kind: Job metadata: name: testjob spec: template: spec: restartPolicy: OnFailure containers: - image: <your application image> name: test env: - name: BUCKET_NAME valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_NAME - name: BUCKET_HOST valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_HOST - name: BUCKET_PORT valueFrom: configMapKeyRef: name: <obc-name> key: BUCKET_PORT - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: <obc-name> key: AWS_ACCESS_KEY_ID - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: <obc-name> key: AWS_SECRET_ACCESS_KEY", "oc apply -f <yaml.file>", "oc get cm <obc-name> -o yaml", "oc get secret <obc_name> -o yaml", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa obc create <obc-name> -n openshift-storage", "INFO[0001] ✅ Created: ObjectBucketClaim \"test21obc\"", "oc get obc -n openshift-storage", "NAME STORAGE-CLASS PHASE AGE test21obc openshift-storage.noobaa.io Bound 38s", "oc get obc test21obc -o yaml -n openshift-storage", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer generation: 2 labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage resourceVersion: \"40756\" selfLink: /apis/objectbucket.io/v1alpha1/namespaces/openshift-storage/objectbucketclaims/test21obc uid: 
64f04cba-f662-11e9-bc3c-0295250841af spec: ObjectBucketName: obc-openshift-storage-test21obc bucketName: test21obc-933348a6-e267-4f82-82f1-e59bf4fe3bb4 generateBucketName: test21obc storageClassName: openshift-storage.noobaa.io status: phase: Bound", "oc get -n openshift-storage secret test21obc -o yaml", "apiVersion: v1 data: AWS_ACCESS_KEY_ID: c0M0R2xVanF3ODR3bHBkVW94cmY= AWS_SECRET_ACCESS_KEY: Wi9kcFluSWxHRzlWaFlzNk1hc0xma2JXcjM1MVhqa051SlBleXpmOQ== kind: Secret metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage ownerReferences: - apiVersion: objectbucket.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ObjectBucketClaim name: test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af resourceVersion: \"40751\" selfLink: /api/v1/namespaces/openshift-storage/secrets/test21obc uid: 65117c1c-f662-11e9-9094-0a5305de57bb type: Opaque", "oc get -n openshift-storage cm test21obc -o yaml", "apiVersion: v1 data: BUCKET_HOST: 10.0.171.35 BUCKET_NAME: test21obc-933348a6-e267-4f82-82f1-e59bf4fe3bb4 BUCKET_PORT: \"31242\" BUCKET_REGION: \"\" BUCKET_SUBREGION: \"\" kind: ConfigMap metadata: creationTimestamp: \"2019-10-24T13:30:07Z\" finalizers: - objectbucket.io/finalizer labels: app: noobaa bucket-provisioner: openshift-storage.noobaa.io-obc noobaa-domain: openshift-storage.noobaa.io name: test21obc namespace: openshift-storage ownerReferences: - apiVersion: objectbucket.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ObjectBucketClaim name: test21obc uid: 64f04cba-f662-11e9-bc3c-0295250841af resourceVersion: \"40752\" selfLink: /api/v1/namespaces/openshift-storage/configmaps/test21obc uid: 651c6501-f662-11e9-9094-0a5305de57bb", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name>", "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <namespacestore> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3", "noobaa bucketclass create namespace-bucketclass cache <my-cache-bucket-class> --backingstores <backing-store> --hub-resource <namespacestore>", "noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name>", "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED 
IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <namespacestore> namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <backingstore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos", "noobaa bucketclass create namespace-bucketclass cache <my-bucket-class> --backingstores <backing-store> --hubResource <namespacestore>", "noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>", "oc patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch '{\"spec\": {\"multiCloudGateway\": {\"endpoints\": {\"minCount\": 3,\"maxCount\": 10}}}}'", "spec: pvPool: resources: limits: cpu: 1000m memory: 4000Mi requests: cpu: 800m memory: 800Mi storage: 50Gi", "oc get secrets/<secret_name> -o jsonpath='{.data..tls\\.crt}' | base64 -d oc get secrets/<secret_name> -o jsonpath='{.data..tls\\.key}' | base64 -d", "oc get secrets/<secret_name> -o jsonpath='{.data.cert}' | base64 -d" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html-single/managing_hybrid_and_multicloud_resources/index
Appendix C. Virtualization Restrictions
Appendix C. Virtualization Restrictions This appendix covers additional support and product restrictions of the virtualization packages in Red Hat Enterprise Linux 7. C.1. System Restrictions Host Systems Red Hat Enterprise Linux with KVM is supported only on the following host architectures: AMD64 and Intel 64 IBM Z IBM POWER8 IBM POWER9 This document primarily describes AMD64 and Intel 64 features and functionalities, but the other supported architectures work very similarly. For details, see Appendix B, Using KVM Virtualization on Multiple Architectures . Guest Systems On Red Hat Enterprise Linux 7, Microsoft Windows guest virtual machines are only supported under specific subscription programs such as Advanced Mission Critical (AMC). If you are unsure whether your subscription model includes support for Windows guests, contact customer support. For more information about Windows guest virtual machines on Red Hat Enterprise Linux 7, see Windows Guest Virtual Machines on Red Hat Enterprise Linux 7 Knowledgebase article .
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/appe-virtualization_restrictions
4.248. python-qpid
4.248. python-qpid 4.248.1. RHBA-2011:1666 - python-qpid bug fix update An updated python-qpid package is now available for Red Hat Enterprise Linux 6. The python-qpid package provides a python client library for the Apache Qpid implementation of the Advanced Message Queuing Protocol (AMQP). The python-qpid package has been upgraded to upstream version 0.12. (BZ# 706993 ) Users of python-qpid are advised to upgrade to this updated package.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/python-qpid
Chapter 86. ExternalConfigurationVolumeSource schema reference
Chapter 86. ExternalConfigurationVolumeSource schema reference Used in: ExternalConfiguration Property Property type Description configMap ConfigMapVolumeSource Reference to a key in a ConfigMap. Exactly one Secret or ConfigMap has to be specified. name string Name of the volume which will be added to the Kafka Connect pods. secret SecretVolumeSource Reference to a key in a Secret. Exactly one Secret or ConfigMap has to be specified.
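For orientation, a volume of this type is declared under the externalConfiguration property of a KafkaConnect resource and is mounted into the Kafka Connect pods, which makes the referenced Secret or ConfigMap data available to connector configurations. The following minimal sketch assumes illustrative resource and Secret names and omits the rest of the Kafka Connect configuration:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect                        # illustrative name
spec:
  # ... replicas, bootstrapServers, and other Kafka Connect settings ...
  externalConfiguration:
    volumes:
      - name: connector-credentials       # volume name added to the Kafka Connect pods
        secret:
          secretName: my-connector-secret # illustrative Secret; set exactly one of secret or configMap per volume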
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-externalconfigurationvolumesource-reference
Chapter 7. Virtual machines
Chapter 7. Virtual machines 7.1. Creating VMs from Red Hat images 7.1.1. Creating virtual machines from Red Hat images overview Red Hat images are golden images . They are published as container disks in a secure registry. The Containerized Data Importer (CDI) polls and imports the container disks into your cluster and stores them in the openshift-virtualization-os-images project as snapshots or persistent volume claims (PVCs). Red Hat images are automatically updated. You can disable and re-enable automatic updates for these images. See Managing Red Hat boot source updates . Cluster administrators can enable automatic subscription for Red Hat Enterprise Linux (RHEL) virtual machines in the OpenShift Virtualization web console . You can create virtual machines (VMs) from operating system images provided by Red Hat by using one of the following methods: Creating a VM from a template by using the web console Creating a VM from an instance type by using the web console Creating a VM from a VirtualMachine manifest by using the command line Important Do not create VMs in the default openshift-* namespaces. Instead, create a new namespace or use an existing namespace without the openshift prefix. 7.1.1.1. About golden images A golden image is a preconfigured snapshot of a virtual machine (VM) that you can use as a resource to deploy new VMs. For example, you can use golden images to provision the same system environment consistently and deploy systems more quickly and efficiently. 7.1.1.1.1. How do golden images work? Golden images are created by installing and configuring an operating system and software applications on a reference machine or virtual machine. This includes setting up the system, installing required drivers, applying patches and updates, and configuring specific options and preferences. After the golden image is created, it is saved as a template or image file that can be replicated and deployed across multiple clusters. The golden image can be updated by its maintainer periodically to incorporate necessary software updates and patches, ensuring that the image remains up to date and secure, and newly created VMs are based on this updated image. 7.1.1.1.2. Red Hat implementation of golden images Red Hat publishes golden images as container disks in the registry for versions of Red Hat Enterprise Linux (RHEL). Container disks are virtual machine images that are stored as a container image in a container image registry. Any published image will automatically be made available in connected clusters after the installation of OpenShift Virtualization. After the images are available in a cluster, they are ready to use to create VMs. 7.1.1.2. About VM boot sources Virtual machines (VMs) consist of a VM definition and one or more disks that are backed by data volumes. VM templates enable you to create VMs using predefined specifications. Every template requires a boot source, which is a fully configured disk image including configured drivers. Each template contains a VM definition with a pointer to the boot source. Each boot source has a predefined name and namespace. For some operating systems, a boot source is automatically provided. If it is not provided, then an administrator must prepare a custom boot source. Provided boot sources are updated automatically to the latest version of the operating system. For auto-updated boot sources, persistent volume claims (PVCs) and volume snapshots are created with the cluster's default storage class. 
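These provided boot sources are surfaced through DataSource objects in the openshift-virtualization-os-images project, each of which points at the imported PVC or volume snapshot. The following rough sketch shows what such an object can look like; the PVC name is an invented placeholder, because the real name is generated during the import:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataSource
metadata:
  name: rhel9                                        # matches the sourceRef name used when creating a VM
  namespace: openshift-virtualization-os-images
spec:
  source:
    pvc:                                             # a snapshot source is used instead when the boot source is stored as a volume snapshot
      name: rhel9-example                            # invented placeholder; the actual name is generated by the import
      namespace: openshift-virtualization-os-images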
If you select a different default storage class after configuration, you must delete the existing boot sources in the cluster namespace that are configured with the default storage class. 7.1.2. Creating virtual machines from instance types You can simplify virtual machine (VM) creation by using instance types, whether you use the OpenShift Container Platform web console or the CLI to create VMs. 7.1.2.1. About instance types An instance type is a reusable object where you can define resources and characteristics to apply to new VMs. You can define custom instance types or use the variety that are included when you install OpenShift Virtualization. To create a new instance type, you must first create a manifest, either manually or by using the virtctl CLI tool. You then create the instance type object by applying the manifest to your cluster. OpenShift Virtualization provides two CRDs for configuring instance types: A namespaced object: VirtualMachineInstancetype A cluster-wide object: VirtualMachineClusterInstancetype These objects use the same VirtualMachineInstancetypeSpec . 7.1.2.1.1. Required attributes When you configure an instance type, you must define the cpu and memory attributes. Other attributes are optional. Note When you create a VM from an instance type, you cannot override any parameters defined in the instance type. Because instance types require defined CPU and memory attributes, OpenShift Virtualization always rejects additional requests for these resources when creating a VM from an instance type. You can manually create an instance type manifest. For example: Example YAML file with required fields apiVersion: instancetype.kubevirt.io/v1beta1 kind: VirtualMachineInstancetype metadata: name: example-instancetype spec: cpu: guest: 1 1 memory: guest: 128Mi 2 1 Required. Specifies the number of vCPUs to allocate to the guest. 2 Required. Specifies an amount of memory to allocate to the guest. You can create an instance type manifest by using the virtctl CLI utility. For example: Example virtctl command with required fields USD virtctl create instancetype --cpu 2 --memory 256Mi where: --cpu <value> Specifies the number of vCPUs to allocate to the guest. Required. --memory <value> Specifies an amount of memory to allocate to the guest. Required. Tip You can immediately create the object from the new manifest by running the following command: USD virtctl create instancetype --cpu 2 --memory 256Mi | oc apply -f - 7.1.2.1.2. Optional attributes In addition to the required cpu and memory attributes, you can include the following optional attributes in the VirtualMachineInstancetypeSpec : annotations List annotations to apply to the VM. gpus List vGPUs for passthrough. hostDevices List host devices for passthrough. ioThreadsPolicy Define an IO threads policy for managing dedicated disk access. launchSecurity Configure Secure Encrypted Virtualization (SEV). nodeSelector Specify node selectors to control the nodes where this VM is scheduled. schedulerName Define a custom scheduler to use for this VM instead of the default scheduler. 7.1.2.2. Pre-defined instance types OpenShift Virtualization includes a set of pre-defined instance types called common-instancetypes . Some are specialized for specific workloads and others are workload-agnostic. These instance type resources are named according to their series, version, and size. The size value follows the . delimiter and ranges from nano to 8xlarge . Table 7.1. 
common-instancetypes series comparison Use case Series Characteristics vCPU to memory ratio Example resource Universal U Burstable CPU performance 1:4 u1.medium 1 vCPUs 4 Gi memory Overcommitted O Overcommitted memory Burstable CPU performance 1:4 o1.small 1 vCPU 2Gi memory Compute-exclusive CX Hugepages Dedicated CPU Isolated emulator threads vNUMA 1:2 cx1.2xlarge 8 vCPUs 16Gi memory NVIDIA GPU GN For VMs that use GPUs provided by the NVIDIA GPU Operator Has predefined GPUs Burstable CPU performance 1:4 gn1.8xlarge 32 vCPUs 128Gi memory Memory-intensive M Hugepages Burstable CPU performance 1:8 m1.large 2 vCPUs 16Gi memory Network-intensive N Hugepages Dedicated CPU Isolated emulator threads Requires nodes capable of running DPDK workloads 1:2 n1.medium 4 vCPUs 4Gi memory 7.1.2.3. Creating manifests by using the virtctl tool You can use the virtctl CLI utility to simplify creating manifests for VMs, VM instance types, and VM preferences. For more information, see VM manifest creation commands . If you have a VirtualMachine manifest, you can create a VM from the command line . 7.1.2.4. Creating a VM from an instance type by using the web console You can create a virtual machine (VM) from an instance type by using the OpenShift Container Platform web console. You can also use the web console to create a VM by copying an existing snapshot or to clone a VM. You can create a VM from a list of available bootable volumes. You can add Linux- or Windows-based volumes to the list. Procedure In the web console, navigate to Virtualization Catalog . The InstanceTypes tab opens by default. Select either of the following options: Select a suitable bootable volume from the list. If the list is truncated, click the Show all button to display the entire list. Note The bootable volume table lists only those volumes in the openshift-virtualization-os-images namespace that have the instancetype.kubevirt.io/default-preference label. Optional: Click the star icon to designate a bootable volume as a favorite. Starred bootable volumes appear first in the volume list. Click Add volume to upload a new volume or to use an existing persistent volume claim (PVC), a volume snapshot, or a containerDisk volume. Click Save . Logos of operating systems that are not available in the cluster are shown at the bottom of the list. You can add a volume for the required operating system by clicking the Add volume link. In addition, there is a link to the Create a Windows boot source quick start. The same link appears in a popover if you hover the pointer over the question mark icon to the Select volume to boot from line. Immediately after you install the environment or when the environment is disconnected, the list of volumes to boot from is empty. In that case, three operating system logos are displayed: Windows, RHEL, and Linux. You can add a new volume that meets your requirements by clicking the Add volume button. Click an instance type tile and select the resource size appropriate for your workload. Optional: Choose the virtual machine details, including the VM's name, that apply to the volume you are booting from: For a Linux-based volume, follow these steps to configure SSH: If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key in the VirtualMachine details section. Select one of the following options: Use existing : Select a secret from the secrets list. Add new : Follow these steps: Browse to the public SSH key file or paste the file in the key field. 
Enter the secret name. Optional: Select Automatically apply this key to any new VirtualMachine you create in this project . Click Save . For a Windows volume, follow either of these set of steps to configure sysprep options: If you have not already added sysprep options for the Windows volume, follow these steps: Click the edit icon beside Sysprep in the VirtualMachine details section. Add the Autoattend.xml answer file. Add the Unattend.xml answer file. Click Save . If you want to use existing sysprep options for the Windows volume, follow these steps: Click Attach existing sysprep . Enter the name of the existing sysprep Unattend.xml answer file. Click Save . Optional: If you are creating a Windows VM, you can mount a Windows driver disk: Click the Customize VirtualMachine button. On the VirtualMachine details page, click Storage . Select the Mount Windows drivers disk checkbox. Optional: Click View YAML & CLI to view the YAML file. Click CLI to view the CLI commands. You can also download or copy either the YAML file contents or the CLI commands. Click Create VirtualMachine . After the VM is created, you can monitor the status on the VirtualMachine details page. 7.1.3. Creating virtual machines from templates You can create virtual machines (VMs) from Red Hat templates by using the OpenShift Container Platform web console. 7.1.3.1. About VM templates Boot sources You can expedite VM creation by using templates that have an available boot source. Templates with a boot source are labeled Available boot source if they do not have a custom label. Templates without a boot source are labeled Boot source required . See Creating virtual machines from custom images . Customization You can customize the disk source and VM parameters before you start the VM. See storage volume types and storage fields for details about disk source settings. Note If you copy a VM template with all its labels and annotations, your version of the template is marked as deprecated when a new version of the Scheduling, Scale, and Performance (SSP) Operator is deployed. You can remove this designation. See Customizing a VM template by using the web console . Single-node OpenShift Due to differences in storage behavior, some templates are incompatible with single-node OpenShift. To ensure compatibility, do not set the evictionStrategy field for templates or VMs that use data volumes or storage profiles. 7.1.3.2. Creating a VM from a template You can create a virtual machine (VM) from a template with an available boot source by using the OpenShift Container Platform web console. Optional: You can customize template or VM parameters, such as data sources, cloud-init, or SSH keys, before you start the VM. Procedure Navigate to Virtualization Catalog in the web console. Click Boot source available to filter templates with boot sources. The catalog displays the default templates. Click All Items to view all available templates for your filters. Click a template tile to view its details. Optional: If you are using a Windows template, you can mount a Windows driver disk by selecting the Mount Windows drivers disk checkbox. If you do not need to customize the template or VM parameters, click Quick create VirtualMachine to create a VM from the template. If you need to customize the template or VM parameters, do the following: Click Customize VirtualMachine . Expand Storage or Optional parameters to edit data source settings. Click Customize VirtualMachine parameters . 
The Customize and create VirtualMachine pane displays the Overview , YAML , Scheduling , Environment , Network interfaces , Disks , Scripts , and Metadata tabs. Edit the parameters that must be set before the VM boots, such as cloud-init or a static SSH key. Click Create VirtualMachine . The VirtualMachine details page displays the provisioning status. 7.1.3.2.1. Storage volume types Table 7.2. Storage volume types Type Description ephemeral A local copy-on-write (COW) image that uses a network volume as a read-only backing store. The backing volume must be a PersistentVolumeClaim . The ephemeral image is created when the virtual machine starts and stores all writes locally. The ephemeral image is discarded when the virtual machine is stopped, restarted, or deleted. The backing volume (PVC) is not mutated in any way. persistentVolumeClaim Attaches an available PV to a virtual machine. Attaching a PV allows for the virtual machine data to persist between sessions. Importing an existing virtual machine disk into a PVC by using CDI and attaching the PVC to a virtual machine instance is the recommended method for importing existing virtual machines into OpenShift Container Platform. There are some requirements for the disk to be used within a PVC. dataVolume Data volumes build on the persistentVolumeClaim disk type by managing the process of preparing the virtual machine disk via an import, clone, or upload operation. VMs that use this volume type are guaranteed not to start until the volume is ready. Specify type: dataVolume or type: "" . If you specify any other value for type , such as persistentVolumeClaim , a warning is displayed, and the virtual machine does not start. cloudInitNoCloud Attaches a disk that contains the referenced cloud-init NoCloud data source, providing user data and metadata to the virtual machine. A cloud-init installation is required inside the virtual machine disk. containerDisk References an image, such as a virtual machine disk, that is stored in the container image registry. The image is pulled from the registry and attached to the virtual machine as a disk when the virtual machine is launched. A containerDisk volume is not limited to a single virtual machine and is useful for creating large numbers of virtual machine clones that do not require persistent storage. Only RAW and QCOW2 formats are supported disk types for the container image registry. QCOW2 is recommended for reduced image size. Note A containerDisk volume is ephemeral. It is discarded when the virtual machine is stopped, restarted, or deleted. A containerDisk volume is useful for read-only file systems such as CD-ROMs or for disposable virtual machines. emptyDisk Creates an additional sparse QCOW2 disk that is tied to the life-cycle of the virtual machine interface. The data survives guest-initiated reboots in the virtual machine but is discarded when the virtual machine stops or is restarted from the web console. The empty disk is used to store application dependencies and data that otherwise exceeds the limited temporary file system of an ephemeral disk. The disk capacity size must also be provided. 7.1.3.2.2. Storage fields Field Description Blank (creates PVC) Create an empty disk. Import via URL (creates PVC) Import content via URL (HTTP or HTTPS endpoint). Use an existing PVC Use a PVC that is already available in the cluster. Clone existing PVC (creates PVC) Select an existing PVC available in the cluster and clone it. Import via Registry (creates PVC) Import content via container registry. 
Container (ephemeral) Upload content from a container located in a registry accessible from the cluster. The container disk should be used only for read-only filesystems such as CD-ROMs or temporary virtual machines. Name Name of the disk. The name can contain lowercase letters ( a-z ), numbers ( 0-9 ), hyphens ( - ), and periods ( . ), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, or special characters. Size Size of the disk in GiB. Type Type of disk. Example: Disk or CD-ROM Interface Type of disk device. Supported interfaces are virtIO , SATA , and SCSI . Storage Class The storage class that is used to create the disk. Advanced storage settings The following advanced storage settings are optional and available for Blank , Import via URL , and Clone existing PVC disks. If you do not specify these parameters, the system uses the default storage profile values. Parameter Option Parameter description Volume Mode Filesystem Stores the virtual disk on a file system-based volume. Block Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it. Access Mode ReadWriteOnce (RWO) Volume can be mounted as read-write by a single node. ReadWriteMany (RWX) Volume can be mounted as read-write by many nodes at one time. Note This mode is required for live migration. 7.1.3.2.3. Customizing a VM template by using the web console You can customize an existing virtual machine (VM) template by modifying the VM or template parameters, such as data sources, cloud-init, or SSH keys, before you start the VM. If you customize a template by copying it and including all of its labels and annotations, the customized template is marked as deprecated when a new version of the Scheduling, Scale, and Performance (SSP) Operator is deployed. You can remove the deprecated designation from the customized template. Procedure Navigate to Virtualization Templates in the web console. From the list of VM templates, click the template marked as deprecated. Click Edit to the pencil icon beside Labels . Remove the following two labels: template.kubevirt.io/type: "base" template.kubevirt.io/version: "version" Click Save . Click the pencil icon beside the number of existing Annotations . Remove the following annotation: template.kubevirt.io/deprecated Click Save . 7.1.3.2.4. Creating a custom VM template in the web console You create a virtual machine template by editing a YAML file example in the OpenShift Container Platform web console. Procedure In the web console, click Virtualization Templates in the side menu. Optional: Use the Project drop-down menu to change the project associated with the new template. All templates are saved to the openshift project by default. Click Create Template . Specify the template parameters by editing the YAML file. Click Create . The template is displayed on the Templates page. Optional: Click Download to download and save the YAML file. 7.1.4. Creating virtual machines from the command line You can create virtual machines (VMs) from the command line by editing or creating a VirtualMachine manifest. You can simplify VM configuration by using an instance type in your VM manifest. Note You can also create VMs from instance types by using the web console . 7.1.4.1. Creating manifests by using the virtctl tool You can use the virtctl CLI utility to simplify creating manifests for VMs, VM instance types, and VM preferences. 
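Whether generated with virtctl or written by hand, these objects follow the shapes described earlier in this chapter. As a hand-written illustration, the following sketch defines a cluster-wide instance type that combines the required cpu and memory attributes with the optional nodeSelector attribute; the name and selector values are assumptions for the example:

apiVersion: instancetype.kubevirt.io/v1beta1
kind: VirtualMachineClusterInstancetype
metadata:
  name: example-cluster-instancetype    # illustrative name
spec:
  cpu:
    guest: 2                            # required: vCPUs allocated to the guest
  memory:
    guest: 4Gi                          # required: memory allocated to the guest
  nodeSelector:                         # optional attribute; the label is an assumption
    node-role.kubernetes.io/worker: ""

Applying this manifest makes the instance type available to VMs in every namespace, in contrast to the namespaced VirtualMachineInstancetype shown earlier.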
For more information, see VM manifest creation commands . 7.1.4.2. Creating a VM from a VirtualMachine manifest You can create a virtual machine (VM) from a VirtualMachine manifest. Procedure Edit the VirtualMachine manifest for your VM. The following example configures a Red Hat Enterprise Linux (RHEL) VM: Note This example manifest does not configure VM authentication. Example manifest for a RHEL VM apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: rhel-9-minimal spec: dataVolumeTemplates: - metadata: name: rhel-9-minimal-volume spec: sourceRef: kind: DataSource name: rhel9 1 namespace: openshift-virtualization-os-images 2 storage: {} instancetype: name: u1.medium 3 preference: name: rhel.9 4 running: true template: spec: domain: devices: {} volumes: - dataVolume: name: rhel-9-minimal-volume name: rootdisk 1 The rhel9 golden image is used to install RHEL 9 as the guest operating system. 2 Golden images are stored in the openshift-virtualization-os-images namespace. 3 The u1.medium instance type requests 1 vCPU and 4Gi memory for the VM. These resource values cannot be overridden within the VM. 4 The rhel.9 preference specifies additional attributes that support the RHEL 9 guest operating system. Create a virtual machine by using the manifest file: USD oc create -f <vm_manifest_file>.yaml Optional: Start the virtual machine: USD virtctl start <vm_name> -n <namespace> steps Configuring SSH access to virtual machines 7.2. Creating VMs from custom images 7.2.1. Creating virtual machines from custom images overview You can create virtual machines (VMs) from custom operating system images by using one of the following methods: Importing the image as a container disk from a registry . Optional: You can enable auto updates for your container disks. See Managing automatic boot source updates for details. Importing the image from a web page . Uploading the image from a local machine . Cloning a persistent volume claim (PVC) that contains the image . The Containerized Data Importer (CDI) imports the image into a PVC by using a data volume. You add the PVC to the VM by using the OpenShift Container Platform web console or command line. Important You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat. You must also install VirtIO drivers on Windows VMs. The QEMU guest agent is included with Red Hat images. 7.2.2. Creating VMs by using container disks You can create virtual machines (VMs) by using container disks built from operating system images. You can enable auto updates for your container disks. See Managing automatic boot source updates for details. Important If the container disks are large, the I/O traffic might increase and cause worker nodes to be unavailable. You can perform the following tasks to resolve this issue: Pruning DeploymentConfig objects . Configuring garbage collection . You create a VM from a container disk by performing the following steps: Build an operating system image into a container disk and upload it to your container registry . If your container registry does not have TLS, configure your environment to disable TLS for your registry . Create a VM with the container disk as the disk source by using the web console or the command line . Important You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat. 7.2.2.1. 
Building and uploading a container disk You can build a virtual machine (VM) image into a container disk and upload it to a registry. The size of a container disk is limited by the maximum layer size of the registry where the container disk is hosted. Note For Red Hat Quay , you can change the maximum layer size by editing the YAML configuration file that is created when Red Hat Quay is first deployed. Prerequisites You must have podman installed. You must have a QCOW2 or RAW image file. Procedure Create a Dockerfile to build the VM image into a container image. The VM image must be owned by QEMU, which has a UID of 107 , and placed in the /disk/ directory inside the container. Permissions for the /disk/ directory must then be set to 0440 . The following example uses the Red Hat Universal Base Image (UBI) to handle these configuration changes in the first stage, and uses the minimal scratch image in the second stage to store the result: USD cat > Dockerfile << EOF FROM registry.access.redhat.com/ubi8/ubi:latest AS builder ADD --chown=107:107 <vm_image>.qcow2 /disk/ 1 RUN chmod 0440 /disk/* FROM scratch COPY --from=builder /disk/* /disk/ EOF 1 Where <vm_image> is the image in either QCOW2 or RAW format. If you use a remote image, replace <vm_image>.qcow2 with the complete URL. Build and tag the container: USD podman build -t <registry>/<container_disk_name>:latest . Push the container image to the registry: USD podman push <registry>/<container_disk_name>:latest 7.2.2.2. Disabling TLS for a container registry You can disable TLS (transport layer security) for one or more container registries by editing the insecureRegistries field of the HyperConverged custom resource. Prerequisites Open the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Add a list of insecure registries to the spec.storageImport.insecureRegistries field. Example HyperConverged custom resource apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: storageImport: insecureRegistries: 1 - "private-registry-example-1:5000" - "private-registry-example-2:5000" 1 Replace the examples in this list with valid registry hostnames. 7.2.2.3. Creating a VM from a container disk by using the web console You can create a virtual machine (VM) by importing a container disk from a container registry by using the OpenShift Container Platform web console. Procedure Navigate to Virtualization Catalog in the web console. Click a template tile without an available boot source. Click Customize VirtualMachine . On the Customize template parameters page, expand Storage and select Registry (creates PVC) from the Disk source list. Enter the container image URL. Example: https://mirror.arizona.edu/fedora/linux/releases/38/Cloud/x86_64/images/Fedora-Cloud-Base-38-1.6.x86_64.qcow2 Set the disk size. Click . Click Create VirtualMachine . 7.2.2.4. Creating a VM from a container disk by using the command line You can create a virtual machine (VM) from a container disk by using the command line. When the virtual machine (VM) is created, the data volume with the container disk is imported into persistent storage. Prerequisites You must have access credentials for the container registry that contains the container disk. 
Procedure If the container registry requires authentication, create a Secret manifest, specifying the credentials, and save it as a data-source-secret.yaml file: apiVersion: v1 kind: Secret metadata: name: data-source-secret labels: app: containerized-data-importer type: Opaque data: accessKeyId: "" 1 secretKey: "" 2 1 Specify the Base64-encoded key ID or user name. 2 Specify the Base64-encoded secret key or password. Apply the Secret manifest by running the following command: USD oc apply -f data-source-secret.yaml If the VM must communicate with servers that use self-signed certificates or certificates that are not signed by the system CA bundle, create a config map in the same namespace as the VM: USD oc create configmap tls-certs 1 --from-file=</path/to/file/ca.pem> 2 1 Specify the config map name. 2 Specify the path to the CA certificate. Edit the VirtualMachine manifest and save it as a vm-fedora-datavolume.yaml file: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume name: vm-fedora-datavolume 1 spec: dataVolumeTemplates: - metadata: creationTimestamp: null name: fedora-dv 2 spec: storage: resources: requests: storage: 10Gi 3 storageClassName: <storage_class> 4 source: registry: url: "docker://kubevirt/fedora-cloud-container-disk-demo:latest" 5 secretRef: data-source-secret 6 certConfigMap: tls-certs 7 status: {} running: true template: metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume spec: domain: devices: disks: - disk: bus: virtio name: datavolumedisk1 machine: type: "" resources: requests: memory: 1.5Gi terminationGracePeriodSeconds: 180 volumes: - dataVolume: name: fedora-dv name: datavolumedisk1 status: {} 1 Specify the name of the VM. 2 Specify the name of the data volume. 3 Specify the size of the storage requested for the data volume. 4 Optional: If you do not specify a storage class, the default storage class is used. 5 Specify the URL of the container registry. 6 Optional: Specify the secret name if you created a secret for the container registry access credentials. 7 Optional: Specify a CA certificate config map. Create the VM by running the following command: USD oc create -f vm-fedora-datavolume.yaml The oc create command creates the data volume and the VM. The CDI controller creates an underlying PVC with the correct annotation and the import process begins. When the import is complete, the data volume status changes to Succeeded . You can start the VM. Data volume provisioning happens in the background, so there is no need to monitor the process. Verification The importer pod downloads the container disk from the specified URL and stores it on the provisioned persistent volume. View the status of the importer pod by running the following command: USD oc get pods Monitor the data volume until its status is Succeeded by running the following command: USD oc describe dv fedora-dv 1 1 Specify the data volume name that you defined in the VirtualMachine manifest. Verify that provisioning is complete and that the VM has started by accessing its serial console: USD virtctl console vm-fedora-datavolume 7.2.3. Creating VMs by importing images from web pages You can create virtual machines (VMs) by importing operating system images from web pages. Important You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat. 7.2.3.1. 
Creating a VM from an image on a web page by using the web console You can create a virtual machine (VM) by importing an image from a web page by using the OpenShift Container Platform web console. Prerequisites You must have access to the web page that contains the image. Procedure Navigate to Virtualization Catalog in the web console. Click a template tile without an available boot source. Click Customize VirtualMachine . On the Customize template parameters page, expand Storage and select URL (creates PVC) from the Disk source list. Enter the image URL. Example: https://access.redhat.com/downloads/content/69/ver=/rhel---7/7.9/x86_64/product-software Enter the container image URL. Example: https://mirror.arizona.edu/fedora/linux/releases/38/Cloud/x86_64/images/Fedora-Cloud-Base-38-1.6.x86_64.qcow2 Set the disk size. Click . Click Create VirtualMachine . 7.2.3.2. Creating a VM from an image on a web page by using the command line You can create a virtual machine (VM) from an image on a web page by using the command line. When the virtual machine (VM) is created, the data volume with the image is imported into persistent storage. Prerequisites You must have access credentials for the web page that contains the image. Procedure If the web page requires authentication, create a Secret manifest, specifying the credentials, and save it as a data-source-secret.yaml file: apiVersion: v1 kind: Secret metadata: name: data-source-secret labels: app: containerized-data-importer type: Opaque data: accessKeyId: "" 1 secretKey: "" 2 1 Specify the Base64-encoded key ID or user name. 2 Specify the Base64-encoded secret key or password. Apply the Secret manifest by running the following command: USD oc apply -f data-source-secret.yaml If the VM must communicate with servers that use self-signed certificates or certificates that are not signed by the system CA bundle, create a config map in the same namespace as the VM: USD oc create configmap tls-certs 1 --from-file=</path/to/file/ca.pem> 2 1 Specify the config map name. 2 Specify the path to the CA certificate. Edit the VirtualMachine manifest and save it as a vm-fedora-datavolume.yaml file: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume name: vm-fedora-datavolume 1 spec: dataVolumeTemplates: - metadata: creationTimestamp: null name: fedora-dv 2 spec: storage: resources: requests: storage: 10Gi 3 storageClassName: <storage_class> 4 source: http: url: "https://mirror.arizona.edu/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.2.x86_64.qcow2" 5 registry: url: "docker://kubevirt/fedora-cloud-container-disk-demo:latest" 6 secretRef: data-source-secret 7 certConfigMap: tls-certs 8 status: {} running: true template: metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume spec: domain: devices: disks: - disk: bus: virtio name: datavolumedisk1 machine: type: "" resources: requests: memory: 1.5Gi terminationGracePeriodSeconds: 180 volumes: - dataVolume: name: fedora-dv name: datavolumedisk1 status: {} 1 Specify the name of the VM. 2 Specify the name of the data volume. 3 Specify the size of the storage requested for the data volume. 4 Optional: If you do not specify a storage class, the default storage class is used. 5 6 Specify the URL of the web page. 7 Optional: Specify the secret name if you created a secret for the web page access credentials. 8 Optional: Specify a CA certificate config map. 
Create the VM by running the following command: USD oc create -f vm-fedora-datavolume.yaml The oc create command creates the data volume and the VM. The CDI controller creates an underlying PVC with the correct annotation and the import process begins. When the import is complete, the data volume status changes to Succeeded . You can start the VM. Data volume provisioning happens in the background, so there is no need to monitor the process. Verification The importer pod downloads the image from the specified URL and stores it on the provisioned persistent volume. View the status of the importer pod by running the following command: USD oc get pods Monitor the data volume until its status is Succeeded by running the following command: USD oc describe dv fedora-dv 1 1 Specify the data volume name that you defined in the VirtualMachine manifest. Verify that provisioning is complete and that the VM has started by accessing its serial console: USD virtctl console vm-fedora-datavolume 7.2.4. Creating VMs by uploading images You can create virtual machines (VMs) by uploading operating system images from your local machine. You can create a Windows VM by uploading a Windows image to a PVC. Then you clone the PVC when you create the VM. Important You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat. You must also install VirtIO drivers on Windows VMs. 7.2.4.1. Creating a VM from an uploaded image by using the web console You can create a virtual machine (VM) from an uploaded operating system image by using the OpenShift Container Platform web console. Prerequisites You must have an IMG , ISO , or QCOW2 image file. Procedure Navigate to Virtualization Catalog in the web console. Click a template tile without an available boot source. Click Customize VirtualMachine . On the Customize template parameters page, expand Storage and select Upload (Upload a new file to a PVC) from the Disk source list. Browse to the image on your local machine and set the disk size. Click Customize VirtualMachine . Click Create VirtualMachine . 7.2.4.2. Creating a Windows VM You can create a Windows virtual machine (VM) by uploading a Windows image to a persistent volume claim (PVC) and then cloning the PVC when you create a VM by using the OpenShift Container Platform web console. Prerequisites You created a Windows installation DVD or USB with the Windows Media Creation Tool. See Create Windows 10 installation media in the Microsoft documentation. You created an autounattend.xml answer file. See Answer files (unattend.xml) in the Microsoft documentation. Procedure Upload the Windows image as a new PVC: Navigate to Storage PersistentVolumeClaims in the web console. Click Create PersistentVolumeClaim With Data upload form . Browse to the Windows image and select it. Enter the PVC name, select the storage class and size and then click Upload . The Windows image is uploaded to a PVC. Configure a new VM by cloning the uploaded PVC: Navigate to Virtualization Catalog . Select a Windows template tile and click Customize VirtualMachine . Select Clone (clone PVC) from the Disk source list. Select the PVC project, the Windows image PVC, and the disk size. Apply the answer file to the VM: Click Customize VirtualMachine parameters . On the Sysprep section of the Scripts tab, click Edit . Browse to the autounattend.xml answer file and click Save . Set the run strategy of the VM: Clear Start this VirtualMachine after creation so that the VM does not start immediately. 
Click Create VirtualMachine . On the YAML tab, replace running:false with runStrategy: RerunOnFailure and click Save . Click the options menu and select Start . The VM boots from the sysprep disk containing the autounattend.xml answer file. 7.2.4.2.1. Generalizing a Windows VM image You can generalize a Windows operating system image to remove all system-specific configuration data before you use the image to create a new virtual machine (VM). Before generalizing the VM, you must ensure the sysprep tool cannot detect an answer file after the unattended Windows installation. Prerequisites A running Windows VM with the QEMU guest agent installed. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines . Select a Windows VM to open the VirtualMachine details page. Click Configuration Disks . Click the Options menu beside the sysprep disk and select Detach . Click Detach . Rename C:\Windows\Panther\unattend.xml to avoid detection by the sysprep tool. Start the sysprep program by running the following command: %WINDIR%\System32\Sysprep\sysprep.exe /generalize /shutdown /oobe /mode:vm After the sysprep tool completes, the Windows VM shuts down. The disk image of the VM is now available to use as an installation image for Windows VMs. You can now specialize the VM. 7.2.4.2.2. Specializing a Windows VM image Specializing a Windows virtual machine (VM) configures the computer-specific information from a generalized Windows image onto the VM. Prerequisites You must have a generalized Windows disk image. You must create an unattend.xml answer file. See the Microsoft documentation for details. Procedure In the OpenShift Container Platform console, click Virtualization Catalog . Select a Windows template and click Customize VirtualMachine . Select PVC (clone PVC) from the Disk source list. Select the PVC project and PVC name of the generalized Windows image. Click Customize VirtualMachine parameters . Click the Scripts tab. In the Sysprep section, click Edit , browse to the unattend.xml answer file, and click Save . Click Create VirtualMachine . During the initial boot, Windows uses the unattend.xml answer file to specialize the VM. The VM is now ready to use. Additional resources for creating Windows VMs Microsoft, Sysprep (Generalize) a Windows installation Microsoft, generalize Microsoft, specialize 7.2.4.3. Creating a VM from an uploaded image by using the command line You can upload an operating system image by using the virtctl command line tool. You can use an existing data volume or create a new data volume for the image. Prerequisites You must have an ISO , IMG , or QCOW2 operating system image file. For best performance, compress the image file by using the virt-sparsify tool or the xz or gzip utilities. You must have virtctl installed. The client machine must be configured to trust the OpenShift Container Platform router's certificate. Procedure Upload the image by running the virtctl image-upload command: USD virtctl image-upload dv <datavolume_name> \ 1 --size=<datavolume_size> \ 2 --image-path=</path/to/image> \ 3 1 The name of the data volume. 2 The size of the data volume. For example: --size=500Mi , --size=1G 3 The file path of the image. Note If you do not want to create a new data volume, omit the --size parameter and include the --no-create flag. When uploading a disk image to a PVC, the PVC size must be larger than the size of the uncompressed virtual disk. To allow insecure server connections when using HTTPS, use the --insecure parameter. 
When you use the --insecure flag, the authenticity of the upload endpoint is not verified. Optional. To verify that a data volume was created, view all data volumes by running the following command: USD oc get dvs 7.2.5. Installing the QEMU guest agent and VirtIO drivers The QEMU guest agent is a daemon that runs on the virtual machine (VM) and passes information to the host about the VM, users, file systems, and secondary networks. You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat. 7.2.5.1. Installing the QEMU guest agent 7.2.5.1.1. Installing the QEMU guest agent on a Linux VM The qemu-guest-agent is widely available and available by default in Red Hat Enterprise Linux (RHEL) virtual machines (VMs). Install the agent and start the service. Note To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent. The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI. Procedure Log in to the VM by using a console or SSH. Install the QEMU guest agent by running the following command: USD yum install -y qemu-guest-agent Ensure the service is persistent and start it: USD systemctl enable --now qemu-guest-agent Verification Run the following command to verify that AgentConnected is listed in the VM spec: USD oc get vm <vm_name> 7.2.5.1.2. Installing the QEMU guest agent on a Windows VM For Windows virtual machines (VMs), the QEMU guest agent is included in the VirtIO drivers. You can install the drivers during a Windows installation or on an existing Windows VM. Note To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent. The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI. Procedure In the Windows guest operating system, use the File Explorer to navigate to the guest-agent directory in the virtio-win CD drive. Run the qemu-ga-x86_64.msi installer. Verification Obtain a list of network services by running the following command: USD net start Verify that the output contains the QEMU Guest Agent . 7.2.5.2. Installing VirtIO drivers on Windows VMs VirtIO drivers are paravirtualized device drivers required for Microsoft Windows virtual machines (VMs) to run in OpenShift Virtualization. The drivers are shipped with the rest of the images and do not require a separate download. The container-native-virtualization/virtio-win container disk must be attached to the VM as a SATA CD drive to enable driver installation. You can install VirtIO drivers during Windows installation or added to an existing Windows installation. 
After the drivers are installed, the container-native-virtualization/virtio-win container disk can be removed from the VM. Table 7.3. Supported drivers Driver name Hardware ID Description viostor VEN_1AF4&DEV_1001 VEN_1AF4&DEV_1042 The block driver. Sometimes labeled as an SCSI Controller in the Other devices group. viorng VEN_1AF4&DEV_1005 VEN_1AF4&DEV_1044 The entropy source driver. Sometimes labeled as a PCI Device in the Other devices group. NetKVM VEN_1AF4&DEV_1000 VEN_1AF4&DEV_1041 The network driver. Sometimes labeled as an Ethernet Controller in the Other devices group. Available only if a VirtIO NIC is configured. 7.2.5.2.1. Attaching VirtIO container disk to Windows VMs during installation You must attach the VirtIO container disk to the Windows VM to install the necessary Windows drivers. This can be done during creation of the VM. Procedure When creating a Windows VM from a template, click Customize VirtualMachine . Select Mount Windows drivers disk . Click the Customize VirtualMachine parameters . Click Create VirtualMachine . After the VM is created, the virtio-win SATA CD disk will be attached to the VM. 7.2.5.2.2. Attaching VirtIO container disk to an existing Windows VM You must attach the VirtIO container disk to the Windows VM to install the necessary Windows drivers. This can be done to an existing VM. Procedure Navigate to the existing Windows VM, and click Actions Stop . Go to VM Details Configuration Disks and click Add disk . Add windows-driver-disk from container source, set the Type to CD-ROM , and then set the Interface to SATA . Click Save . Start the VM, and connect to a graphical console. 7.2.5.2.3. Installing VirtIO drivers during Windows installation You can install the VirtIO drivers while installing Windows on a virtual machine (VM). Note This procedure uses a generic approach to the Windows installation and the installation method might differ between versions of Windows. See the documentation for the version of Windows that you are installing. Prerequisites A storage device containing the virtio drivers must be attached to the VM. Procedure In the Windows operating system, use the File Explorer to navigate to the virtio-win CD drive. Double-click the drive to run the appropriate installer for your VM. For a 64-bit vCPU, select the virtio-win-gt-x64 installer. 32-bit vCPUs are no longer supported. Optional: During the Custom Setup step of the installer, select the device drivers you want to install. The recommended driver set is selected by default. After the installation is complete, select Finish . Reboot the VM. Verification Open the system disk on the PC. This is typically C: . Navigate to Program Files Virtio-Win . If the Virtio-Win directory is present and contains a sub-directory for each driver, the installation was successful. 7.2.5.2.4. Installing VirtIO drivers from a SATA CD drive on an existing Windows VM You can install the VirtIO drivers from a SATA CD drive on an existing Windows virtual machine (VM). Note This procedure uses a generic approach to adding drivers to Windows. See the installation documentation for your version of Windows for specific installation steps. Prerequisites A storage device containing the virtio drivers must be attached to the VM as a SATA CD drive. Procedure Start the VM and connect to a graphical console. Log in to a Windows user session. Open Device Manager and expand Other devices to list any Unknown device . Open the Device Properties to identify the unknown device. 
Right-click the device and select Properties . Click the Details tab and select Hardware Ids in the Property list. Compare the Value for the Hardware Ids with the supported VirtIO drivers. Right-click the device and select Update Driver Software . Click Browse my computer for driver software and browse to the attached SATA CD drive, where the VirtIO drivers are located. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture. Click to install the driver. Repeat this process for all the necessary VirtIO drivers. After the driver installs, click Close to close the window. Reboot the VM to complete the driver installation. 7.2.5.2.5. Installing VirtIO drivers from a container disk added as a SATA CD drive You can install VirtIO drivers from a container disk that you add to a Windows virtual machine (VM) as a SATA CD drive. Tip Downloading the container-native-virtualization/virtio-win container disk from the Red Hat Ecosystem Catalog is not mandatory, because the container disk is downloaded from the Red Hat registry if it not already present in the cluster. However, downloading reduces the installation time. Prerequisites You must have access to the Red Hat registry or to the downloaded container-native-virtualization/virtio-win container disk in a restricted environment. Procedure Add the container-native-virtualization/virtio-win container disk as a CD drive by editing the VirtualMachine manifest: # ... spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk 1 OpenShift Virtualization boots the VM disks in the order defined in the VirtualMachine manifest. You can either define other VM disks that boot before the container-native-virtualization/virtio-win container disk or use the optional bootOrder parameter to ensure the VM boots from the correct disk. If you configure the boot order for a disk, you must configure the boot order for the other disks. Apply the changes: If the VM is not running, run the following command: USD virtctl start <vm> -n <namespace> If the VM is running, reboot the VM or run the following command: USD oc apply -f <vm.yaml> After the VM has started, install the VirtIO drivers from the SATA CD drive. 7.2.5.3. Updating VirtIO drivers 7.2.5.3.1. Updating VirtIO drivers on a Windows VM Update the virtio drivers on a Windows virtual machine (VM) by using the Windows Update service. Prerequisites The cluster must be connected to the internet. Disconnected clusters cannot reach the Windows Update service. Procedure In the Windows Guest operating system, click the Windows key and select Settings . Navigate to Windows Update Advanced Options Optional Updates . Install all updates from Red Hat, Inc. . Reboot the VM. Verification On the Windows VM, navigate to the Device Manager . Select a device. Select the Driver tab. Click Driver Details and confirm that the virtio driver details displays the correct version. 7.2.6. Cloning VMs You can clone virtual machines (VMs) or create new VMs from snapshots. Important Cloning of a VM with a vTPM device attached to it is not supported. 7.2.6.1. Cloning a VM by using the web console You can clone an existing VM by using the web console. Procedure Navigate to Virtualization VirtualMachines in the web console. Select a VM to open the VirtualMachine details page. Click Actions . Select Clone . 
On the Clone VirtualMachine page, enter the name of the new VM. (Optional) Select the Start cloned VM checkbox to start the cloned VM. Click Clone . 7.2.6.2. Creating a VM from an existing snapshot by using the web console You can create a new VM by copying an existing snapshot. Procedure Navigate to Virtualization VirtualMachines in the web console. Select a VM to open the VirtualMachine details page. Click the Snapshots tab. Click the actions menu for the snapshot you want to copy. Select Create VirtualMachine . Enter the name of the virtual machine. (Optional) Select the Start this VirtualMachine after creation checkbox to start the new virtual machine. Click Create . 7.2.6.3. Additional resources Creating VMs by cloning PVCs 7.2.7. Creating VMs by cloning PVCs You can create virtual machines (VMs) by cloning existing persistent volume claims (PVCs) with custom images. You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat. You clone a PVC by creating a data volume that references a source PVC. 7.2.7.1. About cloning When cloning a data volume, the Containerized Data Importer (CDI) chooses one of the following Container Storage Interface (CSI) clone methods: CSI volume cloning Smart cloning Both CSI volume cloning and smart cloning methods are efficient, but they have certain requirements for use. If the requirements are not met, the CDI uses host-assisted cloning. Host-assisted cloning is the slowest and least efficient method of cloning, but it has fewer requirements than either of the other two cloning methods. 7.2.7.1.1. CSI volume cloning Container Storage Interface (CSI) cloning uses CSI driver features to more efficiently clone a source data volume. CSI volume cloning has the following requirements: The CSI driver that backs the storage class of the persistent volume claim (PVC) must support volume cloning. For provisioners not recognized by the CDI, the corresponding storage profile must have the cloneStrategy set to CSI Volume Cloning. The source and target PVCs must have the same storage class and volume mode. If you create the data volume, you must have permission to create the datavolumes/source resource in the source namespace. The source volume must not be in use. 7.2.7.1.2. Smart cloning When a Container Storage Interface (CSI) plugin with snapshot capabilities is available, the Containerized Data Importer (CDI) creates a persistent volume claim (PVC) from a snapshot, which then allows efficient cloning of additional PVCs. Smart cloning has the following requirements: A snapshot class associated with the storage class must exist. The source and target PVCs must have the same storage class and volume mode. If you create the data volume, you must have permission to create the datavolumes/source resource in the source namespace. The source volume must not be in use. 7.2.7.1.3. Host-assisted cloning When the requirements for neither Container Storage Interface (CSI) volume cloning nor smart cloning have been met, host-assisted cloning is used as a fallback method. Host-assisted cloning is less efficient than either of the two other cloning methods. Host-assisted cloning uses a source pod and a target pod to copy data from the source volume to the target volume. The target persistent volume claim (PVC) is annotated with the fallback reason that explains why host-assisted cloning has been used, and an event is created. 
Example PVC target annotation apiVersion: v1 kind: PersistentVolumeClaim metadata: annotations: cdi.kubevirt.io/cloneFallbackReason: The volume modes of source and target are incompatible cdi.kubevirt.io/clonePhase: Succeeded cdi.kubevirt.io/cloneType: copy Example event NAMESPACE LAST SEEN TYPE REASON OBJECT MESSAGE test-ns 0s Warning IncompatibleVolumeModes persistentvolumeclaim/test-target The volume modes of source and target are incompatible 7.2.7.2. Creating a VM from a PVC by using the web console You can create a virtual machine (VM) by cloning a persistent volume claim (PVC) by using the OpenShift Container Platform web console. Prerequisites You must have access to the namespace that contains the source PVC. Procedure Navigate to Virtualization Catalog in the web console. Click a template tile without an available boot source. Click Customize VirtualMachine . On the Customize template parameters page, expand Storage and select PVC (clone PVC) from the Disk source list. Select the PVC project and the PVC name. Set the disk size. Click . Click Create VirtualMachine . 7.2.7.3. Creating a VM from a PVC by using the command line You can create a virtual machine (VM) by cloning the persistent volume claim (PVC) of an existing VM by using the command line. You can clone a PVC by using one of the following options: Cloning a PVC to a new data volume. This method creates a data volume whose lifecycle is independent of the original VM. Deleting the original VM does not affect the new data volume or its associated PVC. Cloning a PVC by creating a VirtualMachine manifest with a dataVolumeTemplates stanza. This method creates a data volume whose lifecycle is dependent on the original VM. Deleting the original VM deletes the cloned data volume and its associated PVC. 7.2.7.3.1. Cloning a PVC to a data volume You can clone the persistent volume claim (PVC) of an existing virtual machine (VM) disk to a data volume by using the command line. You create a data volume that references the original source PVC. The lifecycle of the new data volume is independent of the original VM. Deleting the original VM does not affect the new data volume or its associated PVC. Cloning between different volume modes is supported for host-assisted cloning, such as cloning from a block persistent volume (PV) to a file system PV, as long as the source and target PVs belong to the kubevirt content type. Note Smart-cloning is faster and more efficient than host-assisted cloning because it uses snapshots to clone PVCs. Smart-cloning is supported by storage providers that support snapshots, such as Red Hat OpenShift Data Foundation. Cloning between different volume modes is not supported for smart-cloning. Prerequisites The VM with the source PVC must be powered down. If you clone a PVC to a different namespace, you must have permissions to create resources in the target namespace. Additional prerequisites for smart-cloning: Your storage provider must support snapshots. The source and target PVCs must have the same storage provider and volume mode.
The value of the driver key of the VolumeSnapshotClass object must match the value of the provisioner key of the StorageClass object as shown in the following example: Example VolumeSnapshotClass object kind: VolumeSnapshotClass apiVersion: snapshot.storage.k8s.io/v1 driver: openshift-storage.rbd.csi.ceph.com # ... Example StorageClass object kind: StorageClass apiVersion: storage.k8s.io/v1 # ... provisioner: openshift-storage.rbd.csi.ceph.com Procedure Create a DataVolume manifest as shown in the following example: apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <datavolume> 1 spec: source: pvc: namespace: "<source_namespace>" 2 name: "<my_vm_disk>" 3 storage: {} 1 Specify the name of the new data volume. 2 Specify the namespace of the source PVC. 3 Specify the name of the source PVC. Create the data volume by running the following command: USD oc create -f <datavolume>.yaml Note Data volumes prevent a VM from starting before the PVC is prepared. You can create a VM that references the new data volume while the PVC is being cloned. 7.2.7.3.2. Creating a VM from a cloned PVC by using a data volume template You can create a virtual machine (VM) that clones the persistent volume claim (PVC) of an existing VM by using a data volume template. This method creates a data volume whose lifecycle is dependent on the original VM. Deleting the original VM deletes the cloned data volume and its associated PVC. Prerequisites The VM with the source PVC must be powered down. Procedure Create a VirtualMachine manifest as shown in the following example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-dv-clone name: vm-dv-clone 1 spec: running: false template: metadata: labels: kubevirt.io/vm: vm-dv-clone spec: domain: devices: disks: - disk: bus: virtio name: root-disk resources: requests: memory: 64M volumes: - dataVolume: name: favorite-clone name: root-disk dataVolumeTemplates: - metadata: name: favorite-clone spec: storage: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi source: pvc: namespace: <source_namespace> 2 name: "<source_pvc>" 3 1 Specify the name of the VM. 2 Specify the namespace of the source PVC. 3 Specify the name of the source PVC. Create the virtual machine with the PVC-cloned data volume: USD oc create -f <vm-clone-datavolumetemplate>.yaml 7.3. Connecting to virtual machine consoles You can connect to the following consoles to access running virtual machines (VMs): VNC console Serial console Desktop viewer for Windows VMs 7.3.1. Connecting to the VNC console You can connect to the VNC console of a virtual machine by using the OpenShift Container Platform web console or the virtctl command line tool. 7.3.1.1. Connecting to the VNC console by using the web console You can connect to the VNC console of a virtual machine (VM) by using the OpenShift Container Platform web console. Note If you connect to a Windows VM with a vGPU assigned as a mediated device, you can switch between the default display and the vGPU display. Procedure On the Virtualization VirtualMachines page, click a VM to open the VirtualMachine details page. Click the Console tab. The VNC console session starts automatically. Optional: To switch to the vGPU display of a Windows VM, select Ctl + Alt + 2 from the Send key list. Select Ctl + Alt + 1 from the Send key list to restore the default display. To end the console session, click outside the console pane and then click Disconnect . 7.3.1.2. 
Connecting to the VNC console by using virtctl You can use the virtctl command line tool to connect to the VNC console of a running virtual machine. Note If you run the virtctl vnc command on a remote machine over an SSH connection, you must forward the X session to your local machine by running the ssh command with the -X or -Y flags. Prerequisites You must install the virt-viewer package. Procedure Run the following command to start the console session: USD virtctl vnc <vm_name> If the connection fails, run the following command to collect troubleshooting information: USD virtctl vnc <vm_name> -v 4 7.3.1.3. Generating a temporary token for the VNC console To access the VNC console of a virtual machine (VM), generate a temporary authentication bearer token for the Kubernetes API. Note Kubernetes also supports authentication using client certificates, instead of a bearer token, by modifying the curl command. Prerequisites You have a running VM and have installed OpenShift Virtualization 4.14 or later and ssp-operator 4.14 or later. Procedure Enable the feature gate in the HyperConverged ( HCO ) custom resource (CR): USD oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{"op": "replace", "path": "/spec/featureGates/deployVmConsoleProxy", "value": true}]' Generate a token by entering the following command: USD curl --header "Authorization: Bearer USD{TOKEN}" \ "https://api.<cluster_fqdn>/apis/token.kubevirt.io/v1alpha1/namespaces/<namespace>/virtualmachines/<vm_name>/vnc?duration=<duration>" The <duration> parameter can be set in hours and minutes, with a minimum duration of 10 minutes. For example: 5h30m . If this parameter is not set, the token is valid for 10 minutes by default. Sample output: { "token": "eyJhb..." } Optional: Use the token provided in the output to create a variable: USD export VNC_TOKEN="<token>" You can now use the token to access the VNC console of a VM. Verification Log in to the cluster by entering the following command: USD oc login --token USD{VNC_TOKEN} Test access to the VNC console of the VM by using the virtctl command: USD virtctl vnc <vm_name> -n <namespace> Warning It is currently not possible to revoke a specific token. To revoke a token, you must delete the service account that was used to create it. However, this also revokes all other tokens that were created by using the service account. Use the following command with caution: USD kubectl delete serviceaccount --namespace "<namespace>" "<vm_name>-vnc-access" 7.3.1.3.1. Granting token generation permission for the VNC console by using the cluster role As a cluster administrator, you can install a cluster role and bind it to a user or service account to allow access to the endpoint that generates tokens for the VNC console. Procedure Choose to bind the cluster role to either a user or service account. Run the following command to bind the cluster role to a user: USD kubectl create rolebinding "USD{ROLE_BINDING_NAME}" --clusterrole="token.kubevirt.io:generate" --user="USD{USER_NAME}" Run the following command to bind the cluster role to a service account: USD kubectl create rolebinding "USD{ROLE_BINDING_NAME}" --clusterrole="token.kubevirt.io:generate" --serviceaccount="USD{SERVICE_ACCOUNT_NAME}" 7.3.2. Connecting to the serial console You can connect to the serial console of a virtual machine by using the OpenShift Container Platform web console or the virtctl command line tool. Note Running concurrent VNC connections to a single virtual machine is not currently supported. 7.3.2.1.
Connecting to the serial console by using the web console You can connect to the serial console of a virtual machine (VM) by using the OpenShift Container Platform web console. Procedure On the Virtualization VirtualMachines page, click a VM to open the VirtualMachine details page. Click the Console tab. The VNC console session starts automatically. Click Disconnect to end the VNC console session. Otherwise, the VNC console session continues to run in the background. Select Serial console from the console list. To end the console session, click outside the console pane and then click Disconnect . 7.3.2.2. Connecting to the serial console by using virtctl You can use the virtctl command line tool to connect to the serial console of a running virtual machine. Procedure Run the following command to start the console session: USD virtctl console <vm_name> Press Ctrl+] to end the console session. 7.3.3. Connecting to the desktop viewer You can connect to a Windows virtual machine (VM) by using the desktop viewer and the Remote Desktop Protocol (RDP). 7.3.3.1. Connecting to the desktop viewer by using the web console You can connect to the desktop viewer of a Windows virtual machine (VM) by using the OpenShift Container Platform web console. Prerequisites You installed the QEMU guest agent on the Windows VM. You have an RDP client installed. Procedure On the Virtualization VirtualMachines page, click a VM to open the VirtualMachine details page. Click the Console tab. The VNC console session starts automatically. Click Disconnect to end the VNC console session. Otherwise, the VNC console session continues to run in the background. Select Desktop viewer from the console list. Click Create RDP Service to open the RDP Service dialog. Select Expose RDP Service and click Save to create a node port service. Click Launch Remote Desktop to download an .rdp file and launch the desktop viewer. 7.4. Specifying an instance type or preference You can specify an instance type, a preference, or both to define a set of workload sizing and runtime characteristics for reuse across multiple VMs. 7.4.1. Using flags to specify instance types and preferences Specify instance types and preferences by using flags. Prerequisites You must have an instance type, preference, or both on the cluster. Procedure To specify an instance type when creating a VM, use the --instancetype flag. To specify a preference, use the --preference flag. The following example includes both flags: USD virtctl create vm --instancetype <my_instancetype> --preference <my_preference> Optional: To specify a namespaced instance type or preference, include the kind in the value passed to the --instancetype or --preference flag command. The namespaced instance type or preference must be in the same namespace you are creating the VM in. The following example includes flags for a namespaced instance type and a namespaced preference: USD virtctl create vm --instancetype virtualmachineinstancetype/<my_instancetype> --preference virtualmachinepreference/<my_preference> 7.4.2. Inferring an instance type or preference Inferring instance types, preferences, or both is enabled by default, and the inferFromVolumeFailure policy of the inferFromVolume attribute is set to Ignore . When inferring from the boot volume, errors are ignored, and the VM is created with the instance type and preference left unset. However, when flags are applied, the inferFromVolumeFailure policy defaults to Reject . 
When inferring from the boot volume, errors result in the rejection of the creation of that VM. You can use the --infer-instancetype and --infer-preference flags to infer which instance type, preference, or both to use to define the workload sizing and runtime characteristics of a VM. Prerequisites You have installed the virtctl tool. Procedure To explicitly infer instance types from the volume used to boot the virtual machine, use the --infer-instancetype flag. To explicitly infer preferences, use the --infer-preference flag. The following command includes both flags: USD virtctl create vm --volume-import type:pvc,src:my-ns/my-pvc --infer-instancetype --infer-preference 7.4.3. Setting the inferFromVolume labels Use the following labels on your PVC, data source, or data volume to instruct the inference mechanism which instance type, preference, or both to use when trying to boot from a volume. A cluster-wide instance type: instancetype.kubevirt.io/default-instancetype label. A namespaced instance type: instancetype.kubevirt.io/default-instancetype-kind label. Defaults to the VirtualMachineClusterInstancetype label if left empty. A cluster-wide preference: instancetype.kubevirt.io/default-preference label. A namespaced preference: instancetype.kubevirt.io/default-preference-kind label. Defaults to VirtualMachineClusterPreference label, if left empty. Prerequisites You must have an instance type, preference, or both on the cluster. Procedure To apply a label to a data source, use oc label . The following command applies a label that points to a cluster-wide instance type: USD oc label DataSource foo instancetype.kubevirt.io/default-instancetype=<my_instancetype> 7.5. Configuring SSH access to virtual machines You can configure SSH access to virtual machines (VMs) by using the following methods: virtctl ssh command You create an SSH key pair, add the public key to a VM, and connect to the VM by running the virtctl ssh command with the private key. You can add public SSH keys to Red Hat Enterprise Linux (RHEL) 9 VMs at runtime or at first boot to VMs with guest operating systems that can be configured by using a cloud-init data source. virtctl port-forward command You add the virtctl port-forward command to your .ssh/config file and connect to the VM by using OpenSSH. Service You create a service, associate the service with the VM, and connect to the IP address and port exposed by the service. Secondary network You configure a secondary network, attach a virtual machine (VM) to the secondary network interface, and connect to the DHCP-allocated IP address. 7.5.1. Access configuration considerations Each method for configuring access to a virtual machine (VM) has advantages and limitations, depending on the traffic load and client requirements. Services provide excellent performance and are recommended for applications that are accessed from outside the cluster. If the internal cluster network cannot handle the traffic load, you can configure a secondary network. virtctl ssh and virtctl port-forwarding commands Simple to configure. Recommended for troubleshooting VMs. virtctl port-forwarding recommended for automated configuration of VMs with Ansible. Dynamic public SSH keys can be used to provision VMs with Ansible. Not recommended for high-traffic applications like Rsync or Remote Desktop Protocol because of the burden on the API server. The API server must be able to handle the traffic load. The clients must be able to access the API server.
The clients must have access credentials for the cluster. Cluster IP service The internal cluster network must be able to handle the traffic load. The clients must be able to access an internal cluster IP address. Node port service The internal cluster network must be able to handle the traffic load. The clients must be able to access at least one node. Load balancer service A load balancer must be configured. Each node must be able to handle the traffic load of one or more load balancer services. Secondary network Excellent performance because traffic does not go through the internal cluster network. Allows a flexible approach to network topology. Guest operating system must be configured with appropriate security because the VM is exposed directly to the secondary network. If a VM is compromised, an intruder could gain access to the secondary network. 7.5.2. Using virtctl ssh You can add a public SSH key to a virtual machine (VM) and connect to the VM by running the virtctl ssh command. This method is simple to configure. However, it is not recommended for high traffic loads because it places a burden on the API server. 7.5.2.1. About static and dynamic SSH key management You can add public SSH keys to virtual machines (VMs) statically at first boot or dynamically at runtime. Note Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection. Static SSH key management You can add a statically managed SSH key to a VM with a guest operating system that supports configuration by using a cloud-init data source. The key is added to the virtual machine (VM) at first boot. You can add the key by using one of the following methods: Add a key to a single VM when you create it by using the web console or the command line. Add a key to a project by using the web console. Afterwards, the key is automatically added to the VMs that you create in this project. Use cases As a VM owner, you can provision all your newly created VMs with a single key. Dynamic SSH key management You can enable dynamic SSH key management for a VM with Red Hat Enterprise Linux (RHEL) 9 installed. Afterwards, you can update the key during runtime. The key is added by the QEMU guest agent, which is installed with Red Hat boot sources. When dynamic key management is disabled, the default key management setting of a VM is determined by the image used for the VM. Use cases Granting or revoking access to VMs: As a cluster administrator, you can grant or revoke remote VM access by adding or removing the keys of individual users from a Secret object that is applied to all VMs in a namespace. User access: You can add your access credentials to all VMs that you create and manage. Ansible provisioning: As an operations team member, you can create a single secret that contains all the keys used for Ansible provisioning. As a VM owner, you can create a VM and attach the keys used for Ansible provisioning. Key rotation: As a cluster administrator, you can rotate the Ansible provisioner keys used by VMs in a namespace. As a workload owner, you can rotate the key for the VMs that you manage. 7.5.2.2. Static key management You can add a statically managed public SSH key when you create a virtual machine (VM) by using the OpenShift Container Platform web console or the command line. The key is added as a cloud-init data source when the VM boots for the first time. You can also add a public SSH key to a project when you create a VM by using the web console. 
The key is saved as a secret and is added automatically to all VMs that you create. Note If you add a secret to a project and then delete the VM, the secret is retained because it is a namespace resource. You must delete the secret manually. 7.5.2.2.1. Adding a key when creating a VM from a template You can add a statically managed public SSH key when you create a virtual machine (VM) by using the OpenShift Container Platform web console. The key is added to the VM as a cloud-init data source at first boot. This method does not affect cloud-init user data. Optional: You can add a key to a project. Afterwards, this key is added automatically to VMs that you create in the project. Prerequisites You generated an SSH key pair by running the ssh-keygen command. Procedure Navigate to Virtualization Catalog in the web console. Click a template tile. The guest operating system must support configuration from a cloud-init data source. Click Customize VirtualMachine . Click . Click the Scripts tab. If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key and select one of the following options: Use existing : Select a secret from the secrets list. Add new : Browse to the SSH key file or paste the file in the key field. Enter the secret name. Optional: Select Automatically apply this key to any new VirtualMachine you create in this project . Click Save . Click Create VirtualMachine . The VirtualMachine details page displays the progress of the VM creation. Verification Click the Scripts tab on the Configuration tab. The secret name is displayed in the Authorized SSH key section. 7.5.2.2.2. Adding a key when creating a VM from an instance type by using the web console You can create a virtual machine (VM) from an instance type by using the OpenShift Container Platform web console. You can also use the web console to create a VM by copying an existing snapshot or to clone a VM. You can create a VM from a list of available bootable volumes. You can add Linux- or Windows-based volumes to the list. You can add a statically managed SSH key when you create a virtual machine (VM) from an instance type by using the OpenShift Container Platform web console. The key is added to the VM as a cloud-init data source at first boot. This method does not affect cloud-init user data. Procedure In the web console, navigate to Virtualization Catalog . The InstanceTypes tab opens by default. Select either of the following options: Select a suitable bootable volume from the list. If the list is truncated, click the Show all button to display the entire list. Note The bootable volume table lists only those volumes in the openshift-virtualization-os-images namespace that have the instancetype.kubevirt.io/default-preference label. Optional: Click the star icon to designate a bootable volume as a favorite. Starred bootable volumes appear first in the volume list. Click Add volume to upload a new volume or to use an existing persistent volume claim (PVC), a volume snapshot, or a containerDisk volume. Click Save . Logos of operating systems that are not available in the cluster are shown at the bottom of the list. You can add a volume for the required operating system by clicking the Add volume link. In addition, there is a link to the Create a Windows boot source quick start. The same link appears in a popover if you hover the pointer over the question mark icon to the Select volume to boot from line. 
Immediately after you install the environment or when the environment is disconnected, the list of volumes to boot from is empty. In that case, three operating system logos are displayed: Windows, RHEL, and Linux. You can add a new volume that meets your requirements by clicking the Add volume button. Click an instance type tile and select the resource size appropriate for your workload. Optional: Choose the virtual machine details, including the VM's name, that apply to the volume you are booting from: For a Linux-based volume, follow these steps to configure SSH: If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key in the VirtualMachine details section. Select one of the following options: Use existing : Select a secret from the secrets list. Add new : Follow these steps: Browse to the public SSH key file or paste the file in the key field. Enter the secret name. Optional: Select Automatically apply this key to any new VirtualMachine you create in this project . Click Save . For a Windows volume, follow either of these sets of steps to configure sysprep options: If you have not already added sysprep options for the Windows volume, follow these steps: Click the edit icon beside Sysprep in the VirtualMachine details section. Add the Autounattend.xml answer file. Add the Unattend.xml answer file. Click Save . If you want to use existing sysprep options for the Windows volume, follow these steps: Click Attach existing sysprep . Enter the name of the existing sysprep Unattend.xml answer file. Click Save . Optional: If you are creating a Windows VM, you can mount a Windows driver disk: Click the Customize VirtualMachine button. On the VirtualMachine details page, click Storage . Select the Mount Windows drivers disk checkbox. Optional: Click View YAML & CLI to view the YAML file. Click CLI to view the CLI commands. You can also download or copy either the YAML file contents or the CLI commands. Click Create VirtualMachine . After the VM is created, you can monitor the status on the VirtualMachine details page. 7.5.2.2.3. Adding a key when creating a VM by using the command line You can add a statically managed public SSH key when you create a virtual machine (VM) by using the command line. The key is added to the VM as a cloud-init data source at first boot. This method separates the access credentials from the application data in the cloud-init user data. This method does not affect cloud-init user data. Prerequisites You generated an SSH key pair by running the ssh-keygen command. Procedure Create a manifest file for a VirtualMachine object and a Secret object: Example manifest apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: dataVolumeTemplates: - metadata: name: example-vm-volume spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: {} instancetype: name: u1.medium preference: name: rhel.9 running: true template: spec: domain: devices: {} volumes: - dataVolume: name: example-vm-volume name: rootdisk - cloudInitNoCloud: 1 userData: |- #cloud-config user: cloud-user name: cloudinitdisk accessCredentials: - sshPublicKey: propagationMethod: noCloud: {} source: secret: secretName: authorized-keys 2 --- apiVersion: v1 kind: Secret metadata: name: authorized-keys data: key: c3NoLXJzYSB... 3 1 Specify the cloudInitNoCloud data source. 2 Specify the Secret object name. 3 Paste the public SSH key.
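Note The value of the key field in the Secret object's data stanza must be base64 encoded. A minimal sketch for producing that value, assuming a GNU coreutils base64 command and a public key stored at ~/.ssh/id_rsa.pub (substitute the path to your own key); paste the output into the key field:
# Base64-encode the public key on a single line for the Secret's data.key field
base64 -w0 ~/.ssh/id_rsa.pub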
Create the VirtualMachine and Secret objects by running the following command: USD oc create -f <manifest_file>.yaml Start the VM by running the following command: USD virtctl start example-vm -n example-namespace Verification Get the VM configuration: USD oc describe vm example-vm -n example-namespace Example output apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: template: spec: accessCredentials: - sshPublicKey: propagationMethod: noCloud: {} source: secret: secretName: authorized-keys # ... 7.5.2.3. Dynamic key management You can enable dynamic key injection for a virtual machine (VM) by using the OpenShift Container Platform web console or the command line. Then, you can update the key at runtime. Note Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection. If you disable dynamic key injection, the VM inherits the key management method of the image from which it was created. 7.5.2.3.1. Enabling dynamic key injection when creating a VM from a template You can enable dynamic public SSH key injection when you create a virtual machine (VM) from a template by using the OpenShift Container Platform web console. Then, you can update the key at runtime. Note Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection. The key is added to the VM by the QEMU guest agent, which is installed with RHEL 9. Prerequisites You generated an SSH key pair by running the ssh-keygen command. Procedure Navigate to Virtualization Catalog in the web console. Click the Red Hat Enterprise Linux 9 VM tile. Click Customize VirtualMachine . Click . Click the Scripts tab. If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key and select one of the following options: Use existing : Select a secret from the secrets list. Add new : Browse to the SSH key file or paste the file in the key field. Enter the secret name. Optional: Select Automatically apply this key to any new VirtualMachine you create in this project . Set Dynamic SSH key injection to on. Click Save . Click Create VirtualMachine . The VirtualMachine details page displays the progress of the VM creation. Verification Click the Scripts tab on the Configuration tab. The secret name is displayed in the Authorized SSH key section. 7.5.2.3.2. Enabling dynamic key injection when creating a VM from an instance type by using the web console You can create a virtual machine (VM) from an instance type by using the OpenShift Container Platform web console. You can also use the web console to create a VM by copying an existing snapshot or to clone a VM. You can create a VM from a list of available bootable volumes. You can add Linux- or Windows-based volumes to the list. You can enable dynamic SSH key injection when you create a virtual machine (VM) from an instance type by using the OpenShift Container Platform web console. Then, you can add or revoke the key at runtime. Note Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection. The key is added to the VM by the QEMU guest agent, which is installed with RHEL 9. Procedure In the web console, navigate to Virtualization Catalog . The InstanceTypes tab opens by default. Select either of the following options: Select a suitable bootable volume from the list. If the list is truncated, click the Show all button to display the entire list.
Note The bootable volume table lists only those volumes in the openshift-virtualization-os-images namespace that have the instancetype.kubevirt.io/default-preference label. Optional: Click the star icon to designate a bootable volume as a favorite. Starred bootable volumes appear first in the volume list. Click Add volume to upload a new volume or to use an existing persistent volume claim (PVC), a volume snapshot, or a containerDisk volume. Click Save . Logos of operating systems that are not available in the cluster are shown at the bottom of the list. You can add a volume for the required operating system by clicking the Add volume link. In addition, there is a link to the Create a Windows boot source quick start. The same link appears in a popover if you hover the pointer over the question mark icon to the Select volume to boot from line. Immediately after you install the environment or when the environment is disconnected, the list of volumes to boot from is empty. In that case, three operating system logos are displayed: Windows, RHEL, and Linux. You can add a new volume that meets your requirements by clicking the Add volume button. Click an instance type tile and select the resource size appropriate for your workload. Click the Red Hat Enterprise Linux 9 VM tile. Optional: Choose the virtual machine details, including the VM's name, that apply to the volume you are booting from: For a Linux-based volume, follow these steps to configure SSH: If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key in the VirtualMachine details section. Select one of the following options: Use existing : Select a secret from the secrets list. Add new : Follow these steps: Browse to the public SSH key file or paste the file in the key field. Enter the secret name. Optional: Select Automatically apply this key to any new VirtualMachine you create in this project . Click Save . For a Windows volume, follow either of these sets of steps to configure sysprep options: If you have not already added sysprep options for the Windows volume, follow these steps: Click the edit icon beside Sysprep in the VirtualMachine details section. Add the Autounattend.xml answer file. Add the Unattend.xml answer file. Click Save . If you want to use existing sysprep options for the Windows volume, follow these steps: Click Attach existing sysprep . Enter the name of the existing sysprep Unattend.xml answer file. Click Save . Set Dynamic SSH key injection in the VirtualMachine details section to on. Optional: If you are creating a Windows VM, you can mount a Windows driver disk: Click the Customize VirtualMachine button. On the VirtualMachine details page, click Storage . Select the Mount Windows drivers disk checkbox. Optional: Click View YAML & CLI to view the YAML file. Click CLI to view the CLI commands. You can also download or copy either the YAML file contents or the CLI commands. Click Create VirtualMachine . After the VM is created, you can monitor the status on the VirtualMachine details page. 7.5.2.3.3. Enabling dynamic SSH key injection by using the web console You can enable dynamic key injection for a virtual machine (VM) by using the OpenShift Container Platform web console. Then, you can update the public SSH key at runtime. The key is added to the VM by the QEMU guest agent, which is installed with Red Hat Enterprise Linux (RHEL) 9. Prerequisites The guest operating system is RHEL 9. Procedure Navigate to Virtualization VirtualMachines in the web console.
Select a VM to open the VirtualMachine details page. On the Configuration tab, click Scripts . If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key and select one of the following options: Use existing : Select a secret from the secrets list. Add new : Browse to the SSH key file or paste the file in the key field. Enter the secret name. Optional: Select Automatically apply this key to any new VirtualMachine you create in this project . Set Dynamic SSH key injection to on. Click Save . 7.5.2.3.4. Enabling dynamic key injection by using the command line You can enable dynamic key injection for a virtual machine (VM) by using the command line. Then, you can update the public SSH key at runtime. Note Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection. The key is added to the VM by the QEMU guest agent, which is installed automatically with RHEL 9. Prerequisites You generated an SSH key pair by running the ssh-keygen command. Procedure Create a manifest file for a VirtualMachine object and a Secret object: Example manifest apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: dataVolumeTemplates: - metadata: name: example-vm-volume spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: {} instancetype: name: u1.medium preference: name: rhel.9 running: true template: spec: domain: devices: {} volumes: - dataVolume: name: example-vm-volume name: rootdisk - cloudInitNoCloud: 1 userData: |- #cloud-config runcmd: - [ setsebool, -P, virt_qemu_ga_manage_ssh, on ] name: cloudinitdisk accessCredentials: - sshPublicKey: propagationMethod: qemuGuestAgent: users: ["cloud-user"] source: secret: secretName: authorized-keys 2 --- apiVersion: v1 kind: Secret metadata: name: authorized-keys data: key: c3NoLXJzYSB... 3 1 Specify the cloudInitNoCloud data source. 2 Specify the Secret object name. 3 Paste the public SSH key. Create the VirtualMachine and Secret objects by running the following command: USD oc create -f <manifest_file>.yaml Start the VM by running the following command: USD virtctl start example-vm -n example-namespace Verification Get the VM configuration: USD oc describe vm example-vm -n example-namespace Example output apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: template: spec: accessCredentials: - sshPublicKey: propagationMethod: qemuGuestAgent: users: ["cloud-user"] source: secret: secretName: authorized-keys # ... 7.5.2.4. Using the virtctl ssh command You can access a running virtual machine (VM) by using the virtctl ssh command. Prerequisites You installed the virtctl command line tool. You added a public SSH key to the VM. You have an SSH client installed. The environment where you installed the virtctl tool has the cluster permissions required to access the VM. For example, you ran oc login or you set the KUBECONFIG environment variable. Procedure Run the virtctl ssh command: USD virtctl -n <namespace> ssh <username>@example-vm -i <ssh_key> 1 1 Specify the namespace, user name, and the SSH private key. The default SSH key location is /home/user/.ssh . If you save the key in a different location, you must specify the path.
Example USD virtctl -n my-namespace ssh cloud-user@example-vm -i my-key Tip You can copy the virtctl ssh command in the web console by selecting Copy SSH command from the options menu beside a VM on the VirtualMachines page. 7.5.3. Using the virtctl port-forward command You can use your local OpenSSH client and the virtctl port-forward command to connect to a running virtual machine (VM). You can use this method with Ansible to automate the configuration of VMs. This method is recommended for low-traffic applications because port-forwarding traffic is sent over the control plane. This method is not recommended for high-traffic applications such as Rsync or Remote Desktop Protocol because it places a heavy burden on the API server. Prerequisites You have installed the virtctl client. The virtual machine you want to access is running. The environment where you installed the virtctl tool has the cluster permissions required to access the VM. For example, you ran oc login or you set the KUBECONFIG environment variable. Procedure Add the following text to the ~/.ssh/config file on your client machine: Host vm/* ProxyCommand virtctl port-forward --stdio=true %h %p Connect to the VM by running the following command: USD ssh <user>@vm/<vm_name>.<namespace> 7.5.4. Using a service for SSH access You can create a service for a virtual machine (VM) and connect to the IP address and port exposed by the service. Services provide excellent performance and are recommended for applications that are accessed from outside the cluster or within the cluster. Ingress traffic is protected by firewalls. If the cluster network cannot handle the traffic load, consider using a secondary network for VM access. 7.5.4.1. About services A Kubernetes service exposes network access for clients to an application running on a set of pods. Services offer abstraction, load balancing, and, in the case of the NodePort and LoadBalancer types, exposure to the outside world. ClusterIP Exposes the service on an internal IP address and as a DNS name to other applications within the cluster. A single service can map to multiple virtual machines. When a client tries to connect to the service, the client's request is load balanced among available backends. ClusterIP is the default service type. NodePort Exposes the service on the same port of each selected node in the cluster. NodePort makes a port accessible from outside the cluster, as long as the node itself is externally accessible to the client. LoadBalancer Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP address to the service. Note For on-premise clusters, you can configure a load-balancing service by deploying the MetalLB Operator. 7.5.4.2. Creating a service You can create a service to expose a virtual machine (VM) by using the OpenShift Container Platform web console, virtctl command line tool, or a YAML file. 7.5.4.2.1. Enabling load balancer service creation by using the web console You can enable the creation of load balancer services for a virtual machine (VM) by using the OpenShift Container Platform web console. Prerequisites You have configured a load balancer for the cluster. You are logged in as a user with the cluster-admin role. You created a network attachment definition for the network. Procedure Navigate to Virtualization Overview . On the Settings tab, click Cluster . Expand General settings and SSH configuration . Set SSH over LoadBalancer service to on. 7.5.4.2.2. 
Creating a service by using the web console You can create a node port or load balancer service for a virtual machine (VM) by using the OpenShift Container Platform web console. Prerequisites You configured the cluster network to support either a load balancer or a node port. To create a load balancer service, you enabled the creation of load balancer services. Procedure Navigate to VirtualMachines and select a virtual machine to view the VirtualMachine details page. On the Details tab, select SSH over LoadBalancer from the SSH service type list. Optional: Click the copy icon to copy the SSH command to your clipboard. Verification Check the Services pane on the Details tab to view the new service. 7.5.4.2.3. Creating a service by using virtctl You can create a service for a virtual machine (VM) by using the virtctl command line tool. Prerequisites You installed the virtctl command line tool. You configured the cluster network to support the service. The environment where you installed virtctl has the cluster permissions required to access the VM. For example, you ran oc login or you set the KUBECONFIG environment variable. Procedure Create a service by running the following command: USD virtctl expose vm <vm_name> --name <service_name> --type <service_type> --port <port> 1 1 Specify the ClusterIP , NodePort , or LoadBalancer service type. Example USD virtctl expose vm example-vm --name example-service --type NodePort --port 22 Verification Verify the service by running the following command: USD oc get service steps After you create a service with virtctl , you must add special: key to the spec.template.metadata.labels stanza of the VirtualMachine manifest. See Creating a service by using the command line . 7.5.4.2.4. Creating a service by using the command line You can create a service and associate it with a virtual machine (VM) by using the command line. Prerequisites You configured the cluster network to support the service. Procedure Edit the VirtualMachine manifest to add the label for service creation: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: running: false template: metadata: labels: special: key 1 # ... 1 Add special: key to the spec.template.metadata.labels stanza. Note Labels on a virtual machine are passed through to the pod. The special: key label must match the label in the spec.selector attribute of the Service manifest. Save the VirtualMachine manifest file to apply your changes. Create a Service manifest to expose the VM: apiVersion: v1 kind: Service metadata: name: example-service namespace: example-namespace spec: # ... selector: special: key 1 type: NodePort 2 ports: 3 protocol: TCP port: 80 targetPort: 9376 nodePort: 30000 1 Specify the label that you added to the spec.template.metadata.labels stanza of the VirtualMachine manifest. 2 Specify ClusterIP , NodePort , or LoadBalancer . 3 Specifies a collection of network ports and protocols that you want to expose from the virtual machine. Save the Service manifest file. Create the service by running the following command: USD oc create -f example-service.yaml Restart the VM to apply the changes. Verification Query the Service object to verify that it is available: USD oc get service -n example-namespace 7.5.4.3. Connecting to a VM exposed by a service by using SSH You can connect to a virtual machine (VM) that is exposed by a service by using SSH. Prerequisites You created a service to expose the VM. You have an SSH client installed. 
You are logged in to the cluster. Procedure Run the following command to access the VM: USD ssh <user_name>@<ip_address> -p <port> 1 1 Specify the cluster IP for a cluster IP service, the node IP for a node port service, or the external IP address for a load balancer service. 7.5.5. Using a secondary network for SSH access You can configure a secondary network, attach a virtual machine (VM) to the secondary network interface, and connect to the DHCP-allocated IP address by using SSH. Important Secondary networks provide excellent performance because the traffic is not handled by the cluster network stack. However, the VMs are exposed directly to the secondary network and are not protected by firewalls. If a VM is compromised, an intruder could gain access to the secondary network. You must configure appropriate security within the operating system of the VM if you use this method. See the Multus and SR-IOV documentation in the OpenShift Virtualization Tuning & Scaling Guide for additional information about networking options. Prerequisites You configured a secondary network such as Linux bridge or SR-IOV . You created a network attachment definition for a Linux bridge network or the SR-IOV Network Operator created a network attachment definition when you created an SriovNetwork object. 7.5.5.1. Configuring a VM network interface by using the web console You can configure a network interface for a virtual machine (VM) by using the OpenShift Container Platform web console. Prerequisites You created a network attachment definition for the network. Procedure Navigate to Virtualization VirtualMachines . Click a VM to view the VirtualMachine details page. On the Configuration tab, click the Network interfaces tab. Click Add network interface . Enter the interface name and select the network attachment definition from the Network list. Click Save . Restart the VM to apply the changes. 7.5.5.2. Connecting to a VM attached to a secondary network by using SSH You can connect to a virtual machine (VM) attached to a secondary network by using SSH. Prerequisites You attached a VM to a secondary network with a DHCP server. You have an SSH client installed. Procedure Obtain the IP address of the VM by running the following command: USD oc describe vm <vm_name> -n <namespace> Connect to the VM by running the following command: USD ssh <user_name>@<ip_address> -i <ssh_key> Example USD ssh [email protected] -i ~/.ssh/id_rsa_cloud-user Note You can also access a VM attached to a secondary network interface by using the cluster FQDN . 7.6. Editing virtual machines You can update a virtual machine (VM) configuration by using the OpenShift Container Platform web console. You can update the YAML file or the VirtualMachine details page. You can also edit a VM by using the command line. To edit a VM to configure disk sharing by using virtual disks or LUN, see Configuring shared volumes for virtual machines . 7.6.1. Editing a virtual machine by using the command line You can edit a virtual machine (VM) by using the command line. Prerequisites You installed the oc CLI. Procedure Open the virtual machine configuration in your editor by running the following command: USD oc edit vm <vm_name> Edit the YAML configuration and save your changes. If you edit a running virtual machine, changes that cannot be applied dynamically take effect only after you restart the virtual machine.
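For scripted, non-interactive edits, you can also patch a single field of the VirtualMachine object instead of opening an editor. The following is a minimal sketch rather than a required step; it assumes a VM named example-vm in the example-namespace project and uses the spec.running field shown in the earlier example manifests:
# Stop the VM declaratively by setting spec.running to false
oc patch vm example-vm -n example-namespace --type merge -p '{"spec":{"running":false}}'
7.6.2.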
Adding a disk to a virtual machine You can add a virtual disk to a virtual machine (VM) by using the OpenShift Container Platform web console. Procedure Navigate to Virtualization VirtualMachines in the web console. Select a VM to open the VirtualMachine details page. On the Disks tab, click Add disk . Specify the Source , Name , Size , Type , Interface , and Storage Class . Optional: You can enable preallocation if you use a blank disk source and require maximum write performance when creating data volumes. To do so, select the Enable preallocation checkbox. Optional: You can clear Apply optimized StorageProfile settings to change the Volume Mode and Access Mode for the virtual disk. If you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map. Click Add . Note If the VM is running, you must restart the VM to apply the change. 7.6.2.1. Storage fields Field Description Blank (creates PVC) Create an empty disk. Import via URL (creates PVC) Import content via URL (HTTP or HTTPS endpoint). Use an existing PVC Use a PVC that is already available in the cluster. Clone existing PVC (creates PVC) Select an existing PVC available in the cluster and clone it. Import via Registry (creates PVC) Import content via container registry. Container (ephemeral) Upload content from a container located in a registry accessible from the cluster. The container disk should be used only for read-only filesystems such as CD-ROMs or temporary virtual machines. Name Name of the disk. The name can contain lowercase letters ( a-z ), numbers ( 0-9 ), hyphens ( - ), and periods ( . ), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, or special characters. Size Size of the disk in GiB. Type Type of disk. Example: Disk or CD-ROM Interface Type of disk device. Supported interfaces are virtIO , SATA , and SCSI . Storage Class The storage class that is used to create the disk. Advanced storage settings The following advanced storage settings are optional and available for Blank , Import via URL , and Clone existing PVC disks. If you do not specify these parameters, the system uses the default storage profile values. Parameter Option Parameter description Volume Mode Filesystem Stores the virtual disk on a file system-based volume. Block Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it. Access Mode ReadWriteOnce (RWO) Volume can be mounted as read-write by a single node. ReadWriteMany (RWX) Volume can be mounted as read-write by many nodes at one time. Note This mode is required for live migration. 7.6.3. Mounting a Windows driver disk on a virtual machine You can mount a Windows driver disk on a virtual machine (VM) by using the OpenShift Container Platform web console. Procedure Navigate to Virtualization VirtualMachines . Select the required VM to open the VirtualMachine details page. On the Configuration tab, click Storage . Select the Mount Windows drivers disk checkbox. The Windows driver disk is displayed in the list of mounted disks. 7.6.4. Adding a secret, config map, or service account to a virtual machine You add a secret, config map, or service account to a virtual machine by using the OpenShift Container Platform web console. These resources are added to the virtual machine as disks. You then mount the secret, config map, or service account as you would mount any other disk. 
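For reference, the following sketch shows roughly how an attached config map appears in the VM specification. The field names come from the KubeVirt API; the my-configmap and configmap-disk names and the serial value are hypothetical placeholders, and the web console generates the serial number for you:
spec:
  template:
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: configmap-disk
            serial: AB12CD   # hypothetical six-character serial
      volumes:
      - configMap:
          name: my-configmap
        name: configmap-disk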
If the virtual machine is running, changes do not take effect until you restart the virtual machine. The newly added resources are marked as pending changes at the top of the page. Prerequisites The secret, config map, or service account that you want to add must exist in the same namespace as the target virtual machine. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click Configuration Environment . Click Add Config Map, Secret or Service Account . Click Select a resource and select a resource from the list. A six character serial number is automatically generated for the selected resource. Optional: Click Reload to revert the environment to its last saved state. Click Save . Verification On the VirtualMachine details page, click Configuration Disks and verify that the resource is displayed in the list of disks. Restart the virtual machine by clicking Actions Restart . You can now mount the secret, config map, or service account as you would mount any other disk. Additional resources for config maps, secrets, and service accounts Understanding config maps Providing sensitive data to pods Understanding and creating service accounts 7.7. Editing boot order You can update the values for a boot order list by using the web console or the CLI. With Boot Order in the Virtual Machine Overview page, you can: Select a disk or network interface controller (NIC) and add it to the boot order list. Edit the order of the disks or NICs in the boot order list. Remove a disk or NIC from the boot order list, and return it back to the inventory of bootable sources. 7.7.1. Adding items to a boot order list in the web console Add items to a boot order list by using the web console. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click the Details tab. Click the pencil icon that is located on the right side of Boot Order . If a YAML configuration does not exist, or if this is the first time that you are creating a boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file. Click Add Source and select a bootable disk or network interface controller (NIC) for the virtual machine. Add any additional disks or NICs to the boot order list. Click Save . Note If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine. You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts. 7.7.2. Editing a boot order list in the web console Edit the boot order list in the web console. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click the Details tab. Click the pencil icon that is located on the right side of Boot Order . Choose the appropriate method to move the item in the boot order list: If you do not use a screen reader, hover over the arrow icon to the item that you want to move, drag the item up or down, and drop it in a location of your choice. If you use a screen reader, press the Up Arrow key or Down Arrow key to move the item in the boot order list. Then, press the Tab key to drop the item in a location of your choice. Click Save . 
Note If the virtual machine is running, changes to the boot order list will not take effect until you restart the virtual machine. You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts. 7.7.3. Editing a boot order list in the YAML configuration file Edit the boot order list in a YAML configuration file by using the CLI. Procedure Open the YAML configuration file for the virtual machine by running the following command: USD oc edit vm <vm_name> -n <namespace> Edit the YAML file and modify the values for the boot order associated with a disk or network interface controller (NIC). For example: disks: - bootOrder: 1 1 disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk - cdrom: bus: virtio name: cd-drive-1 interfaces: - bootOrder: 2 2 macAddress: '02:96:c4:00:00' masquerade: {} name: default 1 The boot order value specified for the disk. 2 The boot order value specified for the network interface controller. Save the YAML file. 7.7.4. Removing items from a boot order list in the web console Remove items from a boot order list by using the web console. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click the Details tab. Click the pencil icon that is located on the right side of Boot Order . Click the Remove icon next to the item. The item is removed from the boot order list and saved in the list of available boot sources. If you remove all items from the boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file. Note If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine. You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts. 7.8. Deleting virtual machines You can delete a virtual machine from the web console or by using the oc command line interface. 7.8.1. Deleting a virtual machine using the web console Deleting a virtual machine permanently removes it from the cluster. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Click the Options menu beside a virtual machine and select Delete . Alternatively, click the virtual machine name to open the VirtualMachine details page and click Actions Delete . Optional: Select With grace period or clear Delete disks . Click Delete to permanently delete the virtual machine. 7.8.2. Deleting a virtual machine by using the CLI You can delete a virtual machine by using the oc command line interface (CLI). The oc client enables you to perform actions on multiple virtual machines. Prerequisites Identify the name of the virtual machine that you want to delete. Procedure Delete the virtual machine by running the following command: USD oc delete vm <vm_name> Note This command only deletes a VM in the current project. Specify the -n <project_name> option if the VM you want to delete is in a different project or namespace.
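Because the oc client can act on several objects at once, you can also delete more than one VM in a single command. A minimal sketch, assuming the VMs run in my-namespace and carry a hypothetical app=demo label (the vm-a and vm-b names are placeholders):
# Delete several VMs by name
oc delete vm vm-a vm-b -n my-namespace
# Delete every VM that matches a label selector
oc delete vm -l app=demo -n my-namespace
7.9.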
Exporting virtual machines You can export a virtual machine (VM) and its associated disks in order to import a VM into another cluster or to analyze the volume for forensic purposes. You create a VirtualMachineExport custom resource (CR) by using the command line interface. Alternatively, you can use the virtctl vmexport command to create a VirtualMachineExport CR and to download exported volumes. Note You can migrate virtual machines between OpenShift Virtualization clusters by using the Migration Toolkit for Virtualization . 7.9.1. Creating a VirtualMachineExport custom resource You can create a VirtualMachineExport custom resource (CR) to export the following objects: Virtual machine (VM): Exports the persistent volume claims (PVCs) of a specified VM. VM snapshot: Exports PVCs contained in a VirtualMachineSnapshot CR. PVC: Exports a PVC. If the PVC is used by another pod, such as the virt-launcher pod, the export remains in a Pending state until the PVC is no longer in use. The VirtualMachineExport CR creates internal and external links for the exported volumes. Internal links are valid within the cluster. External links can be accessed by using an Ingress or Route . The export server supports the following file formats: raw : Raw disk image file. gzip : Compressed disk image file. dir : PVC directory and files. tar.gz : Compressed PVC file. Prerequisites The VM must be shut down for a VM export. Procedure Create a VirtualMachineExport manifest to export a volume from a VirtualMachine , VirtualMachineSnapshot , or PersistentVolumeClaim CR according to the following example and save it as example-export.yaml : VirtualMachineExport example apiVersion: export.kubevirt.io/v1alpha1 kind: VirtualMachineExport metadata: name: example-export spec: source: apiGroup: "kubevirt.io" 1 kind: VirtualMachine 2 name: example-vm ttlDuration: 1h 3 1 Specify the appropriate API group: "kubevirt.io" for VirtualMachine . "snapshot.kubevirt.io" for VirtualMachineSnapshot . "" for PersistentVolumeClaim . 2 Specify VirtualMachine , VirtualMachineSnapshot , or PersistentVolumeClaim . 3 Optional. The default duration is 2 hours. Create the VirtualMachineExport CR: USD oc create -f example-export.yaml Get the VirtualMachineExport CR: USD oc get vmexport example-export -o yaml The internal and external links for the exported volumes are displayed in the status stanza: Output example apiVersion: export.kubevirt.io/v1alpha1 kind: VirtualMachineExport metadata: name: example-export namespace: example spec: source: apiGroup: "" kind: PersistentVolumeClaim name: example-pvc tokenSecretRef: example-token status: conditions: - lastProbeTime: null lastTransitionTime: "2022-06-21T14:10:09Z" reason: podReady status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2022-06-21T14:09:02Z" reason: pvcBound status: "True" type: PVCReady links: external: 1 cert: |- -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- volumes: - formats: - format: raw url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img - format: gzip url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img.gz name: example-disk internal: 2 cert: |- -----BEGIN CERTIFICATE----- ... 
-----END CERTIFICATE----- volumes: - formats: - format: raw url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img - format: gzip url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img.gz name: example-disk phase: Ready serviceName: virt-export-example-export 1 External links are accessible from outside the cluster by using an Ingress or Route . 2 Internal links are only valid inside the cluster. 7.9.2. Accessing exported virtual machine manifests After you export a virtual machine (VM) or snapshot, you can get the VirtualMachine manifest and related information from the export server. Prerequisites You exported a virtual machine or VM snapshot by creating a VirtualMachineExport custom resource (CR). Note VirtualMachineExport objects that have the spec.source.kind: PersistentVolumeClaim parameter do not generate virtual machine manifests. Procedure To access the manifests, you must first copy the certificates from the source cluster to the target cluster. Log in to the source cluster. Save the certificates to the cacert.crt file by running the following command: USD oc get vmexport <export_name> -o jsonpath={.status.links.external.cert} > cacert.crt 1 1 Replace <export_name> with the metadata.name value from the VirtualMachineExport object. Copy the cacert.crt file to the target cluster. Decode the token in the source cluster and save it to the token_decode file by running the following command: USD oc get secret export-token-<export_name> -o jsonpath={.data.token} | base64 --decode > token_decode 1 1 Replace <export_name> with the metadata.name value from the VirtualMachineExport object. Copy the token_decode file to the target cluster. Get the VirtualMachineExport custom resource by running the following command: USD oc get vmexport <export_name> -o yaml Review the status.links stanza, which is divided into external and internal sections. Note the manifests.url fields within each section: Example output apiVersion: export.kubevirt.io/v1alpha1 kind: VirtualMachineExport metadata: name: example-export spec: source: apiGroup: "kubevirt.io" kind: VirtualMachine name: example-vm tokenSecretRef: example-token status: #... links: external: #... manifests: - type: all url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/all 1 - type: auth-header-secret url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret 2 internal: #... manifests: - type: all url: https://virt-export-export-pvc.default.svc/internal/manifests/all 3 - type: auth-header-secret url: https://virt-export-export-pvc.default.svc/internal/manifests/secret phase: Ready serviceName: virt-export-example-export 1 Contains the VirtualMachine manifest, DataVolume manifest, if present, and a ConfigMap manifest that contains the public certificate for the external URL's ingress or route. 2 Contains a secret containing a header that is compatible with Containerized Data Importer (CDI). The header contains a text version of the export token. 3 Contains the VirtualMachine manifest, DataVolume manifest, if present, and a ConfigMap manifest that contains the certificate for the internal URL's export server. Log in to the target cluster. 
Get the Secret manifest by running the following command: USD curl --cacert cacert.crt <secret_manifest_url> -H \ 1 "x-kubevirt-export-token:token_decode" -H \ 2 "Accept:application/yaml" 1 Replace <secret_manifest_url> with an auth-header-secret URL from the VirtualMachineExport YAML output. 2 Reference the token_decode file that you created earlier. For example: USD curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret -H "x-kubevirt-export-token:token_decode" -H "Accept:application/yaml" Get the manifests of type: all , such as the ConfigMap and VirtualMachine manifests, by running the following command: USD curl --cacert cacert.crt <all_manifest_url> -H \ 1 "x-kubevirt-export-token:token_decode" -H \ 2 "Accept:application/yaml" 1 Replace <all_manifest_url> with a URL from the VirtualMachineExport YAML output. 2 Reference the token_decode file that you created earlier. For example: USD curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/all -H "x-kubevirt-export-token:token_decode" -H "Accept:application/yaml" steps You can now create the ConfigMap and VirtualMachine objects on the target cluster by using the exported manifests. 7.10. Managing virtual machine instances If you have standalone virtual machine instances (VMIs) that were created independently outside of the OpenShift Virtualization environment, you can manage them by using the web console or by using oc or virtctl commands from the command-line interface (CLI). The virtctl command provides more virtualization options than the oc command. For example, you can use virtctl to pause a VM or expose a port. 7.10.1. About virtual machine instances A virtual machine instance (VMI) is a representation of a running virtual machine (VM). When a VMI is owned by a VM or by another object, you manage it through its owner in the web console or by using the oc command-line interface (CLI). A standalone VMI is created and started independently with a script, through automation, or by using other methods in the CLI. In your environment, you might have standalone VMIs that were developed and started outside of the OpenShift Virtualization environment. You can continue to manage those standalone VMIs by using the CLI. You can also use the web console for specific tasks associated with standalone VMIs: List standalone VMIs and their details. Edit labels and annotations for a standalone VMI. Delete a standalone VMI. When you delete a VM, the associated VMI is automatically deleted. You delete a standalone VMI directly because it is not owned by VMs or other objects. Note Before you uninstall OpenShift Virtualization, list and view the standalone VMIs by using the CLI or the web console. Then, delete any outstanding VMIs. When you edit a VM, some settings might be applied to the VMIs dynamically and without the need for a restart. Any change made to a VM object that cannot be applied to the VMIs dynamically will trigger the RestartRequired VM condition. Changes are effective on the reboot, and the condition is removed. 7.10.2. Listing all virtual machine instances using the CLI You can list all virtual machine instances (VMIs) in your cluster, including standalone VMIs and those owned by virtual machines, by using the oc command-line interface (CLI). 
Procedure List all VMIs by running the following command: USD oc get vmis -A 7.10.3. Listing standalone virtual machine instances using the web console Using the web console, you can list and view standalone virtual machine instances (VMIs) in your cluster that are not owned by virtual machines (VMs). Note VMIs that are owned by VMs or other objects are not displayed in the web console. The web console displays only standalone VMIs. If you want to list all VMIs in your cluster, you must use the CLI. Procedure Click Virtualization VirtualMachines from the side menu. You can identify a standalone VMI by a dark colored badge to its name. 7.10.4. Editing a standalone virtual machine instance using the web console You can edit the annotations and labels of a standalone virtual machine instance (VMI) using the web console. Other fields are not editable. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Select a standalone VMI to open the VirtualMachineInstance details page. On the Details tab, click the pencil icon beside Annotations or Labels . Make the relevant changes and click Save . 7.10.5. Deleting a standalone virtual machine instance using the CLI You can delete a standalone virtual machine instance (VMI) by using the oc command-line interface (CLI). Prerequisites Identify the name of the VMI that you want to delete. Procedure Delete the VMI by running the following command: USD oc delete vmi <vmi_name> 7.10.6. Deleting a standalone virtual machine instance using the web console Delete a standalone virtual machine instance (VMI) from the web console. Procedure In the OpenShift Container Platform web console, click Virtualization VirtualMachines from the side menu. Click Actions Delete VirtualMachineInstance . In the confirmation pop-up window, click Delete to permanently delete the standalone VMI. 7.11. Controlling virtual machine states You can stop, start, restart, and unpause virtual machines from the web console. You can use virtctl to manage virtual machine states and perform other actions from the CLI. For example, you can use virtctl to force stop a VM or expose a port. 7.11.1. Starting a virtual machine You can start a virtual machine from the web console. Procedure Click Virtualization VirtualMachines from the side menu. Find the row that contains the virtual machine that you want to start. Navigate to the appropriate menu for your use case: To stay on this page, where you can perform actions on multiple virtual machines: Click the Options menu located at the far right end of the row and click Start VirtualMachine . To view comprehensive information about the selected virtual machine before you start it: Access the VirtualMachine details page by clicking the name of the virtual machine. Click Actions Start . Note When you start virtual machine that is provisioned from a URL source for the first time, the virtual machine has a status of Importing while OpenShift Virtualization imports the container from the URL endpoint. Depending on the size of the image, this process might take several minutes. 7.11.2. Stopping a virtual machine You can stop a virtual machine from the web console. Procedure Click Virtualization VirtualMachines from the side menu. Find the row that contains the virtual machine that you want to stop. 
Navigate to the appropriate menu for your use case: To stay on this page, where you can perform actions on multiple virtual machines: Click the Options menu located at the far right end of the row and click Stop VirtualMachine . To view comprehensive information about the selected virtual machine before you stop it: Access the VirtualMachine details page by clicking the name of the virtual machine. Click Actions Stop . 7.11.3. Restarting a virtual machine You can restart a running virtual machine from the web console. Important To avoid errors, do not restart a virtual machine while it has a status of Importing . Procedure Click Virtualization VirtualMachines from the side menu. Find the row that contains the virtual machine that you want to restart. Navigate to the appropriate menu for your use case: To stay on this page, where you can perform actions on multiple virtual machines: Click the Options menu located at the far right end of the row and click Restart . To view comprehensive information about the selected virtual machine before you restart it: Access the VirtualMachine details page by clicking the name of the virtual machine. Click Actions Restart . 7.11.4. Pausing a virtual machine You can pause a virtual machine from the web console. Procedure Click Virtualization VirtualMachines from the side menu. Find the row that contains the virtual machine that you want to pause. Navigate to the appropriate menu for your use case: To stay on this page, where you can perform actions on multiple virtual machines: Click the Options menu located at the far right end of the row and click Pause VirtualMachine . To view comprehensive information about the selected virtual machine before you pause it: Access the VirtualMachine details page by clicking the name of the virtual machine. Click Actions Pause . 7.11.5. Unpausing a virtual machine You can unpause a paused virtual machine from the web console. Prerequisites At least one of your virtual machines must have a status of Paused . Procedure Click Virtualization VirtualMachines from the side menu. Find the row that contains the virtual machine that you want to unpause. Navigate to the appropriate menu for your use case: To stay on this page, where you can perform actions on multiple virtual machines: Click the Options menu located at the far right end of the row and click Unpause VirtualMachine . To view comprehensive information about the selected virtual machine before you unpause it: Access the VirtualMachine details page by clicking the name of the virtual machine. Click Actions Unpause . 7.12. Using virtual Trusted Platform Module devices Add a virtual Trusted Platform Module (vTPM) device to a new or existing virtual machine by editing the VirtualMachine (VM) or VirtualMachineInstance (VMI) manifest. Important Cloning or creating snapshots of virtual machines (VMs) with a vTPM device is not supported. Support for creating snapshots of VMs with vTPM devices is added in OpenShift Virtualization 4.18. 7.12.1. About vTPM devices A virtual Trusted Platform Module (vTPM) device functions like a physical Trusted Platform Module (TPM) hardware chip. You can use a vTPM device with any operating system, but Windows 11 requires the presence of a TPM chip to install or boot. A vTPM device allows VMs created from a Windows 11 image to function without a physical TPM chip. If you do not enable vTPM, then the VM does not recognize a TPM device, even if the node has one. 
A vTPM device also protects virtual machines by storing secrets without physical hardware. OpenShift Virtualization supports persisting vTPM device state by using Persistent Volume Claims (PVCs) for VMs. You must specify the storage class to be used by the PVC by setting the vmStateStorageClass attribute in the HyperConverged custom resource (CR): kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: vmStateStorageClass: <storage_class_name> # ... Note The storage class must be of type Filesystem and support the ReadWriteMany (RWX) access mode. 7.12.2. Adding a vTPM device to a virtual machine Adding a virtual Trusted Platform Module (vTPM) device to a virtual machine (VM) allows you to run a VM created from a Windows 11 image without a physical TPM device. A vTPM device also stores secrets for that VM. Prerequisites You have installed the OpenShift CLI ( oc ). You have configured a Persistent Volume Claim (PVC) to use a storage class of type Filesystem that supports the ReadWriteMany (RWX) access mode. This is necessary for the vTPM device data to persist across VM reboots. Procedure Run the following command to update the VM configuration: USD oc edit vm <vm_name> -n <namespace> Edit the VM specification to add the vTPM device. For example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: tpm: 1 persistent: true 2 # ... 1 Adds the vTPM device to the VM. 2 Specifies that the vTPM device state persists after the VM is shut down. The default value is false . To apply your changes, save and exit the editor. Optional: If you edited a running virtual machine, you must restart it for the changes to take effect. 7.13. Managing virtual machines with OpenShift Pipelines Red Hat OpenShift Pipelines is a Kubernetes-native CI/CD framework that allows developers to design and run each step of the CI/CD pipeline in its own container. By using OpenShift Pipelines tasks and the example pipeline, you can do the following: Create and manage virtual machines (VMs), persistent volume claims (PVCs), data volumes, and data sources. Run commands in VMs. Manipulate disk images with libguestfs tools. The tasks are located in the task catalog (ArtifactHub) . The example Windows pipeline is located in the pipeline catalog (ArtifactHub) . 7.13.1. Prerequisites You have access to an OpenShift Container Platform cluster with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have installed OpenShift Pipelines . 7.13.2. Supported virtual machine tasks The following table shows the supported tasks. Table 7.4. Supported virtual machine tasks Task Description create-vm-from-manifest Create a virtual machine from a provided manifest or with virtctl . create-vm-from-template Create a virtual machine from a template. copy-template Copy a virtual machine template. modify-vm-template Modify a virtual machine template. modify-data-object Create or delete data volumes or data sources. cleanup-vm Run a script or a command in a virtual machine and stop or delete the virtual machine afterward. disk-virt-customize Use the virt-customize tool to run a customization script on a target PVC. disk-virt-sysprep Use the virt-sysprep tool to run a sysprep script on a target PVC. wait-for-vmi-status Wait for a specific status of a virtual machine instance and fail or succeed based on the status. 
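For orientation, the following is a minimal sketch of how one of these catalog tasks, such as modify-data-object, might be referenced from a custom pipeline by using the Tekton hub resolver. The resolver parameters mirror the ones used by the Windows EFI installer example later in this section; the manifest task parameter and the inline DataVolume content are illustrative assumptions rather than the task's documented interface, so check the task catalog before using them.

apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: example-vm-pipeline
spec:
  tasks:
    - name: create-data-volume
      taskRef:
        resolver: hub        # resolves the task from the catalog, as in the Windows EFI installer example
        params:
          - name: catalog
            value: redhat-pipelines
          - name: type
            value: artifact
          - name: kind
            value: task
          - name: name
            value: modify-data-object
          - name: version
            value: "4.16"
      params:
        - name: manifest     # assumed parameter name for the object that the task creates
          value: |
            apiVersion: cdi.kubevirt.io/v1beta1
            kind: DataVolume
            metadata:
              name: example-dv
            spec:
              source:
                registry:
                  url: "docker://quay.io/example/example-image:latest"
              storage:
                resources:
                  requests:
                    storage: 30Gi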
Note Virtual machine creation in pipelines now utilizes ClusterInstanceType and ClusterPreference instead of template-based tasks, which have been deprecated. The create-vm-from-template , copy-template , and modify-vm-template commands remain available but are not used in default pipeline tasks. 7.13.3. Windows EFI installer pipeline You can run the Windows EFI installer pipeline by using the web console or CLI. The Windows EFI installer pipeline installs Windows 10, Windows 11, or Windows Server 2022 into a new data volume from a Windows installation image (ISO file). A custom answer file is used to run the installation process. Note The Windows EFI installer pipeline uses a config map file with sysprep predefined by OpenShift Container Platform and suitable for Microsoft ISO files. For ISO files pertaining to different Windows editions, it may be necessary to create a new config map file with a system-specific sysprep definition. 7.13.3.1. Running the example pipelines using the web console You can run the example pipelines from the Pipelines menu in the web console. Procedure Click Pipelines Pipelines in the side menu. Select a pipeline to open the Pipeline details page. From the Actions list, select Start . The Start Pipeline dialog is displayed. Keep the default values for the parameters and then click Start to run the pipeline. The Details tab tracks the progress of each task and displays the pipeline status. 7.13.3.2. Running the example pipelines using the CLI Use a PipelineRun resource to run the example pipelines. A PipelineRun object is the running instance of a pipeline. It instantiates a pipeline for execution with specific inputs, outputs, and execution parameters on a cluster. It also creates a TaskRun object for each task in the pipeline. Procedure To run the Microsoft Windows 11 installer pipeline, create the following PipelineRun manifest: apiVersion: tekton.dev/v1 kind: PipelineRun metadata: generateName: windows11-installer-run- labels: pipelinerun: windows11-installer-run spec: params: - name: winImageDownloadURL value: <windows_image_download_url> 1 - name: acceptEula value: false 2 pipelineRef: params: - name: catalog value: redhat-pipelines - name: type value: artifact - name: kind value: pipeline - name: name value: windows-efi-installer - name: version value: 4.16 resolver: hub taskRunSpecs: - pipelineTaskName: modify-windows-iso-file PodTemplate: securityContext: fsGroup: 107 runAsUser: 107 1 Specify the URL for the Windows 11 64-bit ISO file. The product's language must be English (United States). 2 Example PipelineRun objects have a special parameter, acceptEula . By setting this parameter, you are agreeing to the applicable Microsoft user license agreements for each deployment or installation of the Microsoft products. If you set it to false, the pipeline exits at the first task. Apply the PipelineRun manifest: USD oc apply -f windows11-customize-run.yaml 7.13.4. Additional resources Creating CI/CD solutions for applications using Red Hat OpenShift Pipelines Creating a Windows VM 7.14. Advanced virtual machine management 7.14.1. Working with resource quotas for virtual machines Create and manage resource quotas for virtual machines. 7.14.1.1. Setting resource quota limits for virtual machines Resource quotas that only use requests automatically work with virtual machines (VMs). If your resource quota uses limits, you must manually set resource limits on VMs. Resource limits must be at least 100 MiB larger than resource requests. 
Procedure Set limits for a VM by editing the VirtualMachine manifest. For example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: with-limits spec: running: false template: spec: domain: # ... resources: requests: memory: 128Mi limits: memory: 256Mi 1 1 This configuration is supported because the limits.memory value is at least 100Mi larger than the requests.memory value. Save the VirtualMachine manifest. 7.14.1.2. Additional resources Resource quotas per project Resource quotas across multiple projects 7.14.2. Specifying nodes for virtual machines You can place virtual machines (VMs) on specific nodes by using node placement rules. 7.14.2.1. About node placement for virtual machines To ensure that virtual machines (VMs) run on appropriate nodes, you can configure node placement rules. You might want to do this if: You have several VMs. To ensure fault tolerance, you want them to run on different nodes. You have two chatty VMs. To avoid redundant inter-node routing, you want the VMs to run on the same node. Your VMs require specific hardware features that are not present on all available nodes. You have a pod that adds capabilities to a node, and you want to place a VM on that node so that it can use those capabilities. Note Virtual machine placement relies on any existing node placement rules for workloads. If workloads are excluded from specific nodes on the component level, virtual machines cannot be placed on those nodes. You can use the following rule types in the spec field of a VirtualMachine manifest: nodeSelector Allows virtual machines to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. The node must have labels that exactly match all listed pairs. affinity Enables you to use more expressive syntax to set rules that match nodes with virtual machines. For example, you can specify that a rule is a preference, rather than a hard requirement, so that virtual machines are still scheduled if the rule is not satisfied. Pod affinity, pod anti-affinity, and node affinity are supported for virtual machine placement. Pod affinity works for virtual machines because the VirtualMachine workload type is based on the Pod object. tolerations Allows virtual machines to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts virtual machines that tolerate the taint. Note Affinity rules only apply during scheduling. OpenShift Container Platform does not reschedule running workloads if the constraints are no longer met. 7.14.2.2. Node placement examples The following example YAML file snippets use nodePlacement , affinity , and tolerations fields to customize node placement for virtual machines. 7.14.2.2.1. Example: VM node placement with nodeSelector In this example, the virtual machine requires a node that has metadata containing both example-key-1 = example-value-1 and example-key-2 = example-value-2 labels. Warning If there are no nodes that fit this description, the virtual machine is not scheduled. Example VM manifest metadata: name: example-vm-node-selector apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: nodeSelector: example-key-1: example-value-1 example-key-2: example-value-2 # ... 7.14.2.2.2. Example: VM node placement with pod affinity and pod anti-affinity In this example, the VM must be scheduled on a node that has a running pod with the label example-key-1 = example-value-1 . If there is no such pod running on any node, the VM is not scheduled. 
If possible, the VM is not scheduled on a node that has any pod with the label example-key-2 = example-value-2 . However, if all candidate nodes have a pod with this label, the scheduler ignores this constraint. Example VM manifest metadata: name: example-vm-pod-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchExpressions: - key: example-key-1 operator: In values: - example-value-1 topologyKey: kubernetes.io/hostname podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: example-key-2 operator: In values: - example-value-2 topologyKey: kubernetes.io/hostname # ... 1 If you use the requiredDuringSchedulingIgnoredDuringExecution rule type, the VM is not scheduled if the constraint is not met. 2 If you use the preferredDuringSchedulingIgnoredDuringExecution rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met. 7.14.2.2.3. Example: VM node placement with node affinity In this example, the VM must be scheduled on a node that has the label example.io/example-key = example-value-1 or the label example.io/example-key = example-value-2 . The constraint is met if only one of the labels is present on the node. If neither label is present, the VM is not scheduled. If possible, the scheduler avoids nodes that have the label example-node-label-key = example-node-label-value . However, if all candidate nodes have this label, the scheduler ignores this constraint. Example VM manifest metadata: name: example-vm-node-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 nodeSelectorTerms: - matchExpressions: - key: example.io/example-key operator: In values: - example-value-1 - example-value-2 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 preference: matchExpressions: - key: example-node-label-key operator: In values: - example-node-label-value # ... 1 If you use the requiredDuringSchedulingIgnoredDuringExecution rule type, the VM is not scheduled if the constraint is not met. 2 If you use the preferredDuringSchedulingIgnoredDuringExecution rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met. 7.14.2.2.4. Example: VM node placement with tolerations In this example, nodes that are reserved for virtual machines are already labeled with the key=virtualization:NoSchedule taint. Because this virtual machine has matching tolerations , it can schedule onto the tainted nodes. Note A virtual machine that tolerates a taint is not required to schedule onto a node with that taint. Example VM manifest metadata: name: example-vm-tolerations apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: tolerations: - key: "key" operator: "Equal" value: "virtualization" effect: "NoSchedule" # ... 7.14.2.3. Additional resources Specifying nodes for virtualization components Placing pods on specific nodes using node selectors Controlling pod placement on nodes using node affinity rules Controlling pod placement using node taints 7.14.3. Activating kernel samepage merging (KSM) OpenShift Virtualization can activate kernel samepage merging (KSM) when nodes are overloaded. KSM deduplicates identical data found in the memory pages of virtual machines (VMs). 
If you have very similar VMs, KSM can make it possible to schedule more VMs on a single node. Important You must only use KSM with trusted workloads. 7.14.3.1. Prerequisites Ensure that an administrator has configured KSM support on any nodes where you want OpenShift Virtualization to activate KSM. 7.14.3.2. About using OpenShift Virtualization to activate KSM You can configure OpenShift Virtualization to activate kernel samepage merging (KSM) when nodes experience memory overload. 7.14.3.2.1. Configuration methods You can enable or disable the KSM activation feature for all nodes by using the OpenShift Container Platform web console or by editing the HyperConverged custom resource (CR). The HyperConverged CR supports more granular configuration. CR configuration You can configure the KSM activation feature by editing the spec.configuration.ksmConfiguration stanza of the HyperConverged CR. You enable the feature and configure settings by editing the ksmConfiguration stanza. You disable the feature by deleting the ksmConfiguration stanza. You can allow OpenShift Virtualization to enable KSM on only a subset of nodes by adding node selection syntax to the ksmConfiguration.nodeLabelSelector field. Note Even if the KSM activation feature is disabled in OpenShift Virtualization, an administrator can still enable KSM on nodes that support it. 7.14.3.2.2. KSM node labels OpenShift Virtualization identifies nodes that are configured to support KSM and applies the following node labels: kubevirt.io/ksm-handler-managed: "false" This label is set to "true" when OpenShift Virtualization activates KSM on a node that is experiencing memory overload. This label is not set to "true" if an administrator activates KSM. kubevirt.io/ksm-enabled: "false" This label is set to "true" when KSM is activated on a node, even if OpenShift Virtualization did not activate KSM. These labels are not applied to nodes that do not support KSM. 7.14.3.3. Configuring KSM activation by using the web console You can allow OpenShift Virtualization to activate kernel samepage merging (KSM) on all nodes in your cluster by using the OpenShift Container Platform web console. Procedure From the side menu, click Virtualization Overview . Select the Settings tab. Select the Cluster tab. Expand Resource management . Enable or disable the feature for all nodes: Set Kernel Samepage Merging (KSM) to on. Set Kernel Samepage Merging (KSM) to off. 7.14.3.4. Configuring KSM activation by using the CLI You can enable or disable OpenShift Virtualization's kernel samepage merging (KSM) activation feature by editing the HyperConverged custom resource (CR). Use this method if you want OpenShift Virtualization to activate KSM on only a subset of nodes. Procedure Open the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Edit the ksmConfiguration stanza: To enable the KSM activation feature for all nodes, set the nodeLabelSelector value to {} . For example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: configuration: ksmConfiguration: nodeLabelSelector: {} # ... To enable the KSM activation feature on a subset of nodes, edit the nodeLabelSelector field. Add syntax that matches the nodes where you want OpenShift Virtualization to enable KSM. 
For example, the following configuration allows OpenShift Virtualization to enable KSM on nodes where both <first_example_key> and <second_example_key> are set to "true" : apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: configuration: ksmConfiguration: nodeLabelSelector: matchLabels: <first_example_key>: "true" <second_example_key>: "true" # ... To disable the KSM activation feature, delete the ksmConfiguration stanza. For example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: configuration: # ... Save the file. 7.14.3.5. Additional resources Specifying nodes for virtual machines Placing pods on specific nodes using node selectors Managing kernel samepage merging in the Red Hat Enterprise Linux (RHEL) documentation 7.14.4. Configuring certificate rotation Configure certificate rotation parameters to replace existing certificates. 7.14.4.1. Configuring certificate rotation You can do this during OpenShift Virtualization installation in the web console or after installation in the HyperConverged custom resource (CR). Procedure Open the HyperConverged CR by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Edit the spec.certConfig fields as shown in the following example. To avoid overloading the system, ensure that all values are greater than or equal to 10 minutes. Express all values as strings that comply with the golang ParseDuration format . apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: certConfig: ca: duration: 48h0m0s renewBefore: 24h0m0s 1 server: duration: 24h0m0s 2 renewBefore: 12h0m0s 3 1 The value of ca.renewBefore must be less than or equal to the value of ca.duration . 2 The value of server.duration must be less than or equal to the value of ca.duration . 3 The value of server.renewBefore must be less than or equal to the value of server.duration . Apply the YAML file to your cluster. 7.14.4.2. Troubleshooting certificate rotation parameters Deleting one or more certConfig values causes them to revert to the default values, unless the default values conflict with one of the following conditions: The value of ca.renewBefore must be less than or equal to the value of ca.duration . The value of server.duration must be less than or equal to the value of ca.duration . The value of server.renewBefore must be less than or equal to the value of server.duration . If the default values conflict with these conditions, you will receive an error. If you remove the server.duration value in the following example, the default value of 24h0m0s is greater than the value of ca.duration , conflicting with the specified conditions. Example certConfig: ca: duration: 4h0m0s renewBefore: 1h0m0s server: duration: 4h0m0s renewBefore: 4h0m0s This results in the following error message: error: hyperconvergeds.hco.kubevirt.io "kubevirt-hyperconverged" could not be patched: admission webhook "validate-hco.kubevirt.io" denied the request: spec.certConfig: ca.duration is smaller than server.duration The error message only mentions the first conflict. Review all certConfig values before you proceed. 7.14.5. Configuring the default CPU model Use the defaultCPUModel setting in the HyperConverged custom resource (CR) to define a cluster-wide default CPU model. 
The virtual machine (VM) CPU model depends on the availability of CPU models within the VM and the cluster. If the VM does not have a defined CPU model: The defaultCPUModel is automatically set using the CPU model defined at the cluster-wide level. If both the VM and the cluster have a defined CPU model: The VM's CPU model takes precedence. If neither the VM nor the cluster has a defined CPU model: The host-model is automatically set using the CPU model defined at the host level. 7.14.5.1. Configuring the default CPU model Configure the defaultCPUModel by updating the HyperConverged custom resource (CR). You can change the defaultCPUModel while OpenShift Virtualization is running. Note The defaultCPUModel is case sensitive. Prerequisites Install the OpenShift CLI (oc). Procedure Open the HyperConverged CR by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Add the defaultCPUModel field to the CR and set the value to the name of a CPU model that exists in the cluster: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: defaultCPUModel: "EPYC" Apply the YAML file to your cluster. 7.14.6. Using UEFI mode for virtual machines You can boot a virtual machine (VM) in Unified Extensible Firmware Interface (UEFI) mode. 7.14.6.1. About UEFI mode for virtual machines Unified Extensible Firmware Interface (UEFI), like legacy BIOS, initializes hardware components and operating system image files when a computer starts. UEFI supports more modern features and customization options than BIOS, enabling faster boot times. It stores all the information about initialization and startup in a file with a .efi extension, which is stored on a special partition called EFI System Partition (ESP). The ESP also contains the boot loader programs for the operating system that is installed on the computer. 7.14.6.2. Booting virtual machines in UEFI mode You can configure a virtual machine to boot in UEFI mode by editing the VirtualMachine manifest. Prerequisites Install the OpenShift CLI ( oc ). Procedure Edit or create a VirtualMachine manifest file. Use the spec.firmware.bootloader stanza to configure UEFI mode: Booting in UEFI mode with secure boot active apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: special: vm-secureboot name: vm-secureboot spec: template: metadata: labels: special: vm-secureboot spec: domain: devices: disks: - disk: bus: virtio name: containerdisk features: acpi: {} smm: enabled: true 1 firmware: bootloader: efi: secureBoot: true 2 # ... 1 OpenShift Virtualization requires System Management Mode ( SMM ) to be enabled for Secure Boot in UEFI mode to occur. 2 OpenShift Virtualization supports a VM with or without Secure Boot when using UEFI mode. If Secure Boot is enabled, then UEFI mode is required. However, UEFI mode can be enabled without using Secure Boot. Apply the manifest to your cluster by running the following command: USD oc create -f <file_name>.yaml 7.14.6.3. Enabling persistent EFI You can enable EFI persistence in a VM by configuring an RWX storage class at the cluster level and adjusting the settings in the EFI section of the VM. Prerequisites You must have cluster administrator privileges. You must have a storage class that supports RWX access mode and FS volume mode.
Procedure Enable the VMPersistentState feature gate by running the following command: USD oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \ --type json -p '[{"op":"replace","path":"/spec/featureGates/VMPersistentState", "value": true}]' 7.14.6.4. Configuring VMs with persistent EFI You can configure a VM to have EFI persistence enabled by editing its manifest file. Prerequisites VMPersistentState feature gate enabled. Procedure Edit the VM manifest file and save to apply settings. apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm spec: template: spec: domain: firmware: bootloader: efi: persistent: true # ... 7.14.7. Configuring PXE booting for virtual machines PXE booting, or network booting, is available in OpenShift Virtualization. Network booting allows a computer to boot and load an operating system or other program without requiring a locally attached storage device. For example, you can use it to choose your desired OS image from a PXE server when deploying a new host. 7.14.7.1. Prerequisites A Linux bridge must be connected . The PXE server must be connected to the same VLAN as the bridge. 7.14.7.2. PXE booting with a specified MAC address As an administrator, you can boot a client over the network by first creating a NetworkAttachmentDefinition object for your PXE network. Then, reference the network attachment definition in your virtual machine instance configuration file before you start the virtual machine instance. You can also specify a MAC address in the virtual machine instance configuration file, if required by the PXE server. Prerequisites A Linux bridge must be connected. The PXE server must be connected to the same VLAN as the bridge. Procedure Configure a PXE network on the cluster: Create the network attachment definition file for PXE network pxe-net-conf : apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: pxe-net-conf 1 spec: config: | { "cniVersion": "0.3.1", "name": "pxe-net-conf", 2 "type": "bridge", 3 "bridge": "bridge-interface", 4 "macspoofchk": false, 5 "vlan": 100, 6 "disableContainerInterface": true, "preserveDefaultVlan": false 7 } 1 The name for the NetworkAttachmentDefinition object. 2 The name for the configuration. It is recommended to match the configuration name to the name value of the network attachment definition. 3 The actual name of the Container Network Interface (CNI) plugin that provides the network for this network attachment definition. This example uses a Linux bridge CNI plugin. You can also use an OVN-Kubernetes localnet or an SR-IOV CNI plugin. 4 The name of the Linux bridge configured on the node. 5 Optional: A flag to enable the MAC spoof check. When set to true , you cannot change the MAC address of the pod or guest interface. This attribute allows only a single MAC address to exit the pod, which provides security against a MAC spoofing attack. 6 Optional: The VLAN tag. No additional VLAN configuration is required on the node network configuration policy. 7 Optional: Indicates whether the VM connects to the bridge through the default VLAN. The default value is true . Create the network attachment definition by using the file you created in the step: USD oc create -f pxe-net-conf.yaml Edit the virtual machine instance configuration file to include the details of the interface and network. Specify the network and MAC address, if required by the PXE server. If the MAC address is not specified, a value is assigned automatically. 
Ensure that bootOrder is set to 1 so that the interface boots first. In this example, the interface is connected to a network called <pxe-net> : interfaces: - masquerade: {} name: default - bridge: {} name: pxe-net macAddress: de:00:00:00:00:de bootOrder: 1 Note Boot order is global for interfaces and disks. Assign a boot device number to the disk to ensure proper booting after operating system provisioning. Set the disk bootOrder value to 2 : devices: disks: - disk: bus: virtio name: containerdisk bootOrder: 2 Specify that the network is connected to the previously created network attachment definition. In this scenario, <pxe-net> is connected to the network attachment definition called <pxe-net-conf> : networks: - name: default pod: {} - name: pxe-net multus: networkName: pxe-net-conf Create the virtual machine instance: USD oc create -f vmi-pxe-boot.yaml Example output virtualmachineinstance.kubevirt.io "vmi-pxe-boot" created Wait for the virtual machine instance to run: USD oc get vmi vmi-pxe-boot -o yaml | grep -i phase phase: Running View the virtual machine instance using VNC: USD virtctl vnc vmi-pxe-boot Watch the boot screen to verify that the PXE boot is successful. Log in to the virtual machine instance: USD virtctl console vmi-pxe-boot Verification Verify the interfaces and MAC address on the virtual machine and that the interface connected to the bridge has the specified MAC address. In this case, we used eth1 for the PXE boot, without an IP address. The other interface, eth0 , got an IP address from OpenShift Container Platform. USD ip addr Example output ... 3. eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff 7.14.7.3. OpenShift Virtualization networking glossary The following terms are used throughout OpenShift Virtualization documentation: Container Network Interface (CNI) A Cloud Native Computing Foundation project, focused on container network connectivity. OpenShift Virtualization uses CNI plugins to build upon the basic Kubernetes networking functionality. Multus A "meta" CNI plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs. Custom resource definition (CRD) A Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource. Network attachment definition (NAD) A CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks. Node network configuration policy (NNCP) A CRD introduced by the nmstate project, describing the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster. 7.14.8. Using huge pages with virtual machines You can use huge pages as backing memory for virtual machines in your cluster. 7.14.8.1. Prerequisites Nodes must have pre-allocated huge pages configured . 7.14.8.2. What huge pages do Memory is managed in blocks known as pages. On most systems, a page is 4Ki. 1Mi of memory is equal to 256 pages; 1Gi of memory is 256,000 pages, and so on. CPUs have a built-in memory management unit that manages a list of these pages in hardware. The Translation Lookaside Buffer (TLB) is a small hardware cache of virtual-to-physical page mappings. 
If the virtual address passed in a hardware instruction can be found in the TLB, the mapping can be determined quickly. If not, a TLB miss occurs, and the system falls back to slower, software-based address translation, resulting in performance issues. Since the size of the TLB is fixed, the only way to reduce the chance of a TLB miss is to increase the page size. A huge page is a memory page that is larger than 4Ki. On x86_64 architectures, there are two common huge page sizes: 2Mi and 1Gi. Sizes vary on other architectures. To use huge pages, code must be written so that applications are aware of them. Transparent Huge Pages (THP) attempt to automate the management of huge pages without application knowledge, but they have limitations. In particular, they are limited to 2Mi page sizes. THP can lead to performance degradation on nodes with high memory utilization or fragmentation due to defragmenting efforts of THP, which can lock memory pages. For this reason, some applications may be designed to (or recommend) usage of pre-allocated huge pages instead of THP. In OpenShift Virtualization, virtual machines can be configured to consume pre-allocated huge pages. 7.14.8.3. Configuring huge pages for virtual machines You can configure virtual machines to use pre-allocated huge pages by including the memory.hugepages.pageSize and resources.requests.memory parameters in your virtual machine configuration. The memory request must be divisible by the page size. For example, you cannot request 500Mi memory with a page size of 1Gi . Note The memory layouts of the host and the guest OS are unrelated. Huge pages requested in the virtual machine manifest apply to QEMU. Huge pages inside the guest can only be configured based on the amount of available memory of the virtual machine instance. If you edit a running virtual machine, the virtual machine must be rebooted for the changes to take effect. Prerequisites Nodes must have pre-allocated huge pages configured. For instructions, see Configuring huge pages at boot time . Procedure In your virtual machine configuration, add the resources.requests.memory and memory.hugepages.pageSize parameters to the spec.domain . The following configuration snippet is for a virtual machine that requests a total of 4Gi memory with a page size of 1Gi : kind: VirtualMachine # ... spec: domain: resources: requests: memory: "4Gi" 1 memory: hugepages: pageSize: "1Gi" 2 # ... 1 The total amount of memory requested for the virtual machine. This value must be divisible by the page size. 2 The size of each huge page. Valid values for x86_64 architecture are 1Gi and 2Mi . The page size must be smaller than the requested memory. Apply the virtual machine configuration: USD oc apply -f <virtual_machine>.yaml 7.14.9. Enabling dedicated resources for virtual machines To improve performance, you can dedicate node resources, such as CPU, to a virtual machine. 7.14.9.1. About dedicated resources When you enable dedicated resources for your virtual machine, your virtual machine's workload is scheduled on CPUs that will not be used by other processes. By using dedicated resources, you can improve the performance of the virtual machine and the accuracy of latency predictions. 7.14.9.2. Prerequisites The CPU Manager must be configured on the node. Verify that the node has the cpumanager = true label before scheduling virtual machine workloads. The virtual machine must be powered off. 7.14.9.3. 
Enabling dedicated resources for a virtual machine You enable dedicated resources for a virtual machine in the Details tab. Virtual machines that were created from a Red Hat template can be configured with dedicated resources. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. On the Configuration Scheduling tab, click the edit icon beside Dedicated Resources . Select Schedule this workload with dedicated resources (guaranteed policy) . Click Save . 7.14.10. Scheduling virtual machines You can schedule a virtual machine (VM) on a node by ensuring that the VM's CPU model and policy attribute are matched for compatibility with the CPU models and policy attributes supported by the node. 7.14.10.1. Policy attributes You can schedule a virtual machine (VM) by specifying a policy attribute and a CPU feature that is matched for compatibility when the VM is scheduled on a node. A policy attribute specified for a VM determines how that VM is scheduled on a node. Policy attribute Description force The VM is forced to be scheduled on a node. This is true even if the host CPU does not support the VM's CPU. require Default policy that applies to a VM if the VM is not configured with a specific CPU model and feature specification. If a node is not configured to support CPU node discovery with this default policy attribute or any one of the other policy attributes, VMs are not scheduled on that node. Either the host CPU must support the VM's CPU or the hypervisor must be able to emulate the supported CPU model. optional The VM is added to a node if that VM is supported by the host's physical machine CPU. disable The VM cannot be scheduled with CPU node discovery. forbid The VM is not scheduled even if the feature is supported by the host CPU and CPU node discovery is enabled. 7.14.10.2. Setting a policy attribute and CPU feature You can set a policy attribute and CPU feature for each virtual machine (VM) to ensure that it is scheduled on a node according to policy and feature. The CPU feature that you set is verified to ensure that it is supported by the host CPU or emulated by the hypervisor. Procedure Edit the domain spec of your VM configuration file. The following example sets the CPU feature and the require policy for a virtual machine (VM): apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: features: - name: apic 1 policy: require 2 1 Name of the CPU feature for the VM. 2 Policy attribute for the VM. 7.14.10.3. Scheduling virtual machines with the supported CPU model You can configure a CPU model for a virtual machine (VM) to schedule it on a node where its CPU model is supported. Procedure Edit the domain spec of your virtual machine configuration file. The following example shows a specific CPU model defined for a VM: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: Conroe 1 1 CPU model for the VM. 7.14.10.4. Scheduling virtual machines with the host model When the CPU model for a virtual machine (VM) is set to host-model , the VM inherits the CPU model of the node where it is scheduled. Procedure Edit the domain spec of your VM configuration file. 
The following example shows host-model being specified for the virtual machine: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: host-model 1 1 The VM inherits the CPU model of the node where it is scheduled. 7.14.10.5. Scheduling virtual machines with a custom scheduler You can use a custom scheduler to schedule a virtual machine (VM) on a node. Prerequisites A secondary scheduler is configured for your cluster. Procedure Add the custom scheduler to the VM configuration by editing the VirtualMachine manifest. For example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora spec: running: true template: spec: schedulerName: my-scheduler 1 domain: devices: disks: - name: containerdisk disk: bus: virtio # ... 1 The name of the custom scheduler. If the schedulerName value does not match an existing scheduler, the virt-launcher pod stays in a Pending state until the specified scheduler is found. Verification Verify that the VM is using the custom scheduler specified in the VirtualMachine manifest by checking the virt-launcher pod events: View the list of pods in your cluster by entering the following command: USD oc get pods Example output NAME READY STATUS RESTARTS AGE virt-launcher-vm-fedora-dpc87 2/2 Running 0 24m Run the following command to display the pod events: USD oc describe pod virt-launcher-vm-fedora-dpc87 The value of the From field in the output verifies that the scheduler name matches the custom scheduler specified in the VirtualMachine manifest: Example output [...] Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 21m my-scheduler Successfully assigned default/virt-launcher-vm-fedora-dpc87 to node01 [...] Additional resources Deploying a secondary scheduler 7.14.11. Configuring PCI passthrough The Peripheral Component Interconnect (PCI) passthrough feature enables you to access and manage hardware devices from a virtual machine (VM). When PCI passthrough is configured, the PCI devices function as if they were physically attached to the guest operating system. Cluster administrators can expose and manage host devices that are permitted to be used in the cluster by using the oc command-line interface (CLI). 7.14.11.1. Preparing nodes for GPU passthrough You can prevent GPU operands from deploying on worker nodes that you designated for GPU passthrough. 7.14.11.1.1. Preventing NVIDIA GPU operands from deploying on nodes If you use the NVIDIA GPU Operator in your cluster, you can apply the nvidia.com/gpu.deploy.operands=false label to nodes that you do not want to configure for GPU or vGPU operands. This label prevents the creation of the pods that configure GPU or vGPU operands and terminates the pods if they already exist. Prerequisites The OpenShift CLI ( oc ) is installed. Procedure Label the node by running the following command: USD oc label node <node_name> nvidia.com/gpu.deploy.operands=false 1 1 Replace <node_name> with the name of a node where you do not want to install the NVIDIA GPU operands. Verification Verify that the label was added to the node by running the following command: USD oc describe node <node_name> Optional: If GPU operands were previously deployed on the node, verify their removal.
Check the status of the pods in the nvidia-gpu-operator namespace by running the following command: USD oc get pods -n nvidia-gpu-operator Example output NAME READY STATUS RESTARTS AGE gpu-operator-59469b8c5c-hw9wj 1/1 Running 0 8d nvidia-sandbox-validator-7hx98 1/1 Running 0 8d nvidia-sandbox-validator-hdb7p 1/1 Running 0 8d nvidia-sandbox-validator-kxwj7 1/1 Terminating 0 9d nvidia-vfio-manager-7w9fs 1/1 Running 0 8d nvidia-vfio-manager-866pz 1/1 Running 0 8d nvidia-vfio-manager-zqtck 1/1 Terminating 0 9d Monitor the pod status until the pods with Terminating status are removed: USD oc get pods -n nvidia-gpu-operator Example output NAME READY STATUS RESTARTS AGE gpu-operator-59469b8c5c-hw9wj 1/1 Running 0 8d nvidia-sandbox-validator-7hx98 1/1 Running 0 8d nvidia-sandbox-validator-hdb7p 1/1 Running 0 8d nvidia-vfio-manager-7w9fs 1/1 Running 0 8d nvidia-vfio-manager-866pz 1/1 Running 0 8d 7.14.11.2. Preparing host devices for PCI passthrough 7.14.11.2.1. About preparing a host device for PCI passthrough To prepare a host device for PCI passthrough by using the CLI, create a MachineConfig object and add kernel arguments to enable the Input-Output Memory Management Unit (IOMMU). Bind the PCI device to the Virtual Function I/O (VFIO) driver and then expose it in the cluster by editing the permittedHostDevices field of the HyperConverged custom resource (CR). The permittedHostDevices list is empty when you first install the OpenShift Virtualization Operator. To remove a PCI host device from the cluster by using the CLI, delete the PCI device information from the HyperConverged CR. 7.14.11.2.2. Adding kernel arguments to enable the IOMMU driver To enable the IOMMU driver in the kernel, create the MachineConfig object and add the kernel arguments. Prerequisites You have cluster administrator permissions. Your CPU hardware is Intel or AMD. You enabled Intel Virtualization Technology for Directed I/O extensions or AMD IOMMU in the BIOS. Procedure Create a MachineConfig object that identifies the kernel argument. The following example shows a kernel argument for an Intel CPU. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3 # ... 1 Applies the new kernel argument only to worker nodes. 2 The name indicates the ranking of this kernel argument (100) among the machine configs and its purpose. If you have an AMD CPU, specify the kernel argument as amd_iommu=on . 3 Identifies the kernel argument as intel_iommu for an Intel CPU. Create the new MachineConfig object: USD oc create -f 100-worker-kernel-arg-iommu.yaml Verification Verify that the new MachineConfig object was added. USD oc get MachineConfig 7.14.11.2.3. Binding PCI devices to the VFIO driver To bind PCI devices to the VFIO (Virtual Function I/O) driver, obtain the values for vendor-ID and device-ID from each device and create a list with the values. Add this list to the MachineConfig object. The MachineConfig Operator generates the /etc/modprobe.d/vfio.conf on the nodes with the PCI devices, and binds the PCI devices to the VFIO driver. Prerequisites You added kernel arguments to enable IOMMU for the CPU. Procedure Run the lspci command to obtain the vendor-ID and the device-ID for the PCI device. 
USD lspci -nnv | grep -i nvidia Example output 02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1) Create a Butane config file, 100-worker-vfiopci.bu , binding the PCI device to the VFIO driver. Note See "Creating machine configs with Butane" for information about Butane. Example variant: openshift version: 4.16.0 metadata: name: 100-worker-vfiopci labels: machineconfiguration.openshift.io/role: worker 1 storage: files: - path: /etc/modprobe.d/vfio.conf mode: 0644 overwrite: true contents: inline: | options vfio-pci ids=10de:1eb8 2 - path: /etc/modules-load.d/vfio-pci.conf 3 mode: 0644 overwrite: true contents: inline: vfio-pci 1 Applies the new kernel argument only to worker nodes. 2 Specify the previously determined vendor-ID value ( 10de ) and the device-ID value ( 1eb8 ) to bind a single device to the VFIO driver. You can add a list of multiple devices with their vendor and device information. 3 The file that loads the vfio-pci kernel module on the worker nodes. Use Butane to generate a MachineConfig object file, 100-worker-vfiopci.yaml , containing the configuration to be delivered to the worker nodes: USD butane 100-worker-vfiopci.bu -o 100-worker-vfiopci.yaml Apply the MachineConfig object to the worker nodes: USD oc apply -f 100-worker-vfiopci.yaml Verify that the MachineConfig object was added. USD oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 00-worker d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 100-worker-iommu 3.2.0 30s 100-worker-vfiopci-configuration 3.2.0 30s Verification Verify that the VFIO driver is loaded. USD lspci -nnk -d 10de: The output confirms that the VFIO driver is being used. Example output 7.14.11.2.4. Exposing PCI host devices in the cluster using the CLI To expose PCI host devices in the cluster, add details about the PCI devices to the spec.permittedHostDevices.pciHostDevices array of the HyperConverged custom resource (CR). Procedure Edit the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Add the PCI device information to the spec.permittedHostDevices.pciHostDevices array. For example: Example configuration file apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: 1 pciHostDevices: 2 - pciDeviceSelector: "10DE:1DB6" 3 resourceName: "nvidia.com/GV100GL_Tesla_V100" 4 - pciDeviceSelector: "10DE:1EB8" resourceName: "nvidia.com/TU104GL_Tesla_T4" - pciDeviceSelector: "8086:6F54" resourceName: "intel.com/qat" externalResourceProvider: true 5 # ... 1 The host devices that are permitted to be used in the cluster. 2 The list of PCI devices available on the node. 3 The vendor-ID and the device-ID required to identify the PCI device. 4 The name of a PCI host device. 5 Optional: Setting this field to true indicates that the resource is provided by an external device plugin. OpenShift Virtualization allows the usage of this device in the cluster but leaves the allocation and monitoring to an external device plugin. 
Note The above example snippet shows two PCI host devices that are named nvidia.com/GV100GL_Tesla_V100 and nvidia.com/TU104GL_Tesla_T4 added to the list of permitted host devices in the HyperConverged CR. These devices have been tested and verified to work with OpenShift Virtualization. Save your changes and exit the editor. Verification Verify that the PCI host devices were added to the node by running the following command. The example output shows that there is one device each associated with the nvidia.com/GV100GL_Tesla_V100 , nvidia.com/TU104GL_Tesla_T4 , and intel.com/qat resource names. USD oc describe node <node_name> Example output Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250 7.14.11.2.5. Removing PCI host devices from the cluster using the CLI To remove a PCI host device from the cluster, delete the information for that device from the HyperConverged custom resource (CR). Procedure Edit the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Remove the PCI device information from the spec.permittedHostDevices.pciHostDevices array by deleting the pciDeviceSelector , resourceName and externalResourceProvider (if applicable) fields for the appropriate device. In this example, the intel.com/qat resource has been deleted. Example configuration file apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: pciHostDevices: - pciDeviceSelector: "10DE:1DB6" resourceName: "nvidia.com/GV100GL_Tesla_V100" - pciDeviceSelector: "10DE:1EB8" resourceName: "nvidia.com/TU104GL_Tesla_T4" # ... Save your changes and exit the editor. Verification Verify that the PCI host device was removed from the node by running the following command. The example output shows that there are zero devices associated with the intel.com/qat resource name. USD oc describe node <node_name> Example output Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250 7.14.11.3. Configuring virtual machines for PCI passthrough After the PCI devices have been added to the cluster, you can assign them to virtual machines. The PCI devices are now available as if they are physically connected to the virtual machines. 7.14.11.3.1. Assigning a PCI device to a virtual machine When a PCI device is available in a cluster, you can assign it to a virtual machine and enable PCI passthrough. 
Procedure Assign the PCI device to a virtual machine as a host device. Example apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: hostDevices: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: hostdevices1 1 The name of the PCI device that is permitted on the cluster as a host device. The virtual machine can access this host device. Verification Use the following command to verify that the host device is available from the virtual machine. USD lspci -nnk | grep NVIDIA Example output USD 02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1) 7.14.11.4. Additional resources Enabling Intel VT-X and AMD-V Virtualization Hardware Extensions in BIOS Managing file permissions Machine Config Overview 7.14.12. Configuring virtual GPUs If you have graphics processing unit (GPU) cards, OpenShift Virtualization can automatically create virtual GPUs (vGPUs) that you can assign to virtual machines (VMs). 7.14.12.1. About using virtual GPUs with OpenShift Virtualization Some graphics processing unit (GPU) cards support the creation of virtual GPUs (vGPUs). OpenShift Virtualization can automatically create vGPUs and other mediated devices if an administrator provides configuration details in the HyperConverged custom resource (CR). This automation is especially useful for large clusters. Note Refer to your hardware vendor's documentation for functionality and support details. Mediated device A physical device that is divided into one or more virtual devices. A vGPU is a type of mediated device (mdev); the performance of the physical GPU is divided among the virtual devices. You can assign mediated devices to one or more virtual machines (VMs), but the number of guests must be compatible with your GPU. Some GPUs do not support multiple guests. 7.14.12.2. Preparing hosts for mediated devices You must enable the Input-Output Memory Management Unit (IOMMU) driver before you can configure mediated devices. 7.14.12.2.1. Adding kernel arguments to enable the IOMMU driver To enable the IOMMU driver in the kernel, create the MachineConfig object and add the kernel arguments. Prerequisites You have cluster administrator permissions. Your CPU hardware is Intel or AMD. You enabled Intel Virtualization Technology for Directed I/O extensions or AMD IOMMU in the BIOS. Procedure Create a MachineConfig object that identifies the kernel argument. The following example shows a kernel argument for an Intel CPU. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3 # ... 1 Applies the new kernel argument only to worker nodes. 2 The name indicates the ranking of this kernel argument (100) among the machine configs and its purpose. If you have an AMD CPU, specify the kernel argument as amd_iommu=on . 3 Identifies the kernel argument as intel_iommu for an Intel CPU. Create the new MachineConfig object: USD oc create -f 100-worker-kernel-arg-iommu.yaml Verification Verify that the new MachineConfig object was added. USD oc get MachineConfig 7.14.12.3. Configuring the NVIDIA GPU Operator You can use the NVIDIA GPU Operator to provision worker nodes for running GPU-accelerated virtual machines (VMs) in OpenShift Virtualization. Note The NVIDIA GPU Operator is supported only by NVIDIA. For more information, see Obtaining Support from NVIDIA in the Red Hat Knowledgebase. 7.14.12.3.1. 
About using the NVIDIA GPU Operator You can use the NVIDIA GPU Operator with OpenShift Virtualization to rapidly provision worker nodes for running GPU-enabled virtual machines (VMs). The NVIDIA GPU Operator manages NVIDIA GPU resources in an OpenShift Container Platform cluster and automates tasks that are required when preparing nodes for GPU workloads. Before you can deploy application workloads to a GPU resource, you must install components such as the NVIDIA drivers that enable the compute unified device architecture (CUDA), Kubernetes device plugin, container runtime, and other features, such as automatic node labeling and monitoring. By automating these tasks, you can quickly scale the GPU capacity of your infrastructure. The NVIDIA GPU Operator can especially facilitate provisioning complex artificial intelligence and machine learning (AI/ML) workloads. 7.14.12.3.2. Options for configuring mediated devices There are two available methods for configuring mediated devices when using the NVIDIA GPU Operator. The method that Red Hat tests uses OpenShift Virtualization features to schedule mediated devices, while the NVIDIA method only uses the GPU Operator. Using the NVIDIA GPU Operator to configure mediated devices This method exclusively uses the NVIDIA GPU Operator to configure mediated devices. To use this method, refer to NVIDIA GPU Operator with OpenShift Virtualization in the NVIDIA documentation. Using OpenShift Virtualization to configure mediated devices This method, which is tested by Red Hat, uses OpenShift Virtualization's capabilities to configure mediated devices. In this case, the NVIDIA GPU Operator is only used for installing drivers with the NVIDIA vGPU Manager. The GPU Operator does not configure mediated devices. When using the OpenShift Virtualization method, you still configure the GPU Operator by following the NVIDIA documentation . However, this method differs from the NVIDIA documentation in the following ways: You must not overwrite the default disableMDEVConfiguration: false setting in the HyperConverged custom resource (CR). Important Setting this feature gate as described in the NVIDIA documentation prevents OpenShift Virtualization from configuring mediated devices. You must configure your ClusterPolicy manifest so that it matches the following example: Example manifest kind: ClusterPolicy apiVersion: nvidia.com/v1 metadata: name: gpu-cluster-policy spec: operator: defaultRuntime: crio use_ocp_driver_toolkit: true initContainer: {} sandboxWorkloads: enabled: true defaultWorkload: vm-vgpu driver: enabled: false 1 dcgmExporter: {} dcgm: enabled: true daemonsets: {} devicePlugin: {} gfd: {} migManager: enabled: true nodeStatusExporter: enabled: true mig: strategy: single toolkit: enabled: true validator: plugin: env: - name: WITH_WORKLOAD value: "true" vgpuManager: enabled: true 2 repository: <vgpu_container_registry> 3 image: <vgpu_image_name> version: nvidia-vgpu-manager vgpuDeviceManager: enabled: false 4 config: name: vgpu-devices-config default: default sandboxDevicePlugin: enabled: false 5 vfioManager: enabled: false 6 1 Set this value to false . Not required for VMs. 2 Set this value to true . Required for using vGPUs with VMs. 3 Substitute <vgpu_container_registry> with your registry value. 4 Set this value to false to allow OpenShift Virtualization to configure mediated devices instead of the NVIDIA GPU Operator. 5 Set this value to false to prevent discovery and advertising of the vGPU devices to the kubelet. 
6 Set this value to false to prevent loading the vfio-pci driver. Instead, follow the OpenShift Virtualization documentation to configure PCI passthrough. Additional resources Configuring PCI passthrough 7.14.12.4. How vGPUs are assigned to nodes For each physical device, OpenShift Virtualization configures the following values: A single mdev type. The maximum number of instances of the selected mdev type. The cluster architecture affects how devices are created and assigned to nodes. Large cluster with multiple cards per node On nodes with multiple cards that can support similar vGPU types, the relevant device types are created in a round-robin manner. For example: # ... mediatedDevicesConfiguration: mediatedDeviceTypes: - nvidia-222 - nvidia-228 - nvidia-105 - nvidia-108 # ... In this scenario, each node has two cards, both of which support the following vGPU types: nvidia-105 # ... nvidia-108 nvidia-217 nvidia-299 # ... On each node, OpenShift Virtualization creates the following vGPUs: 16 vGPUs of type nvidia-105 on the first card. 2 vGPUs of type nvidia-108 on the second card. One node has a single card that supports more than one requested vGPU type OpenShift Virtualization uses the supported type that comes first on the mediatedDeviceTypes list. For example, the card on a node card supports nvidia-223 and nvidia-224 . The following mediatedDeviceTypes list is configured: # ... mediatedDevicesConfiguration: mediatedDeviceTypes: - nvidia-22 - nvidia-223 - nvidia-224 # ... In this example, OpenShift Virtualization uses the nvidia-223 type. 7.14.12.5. Managing mediated devices Before you can assign mediated devices to virtual machines, you must create the devices and expose them to the cluster. You can also reconfigure and remove mediated devices. 7.14.12.5.1. Creating and exposing mediated devices As an administrator, you can create mediated devices and expose them to the cluster by editing the HyperConverged custom resource (CR). Prerequisites You enabled the Input-Output Memory Management Unit (IOMMU) driver. If your hardware vendor provides drivers, you installed them on the nodes where you want to create mediated devices. If you use NVIDIA cards, you installed the NVIDIA GRID driver . Procedure Open the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Example 7.1. Example configuration file with mediated devices configured apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: mediatedDevicesConfiguration: mediatedDeviceTypes: - nvidia-231 nodeMediatedDeviceTypes: - mediatedDeviceTypes: - nvidia-233 nodeSelector: kubernetes.io/hostname: node-11.redhat.com permittedHostDevices: mediatedDevices: - mdevNameSelector: GRID T4-2Q resourceName: nvidia.com/GRID_T4-2Q - mdevNameSelector: GRID T4-8Q resourceName: nvidia.com/GRID_T4-8Q # ... Create mediated devices by adding them to the spec.mediatedDevicesConfiguration stanza: Example YAML snippet # ... spec: mediatedDevicesConfiguration: mediatedDeviceTypes: 1 - <device_type> nodeMediatedDeviceTypes: 2 - mediatedDeviceTypes: 3 - <device_type> nodeSelector: 4 <node_selector_key>: <node_selector_value> # ... 1 Required: Configures global settings for the cluster. 2 Optional: Overrides the global configuration for a specific node or group of nodes. Must be used with the global mediatedDeviceTypes configuration. 3 Required if you use nodeMediatedDeviceTypes . 
Overrides the global mediatedDeviceTypes configuration for the specified nodes. 4 Required if you use nodeMediatedDeviceTypes . Must include a key:value pair. Important Before OpenShift Virtualization 4.14, the mediatedDeviceTypes field was named mediatedDevicesTypes . Ensure that you use the correct field name when configuring mediated devices. Identify the name selector and resource name values for the devices that you want to expose to the cluster. You will add these values to the HyperConverged CR in the step. Find the resourceName value by running the following command: USD oc get USDNODE -o json \ | jq '.status.allocatable \ | with_entries(select(.key | startswith("nvidia.com/"))) \ | with_entries(select(.value != "0"))' Find the mdevNameSelector value by viewing the contents of /sys/bus/pci/devices/<slot>:<bus>:<domain>.<function>/mdev_supported_types/<type>/name , substituting the correct values for your system. For example, the name file for the nvidia-231 type contains the selector string GRID T4-2Q . Using GRID T4-2Q as the mdevNameSelector value allows nodes to use the nvidia-231 type. Expose the mediated devices to the cluster by adding the mdevNameSelector and resourceName values to the spec.permittedHostDevices.mediatedDevices stanza of the HyperConverged CR: Example YAML snippet # ... permittedHostDevices: mediatedDevices: - mdevNameSelector: GRID T4-2Q 1 resourceName: nvidia.com/GRID_T4-2Q 2 # ... 1 Exposes the mediated devices that map to this value on the host. 2 Matches the resource name that is allocated on the node. Save your changes and exit the editor. Verification Optional: Confirm that a device was added to a specific node by running the following command: USD oc describe node <node_name> 7.14.12.5.2. About changing and removing mediated devices You can reconfigure or remove mediated devices in several ways: Edit the HyperConverged CR and change the contents of the mediatedDeviceTypes stanza. Change the node labels that match the nodeMediatedDeviceTypes node selector. Remove the device information from the spec.mediatedDevicesConfiguration and spec.permittedHostDevices stanzas of the HyperConverged CR. Note If you remove the device information from the spec.permittedHostDevices stanza without also removing it from the spec.mediatedDevicesConfiguration stanza, you cannot create a new mediated device type on the same node. To properly remove mediated devices, remove the device information from both stanzas. 7.14.12.5.3. Removing mediated devices from the cluster To remove a mediated device from the cluster, delete the information for that device from the HyperConverged custom resource (CR). Procedure Edit the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Remove the device information from the spec.mediatedDevicesConfiguration and spec.permittedHostDevices stanzas of the HyperConverged CR. Removing both entries ensures that you can later create a new mediated device type on the same node. For example: Example configuration file apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: mediatedDevicesConfiguration: mediatedDeviceTypes: 1 - nvidia-231 permittedHostDevices: mediatedDevices: 2 - mdevNameSelector: GRID T4-2Q resourceName: nvidia.com/GRID_T4-2Q 1 To remove the nvidia-231 device type, delete it from the mediatedDeviceTypes array. 
2 To remove the GRID T4-2Q device, delete the mdevNameSelector field and its corresponding resourceName field. Save your changes and exit the editor. 7.14.12.6. Using mediated devices You can assign mediated devices to one or more virtual machines. 7.14.12.6.1. Assigning a vGPU to a VM by using the CLI Assign mediated devices such as virtual GPUs (vGPUs) to virtual machines (VMs). Prerequisites The mediated device is configured in the HyperConverged custom resource. The VM is stopped. Procedure Assign the mediated device to a virtual machine (VM) by editing the spec.domain.devices.gpus stanza of the VirtualMachine manifest: Example virtual machine manifest apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: gpus: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: gpu1 2 - deviceName: nvidia.com/GRID_T4-2Q name: gpu2 1 The resource name associated with the mediated device. 2 A name to identify the device on the VM. Verification To verify that the device is available from the virtual machine, run the following command, substituting <device_name> with the deviceName value from the VirtualMachine manifest: USD lspci -nnk | grep <device_name> 7.14.12.6.2. Assigning a vGPU to a VM by using the web console You can assign virtual GPUs to virtual machines by using the OpenShift Container Platform web console. Note You can add hardware devices to virtual machines created from customized templates or a YAML file. You cannot add devices to pre-supplied boot source templates for specific operating systems. Prerequisites The vGPU is configured as a mediated device in your cluster. To view the devices that are connected to your cluster, click Compute Hardware Devices from the side menu. The VM is stopped. Procedure In the OpenShift Container Platform web console, click Virtualization VirtualMachines from the side menu. Select the VM that you want to assign the device to. On the Details tab, click GPU devices . Click Add GPU device . Enter an identifying value in the Name field. From the Device name list, select the device that you want to add to the VM. Click Save . Verification To confirm that the devices were added to the VM, click the YAML tab and review the VirtualMachine configuration. Mediated devices are added to the spec.domain.devices stanza. 7.14.12.7. Additional resources Enabling Intel VT-X and AMD-V Virtualization Hardware Extensions in BIOS 7.14.13. Enabling descheduler evictions on virtual machines You can use the descheduler to evict pods so that the pods can be rescheduled onto more appropriate nodes. If the pod is a virtual machine, the pod eviction causes the virtual machine to be live migrated to another node. Important Descheduler eviction for virtual machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 7.14.13.1. Descheduler profiles Use the Technology Preview DevPreviewLongLifecycle profile to enable the descheduler on a virtual machine. This is the only descheduler profile currently available for OpenShift Virtualization. 
To ensure proper scheduling, create VMs with CPU and memory requests for the expected load. DevPreviewLongLifecycle This profile balances resource usage between nodes and enables the following strategies: RemovePodsHavingTooManyRestarts : removes pods whose containers have been restarted too many times and pods where the sum of restarts over all containers (including Init Containers) is more than 100. Restarting the VM guest operating system does not increase this count. LowNodeUtilization : evicts pods from overutilized nodes when there are any underutilized nodes. The destination node for the evicted pod will be determined by the scheduler. A node is considered underutilized if its usage is below 20% for all thresholds (CPU, memory, and number of pods). A node is considered overutilized if its usage is above 50% for any of the thresholds (CPU, memory, and number of pods). 7.14.13.2. Installing the descheduler The descheduler is not available by default. To enable the descheduler, you must install the Kube Descheduler Operator from OperatorHub and enable one or more descheduler profiles. By default, the descheduler runs in predictive mode, which means that it only simulates pod evictions. You must change the mode to automatic for the descheduler to perform the pod evictions. Important If you have enabled hosted control planes in your cluster, set a custom priority threshold to lower the chance that pods in the hosted control plane namespaces are evicted. Set the priority threshold class name to hypershift-control-plane , because it has the lowest priority value ( 100000000 ) of the hosted control plane priority classes. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. Access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Create the required namespace for the Kube Descheduler Operator. Navigate to Administration Namespaces and click Create Namespace . Enter openshift-kube-descheduler-operator in the Name field, enter openshift.io/cluster-monitoring=true in the Labels field to enable descheduler metrics, and click Create . Install the Kube Descheduler Operator. Navigate to Operators OperatorHub . Type Kube Descheduler Operator into the filter box. Select the Kube Descheduler Operator and click Install . On the Install Operator page, select A specific namespace on the cluster . Select openshift-kube-descheduler-operator from the drop-down menu. Adjust the values for the Update Channel and Approval Strategy to the desired values. Click Install . Create a descheduler instance. From the Operators Installed Operators page, click the Kube Descheduler Operator . Select the Kube Descheduler tab and click Create KubeDescheduler . Edit the settings as necessary. To evict pods instead of simulating the evictions, change the Mode field to Automatic . Expand the Profiles section and select DevPreviewLongLifecycle . The AffinityAndTaints profile is enabled by default. Important The only profile currently available for OpenShift Virtualization is DevPreviewLongLifecycle . You can also configure the profiles and settings for the descheduler later using the OpenShift CLI ( oc ). 7.14.13.3. Enabling descheduler evictions on a virtual machine (VM) After the descheduler is installed, you can enable descheduler evictions on your VM by adding an annotation to the VirtualMachine custom resource (CR). 
Prerequisites Install the descheduler in the OpenShift Container Platform web console or OpenShift CLI ( oc ). Ensure that the VM is not running. Procedure Before starting the VM, add the descheduler.alpha.kubernetes.io/evict annotation to the VirtualMachine CR: apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: metadata: annotations: descheduler.alpha.kubernetes.io/evict: "true" If you did not already set the DevPreviewLongLifecycle profile in the web console during installation, specify the DevPreviewLongLifecycle in the spec.profile section of the KubeDescheduler object: apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 profiles: - DevPreviewLongLifecycle mode: Predictive 1 1 By default, the descheduler does not evict pods. To evict pods, set mode to Automatic . The descheduler is now enabled on the VM. 7.14.13.4. Additional resources Descheduler overview 7.14.14. About high availability for virtual machines You can enable high availability for virtual machines (VMs) by manually deleting a failed node to trigger VM failover or by configuring remediating nodes. Manually deleting a failed node If a node fails and machine health checks are not deployed on your cluster, virtual machines with runStrategy: Always configured are not automatically relocated to healthy nodes. To trigger VM failover, you must manually delete the Node object. See Deleting a failed node to trigger virtual machine failover . Configuring remediating nodes You can configure remediating nodes by installing the Self Node Remediation Operator or the Fence Agents Remediation Operator from the OperatorHub and enabling machine health checks or node remediation checks. For more information on remediation, fencing, and maintaining nodes, see the Workload Availability for Red Hat OpenShift documentation. 7.14.15. Virtual machine control plane tuning OpenShift Virtualization offers the following tuning options at the control-plane level: The highBurst profile, which uses fixed QPS and burst rates, to create hundreds of virtual machines (VMs) in one batch Migration setting adjustment based on workload type 7.14.15.1. Configuring a highBurst profile Use the highBurst profile to create and maintain a large number of virtual machines (VMs) in one cluster. Procedure Apply the following patch to enable the highBurst tuning policy profile: USD oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \ --type=json -p='[{"op": "add", "path": "/spec/tuningPolicy", \ "value": "highBurst"}]' Verification Run the following command to verify the highBurst tuning policy profile is enabled: USD oc get kubevirt.kubevirt.io/kubevirt-kubevirt-hyperconverged \ -n openshift-cnv -o go-template --template='{{range USDconfig, \ USDvalue := .spec.configuration}} {{if eq USDconfig "apiConfiguration" \ "webhookConfiguration" "controllerConfiguration" "handlerConfiguration"}} \ {{"\n"}} {{USDconfig}} = {{USDvalue}} {{end}} {{end}} {{"\n"}} 7.14.16. Assigning compute resources In OpenShift Virtualization, compute resources assigned to virtual machines (VMs) are backed by either guaranteed CPUs or time-sliced CPU shares. Guaranteed CPUs, also known as CPU reservation, dedicate CPU cores or threads to a specific workload, which makes them unavailable to any other workload. Assigning guaranteed CPUs to a VM ensures that the VM will have sole access to a reserved physical CPU. 
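For illustration, the following is a minimal sketch of a VM that requests guaranteed CPUs by enabling dedicated CPU placement. The VM name is illustrative, and this sketch assumes that the target worker nodes run the CPU Manager with the static policy, which dedicated CPU placement requires.

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-guaranteed-cpu 1
spec:
  running: false
  template:
    spec:
      domain:
        cpu:
          cores: 2
          dedicatedCpuPlacement: true 2
        devices: {}

1 An illustrative name.
2 The two vCPUs are backed by physical CPUs reserved exclusively for this VM. The virt-launcher pod then requests whole CPUs with equal requests and limits, which places it in the Guaranteed QoS class.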
Enable dedicated resources for VMs to use a guaranteed CPU. Time-sliced CPUs dedicate a slice of time on a shared physical CPU to each workload. You can specify the size of the slice during VM creation, or when the VM is offline. By default, each vCPU receives 100 milliseconds, or 1/10 of a second, of physical CPU time. The type of CPU reservation depends on the instance type or VM configuration. 7.14.16.1. Overcommitting CPU resources Time-slicing allows multiple virtual CPUs (vCPUs) to share a single physical CPU. This is known as CPU overcommitment . Guaranteed VMs cannot be overcommitted. Configure CPU overcommitment to prioritize VM density over performance when assigning CPUs to VMs. With a higher CPU overcommitment of vCPUs, more VMs fit onto a given node. 7.14.16.2. Setting the CPU allocation ratio The CPU Allocation Ratio specifies the degree of overcommitment by mapping vCPUs to time slices of physical CPUs. For example, a mapping or ratio of 10:1 maps 10 virtual CPUs to 1 physical CPU by using time slices. To change the default number of vCPUs mapped to each physical CPU, set the vmiCPUAllocationRatio value in the HyperConverged CR. The pod CPU request is calculated by multiplying the number of vCPUs by the reciprocal of the CPU allocation ratio. For example, if vmiCPUAllocationRatio is set to 10, OpenShift Virtualization will request 10 times fewer CPUs on the pod for that VM. Procedure Set the vmiCPUAllocationRatio value in the HyperConverged CR to define a node CPU allocation ratio. Open the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Set the vmiCPUAllocationRatio : ... spec: resourceRequirements: vmiCPUAllocationRatio: 1 1 # ... 1 When vmiCPUAllocationRatio is set to 1 , the maximum number of vCPUs is requested for the pod. 7.14.16.3. Additional resources Pod Quality of Service Classes 7.14.17. About multi-queue functionality Use multi-queue functionality to scale network throughput and performance on virtual machines (VMs) with multiple vCPUs. By default, the queueCount value, which is derived from the domain XML, is determined by the number of vCPUs allocated to a VM. Network performance does not scale as the number of vCPUs increases. Additionally, because virtio-net has only one Tx and Rx queue, guests cannot transmit or retrieve packets in parallel. Note Enabling virtio-net multiqueue does not offer significant improvements when the number of vNICs in a guest instance is proportional to the number of vCPUs. 7.14.17.1. Known limitations MSI vectors are still consumed if virtio-net multiqueue is enabled in the host but not enabled in the guest operating system by the administrator. Each virtio-net queue consumes 64 KiB of kernel memory for the vhost driver. Starting a VM with more than 16 CPUs results in no connectivity if networkInterfaceMultiqueue is set to 'true' ( CNV-16107 ). 7.14.17.2. Enabling multi-queue functionality Enable multi-queue functionality for interfaces configured with a VirtIO model. Procedure Set the networkInterfaceMultiqueue value to true in the VirtualMachine manifest file of your VM to enable multi-queue functionality: apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: domain: devices: networkInterfaceMultiqueue: true Save the VirtualMachine manifest file to apply your changes. 7.15. VM disks 7.15.1. Hot-plugging VM disks You can add or remove virtual disks without stopping your virtual machine (VM) or virtual machine instance (VMI).
Only data volumes and persistent volume claims (PVCs) can be hot plugged and hot-unplugged. You cannot hot plug or hot-unplug container disks. A hot plugged disk remains attached to the VM even after reboot. You must detach the disk to remove it from the VM. You can make a hot plugged disk persistent so that it is permanently mounted on the VM. Note Each VM has a virtio-scsi controller so that hot plugged disks can use the scsi bus. The virtio-scsi controller overcomes the limitations of virtio while retaining its performance advantages. It is highly scalable and supports hot plugging over 4 million disks. Regular virtio is not available for hot plugged disks because it is not scalable. Each virtio disk uses one of the limited PCI Express (PCIe) slots in the VM. PCIe slots are also used by other devices and must be reserved in advance. Therefore, slots might not be available on demand. 7.15.1.1. Hot plugging and hot unplugging a disk by using the web console You can hot plug a disk by attaching it to a virtual machine (VM) while the VM is running by using the OpenShift Container Platform web console. The hot plugged disk remains attached to the VM until you unplug it. You can make a hot plugged disk persistent so that it is permanently mounted on the VM. Prerequisites You must have a data volume or persistent volume claim (PVC) available for hot plugging. Procedure Navigate to Virtualization VirtualMachines in the web console. Select a running VM to view its details. On the VirtualMachine details page, click Configuration Disks . Add a hot plugged disk: Click Add disk . In the Add disk (hot plugged) window, select the disk from the Source list and click Save . Optional: Unplug a hot plugged disk: Click the options menu beside the disk and select Detach . Click Detach . Optional: Make a hot plugged disk persistent: Click the options menu beside the disk and select Make persistent . Reboot the VM to apply the change. 7.15.1.2. Hot plugging and hot unplugging a disk by using the command line You can hot plug and hot unplug a disk while a virtual machine (VM) is running by using the command line. You can make a hot plugged disk persistent so that it is permanently mounted on the VM. Prerequisites You must have at least one data volume or persistent volume claim (PVC) available for hot plugging. Procedure Hot plug a disk by running the following command: USD virtctl addvolume <virtual-machine|virtual-machine-instance> \ --volume-name=<datavolume|PVC> \ [--persist] [--serial=<label-name>] Use the optional --persist flag to add the hot plugged disk to the virtual machine specification as a permanently mounted virtual disk. Stop, restart, or reboot the virtual machine to permanently mount the virtual disk. After specifying the --persist flag, you can no longer hot plug or hot unplug the virtual disk. The --persist flag applies to virtual machines, not virtual machine instances. The optional --serial flag allows you to add an alphanumeric string label of your choice. This helps you to identify the hot plugged disk in a guest virtual machine. If you do not specify this option, the label defaults to the name of the hot plugged data volume or PVC. Hot unplug a disk by running the following command: USD virtctl removevolume <virtual-machine|virtual-machine-instance> \ --volume-name=<datavolume|PVC> 7.15.2. Expanding virtual machine disks You can increase the size of a virtual machine (VM) disk by expanding the persistent volume claim (PVC) of the disk. 
If your storage provider does not support volume expansion, you can expand the available virtual storage of a VM by adding blank data volumes. You cannot reduce the size of a VM disk. 7.15.2.1. Expanding a VM disk PVC You can increase the size of a virtual machine (VM) disk by expanding the persistent volume claim (PVC) of the disk. If the PVC uses the file system volume mode, the disk image file expands to the available size while reserving some space for file system overhead. Procedure Edit the PersistentVolumeClaim manifest of the VM disk that you want to expand: USD oc edit pvc <pvc_name> Update the disk size: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: vm-disk-expand spec: accessModes: - ReadWriteMany resources: requests: storage: 3Gi 1 # ... 1 Specify the new disk size. Additional resources for volume expansion Extending a basic volume in Windows Extending an existing file system partition without destroying data in Red Hat Enterprise Linux Extending a logical volume and its file system online in Red Hat Enterprise Linux 7.15.2.2. Expanding available virtual storage by adding blank data volumes You can expand the available storage of a virtual machine (VM) by adding blank data volumes. Prerequisites You must have at least one persistent volume. Procedure Create a DataVolume manifest as shown in the following example: Example DataVolume manifest apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: blank-image-datavolume spec: source: blank: {} storage: resources: requests: storage: <2Gi> 1 storageClassName: "<storage_class>" 2 1 Specify the amount of available space requested for the data volume. 2 Optional: If you do not specify a storage class, the default storage class is used. Create the data volume by running the following command: USD oc create -f <blank-image-datavolume>.yaml Additional resources for data volumes Configuring preallocation mode for data volumes Managing data volume annotations 7.15.3. Configuring shared volumes for virtual machines You can configure shared disks to allow multiple virtual machines (VMs) to share the same underlying storage. A shared disk's volume must be block mode. You configure disk sharing by exposing the storage as either of these types: An ordinary VM disk A logical unit number (LUN) disk with an SCSI connection and raw device mapping, as required for Windows Failover Clustering for shared volumes In addition to configuring disk sharing, you can also set an error policy for each ordinary VM disk or LUN disk. The error policy controls how the hypervisor behaves when an input/output error occurs on a disk Read or Write. 7.15.3.1. Configuring disk sharing by using virtual machine disks You can configure block volumes so that multiple virtual machines (VMs) can share storage. The application running on the guest operating system determines the storage option you must configure for the VM. A disk of type disk exposes the volume as an ordinary disk to the VM. You can set an error policy for each disk. The error policy controls how the hypervisor behaves when an input/output error occurs while a disk is being written to or read. The default behavior stops the VM and generates a Kubernetes event. You can accept the default behavior, or you can set the error policy to one of the following options: report , which reports the error in the guest. ignore , which ignores the error. The Read or Write failure is undetected. enospace , which produces an error indicating that there is not enough disk space. 
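Each of these values is set per disk through the errorPolicy field of the disk definition, as the shared-disk manifests later in this section also show. A minimal sketch (the disk name is illustrative):

spec:
  template:
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: datadisk 1
            errorPolicy: ignore 2

1 An illustrative disk name.
2 Alternatives are report and enospace. Omit the field to keep the default behavior.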
Prerequisites The volume access mode must be ReadWriteMany (RWX) if the VMs that are sharing disks are running on different nodes. If the VMs that are sharing disks are running on the same node, ReadWriteOnce (RWO) volume access mode is sufficient. The storage provider must support the required Container Storage Interface (CSI) driver. Procedure Create the VirtualMachine manifest for your VM to set the required values, as shown in the following example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: <vm_name> spec: template: # ... spec: domain: devices: disks: - disk: bus: virtio name: rootdisk errorPolicy: report 1 disk1: disk_one 2 - disk: bus: virtio name: cloudinitdisk disk2: disk_two shareable: true 3 interfaces: - masquerade: {} name: default 1 Identifies the error policy. 2 Identifies a device as a disk. 3 Identifies a shared disk. Save the VirtualMachine manifest file to apply your changes. 7.15.3.2. Configuring disk sharing by using LUN To secure data on your VM from outside access, you can enable SCSI persistent reservation and configure a LUN-backed virtual machine disk to be shared among multiple virtual machines. By enabling the shared option, you can use advanced SCSI commands, such as those required for a Windows failover clustering implementation, for managing the underlying storage. When a storage volume is configured as the LUN disk type, a VM can use the volume as a logical unit number (LUN) device. As a result, the VM can deploy and manage the disk by using SCSI commands. You reserve a LUN through the SCSI persistent reserve options. To enable the reservation: Configure the feature gate option Activate the feature gate option on the LUN disk to issue SCSI device-specific input and output controls (IOCTLs) that the VM requires. You can set an error policy for each LUN disk. The error policy controls how the hypervisor behaves when an input/output error occurs on a disk Read or Write. The default behavior stops the guest and generates a Kubernetes event. For a LUN disk with an iSCSi connection and a persistent reservation, as required for Windows Failover Clustering for shared volumes, you set the error policy to report . Important OpenShift Virtualization does not currently support SCSI-3 Persistent Reservations (SCSI-3 PR) over multipath storage. As a workaround, disable multipath or ensure the Windows Server Failover Clustering (WSFC) shared disk is setup from a single device and not part of multipath. Prerequisites You must have cluster administrator privileges to configure the feature gate option. The volume access mode must be ReadWriteMany (RWX) if the VMs that are sharing disks are running on different nodes. If the VMs that are sharing disks are running on the same node, ReadWriteOnce (RWO) volume access mode is sufficient. The storage provider must support a Container Storage Interface (CSI) driver that uses Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), or iSCSI storage protocols. If you are a cluster administrator and intend to configure disk sharing by using LUN, you must enable the cluster's feature gate on the HyperConverged custom resource (CR). Disks that you want to share must be in block mode. 
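One way to confirm the block mode prerequisite before you begin is to read the volume mode from the PVC (a sketch; substitute the name of your claim):

USD oc get pvc <pvc_name> -o jsonpath='{.spec.volumeMode}'

The command prints Block for a block mode volume.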
Procedure Edit or create the VirtualMachine manifest for your VM to set the required values, as shown in the following example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-0 spec: template: spec: domain: devices: disks: - disk: bus: sata name: rootdisk - errorPolicy: report 1 lun: 2 bus: scsi reservation: true 3 name: na-shared serial: shared1234 volumes: - dataVolume: name: vm-0 name: rootdisk - name: na-shared persistentVolumeClaim: claimName: pvc-na-share 1 Identifies the error policy. 2 Identifies a LUN disk. 3 Identifies that the persistent reservation is enabled. Save the VirtualMachine manifest file to apply your changes. 7.15.3.2.1. Configuring disk sharing by using LUN and the web console You can use the OpenShift Container Platform web console to configure disk sharing by using LUN. Prerequisites The cluster administrator must enable the persistentreservation feature gate setting. Procedure Click Virtualization VirtualMachines in the web console. Select a VM to open the VirtualMachine details page. Expand Storage . On the Disks tab, click Add disk . Specify the Name , Source , Size , Interface , and Storage Class . Select LUN as the Type . Select Shared access (RWX) as the Access Mode . Select Block as the Volume Mode . Expand Advanced Settings , and select both checkboxes. Click Save . 7.15.3.2.2. Configuring disk sharing by using LUN and the command line You can use the command line to configure disk sharing by using LUN. Procedure Edit or create the VirtualMachine manifest for your VM to set the required values, as shown in the following example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-0 spec: template: spec: domain: devices: disks: - disk: bus: sata name: rootdisk - errorPolicy: report lun: 1 bus: scsi reservation: true 2 name: na-shared serial: shared1234 volumes: - dataVolume: name: vm-0 name: rootdisk - name: na-shared persistentVolumeClaim: claimName: pvc-na-share 1 Identifies a LUN disk. 2 Identifies that the persistent reservation is enabled. Save the VirtualMachine manifest file to apply your changes. 7.15.3.3. Enabling the PersistentReservation feature gate You can enable the SCSI persistentReservation feature gate and allow a LUN-backed block mode virtual machine (VM) disk to be shared among multiple virtual machines. The persistentReservation feature gate is disabled by default. You can enable the persistentReservation feature gate by using the web console or the command line. Prerequisites Cluster administrator privileges are required. The volume access mode ReadWriteMany (RWX) is required if the VMs that are sharing disks are running on different nodes. If the VMs that are sharing disks are running on the same node, the ReadWriteOnce (RWO) volume access mode is sufficient. The storage provider must support a Container Storage Interface (CSI) driver that uses Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), or iSCSI storage protocols. 7.15.3.3.1. Enabling the PersistentReservation feature gate by using the web console You must enable the PersistentReservation feature gate to allow a LUN-backed block mode virtual machine (VM) disk to be shared among multiple virtual machines. Enabling the feature gate requires cluster administrator privileges. Procedure Click Virtualization Overview in the web console. Click the Settings tab. Select Cluster . Expand SCSI persistent reservation and set Enable persistent reservation to on. 7.15.3.3.2. 
Enabling the PersistentReservation feature gate by using the command line You enable the persistentReservation feature gate by using the command line. Enabling the feature gate requires cluster administrator privileges. Procedure Enable the persistentReservation feature gate by running the following command: USD oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p \ '[{"op":"replace","path":"/spec/featureGates/persistentReservation", "value": true}]' Additional resources Persistent reservation helper protocol Failover Clustering in Windows Server and Azure Stack HCI
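As a final check after running the patch command above, you can read the value back from the HyperConverged CR (a sketch; it only inspects the field that the patch writes):

USD oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o jsonpath='{.spec.featureGates.persistentReservation}'

The command prints true when persistent reservation is enabled.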
[ "apiVersion: instancetype.kubevirt.io/v1beta1 kind: VirtualMachineInstancetype metadata: name: example-instancetype spec: cpu: guest: 1 1 memory: guest: 128Mi 2", "virtctl create instancetype --cpu 2 --memory 256Mi", "virtctl create instancetype --cpu 2 --memory 256Mi | oc apply -f -", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: rhel-9-minimal spec: dataVolumeTemplates: - metadata: name: rhel-9-minimal-volume spec: sourceRef: kind: DataSource name: rhel9 1 namespace: openshift-virtualization-os-images 2 storage: {} instancetype: name: u1.medium 3 preference: name: rhel.9 4 running: true template: spec: domain: devices: {} volumes: - dataVolume: name: rhel-9-minimal-volume name: rootdisk", "oc create -f <vm_manifest_file>.yaml", "virtctl start <vm_name> -n <namespace>", "cat > Dockerfile << EOF FROM registry.access.redhat.com/ubi8/ubi:latest AS builder ADD --chown=107:107 <vm_image>.qcow2 /disk/ 1 RUN chmod 0440 /disk/* FROM scratch COPY --from=builder /disk/* /disk/ EOF", "podman build -t <registry>/<container_disk_name>:latest .", "podman push <registry>/<container_disk_name>:latest", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: storageImport: insecureRegistries: 1 - \"private-registry-example-1:5000\" - \"private-registry-example-2:5000\"", "apiVersion: v1 kind: Secret metadata: name: data-source-secret labels: app: containerized-data-importer type: Opaque data: accessKeyId: \"\" 1 secretKey: \"\" 2", "oc apply -f data-source-secret.yaml", "oc create configmap tls-certs 1 --from-file=</path/to/file/ca.pem> 2", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume name: vm-fedora-datavolume 1 spec: dataVolumeTemplates: - metadata: creationTimestamp: null name: fedora-dv 2 spec: storage: resources: requests: storage: 10Gi 3 storageClassName: <storage_class> 4 source: registry: url: \"docker://kubevirt/fedora-cloud-container-disk-demo:latest\" 5 secretRef: data-source-secret 6 certConfigMap: tls-certs 7 status: {} running: true template: metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume spec: domain: devices: disks: - disk: bus: virtio name: datavolumedisk1 machine: type: \"\" resources: requests: memory: 1.5Gi terminationGracePeriodSeconds: 180 volumes: - dataVolume: name: fedora-dv name: datavolumedisk1 status: {}", "oc create -f vm-fedora-datavolume.yaml", "oc get pods", "oc describe dv fedora-dv 1", "virtctl console vm-fedora-datavolume", "apiVersion: v1 kind: Secret metadata: name: data-source-secret labels: app: containerized-data-importer type: Opaque data: accessKeyId: \"\" 1 secretKey: \"\" 2", "oc apply -f data-source-secret.yaml", "oc create configmap tls-certs 1 --from-file=</path/to/file/ca.pem> 2", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume name: vm-fedora-datavolume 1 spec: dataVolumeTemplates: - metadata: creationTimestamp: null name: fedora-dv 2 spec: storage: resources: requests: storage: 10Gi 3 storageClassName: <storage_class> 4 source: http: url: \"https://mirror.arizona.edu/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.2.x86_64.qcow2\" 5 registry: url: \"docker://kubevirt/fedora-cloud-container-disk-demo:latest\" 6 secretRef: data-source-secret 7 certConfigMap: tls-certs 8 status: {} running: 
true template: metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume spec: domain: devices: disks: - disk: bus: virtio name: datavolumedisk1 machine: type: \"\" resources: requests: memory: 1.5Gi terminationGracePeriodSeconds: 180 volumes: - dataVolume: name: fedora-dv name: datavolumedisk1 status: {}", "oc create -f vm-fedora-datavolume.yaml", "oc get pods", "oc describe dv fedora-dv 1", "virtctl console vm-fedora-datavolume", "%WINDIR%\\System32\\Sysprep\\sysprep.exe /generalize /shutdown /oobe /mode:vm", "virtctl image-upload dv <datavolume_name> \\ 1 --size=<datavolume_size> \\ 2 --image-path=</path/to/image> \\ 3", "oc get dvs", "yum install -y qemu-guest-agent", "systemctl enable --now qemu-guest-agent", "oc get vm <vm_name>", "net start", "spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk", "virtctl start <vm> -n <namespace>", "oc apply -f <vm.yaml>", "apiVersion: v1 kind: PersistentVolumeClaim metadata: annotations: cdi.kubevirt.io/cloneFallbackReason: The volume modes of source and target are incompatible cdi.kubevirt.io/clonePhase: Succeeded cdi.kubevirt.io/cloneType: copy", "NAMESPACE LAST SEEN TYPE REASON OBJECT MESSAGE test-ns 0s Warning IncompatibleVolumeModes persistentvolumeclaim/test-target The volume modes of source and target are incompatible", "kind: VolumeSnapshotClass apiVersion: snapshot.storage.k8s.io/v1 driver: openshift-storage.rbd.csi.ceph.com", "kind: StorageClass apiVersion: storage.k8s.io/v1 provisioner: openshift-storage.rbd.csi.ceph.com", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <datavolume> 1 spec: source: pvc: namespace: \"<source_namespace>\" 2 name: \"<my_vm_disk>\" 3 storage: {}", "oc create -f <datavolume>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-dv-clone name: vm-dv-clone 1 spec: running: false template: metadata: labels: kubevirt.io/vm: vm-dv-clone spec: domain: devices: disks: - disk: bus: virtio name: root-disk resources: requests: memory: 64M volumes: - dataVolume: name: favorite-clone name: root-disk dataVolumeTemplates: - metadata: name: favorite-clone spec: storage: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi source: pvc: namespace: <source_namespace> 2 name: \"<source_pvc>\" 3", "oc create -f <vm-clone-datavolumetemplate>.yaml", "virtctl vnc <vm_name>", "virtctl vnc <vm_name> -v 4", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/deployVmConsoleProxy\", \"value\": true}]'", "curl --header \"Authorization: Bearer USD{TOKEN}\" \"https://api.<cluster_fqdn>/apis/token.kubevirt.io/v1alpha1/namespaces/<namespace>/virtualmachines/<vm_name>/vnc?duration=<duration>\"", "{ \"token\": \"eyJhb...\" }", "export VNC_TOKEN=\"<token>\"", "oc login --token USD{VNC_TOKEN}", "virtctl vnc <vm_name> -n <namespace>", "virtctl delete serviceaccount --namespace \"<namespace>\" \"<vm_name>-vnc-access\"", "kubectl create rolebinding \"USD{ROLE_BINDING_NAME}\" --clusterrole=\"token.kubevirt.io:generate\" --user=\"USD{USER_NAME}\"", "kubectl create rolebinding \"USD{ROLE_BINDING_NAME}\" --clusterrole=\"token.kubevirt.io:generate\" --serviceaccount=\"USD{SERVICE_ACCOUNT_NAME}\"", "virtctl console <vm_name>", "virtctl create vm --instancetype <my_instancetype> --preference <my_preference>", "virtctl create vm --instancetype 
virtualmachineinstancetype/<my_instancetype> --preference virtualmachinepreference/<my_preference>", "virtctl create vm --volume-import type:pvc,src:my-ns/my-pvc --infer-instancetype --infer-preference", "oc label DataSource foo instancetype.kubevirt.io/default-instancetype=<my_instancetype>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: dataVolumeTemplates: - metadata: name: example-vm-volume spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: {} instancetype: name: u1.medium preference: name: rhel.9 running: true template: spec: domain: devices: {} volumes: - dataVolume: name: example-vm-volume name: rootdisk - cloudInitNoCloud: 1 userData: |- #cloud-config user: cloud-user name: cloudinitdisk accessCredentials: - sshPublicKey: propagationMethod: noCloud: {} source: secret: secretName: authorized-keys 2 --- apiVersion: v1 kind: Secret metadata: name: authorized-keys data: key: c3NoLXJzYSB... 3", "oc create -f <manifest_file>.yaml", "virtctl start vm example-vm -n example-namespace", "oc describe vm example-vm -n example-namespace", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: template: spec: accessCredentials: - sshPublicKey: propagationMethod: noCloud: {} source: secret: secretName: authorized-keys", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: dataVolumeTemplates: - metadata: name: example-vm-volume spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: {} instancetype: name: u1.medium preference: name: rhel.9 running: true template: spec: domain: devices: {} volumes: - dataVolume: name: example-vm-volume name: rootdisk - cloudInitNoCloud: 1 userData: |- #cloud-config runcmd: - [ setsebool, -P, virt_qemu_ga_manage_ssh, on ] name: cloudinitdisk accessCredentials: - sshPublicKey: propagationMethod: qemuGuestAgent: users: [\"cloud-user\"] source: secret: secretName: authorized-keys 2 --- apiVersion: v1 kind: Secret metadata: name: authorized-keys data: key: c3NoLXJzYSB... 
3", "oc create -f <manifest_file>.yaml", "virtctl start vm example-vm -n example-namespace", "oc describe vm example-vm -n example-namespace", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: template: spec: accessCredentials: - sshPublicKey: propagationMethod: qemuGuestAgent: users: [\"cloud-user\"] source: secret: secretName: authorized-keys", "virtctl -n <namespace> ssh <username>@example-vm -i <ssh_key> 1", "virtctl -n my-namespace ssh cloud-user@example-vm -i my-key", "Host vm/* ProxyCommand virtctl port-forward --stdio=true %h %p", "ssh <user>@vm/<vm_name>.<namespace>", "virtctl expose vm <vm_name> --name <service_name> --type <service_type> --port <port> 1", "virtctl expose vm example-vm --name example-service --type NodePort --port 22", "oc get service", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: running: false template: metadata: labels: special: key 1", "apiVersion: v1 kind: Service metadata: name: example-service namespace: example-namespace spec: selector: special: key 1 type: NodePort 2 ports: 3 protocol: TCP port: 80 targetPort: 9376 nodePort: 30000", "oc create -f example-service.yaml", "oc get service -n example-namespace", "ssh <user_name>@<ip_address> -p <port> 1", "oc describe vm <vm_name> -n <namespace>", "Interfaces: Interface Name: eth0 Ip Address: 10.244.0.37/24 Ip Addresses: 10.244.0.37/24 fe80::858:aff:fef4:25/64 Mac: 0a:58:0a:f4:00:25 Name: default", "ssh <user_name>@<ip_address> -i <ssh_key>", "ssh [email protected] -i ~/.ssh/id_rsa_cloud-user", "oc edit vm <vm_name>", "oc apply vm <vm_name> -n <namespace>", "oc edit vm <vm_name> -n <namespace>", "disks: - bootOrder: 1 1 disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk - cdrom: bus: virtio name: cd-drive-1 interfaces: - boot Order: 2 2 macAddress: '02:96:c4:00:00' masquerade: {} name: default", "oc delete vm <vm_name>", "apiVersion: export.kubevirt.io/v1alpha1 kind: VirtualMachineExport metadata: name: example-export spec: source: apiGroup: \"kubevirt.io\" 1 kind: VirtualMachine 2 name: example-vm ttlDuration: 1h 3", "oc create -f example-export.yaml", "oc get vmexport example-export -o yaml", "apiVersion: export.kubevirt.io/v1alpha1 kind: VirtualMachineExport metadata: name: example-export namespace: example spec: source: apiGroup: \"\" kind: PersistentVolumeClaim name: example-pvc tokenSecretRef: example-token status: conditions: - lastProbeTime: null lastTransitionTime: \"2022-06-21T14:10:09Z\" reason: podReady status: \"True\" type: Ready - lastProbeTime: null lastTransitionTime: \"2022-06-21T14:09:02Z\" reason: pvcBound status: \"True\" type: PVCReady links: external: 1 cert: |- -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- volumes: - formats: - format: raw url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img - format: gzip url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img.gz name: example-disk internal: 2 cert: |- -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- volumes: - formats: - format: raw url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img - format: gzip url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img.gz name: example-disk phase: Ready serviceName: 
virt-export-example-export", "oc get vmexport <export_name> -o jsonpath={.status.links.external.cert} > cacert.crt 1", "oc get secret export-token-<export_name> -o jsonpath={.data.token} | base64 --decode > token_decode 1", "oc get vmexport <export_name> -o yaml", "apiVersion: export.kubevirt.io/v1alpha1 kind: VirtualMachineExport metadata: name: example-export spec: source: apiGroup: \"kubevirt.io\" kind: VirtualMachine name: example-vm tokenSecretRef: example-token status: # links: external: # manifests: - type: all url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/all 1 - type: auth-header-secret url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret 2 internal: # manifests: - type: all url: https://virt-export-export-pvc.default.svc/internal/manifests/all 3 - type: auth-header-secret url: https://virt-export-export-pvc.default.svc/internal/manifests/secret phase: Ready serviceName: virt-export-example-export", "curl --cacert cacert.crt <secret_manifest_url> -H \\ 1 \"x-kubevirt-export-token:token_decode\" -H \\ 2 \"Accept:application/yaml\"", "curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret -H \"x-kubevirt-export-token:token_decode\" -H \"Accept:application/yaml\"", "curl --cacert cacert.crt <all_manifest_url> -H \\ 1 \"x-kubevirt-export-token:token_decode\" -H \\ 2 \"Accept:application/yaml\"", "curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/all -H \"x-kubevirt-export-token:token_decode\" -H \"Accept:application/yaml\"", "oc get vmis -A", "oc delete vmi <vmi_name>", "kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: vmStateStorageClass: <storage_class_name>", "oc edit vm <vm_name> -n <namespace>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: tpm: 1 persistent: true 2", "apiVersion: tekton.dev/v1 kind: PipelineRun metadata: generateName: windows11-installer-run- labels: pipelinerun: windows11-installer-run spec: params: - name: winImageDownloadURL value: <windows_image_download_url> 1 - name: acceptEula value: false 2 pipelineRef: params: - name: catalog value: redhat-pipelines - name: type value: artifact - name: kind value: pipeline - name: name value: windows-efi-installer - name: version value: 4.16 resolver: hub taskRunSpecs: - pipelineTaskName: modify-windows-iso-file PodTemplate: securityContext: fsGroup: 107 runAsUser: 107", "oc apply -f windows11-customize-run.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: with-limits spec: running: false template: spec: domain: resources: requests: memory: 128Mi limits: memory: 256Mi 1", "metadata: name: example-vm-node-selector apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: nodeSelector: example-key-1: example-value-1 example-key-2: example-value-2", "metadata: name: example-vm-pod-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchExpressions: - key: example-key-1 operator: In values: - example-value-1 topologyKey: kubernetes.io/hostname podAntiAffinity: 
preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: example-key-2 operator: In values: - example-value-2 topologyKey: kubernetes.io/hostname", "metadata: name: example-vm-node-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 nodeSelectorTerms: - matchExpressions: - key: example.io/example-key operator: In values: - example-value-1 - example-value-2 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 preference: matchExpressions: - key: example-node-label-key operator: In values: - example-node-label-value", "metadata: name: example-vm-tolerations apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: tolerations: - key: \"key\" operator: \"Equal\" value: \"virtualization\" effect: \"NoSchedule\"", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: configuration: ksmConfiguration: nodeLabelSelector: {}", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: configuration: ksmConfiguration: nodeLabelSelector: matchLabels: <first_example_key>: \"true\" <second_example_key>: \"true\"", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: configuration:", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: certConfig: ca: duration: 48h0m0s renewBefore: 24h0m0s 1 server: duration: 24h0m0s 2 renewBefore: 12h0m0s 3", "certConfig: ca: duration: 4h0m0s renewBefore: 1h0m0s server: duration: 4h0m0s renewBefore: 4h0m0s", "error: hyperconvergeds.hco.kubevirt.io \"kubevirt-hyperconverged\" could not be patched: admission webhook \"validate-hco.kubevirt.io\" denied the request: spec.certConfig: ca.duration is smaller than server.duration", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: defaultCPUModel: \"EPYC\"", "apiversion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: special: vm-secureboot name: vm-secureboot spec: template: metadata: labels: special: vm-secureboot spec: domain: devices: disks: - disk: bus: virtio name: containerdisk features: acpi: {} smm: enabled: true 1 firmware: bootloader: efi: secureBoot: true 2", "oc create -f <file_name>.yaml", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\":\"replace\",\"path\":\"/spec/featureGates/VMPersistentState\", \"value\": true}]'", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm spec: template: spec: domain: firmware: bootloader: efi: persistent: true", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: pxe-net-conf 1 spec: config: | { \"cniVersion\": \"0.3.1\", \"name\": \"pxe-net-conf\", 2 \"type\": \"bridge\", 3 \"bridge\": \"bridge-interface\", 4 \"macspoofchk\": false, 5 \"vlan\": 100, 6 \"disableContainerInterface\": true, \"preserveDefaultVlan\": false 7 }", "oc create -f pxe-net-conf.yaml", "interfaces: - masquerade: {} name: default - bridge: {} name: pxe-net macAddress: 
de:00:00:00:00:de bootOrder: 1", "devices: disks: - disk: bus: virtio name: containerdisk bootOrder: 2", "networks: - name: default pod: {} - name: pxe-net multus: networkName: pxe-net-conf", "oc create -f vmi-pxe-boot.yaml", "virtualmachineinstance.kubevirt.io \"vmi-pxe-boot\" created", "oc get vmi vmi-pxe-boot -o yaml | grep -i phase phase: Running", "virtctl vnc vmi-pxe-boot", "virtctl console vmi-pxe-boot", "ip addr", "3. eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff", "kind: VirtualMachine spec: domain: resources: requests: memory: \"4Gi\" 1 memory: hugepages: pageSize: \"1Gi\" 2", "oc apply -f <virtual_machine>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: features: - name: apic 1 policy: require 2", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: Conroe 1", "apiVersion: kubevirt/v1alpha3 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: host-model 1", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora spec: running: true template: spec: schedulerName: my-scheduler 1 domain: devices: disks: - name: containerdisk disk: bus: virtio", "oc get pods", "NAME READY STATUS RESTARTS AGE virt-launcher-vm-fedora-dpc87 2/2 Running 0 24m", "oc describe pod virt-launcher-vm-fedora-dpc87", "[...] Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 21m my-scheduler Successfully assigned default/virt-launcher-vm-fedora-dpc87 to node01 [...]", "oc label node <node_name> nvidia.com/gpu.deploy.operands=false 1", "oc describe node <node_name>", "oc get pods -n nvidia-gpu-operator", "NAME READY STATUS RESTARTS AGE gpu-operator-59469b8c5c-hw9wj 1/1 Running 0 8d nvidia-sandbox-validator-7hx98 1/1 Running 0 8d nvidia-sandbox-validator-hdb7p 1/1 Running 0 8d nvidia-sandbox-validator-kxwj7 1/1 Terminating 0 9d nvidia-vfio-manager-7w9fs 1/1 Running 0 8d nvidia-vfio-manager-866pz 1/1 Running 0 8d nvidia-vfio-manager-zqtck 1/1 Terminating 0 9d", "oc get pods -n nvidia-gpu-operator", "NAME READY STATUS RESTARTS AGE gpu-operator-59469b8c5c-hw9wj 1/1 Running 0 8d nvidia-sandbox-validator-7hx98 1/1 Running 0 8d nvidia-sandbox-validator-hdb7p 1/1 Running 0 8d nvidia-vfio-manager-7w9fs 1/1 Running 0 8d nvidia-vfio-manager-866pz 1/1 Running 0 8d", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3", "oc create -f 100-worker-kernel-arg-iommu.yaml", "oc get MachineConfig", "lspci -nnv | grep -i nvidia", "02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)", "variant: openshift version: 4.16.0 metadata: name: 100-worker-vfiopci labels: machineconfiguration.openshift.io/role: worker 1 storage: files: - path: /etc/modprobe.d/vfio.conf mode: 0644 overwrite: true contents: inline: | options vfio-pci ids=10de:1eb8 2 - path: /etc/modules-load.d/vfio-pci.conf 3 mode: 0644 overwrite: true contents: inline: vfio-pci", "butane 100-worker-vfiopci.bu -o 100-worker-vfiopci.yaml", "oc apply -f 100-worker-vfiopci.yaml", "oc get MachineConfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 00-worker d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 
3.2.0 25h 01-master-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 100-worker-iommu 3.2.0 30s 100-worker-vfiopci-configuration 3.2.0 30s", "lspci -nnk -d 10de:", "04:00.0 3D controller [0302]: NVIDIA Corporation GP102GL [Tesla P40] [10de:1eb8] (rev a1) Subsystem: NVIDIA Corporation Device [10de:1eb8] Kernel driver in use: vfio-pci Kernel modules: nouveau", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: 1 pciHostDevices: 2 - pciDeviceSelector: \"10DE:1DB6\" 3 resourceName: \"nvidia.com/GV100GL_Tesla_V100\" 4 - pciDeviceSelector: \"10DE:1EB8\" resourceName: \"nvidia.com/TU104GL_Tesla_T4\" - pciDeviceSelector: \"8086:6F54\" resourceName: \"intel.com/qat\" externalResourceProvider: true 5", "oc describe node <node_name>", "Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: pciHostDevices: - pciDeviceSelector: \"10DE:1DB6\" resourceName: \"nvidia.com/GV100GL_Tesla_V100\" - pciDeviceSelector: \"10DE:1EB8\" resourceName: \"nvidia.com/TU104GL_Tesla_T4\"", "oc describe node <node_name>", "Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250", "apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: hostDevices: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: hostdevices1", "lspci -nnk | grep NVIDIA", "02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3", "oc create -f 100-worker-kernel-arg-iommu.yaml", "oc get MachineConfig", "kind: ClusterPolicy apiVersion: nvidia.com/v1 metadata: name: gpu-cluster-policy spec: operator: defaultRuntime: crio use_ocp_driver_toolkit: true initContainer: {} sandboxWorkloads: enabled: true defaultWorkload: vm-vgpu driver: enabled: 
false 1 dcgmExporter: {} dcgm: enabled: true daemonsets: {} devicePlugin: {} gfd: {} migManager: enabled: true nodeStatusExporter: enabled: true mig: strategy: single toolkit: enabled: true validator: plugin: env: - name: WITH_WORKLOAD value: \"true\" vgpuManager: enabled: true 2 repository: <vgpu_container_registry> 3 image: <vgpu_image_name> version: nvidia-vgpu-manager vgpuDeviceManager: enabled: false 4 config: name: vgpu-devices-config default: default sandboxDevicePlugin: enabled: false 5 vfioManager: enabled: false 6", "mediatedDevicesConfiguration: mediatedDeviceTypes: - nvidia-222 - nvidia-228 - nvidia-105 - nvidia-108", "nvidia-105 nvidia-108 nvidia-217 nvidia-299", "mediatedDevicesConfiguration: mediatedDeviceTypes: - nvidia-22 - nvidia-223 - nvidia-224", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: mediatedDevicesConfiguration: mediatedDeviceTypes: - nvidia-231 nodeMediatedDeviceTypes: - mediatedDeviceTypes: - nvidia-233 nodeSelector: kubernetes.io/hostname: node-11.redhat.com permittedHostDevices: mediatedDevices: - mdevNameSelector: GRID T4-2Q resourceName: nvidia.com/GRID_T4-2Q - mdevNameSelector: GRID T4-8Q resourceName: nvidia.com/GRID_T4-8Q", "spec: mediatedDevicesConfiguration: mediatedDeviceTypes: 1 - <device_type> nodeMediatedDeviceTypes: 2 - mediatedDeviceTypes: 3 - <device_type> nodeSelector: 4 <node_selector_key>: <node_selector_value>", "oc get USDNODE -o json | jq '.status.allocatable | with_entries(select(.key | startswith(\"nvidia.com/\"))) | with_entries(select(.value != \"0\"))'", "permittedHostDevices: mediatedDevices: - mdevNameSelector: GRID T4-2Q 1 resourceName: nvidia.com/GRID_T4-2Q 2", "oc describe node <node_name>", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: mediatedDevicesConfiguration: mediatedDeviceTypes: 1 - nvidia-231 permittedHostDevices: mediatedDevices: 2 - mdevNameSelector: GRID T4-2Q resourceName: nvidia.com/GRID_T4-2Q", "apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: gpus: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: gpu1 2 - deviceName: nvidia.com/GRID_T4-2Q name: gpu2", "lspci -nnk | grep <device_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: metadata: annotations: descheduler.alpha.kubernetes.io/evict: \"true\"", "apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 profiles: - DevPreviewLongLifecycle mode: Predictive 1", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type=json -p='[{\"op\": \"add\", \"path\": \"/spec/tuningPolicy\", \"value\": \"highBurst\"}]'", "oc get kubevirt.kubevirt.io/kubevirt-kubevirt-hyperconverged -n openshift-cnv -o go-template --template='{{range USDconfig, USDvalue := .spec.configuration}} {{if eq USDconfig \"apiConfiguration\" \"webhookConfiguration\" \"controllerConfiguration\" \"handlerConfiguration\"}} {{\"\\n\"}} {{USDconfig}} = {{USDvalue}} {{end}} {{end}} {{\"\\n\"}}", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "spec: resourceRequirements: vmiCPUAllocationRatio: 1 1", "apiVersion: kubevirt.io/v1 kind: VM spec: domain: devices: networkInterfaceMultiqueue: true", "virtctl addvolume 
<virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC> [--persist] [--serial=<label-name>]", "virtctl removevolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC>", "oc edit pvc <pvc_name>", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: vm-disk-expand spec: accessModes: - ReadWriteMany resources: requests: storage: 3Gi 1", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: blank-image-datavolume spec: source: blank: {} storage: resources: requests: storage: <2Gi> 1 storageClassName: \"<storage_class>\" 2", "oc create -f <blank-image-datavolume>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: <vm_name> spec: template: spec: domain: devices: disks: - disk: bus: virtio name: rootdisk errorPolicy: report 1 disk1: disk_one 2 - disk: bus: virtio name: cloudinitdisk disk2: disk_two shareable: true 3 interfaces: - masquerade: {} name: default", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-0 spec: template: spec: domain: devices: disks: - disk: bus: sata name: rootdisk - errorPolicy: report 1 lun: 2 bus: scsi reservation: true 3 name: na-shared serial: shared1234 volumes: - dataVolume: name: vm-0 name: rootdisk - name: na-shared persistentVolumeClaim: claimName: pvc-na-share", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-0 spec: template: spec: domain: devices: disks: - disk: bus: sata name: rootdisk - errorPolicy: report lun: 1 bus: scsi reservation: true 2 name: na-shared serial: shared1234 volumes: - dataVolume: name: vm-0 name: rootdisk - name: na-shared persistentVolumeClaim: claimName: pvc-na-share", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\":\"replace\",\"path\":\"/spec/featureGates/persistentReservation\", \"value\": true}]'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/virtualization/virtual-machines
Data Grid downloads
Data Grid downloads Access the Data Grid Software Downloads on the Red Hat customer portal. Note You must have a Red Hat account to access and download Data Grid software.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/upgrading_data_grid/rhdg-downloads_datagrid
Chapter 13. Removing the trust using the IdM Web UI
Chapter 13. Removing the trust using the IdM Web UI You can remove the Identity Management (IdM)/Active Directory (AD) trust using the IdM Web UI. Prerequisites You have obtained a Kerberos ticket. For details, see Logging in to IdM in the Web UI: Using a Kerberos ticket . Procedure Log in to the IdM Web UI with administrator privileges. For details, see Accessing the IdM Web UI in a web browser . In the IdM Web UI, click the IPA Server tab. In the IPA Server tab, click the Trusts tab. Select the trust you want to remove. Click the Delete button. In the Remove trusts dialog box, click Delete . Remove the trust object from your Active Directory configuration. Note Removing the trust configuration does not automatically remove the ID range IdM has created for AD users. This way, if you add the trust again, the existing ID range is re-used. Also, if AD users have created files on an IdM client, their POSIX IDs are preserved in the file metadata. To remove all information related to an AD trust, remove the AD user ID range in the ID Ranges tab after removing the trust configuration and trust object. Verification If the trust has been successfully deleted, the Web UI displays a green pop-up with the text:
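The same cleanup can also be performed from the command line. The following is a minimal sketch, not part of the original Web UI procedure: it assumes a trust with an AD domain named ad.example.com and that IdM created an ID range with the default name AD.EXAMPLE.COM_id_range; confirm the actual range name with ipa idrange-find before deleting anything.
# Authenticate as an IdM administrator
kinit admin
# Delete the trust object on the IdM side
ipa trust-del ad.example.com
# List the ID ranges, then remove the range that was created for the AD trust
ipa idrange-find
ipa idrange-del AD.EXAMPLE.COM_id_range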
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/installing_trust_between_idm_and_ad/removing-the-trust-using-the-idm-web-ui_installing-trust-between-idm-and-ad
Appendix C. Using AMQ Broker with the examples
Appendix C. Using AMQ Broker with the examples The AMQ OpenWire JMS examples require a running message broker with a queue named exampleQueue . Use the procedures below to install and start the broker and define the queue. C.1. Installing the broker Follow the instructions in Getting Started with AMQ Broker to install the broker and create a broker instance . Enable anonymous access. The following procedures refer to the location of the broker instance as <broker-instance-dir> . C.2. Starting the broker Procedure Use the artemis run command to start the broker. $ <broker-instance-dir> /bin/artemis run Check the console output for any critical errors logged during startup. The broker logs Server is now live when it is ready. $ example-broker/bin/artemis run __ __ ____ ____ _ /\ | \/ |/ __ \ | _ \ | | / \ | \ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\ \ | |\/| | | | | | _ <| '__/ _ \| |/ / _ \ '__| / ____ \| | | | |__| | | |_) | | | (_) | < __/ | /_/ \_\_| |_|\___\_\ |____/|_| \___/|_|\_\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server ... 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live ... C.3. Creating a queue In a new terminal, use the artemis queue command to create a queue named exampleQueue . $ <broker-instance-dir> /bin/artemis queue create --name exampleQueue --address exampleQueue --auto-create-address --anycast You are prompted to answer a series of yes or no questions. Answer N for no to all of them. Once the queue is created, the broker is ready for use with the example programs. C.4. Stopping the broker When you are done running the examples, use the artemis stop command to stop the broker. $ <broker-instance-dir> /bin/artemis stop Revised on 2021-08-24 14:28:08 UTC
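To confirm that exampleQueue exists before running the examples, you can query the running broker for its queue statistics. This is a minimal sketch, not part of the original appendix; it assumes the broker instance allows anonymous access and that the command connects to the local broker on its default port.
# List queue statistics on the running broker; exampleQueue should appear in the output
$ <broker-instance-dir>/bin/artemis queue stat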
[ "<broker-instance-dir> /bin/artemis run", "example-broker/bin/artemis run __ __ ____ ____ _ /\\ | \\/ |/ __ \\ | _ \\ | | / \\ | \\ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\\ \\ | |\\/| | | | | | _ <| '__/ _ \\| |/ / _ \\ '__| / ____ \\| | | | |__| | | |_) | | | (_) | < __/ | /_/ \\_\\_| |_|\\___\\_\\ |____/|_| \\___/|_|\\_\\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live", "<broker-instance-dir> /bin/artemis queue create --name exampleQueue --address exampleQueue --auto-create-address --anycast", "<broker-instance-dir> /bin/artemis stop" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_openwire_jms_client/using_the_broker_with_the_examples
Chapter 2. Managing images
Chapter 2. Managing images The Image service (glance) provides discovery, registration, and delivery services for disk and server images. It provides the ability to copy or snapshot a server image, and immediately store it. You can use stored images as templates to commission new servers quickly and more consistently than installing a server operating system and individually configuring services. 2.1. Creating images To create images, you can use Red Hat Enterprise Linux (RHEL) Kernel-based Virtual Machine (KVM) guest images, or you can manually create Red Hat OpenStack Platform (RHOSP) compatible images in the QCOW2 format by using RHEL ISO files or Windows ISO files. 2.1.1. Use a KVM guest image with Red Hat OpenStack Platform You can use one of the following ready Red Hat Enterprise Linux (RHEL) Kernel-based Virtual Machine (KVM) guest QCOW2 images: Red Hat Enterprise Linux 9 KVM Guest Image Red Hat Enterprise Linux 8 KVM Guest Image These images are configured with cloud-init and rely on EC2-compatible metadata services for provisioning SSH keys in order to function correctly. Ready Windows KVM guest QCOW2 images are not available. Note For KVM guest images: The root account in the image is deactivated, but sudo access is granted to a special user named cloud-user . There is no root password set for this image. The root password is locked in /etc/shadow by placing !! in the second field. For a Red Hat OpenStack Platform (RHOSP) instance, generate an SSH keypair from the RHOSP dashboard or command line, and use that key pair to perform SSH public key authentication to the instance as the root user. When you launch the instance, this public key is injected into it. You can then authenticate by using the private key that you download when you create the keypair. 2.1.2. Create custom Red Hat Enterprise Linux or Windows images To create custom Red Hat Enterprise Linux (RHEL) or Windows images, ensure that you have the following prerequisites in place. Prerequisites A Linux host machine to create an image. This can be any machine on which you can install and run the Linux packages, except for the undercloud or the overcloud. The advanced-virt repository is enabled: The virt-manager application is installed to have all packages necessary to create a guest operating system: The libguestfs-tools package is installed to have a set of tools to access and modify virtual machine images: A RHEL 9 or 8 ISO file or a Windows ISO file. For more information about RHEL ISO files, see RHEL 9.0 Binary DVD or RHEL 8.6 Binary DVD . If you do not have a Windows ISO file, see the Microsoft Evaluation Center to download an evaluation image. A text editor, if you want to change the kickstart files (RHEL only). Important If you install the libguestfs-tools package on the undercloud, disable iscsid.socket to avoid port conflicts with the tripleo_iscsid service on the undercloud: 2.1.3. Creating a Red Hat Enterprise Linux 9 image Manually create a Red Hat OpenStack Platform (RHOSP) compatible image in the QCOW2 format by using a Red Hat Enterprise Linux (RHEL) 9 ISO file. Note You must run all commands with the [root@host]# on your host machine. Procedure Start the installation by using virt-install : Replace the values in angle brackets <> with the correct values for your RHEL 9 image. This command launches an instance and starts the installation process. 
Note If the instance does not launch automatically, run the virt-viewer command to view the console: Configure the instance: At the initial Installer boot menu, select Install Red Hat Enterprise Linux 9 . Choose the appropriate Language and Keyboard options. When prompted about which type of devices your installation uses, select Auto-detected installation media . When prompted about which type of installation destination, select Local Standard Disks . For other storage options, select Automatically configure partitioning . Choose the Basic Server install, which installs an SSH server. For network and host name, select eth0 for network and choose a host name for your device. The default host name is localhost.localdomain . Enter a password in the Root Password field and enter the same password again in the Confirm field. Result The installation process completes and the Complete! screen is displayed. After the installation is complete, reboot the instance and log in as the root user. Update the /etc/sysconfig/network-scripts/ifcfg-eth0 file so that it contains only the following values: Reboot the machine. Register the machine with the Content Delivery Network. Update the system: Install the cloud-init packages: Edit the /etc/cloud/cloud.cfg configuration file and add the following content under cloud_init_modules : The resolv-conf option automatically configures the resolv.conf file when an instance boots for the first time. This file contains information related to the instance such as nameservers , domain , and other options. Add the following line to /etc/sysconfig/network to avoid issues when accessing the EC2 metadata service: To ensure that the console messages appear in the Log tab on the dashboard and the nova console-log output, add the following boot option to the /etc/default/grub file: Run the grub2-mkconfig command: The output is as follows: Deregister the instance so that the resulting image does not contain the subscription details for this instance: Power off the instance: Reset and clean the image by using the virt-sysprep command so that it can be used to create instances without issues: Reduce the image size by converting any free space within the disk image back to free space within the host: This command creates a new <rhel9-cloud.qcow2> file in the location from where the command is run. Note You must manually resize the partitions of instances based on the image in accordance with the disk space in the flavor that is applied to the instance. The <rhel9-cloud.qcow2> image file is ready to be uploaded to the Image service. For more information about uploading this image to your RHOSP deployment, see Uploading an image . 2.1.4. Creating a Red Hat Enterprise Linux 8 image Manually create a Red Hat OpenStack Platform (RHOSP) compatible image in the QCOW2 format by using a Red Hat Enterprise Linux (RHEL) 8 ISO file. Note You must run all commands with the [root@host]# on your host machine. Procedure Start the installation by using virt-install : Replace the values in angle brackets <> with the correct values for your RHEL 8 image. This command launches an instance and starts the installation process. Note If the instance does not launch automatically, run the virt-viewer command to view the console: Configure the instances: At the initial Installer boot menu, select Install or upgrade an existing system and follow the installation prompts. Accept the defaults. The disk installer provides an option to test your installation media before installation. 
Select OK to run the test or Skip to proceed without testing. Choose the appropriate Language and Keyboard options. When prompted about which type of devices your installation uses, select Basic Storage Devices . Choose a host name for your device. The default host name is localhost.localdomain . Set the timezone and root password. Based on the space on the disk, choose the type of installation you want from the options in the Which type of installation would you like? window. Choose the Basic Server install, which installs an SSH server. The installation process completes and the Congratulations, your Red Hat Enterprise Linux installation is complete screen is displayed. Reboot the instance and log in as the root user. Update the /etc/sysconfig/network-scripts/ifcfg-eth0 file so that it contains only the following values: Reboot the machine. Register the machine with the Content Delivery Network: Update the system: Install the cloud-init packages: Edit the /etc/cloud/cloud.cfg configuration file and add the following content under cloud_init_modules . The resolv-conf option automatically configures the resolv.conf file when an instance boots for the first time. This file contains information related to the instance such as nameservers , domain , and other options. To prevent network issues, create /etc/udev/rules.d/75-persistent-net-generator.rules : This prevents the /etc/udev/rules.d/70-persistent-net.rules file from being created. If the /etc/udev/rules.d/70-persistent-net.rules file is created, networking might not function correctly when you boot from snapshots because the network interface is created as eth1 instead of eth0 and the IP address is not assigned. Add the following line to /etc/sysconfig/network to avoid issues when accessing the EC2 metadata service: To ensure that the console messages appear in the Log tab on the dashboard and the nova console-log output, add the following boot option to the /etc/grub.conf file: Deregister the virtual machine so that the resulting image does not contain the same subscription details for this instance: Power off the instance: Reset and clean the image by using the virt-sysprep command so that it can be used to create instances without issues: Reduce the image size by using the virt-sparsify command. This command converts any free space within the disk image back to free space within the host: This command creates a new <rhel86-cloud.qcow2> file in the location from where the command is run. Note You must manually resize the partitions of instances based on the image in accordance with the disk space in the flavor that is applied to the instance. The <rhel86-cloud.qcow2> image file is ready to be uploaded to the Image service. For more information about uploading this image to your RHOSP deployment, see Uploading an image . 2.1.5. Creating a Windows image Manually create a Red Hat OpenStack Platform (RHOSP) compatible image in the QCOW2 format by using a Windows ISO file. Note You must run all commands with the [root@host]# on your host machine. Procedure Start the installation by using virt-install : Replace the following values of the virt-install parameters: <name> - the name that the Windows instance has. <size> - disk size in GB. <path> - the path to the Windows installation ISO file. <RAM> - the requested amount of RAM in MB. Note The --os-type=windows parameter ensures that the clock is configured correctly for the Windows guest, and enables its Hyper-V enlightenment features. 
You must also set os_type=windows in the image metadata before uploading the image to the Image service (glance). The virt-install command saves the guest image as /var/lib/libvirt/images/ <name> . qcow2 by default. If you want to keep the guest image elsewhere, change the parameter of the --disk option: Replace <filename> with the name of the file that stores the instance image, and optionally its path. For example, path=win8.qcow2,size=8 creates an 8 GB file named win8.qcow2 in the current working directory. Tip If the guest does not launch automatically, run the virt-viewer command to view the console: For more information about how to install Windows, see the relevant Microsoft documentation. To allow the newly installed Windows system to use the virtualized hardware, you might need to install VirtIO drivers. To do so, install the image by attaching it as a CD-ROM drive to the Windows instance. To install the virtio-win package, you must add the VirtIO ISO image to the instance, and install the VirtIO drivers. For more information, see Installing KVM paravirtualized drivers for Windows virtual machines in Configuring and managing virtualization . To complete the configuration, download and execute Cloudbase-Init on the Windows system. At the end of the installation of Cloudbase-Init, select the Run Sysprep and Shutdown checkboxes. The Sysprep tool makes the guest unique by generating an OS ID, which is used by certain Microsoft services. Important Red Hat does not provide technical support for Cloudbase-Init. If you encounter an issue, see Contact Cloudbase Solutions . When the Windows system shuts down, the <name>.qcow2 image file is ready to be uploaded to the Image service. For more information about uploading this image to your RHOSP deployment, see Uploading an image . 2.1.5.1. Metadata properties The Compute service (nova) has deprecated support for using libosinfo data to set default device models. Instead, use the following image metadata properties to configure the optimal virtual hardware for an instance: os_distro os_version hw_cdrom_bus hw_disk_bus hw_scsi_model hw_vif_model hw_video_model hypervisor_type For more information about these metadata properties, see Image configuration parameters . 2.1.6. Create an image for UEFI Secure Boot When the overcloud contains UEFI Secure Boot Compute nodes, you can create a Secure Boot instance image that cloud users can use to launch Secure Boot instances. Procedure Create a new image for UEFI Secure Boot: Replace <base_image_file> with an image file that supports UEFI and the GUID Partition Table (GPT) standard, and includes an EFI system partition. If the default machine type is not q35 , then set the machine type to q35 : Specify that the instance must be scheduled on a UEFI Secure Boot host: 2.2. Uploading an image Upload an image to the Red Hat OpenStack Platform (RHOSP) Image service (glance). Procedure Use the glance image-create command with the property option to upload an image. For example: For a list of glance image-create command options, see Image service (glance) command options . For a list of property keys, see Image configuration parameters . 2.3. Updating an image Update an image. Procedure Use the glance image-update command with the property option to update an image. For example: For a list of glance image-update command options, see Image service (glance) command options . For a list of property keys, see Image configuration parameters . 2.4. 
Importing an image You can import images to the Image service (glance) by using one of the following two methods: Use web-download to import an image from a URI. Use glance-direct to import an image from a local file system. The web-download method is enabled by default. The cloud administrator configures import methods. You can run the glance import-info command to list available import options. 2.4.1. Import an image from a remote URI You can use the web-download method to copy an image from a remote URI. Create an image and specify the URI of the image to import: Replace <CONTAINER FORMAT> with the container format that you are setting for your image (None, ami, ari, aki, bare, ovf, ova, docker). Replace <DISK-FORMAT> with the disk format that you are setting for your image (None, ami, ari, aki, vhd, vhdx, vmdk, raw, qcow2, vdi, iso, ploop). Replace <NAME> with a descriptive name for your image. Replace <URI> with the URI of your image. You can check the availability of the image by using the glance image-show <IMAGE_ID> command. Replace <IMAGE_ID> with the ID you provided during image creation. The Image service web download method uses a two-stage process to perform the import: The web download method creates an image record. The web download method retrieves the image from the specified URI. The URI is subject to optional denylist and allowlist filtering. The Image Property Injection plugin may inject metadata properties into the image. These injected properties determine which compute nodes the image instances are launched on. 2.4.2. Import an image from a local volume The glance-direct method creates an image record, which generates an image ID. After the image is uploaded to the Image service from a local volume, it is stored in a staging area and is made active after it passes any configured checks. The glance-direct method requires a shared staging area when used in a highly available (HA) configuration. Note Image uploads that use the glance-direct method can fail in an HA environment if a common staging area is not present. In an HA active-active environment, API calls are distributed to the Image service controllers. The download API call can be sent to a different controller than the API call to upload the image. The glance-direct method uses three different calls to import an image: glance image-create glance image-stage glance image-import You can use the glance image-create-via-import command to perform all three of these calls in one command: Replace <CONTAINER FORMAT> , <DISK-FORMAT> , <NAME> , and </PATH/TO/IMAGE> with the relevant values for your image. After the image moves from the staging area to the back-end location, the image is listed. However, it might take some time for the image to become active. You can check the availability of the image by using the glance image-show <IMAGE_ID> command. Replace <IMAGE_ID> with the ID you provided during image creation. 2.5. Deleting an image Procedure Use the glance image-delete command to delete one or more images: Replace <IMAGE_ID> with the ID of the image you want to delete. Note The glance image-delete command permanently deletes the image and all copies of the image, as well as the image instance and metadata. 2.6. Hiding or unhiding an image You can hide public images from normal listings presented to users. For instance, you can hide obsolete CentOS 7 images and show only the latest version to simplify the user experience. Users can discover and use hidden images. 
To hide an image: To create a hidden image, add the --hidden argument to the glance image-create command. To unhide an image: Show hidden images To list hidden images: 2.7. Enabling image conversion You can upload a QCOW2 image to the Image service (glance) by enabling the GlanceImageImportPlugins parameter. You can then convert the QCOW2 image to RAW format. Note Image conversion is automatically enabled when you use Red Hat Ceph Storage RADOS Block Device (RBD) to store images and boot Nova instances. To enable image conversion, create an environment file that contains the following parameter value. Include the new environment file with the -e option in the openstack overcloud deploy command: Use the Image service command-line client for image management. 2.7.1. Converting an image to RAW format Red Hat Ceph Storage can store, but does not support using, QCOW2 images to host virtual machine (VM) disks. When you upload a QCOW2 image and create a VM from it, the compute node downloads the image, converts the image to RAW, and uploads it back into Ceph, which can then use it. This process affects the time it takes to create VMs, especially during parallel VM creation. For example, when you create multiple VMs simultaneously, uploading the converted image to the Ceph cluster might impact already running workloads. The upload process can starve those workloads of IOPS and impede storage responsiveness. To boot VMs in Ceph more efficiently (ephemeral back end or boot from volume), the glance image format must be RAW. Procedure Converting an image to RAW might yield an image that is larger in size than the original QCOW2 image file. Run the following command before the conversion to determine the final RAW image size: Convert an image from QCOW2 to RAW format: 2.7.1.1. Configuring disk formats in the Image service (glance) You can configure the Image service (glance) to enable or reject disk formats by using the GlanceDiskFormats parameter. Procedure Log in to the undercloud host as the stack user. Source the undercloud credentials file: Include the GlanceDiskFormats parameter in an environment file, for example, glance_disk_formats.yaml : For example, use the following configuration to enable only RAW and ISO disk formats: Use the following example configuration to reject QCOW2 disk images: Include the environment file that contains your new configuration in the openstack overcloud deploy command with any other environment files that are relevant to your environment: Replace <overcloud_environment_files> with the list of environment files that are part of your deployment. Replace <new_environment_file> with the environment file that contains your new configuration. For more information about the disk formats available in RHOSP, see Image configuration parameters . 2.7.2. Storing an image in RAW format With the GlanceImageImportPlugins parameter enabled, run the following command to store a previously created image in RAW format: For --name , replace NAME with the name of the image; this is the name that will appear in glance image-list . For --uri , replace http://server/image.qcow2 with the location and file name of the QCOW2 image. Note This command example creates the image record and imports it by using the web-download method. The glance-api downloads the image from the --uri location during the import process. If web-download is not available, glanceclient cannot automatically download the image data. Run the glance import-info command to list the available image import methods.
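To tie the conversion and import steps together, the short sketch below checks how large a QCOW2 image becomes when converted, converts it, and then verifies the result in the Image service. It is an illustration only, reusing the qemu-img and glance commands referenced above; the image and file names are placeholders.
# Check the virtual size the image will have once converted to RAW
qemu-img info image.qcow2
# Convert the image from QCOW2 to RAW
qemu-img convert -p -f qcow2 -O raw image.qcow2 image.raw
# Confirm which import methods the deployment offers
glance import-info
# After the import completes, confirm the image is active and stored as raw
glance image-show <IMAGE_ID> | grep -E 'status|disk_format'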
[ "sudo subscription-manager repos --enable=advanced-virt-for-rhel-8-x86_64-rpms", "sudo dnf module install -y virt", "sudo dnf install -y libguestfs-tools-c", "sudo systemctl disable --now iscsid.socket", "virt-install --virt-type kvm --name <rhel9> --ram <2048> --cdrom </var/lib/libvirt/images/rhel-9.0-x86_64-dvd.iso> --disk <rhel9.qcow2>,format=qcow2,size=<10> --network=bridge:virbr0 --graphics vnc,listen=127.0.0.1 --noautoconsole --os-variant=<rhel9.0>", "virt-viewer <rhel9>", "TYPE=Ethernet DEVICE=eth0 ONBOOT=yes BOOTPROTO=dhcp NM_CONTROLLED=no", "sudo subscription-manager register sudo subscription-manager attach --pool=Valid-Pool-Number-123456 sudo subscription-manager repos --enable=rhel-9-server-rpms", "dnf -y update", "dnf install -y cloud-utils-growpart cloud-init", "- resolv-conf", "NOZEROCONF=yes", "GRUB_CMDLINE_LINUX_DEFAULT=\"console=tty0 console=ttyS0,115200n8\"", "grub2-mkconfig -o /boot/grub2/grub.cfg", "Generating grub configuration file Found linux image: /boot/vmlinuz-3.10.0-229.9.2.el9.x86_64 Found initrd image: /boot/initramfs-3.10.0-229.9.2.el9.x86_64.img Found linux image: /boot/vmlinuz-3.10.0-121.el9.x86_64 Found initrd image: /boot/initramfs-3.10.0-121.el9.x86_64.img Found linux image: /boot/vmlinuz-0-rescue-b82a3044fb384a3f9aeacf883474428b Found initrd image: /boot/initramfs-0-rescue-b82a3044fb384a3f9aeacf883474428b.img done", "subscription-manager repos --disable=* subscription-manager unregister dnf clean all", "poweroff", "virt-sysprep -d <rhel9>", "virt-sparsify --compress <rhel9.qcow2> <rhel9-cloud.qcow2>", "virt-install --virt-type kvm --name <rhel86-cloud-image> --ram <2048> --vcpus <2> --disk <rhel86.qcow2>,format=qcow2,size=<10> --location <rhel-8.6-x86_64-boot.iso> --network=bridge:virbr0 --graphics vnc,listen=127.0.0.1 --noautoconsole --os-variant <rhel8.6>", "virt-viewer <rhel86-cloud-image>", "TYPE=Ethernet DEVICE=eth0 ONBOOT=yes BOOTPROTO=dhcp NM_CONTROLLED=no", "sudo subscription-manager register sudo subscription-manager attach --pool=Valid-Pool-Number-123456 sudo subscription-manager repos --enable=rhel-8-server-rpms", "dnf -y update", "dnf install -y cloud-utils-growpart cloud-init", "- resolv-conf", "echo \"#\" > /etc/udev/rules.d/75-persistent-net-generator.rules", "NOZEROCONF=yes", "console=tty0 console=ttyS0,115200n8", "subscription-manager repos --disable=* subscription-manager unregister dnf clean all", "poweroff", "virt-sysprep -d <rhel86-cloud-image>", "virt-sparsify --compress <rhel86.qcow2> <rhel86-cloud.qcow2>", "virt-install --name=<name> --disk size=<size> --cdrom=<path> --os-type=windows --network=bridge:virbr0 --graphics spice --ram=<ram>", "--disk path=<filename>,size=<size>", "virt-viewer <name>", "openstack image create --file <base_image_file> uefi_secure_boot_image", "openstack image set --property hw_machine_type=q35 uefi_secure_boot_image", "openstack image set --property hw_firmware_type=uefi --property os_secure_boot=required uefi_secure_boot_image", "glance image-create --name <NAME> --is-public true --disk-format qcow2 --container-format bare --file <IMAGE_FILE> --property <IMAGE_METADATA>", "glance image-update IMG-UUID --property architecture=x86_64", "glance image-create-via-import --container-format <CONTAINER FORMAT> --disk-format <DISK-FORMAT> --name <NAME> --import-method web-download --uri <URI>", "glance image-create-via-import --container-format <CONTAINER FORMAT> --disk-format <DISK-FORMAT> --name <NAME> --file </PATH/TO/IMAGE>", "glance image-delete <IMAGE_ID> [<IMAGE_ID> ...]", "glance image-update 
<image_id> --hidden 'true'", "glance image-update <image_id> --hidden 'false'", "glance image-list --hidden 'true'", "parameter_defaults: GlanceImageImportPlugins:'image_conversion'", "qemu-img info <image>.qcow2", "qemu-img convert -p -f qcow2 -O raw <original qcow2 image>.qcow2 <new raw image>.raw", "source ~/stackrc", "parameter_defaults: GlanceDiskFormats: - <disk_format>", "parameter_defaults: GlanceDiskFormats: - raw - iso", "parameter_defaults: GlanceDiskFormats: - raw - iso - aki - ari - ami", "openstack overcloud deploy --templates -e <overcloud_environment_files> -e <new_environment_file> ...", "glance image-create-via-import --disk-format qcow2 --container-format bare --name NAME --visibility public --import-method web-download --uri http://server/image.qcow2" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/creating_and_managing_images/assembly_managing-images_osp
Chapter 5. Authentication and Interoperability
Chapter 5. Authentication and Interoperability SSSD in a container now fully supported The rhel7/sssd container image, which provides the System Security Services Daemon (SSSD), is no longer a Technology Preview feature. The image is now fully supported. Note that the rhel7/ipa-server container image is still a Technology Preview feature. For details, see https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/using_containerized_identity_management_services . (BZ#1467260) Identity Management now supports FIPS With this enhancement, Identity Management (IdM) supports the Federal Information Processing Standard (FIPS). This enables you to run IdM in environments that must meet the FIPS criteria. To run IdM with FIPS mode enabled, you must set up all servers in the IdM environment using Red Hat Enterprise Linux 7.4 with FIPS mode enabled. Note that you cannot: Enable FIPS mode on existing IdM servers previously installed with FIPS mode disabled. Install a replica in FIPS mode when using an existing IdM server with FIPS mode disabled. For further details, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html-single/Linux_Domain_Identity_Authentication_and_Policy_Guide/index.html#prerequisites . (BZ# 1125174 ) SSSD supports obtaining a Kerberos ticket when users authenticate with a smart card The System Security Services Daemon (SSSD) now supports the Kerberos PKINIT preauthentication mechanism. When authenticating with a smart card to a desktop client system enrolled in an Identity Management (IdM) domain, users receive a valid Kerberos ticket-granting ticket (TGT) if the authentication was successful. Users can then use the TGT for further single sign-on (SSO) authentication from the client system. For details, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Linux_Domain_Identity_Authentication_and_Policy_Guide/sc-pkinit-auth.html . (BZ# 1200767 , BZ# 1405075 ) SSSD enables logging in to different user accounts with the same smart card certificate Previously, the System Security Services Daemon (SSSD) required every certificate to be uniquely mapped to a single user. When using smart card authentication, users with multiple accounts were not able to log in to all of these accounts with the same smart card certificate. For example, a user with a personal account and a functional account (such as a database administrator account) was able to log in only to the personal account. With this update, SSSD no longer requires certificates to be uniquely mapped to a single user. As a result, users can now log in to different accounts with a single smart card certificate. For details, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Linux_Domain_Identity_Authentication_and_Policy_Guide/smart-cards.html . (BZ# 1340711 , BZ# 1402959 ) IdM web UI enables smart card login The Identity Management web UI enables users to log in using smart cards. For details, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Linux_Domain_Identity_Authentication_and_Policy_Guide/sc-web-ui-auth.html . (BZ# 1366572 ) New packages: keycloak-httpd-client-install The keycloak-httpd-client-install packages provide various libraries and tools that can automate and simplify the configuration of Apache httpd authentication modules when registering as a Red Hat Single Sign-On (RH-SSO, also called Keycloak) federated Identity Provider (IdP) client. 
For details on RH-SSO, see https://access.redhat.com/products/red-hat-single-sign-on . As part of this update, new dependencies have been added to Red Hat Enterprise Linux: The python-requests-oauthlib package: This package provides the OAuth library support for the python-requests package, which enables python-requests to use OAuth for authentication. The python-oauthlib package: This package is a Python library providing OAuth authentication message creation and consumption. It is meant to be used in conjunction with tools providing message transport. (BZ# 1401781 , BZ#1401783, BZ#1401784) New Kerberos credential cache type: KCM This update adds a new SSSD service named kcm . The service is included in the sssd-kcm subpackage. When the kcm service is installed, you can configure the Kerberos library to use a new credential cache type named KCM . When the KCM credential cache type is configured, the sssd-kcm service manages the credentials. The KCM credential cache type is well-suited for containerized environments: With KCM, you can share credential caches between containers on demand, based on mounting the UNIX socket on which the kcm service listens. The kcm service runs in user space outside the kernel, unlike the KEYRING credential cache type that RHEL uses by default. With KCM, you can run the kcm service only in selected containers. With KEYRING, all containers share the credential caches because they share the kernel. Additionally, the KCM credential cache type supports cache collections, unlike the FILE ccache type. For details, see the sssd-kcm(8) man page. (BZ# 1396012 ) AD users can log in to the web UI to access their self-service page Previously, Active Directory (AD) users were only able to authenticate using the kinit utility from the command line. With this update, AD users can also log in to the Identity Management (IdM) web UI. Note that the IdM administrator must create an ID override for an AD user before the user is able to log in. As a result, AD users can access their self-service page through the IdM web UI. The self-service page displays the information from the AD users' ID override. For details, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Linux_Domain_Identity_Authentication_and_Policy_Guide/using-the-ui.html#ad-users-idm-web-ui . (BZ# 872671 ) SSSD enables configuring an AD subdomain in the SSSD server mode Previously, the System Security Services Daemon (SSSD) automatically configured trusted Active Directory (AD) domains. With this update, SSSD supports configuring certain parameters for trusted AD domains in the same way as the joined domain. As a result, you can set individual settings for trusted domains, such as the domain controller that SSSD communicates with. To do this, create a section in the /etc/sssd/sssd.conf file with a name that follows this template: For example, if the main IdM domain name is ipa.com and the trusted AD domain name is ad.com, the corresponding section name is: (BZ#1214491) SSSD supports user and group lookups and authentication with short names in AD environments Previously, the System Security Services Daemon (SSSD) supported user names without the domain component, also called short names, for user and group resolution and authentication only when the daemon was joined to a standalone domain. 
SSSD supports user and group lookups and authentication with short names in AD environments
Previously, the System Security Services Daemon (SSSD) supported user names without the domain component, also called short names, for user and group resolution and authentication only when the daemon was joined to a standalone domain. Now, you can use short names for these purposes in all SSSD domains in these environments:
On clients joined to Active Directory (AD)
In Identity Management (IdM) deployments with a trust relationship to an AD forest
The output format of all commands is always fully-qualified even when using short names. This feature is enabled by default after you set up a domain's resolution order list in one of the following ways (listed in order of preference):
Locally, by configuring the list using the domain_resolution_order option in the [sssd] section of the /etc/sssd/sssd.conf file
By using an ID view
Globally, in the IdM configuration
To disable the feature, set the use_fully_qualified_names option to True in the [domain/example.com] section of the /etc/sssd/sssd.conf file. (BZ#1330196)
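A minimal sketch of the local configuration option listed above; the domain names are placeholders, and the exact list syntax is documented in the sssd.conf(5) man page:

    [sssd]
    domain_resolution_order = ad.example.com, ipa.example.com

With this in place, a lookup such as id alice resolves the short name against ad.example.com first and then ipa.example.com, without requiring the fully qualified name.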
SSSD supports user and group resolution, authentication, and authorization in setups without UIDs or SIDs
In traditional System Security Services Daemon (SSSD) deployments, users and groups either have POSIX attributes set or SSSD can resolve the users and groups based on Windows security identifiers (SID). With this update, in setups that use LDAP as the identity provider, SSSD now supports the following functionality even when UIDs or SIDs are not present in the LDAP directory:
User and group resolution through the D-Bus interface
Authentication and authorization through the pluggable authentication module (PAM) interface
(BZ#1425891)

SSSD introduces the sssctl user-checks command, which checks basic SSSD functionality in a single operation
The sssctl utility now includes a new command named user-checks. The sssctl user-checks command helps debug problems in applications that use the System Security Services Daemon (SSSD) as a back end for user lookup, authentication, and authorization. The sssctl user-checks [USER_NAME] command displays user data available through Name Service Switch (NSS) and the InfoPipe responder for the D-Bus interface. The displayed data shows whether the user is authorized to log in using the system-auth pluggable authentication module (PAM) service. Additional options accepted by sssctl user-checks check authentication or different PAM services. For details on sssctl user-checks, use the sssctl user-checks --help command. (BZ#1414023)
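For example, a hedged sketch of typical invocations on a client; the user name and PAM service are placeholders, and you should confirm the available options with sssctl user-checks --help on your version:

    # show NSS and InfoPipe data plus the access-control result for the user
    sssctl user-checks alice
    # simulate the account check for a specific PAM service, such as sshd
    sssctl user-checks -a acct -s sshd alice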
Support for secrets as a service
This update adds a responder named secrets to the System Security Services Daemon (SSSD). This responder allows an application to communicate with SSSD over a UNIX socket using the Custodia API. This enables SSSD to store secrets in its local database or to forward them to a remote Custodia server. (BZ#1311056)

IdM enables semi-automatic upgrades of the IdM DNS records on an external DNS server
To simplify updating the Identity Management (IdM) DNS records on an external DNS server, IdM introduces the ipa dns-update-system-records --dry-run --out [file] command. The command generates a list of records in a format accepted by the nsupdate utility. You can use the generated file to update the records on the external DNS server by using a standard dynamic DNS update mechanism secured with the Transaction Signature (TSIG) protocol or the GSS algorithm for TSIG (GSS-TSIG). For details, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Linux_Domain_Identity_Authentication_and_Policy_Guide/dns-updates-external.html . (BZ#1409628)
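A sketch of the resulting workflow, assuming the external DNS server accepts GSS-TSIG dynamic updates; the output file name is arbitrary:

    # on an IdM server: generate the records without modifying anything
    ipa dns-update-system-records --dry-run --out idm-records.nsupdate
    # push the records to the external DNS server using GSS-TSIG
    kinit admin
    nsupdate -g idm-records.nsupdate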
IdM now generates SHA-256 certificate and public key fingerprints
Previously, Identity Management (IdM) used the MD5 hash algorithm when generating fingerprints for certificates and public keys. To increase security, IdM now uses the SHA-256 algorithm in this scenario. (BZ#1444937)

IdM supports flexible mapping mechanisms for linking smart card certificates to user accounts
Previously, the only way to find a user account corresponding to a certain smart card in Identity Management (IdM) was to provide the whole smart card certificate as a Base64-encoded DER string. With this update, it is possible to find a user account also by specifying attributes of the smart card certificates, not just the certificate string itself. For example, the administrator can now define matching and mapping rules to link smart card certificates issued by a certain certificate authority (CA) to a user account in IdM. (BZ#1402959)

New user-space tools enable more convenient LMDB debugging
This update introduces the mdb_copy, mdb_dump, mdb_load, and mdb_stat tools in the /usr/libexec/openldap/ directory. The addition includes relevant man pages in the man/man1 subdirectory. Use the new tools only to debug problems related to the Lightning Memory-Mapped Database (LMDB) back end. (BZ#1428740)

openldap rebased to version 2.4.44
The openldap packages have been upgraded to upstream version 2.4.44, which provides a number of bug fixes and enhancements over the previous version. In particular, this new version fixes many replication and Lightning Memory-Mapped Database (LMDB) bugs. (BZ#1386365)

Improved security of DNS lookups and robustness of service principal lookups in Identity Management
The Kerberos client library no longer attempts to canonicalize host names when issuing ticket-granting server (TGS) requests. This feature improves:
Security, because DNS lookups, which were previously required during canonicalization, are no longer performed
Robustness of service principal lookups in more complex DNS environments, such as clouds or containerized applications
Make sure you specify the correct fully qualified domain name (FQDN) in host and service principals. Due to this change in behavior, Kerberos does not attempt to resolve any other form of names in principals, such as short names. (BZ#1404750)

samba rebased to version 4.6.2
The samba packages have been upgraded to version 4.6.2, which provides a number of bug fixes and enhancements over the previous version:
Samba now verifies the ID mapping configuration before the winbindd service starts. If the configuration is invalid, winbindd fails to start. Use the testparm utility to verify your /etc/samba/smb.conf file. For further details, see the IDENTITY MAPPING CONSIDERATIONS section in the smb.conf man page.
Uploading printer drivers from Windows 10 now works correctly.
Previously, the default value of the rpc server dynamic port range parameter was 1024-1300. With this update, the default has been changed to 49152-65535 and now matches the range used in Windows Server 2008 and later. Update your firewall rules if necessary.
The net ads unregister command can now delete the DNS entry of the host from the Active Directory DNS zone when leaving the domain.
SMB 2.1 leases are now enabled by default in the smb2 leases parameter. SMB leasing enables clients to aggressively cache files.
To improve security, the NT LAN manager version 1 (NTLMv1) protocol is now disabled by default. If you require the insecure NTLMv1 protocol, set the ntlm auth parameter in the /etc/samba/smb.conf file to yes.
The event subcommand has been added to the ctdb utility for interacting with event scripts.
The idmap_hash ID mapping back end is marked as deprecated and will be removed in a future Samba version.
The deprecated only user and username parameters have been removed.
Samba automatically updates its tdb database files when the smbd, nmbd, or winbind daemon starts. Back up the database files before starting Samba. Note that Red Hat does not support downgrading tdb database files.
For further information about notable changes, read the upstream release notes before updating. (BZ#1391954)
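Given the ID mapping validation and the new default RPC port range noted above, a quick pre-update check might look like the following; the firewalld commands are only an illustration and assume firewalld is in use:

    # validate smb.conf, including the ID mapping configuration, before winbindd starts
    testparm /etc/samba/smb.conf
    # open the new default dynamic RPC port range if clients connect to it
    firewall-cmd --permanent --add-port=49152-65535/tcp
    firewall-cmd --reload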
authconfig can enable SSSD to authenticate users with smart cards
This new feature allows the authconfig command to configure the System Security Services Daemon (SSSD) to authenticate users with smart cards, for example:
With this update, smart card authentication can now be performed on systems where pam_pkcs11 is not installed. However, if pam_pkcs11 is installed, the --smartcardmodule=sssd option is ignored. Instead, the first pkcs11_module defined in the /etc/pam_pkcs11/pam_pkcs11.conf file is used as the default. For details, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Linux_Domain_Identity_Authentication_and_Policy_Guide/auth-idm-client-sc.html . (BZ#1378943)

authconfig can now enable account locking
This update adds the --enablefaillock option for the authconfig command. When the option is enabled, the configured account will be locked for 20 minutes after four consecutive failed login attempts within a 15-minute interval. (BZ#1334449)

Improved performance of the IdM server
The Identity Management (IdM) server has a higher performance across many of the common workflows and setups. These improvements include:
Vault performance has been increased by reducing the round trips within the IdM server management framework.
The IdM server management framework has been tuned to reduce the time spent in internal communication and authentication.
The Directory Server connection management has been made more scalable with the use of the nunc-stans framework.
On new installations, the Directory Server now auto-tunes the database entry cache and the number of threads based on the hardware resources of the server.
The memberOf plug-in performance has been improved when working with large or nested groups.
(BZ#1395940, BZ#1425906, BZ#1400653)

The default session expiration period in the IdM web UI has changed
Previously, when the user logged in to the Identity Management (IdM) web UI using a user name and password, the web UI automatically logged the user out after 20 minutes of inactivity. With this update, the default session length is the same as the expiration period of the Kerberos ticket obtained during the login operation. To change the default session length, use the kinit_lifetime option in the /etc/ipa/default.conf file, and restart the httpd service. (BZ#1459153)
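As a sketch of changing the session length on an IdM server; the two-hour value is only an example and assumes the option accepts a Kerberos-style lifetime string:

    # /etc/ipa/default.conf
    [global]
    kinit_lifetime = 2h

    # apply the change
    systemctl restart httpd

Web UI sessions started after the restart then expire together with the two-hour ticket.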
The dbmon.sh script now uses instance names to connect to Directory Server instances
The dbmon.sh shell script enables you to monitor the Directory Server database and entry cache usage. With this update, the script no longer uses the HOST and PORT environment variables. To support secure binds, the script now reads the Directory Server instance name from the SERVID environment variable and uses it to retrieve the host name, the port, and whether the server requires a secure connection. For example, to monitor the slapd-localhost instance, enter: (BZ#1394000)

Directory Server now uses the SSHA_512 password storage scheme as default
Previously, Directory Server used the weak 160-bit salted secure hash algorithm (SSHA) as default password storage scheme set in the passwordStorageScheme and nsslapd-rootpwstoragescheme parameters in the cn=config entry. To increase security, the default of both parameters has been changed to the strong 512-bit SSHA scheme (SSHA_512). The new default is used:
When performing new Directory Server installations.
When the passwordStorageScheme parameter is not set, and you are updating passwords stored in userPassword attributes.
When the nsslapd-rootpwstoragescheme parameter is not set, and you are updating the Directory Server manager password set in the nsslapd-rootpw attribute.
(BZ#1425907)

Directory Server now uses the tcmalloc memory allocator
Red Hat Directory Server now uses the tcmalloc memory allocator. The previously used standard glibc allocator required more memory, and in certain situations, the server could run out of memory. Using the tcmalloc memory allocator, Directory Server now requires less memory, and performance has increased. (BZ#1426275)

Directory Server now uses the nunc-stans framework
The nunc-stans event-based framework has been integrated into Directory Server. Previously, the performance could be slow when many simultaneous incoming connections were established to Directory Server. With this update, the server is able to handle a significantly larger number of connections without performance degradation. (BZ#1426278, BZ#1206301, BZ#1425906)

Improved performance of the Directory Server memberOf plug-in
Previously, when working with large or nested groups, plug-in operations could take a long time. With this update, the performance of the Red Hat Directory Server memberOf plug-in has been improved. As a result, the memberOf plug-in now adds users to and removes users from groups faster. (BZ#1426283)

Directory Server now logs severity levels in the error log file
Directory Server now logs severity levels in the /var/log/dirsrv/slapd-instance_name/errors log file. Previously, it was difficult to distinguish the severity of entries in the error log file. With this enhancement, administrators can use the severity level to filter the error log. (BZ#1426289)

Directory Server now supports the PBKDF2_SHA256 password storage scheme
To increase security, this update adds the 256-bit password-based key derivation function 2 (PBKDF2_SHA256) to the list of supported password-storage schemes in Directory Server. The scheme uses 30,000 iterations to apply the 256-bit secure hash algorithm (SHA256). Note that the network security service (NSS) database in Red Hat Enterprise Linux prior to version 7.4 does not support PBKDF2. Therefore, you cannot use this password scheme in a replication topology with earlier Directory Server versions. (BZ#1436973)
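As a hedged sketch of opting in to the new scheme, the following ldapmodify operation changes the default user password storage scheme; the bind credentials and host are placeholders, and existing hashes are only converted the next time each password is changed:

    ldapmodify -x -D "cn=Directory Manager" -W -H ldap://server.example.com <<EOF
    dn: cn=config
    changetype: modify
    replace: passwordStorageScheme
    passwordStorageScheme: PBKDF2_SHA256
    EOF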
Improved auto-tuning support in Directory Server
Previously, you had to monitor the databases and manually tune settings to improve the performance. With this update, Directory Server supports optimized auto-tuning for:
The database and entry cache
The number of threads created
Directory Server tunes these settings based on the hardware resources of the server. Auto-tuning is now automatically enabled by default if you install a new Directory Server instance. On instances upgraded from earlier versions, Red Hat recommends enabling auto-tuning. (BZ#1426286)

New PKI configuration parameter allows control of the TCP keepalive option
This update adds the tcp.keepAlive parameter to the CS.cfg configuration file. This parameter accepts boolean values, and is set to true by default. Use this parameter to configure the TCP keepalive option for all LDAP connections created by the PKI subsystem. This option is useful in cases where certificate issuance takes a very long time and connections are being closed automatically after being idle for too long. (BZ#1413132)

PKI Server now creates PKCS #12 files using strong encryption
When generating PKCS #12 files, the pki pkcs12 command previously used the deprecated PKCS #12 key derivation function (KDF) and the triple DES (3DES) algorithm. With this update, the command now uses the password-based encryption standard 2 (PBES2) scheme with the password-based key derivation function 2 (PBKDF2) and the Advanced Encryption Standard (AES) algorithm to encrypt private keys. As a result, this enhancement increases security and complies with the Common Criteria certification requirements. (BZ#1426754)

CC-compliant algorithms available for encryption operations
Common Criteria requires that encryption and key-wrapping operations are performed using approved algorithms. These algorithms are specified in section FCS_COP.1.1(1) in the Protection Profile for Certification Authorities. This update modifies encryption and decryption in the KRA to use approved AES encryption and wrapping algorithms in the transport and storage of secrets and keys. This update required changes in both the server and client software. (BZ#1445535)

New options to allow configuring visibility of menu items in the TPS interface
Previously, menu items grouped under the System menu in the Token Processing System (TPS) user interface were determined statically based on user roles. In certain circumstances, the displayed menu items did not match components actually accessible by the user. With this update, the System menu in the TPS user interface only displays menu items based on the target.configure.list parameter for TPS administrators, and the target.agent_approve.list parameter for TPS agents. These parameters can be modified in the instance CS.cfg file to match accessible components. (BZ#1391737)

Added a profile component to copy certificate Subject Common Name to the Subject Alternative Name extension
Some TLS libraries now warn or refuse to validate DNS names when the DNS name only appears in the Subject Common Name (CN) field, which is a practice that was deprecated by RFC 2818. This update adds the CommonNameToSANDefault profile component, which copies the Subject Common Name to the Subject Alternative Name (SAN) extension, and ensures that certificates are compliant with current standards. (BZ#1305993)

New option to remove LDAP entries before LDIF import
When migrating a CA, if an LDAP entry existed before the LDIF import, then the entry was not recreated from the LDIF import, causing some fields to be missing. Consequently, the request ID showed up as undefined. This update adds an option to remove the LDAP entry for the signing certificate at the end of the pkispawn process. This entry is then re-created in the subsequent LDIF import. Now, the request ID and other fields show up correctly if the signing entry is removed and re-added in the LDIF import. The correct parameters to add are (X represents the serial number of the signing certificate being imported, in decimal): (BZ#1409946)

Certificate System now supports externally authenticated users
Previously, you had to create users and roles in Certificate System. With this enhancement, you can now configure Certificate System to admit users authenticated by an external identity provider. Additionally, you can use realm-specific authorization access control lists (ACLs). As a result, it is no longer necessary to create users in Certificate System. (BZ#1303683)

Certificate System now supports enabling and disabling certificate and CRL publishing
Prior to this update, if publishing was enabled in a certificate authority (CA), Certificate System automatically enabled both certificate revocation list (CRL) and certificate publishing. Consequently, on servers that did not have certificate publishing enabled, error messages were logged. Certificate System has been enhanced, and now supports enabling and disabling certificate and CRL publishing independently in the /var/lib/pki/<instance>/ca/conf/CS.cfg file. To enable or disable both certificate and CRL publishing, set:
To enable only CRL publishing, set:
To enable only certificate publishing, set:
(BZ#1325071)

The searchBase configuration option has been added to the DirAclAuthz PKI Server plug-in
To support reading different sets of authorization access control lists (ACL), the searchBase configuration option has been added to the DirAclAuthz PKI Server plug-in. As a result, you can set the sub-tree from which the plug-in loads ACLs. (BZ#1388622)

For better performance, Certificate System now supports ephemeral requests
Before this update, Certificate System key recovery agent (KRA) instances always stored recovery and storage requests of secrets in the LDAP back end. This is required to store the state if multiple agents must approve the request. However, if the request is processed immediately and only one agent must approve the request, storing the state is not required. To improve performance, you can now set the kra.ephemeralRequests=true option in the /var/lib/pki/<instance>/kra/conf/CS.cfg file to no longer store requests in the LDAP back end. (BZ#1392068)

Section headers in PKI deployment configuration file are no longer case sensitive
The section headers (such as [Tomcat]) in the PKI deployment configuration file were previously case-sensitive. This behavior increased the chance of an error while providing no benefit. Starting with this release, section headers in the configuration file are case-insensitive, reducing the chance of an error occurring. (BZ#1447144)

Certificate System now supports installing a CA using HSM on FIPS-enabled Red Hat Enterprise Linux
During the installation of a Certificate System Certificate Authority (CA) instance, the installer needs to restart the instance. During this restart, instances on an operating system having the Federal Information Processing Standard (FIPS) mode enabled and using a hardware security module (HSM) need to connect to the non-secure HTTP port instead of the HTTPS port. With this update, it is now possible to install a Certificate System instance on FIPS-enabled Red Hat Enterprise Linux using an HSM. (BZ#1450143)

CMC requests now use a random IV for AES and 3DES encryption
With this update, Certificate Management over CMS (CMC) requests in PKI Server use a randomly generated initialization vector (IV) when encrypting a key to be archived. Previously, the client and server code used a fixed IV in this scenario. The CMC client code has been enhanced, and as a result, using random IVs increases security when performing encryption for both the Advanced Encryption Standard (AES) and the Triple Data Encryption Algorithm (3DES). (BZ#1458055)
[ "[domain/main_domain/trusted_domain]", "[domain/ipa.com/ad.com]", "authconfig --enablesssd --enablesssdauth --enablesmartcard --smartcardmodule=sssd --smartcardaction=0 --updateall", "SERVID=slapd-localhost INCR=1 BINDDN=\"cn=Directory Manager\" BINDPW=\"password\" dbmon.sh", "pki_ca_signing_record_create=False pki_ca_signing_serial_number=X", "ca.publish.enable = True|False", "ca.publish.enable = True ca.publish.cert.enable = False", "ca.publish.enable = True ca.publish.crl.enable = False" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/new_features_authentication_and_interoperability