Dataset columns: title (string, length 4–168), content (string, length 7–1.74M), commands (sequence, length 1–5.62k, may be null), url (string, length 79–342).
Chapter 5. Creating overcloud nodes with director Operator
Chapter 5. Creating overcloud nodes with director Operator A Red Hat OpenStack Platform (RHOSP) overcloud consists of multiple nodes, such as Controller nodes to provide control plane services and Compute nodes to provide computing resources. For a functional overcloud with high availability, you must have 3 Controller nodes and at least one Compute node. You can create Controller nodes with the OpenStackControlPlane Custom Resource Definition (CRD) and Compute nodes with the OpenStackBaremetalSet CRD. Note Red Hat OpenShift Container Platform (RHOCP) does not autodiscover issues on RHOCP worker nodes, or perform autorecovery of worker nodes that host RHOSP Controller VMs if the worker node fails or has an issue. You must enable health checks on your RHOCP cluster to automatically relocate Controller VM pods when a host worker node fails. For information on how to autodiscover issues on RHOCP worker nodes, see Deploying machine health checks . 5.1. Creating a control plane with the OpenStackControlPlane CRD The Red Hat OpenStack Platform (RHOSP) control plane contains the RHOSP services that manage the overcloud. The default control plane consists of 3 Controller nodes. You can use composable roles to manage services on dedicated controller virtual machines (VMs). For more information on composable roles, see Composable services and custom roles . Define an OpenStackControlPlane custom resource (CR) to create the Controller nodes as OpenShift Virtualization virtual machines (VMs). Tip Use the following commands to view the OpenStackControlPlane CRD definition and specification schema: Prerequisites You have used the OpenStackNetConfig CR to create a control plane network and any additional isolated networks. Procedure Create a file named openstack-controller.yaml on your workstation. Include the resource specification for the Controller nodes. The following example defines a specification for a control plane that consists of 3 Controller nodes: 1 The name of the overcloud control plane, for example, overcloud . 2 The OSPdO namespace, for example, openstack . 3 The configuration for the control plane. 4 Optional: The Secret resource that provides root access on each node to users with the password. 5 The name of the data volume that stores the base operating system image for your Controller VMs. For more information on creating the data volume, see Creating a data volume for the base operating system . 6 For information on configuring Red Hat OpenShift Container Platform (RHOCP) storage, see Dynamic provisioning . Save the openstack-controller.yaml file. Create the control plane: Wait until RHOCP creates the resources related to OpenStackControlPlane CR. OSPdO also creates an OpenStackClient pod that you can access through a remote shell to run RHOSP commands. Verification View the resource for the control plane: View the OpenStackVMSet resources to verify the creation of the control plane VM set: View the VMs to verify the creation of the control plane OpenShift Virtualization VMs: Test access to the openstackclient remote shell: 5.2. Creating Compute nodes with the OpenStackBaremetalSet CRD Compute nodes provide computing resources to your Red Hat OpenStack Platform (RHOSP) environment. You must have at least one Compute node in your overcloud and you can scale the number of Compute nodes after deployment. Define an OpenStackBaremetalSet custom resource (CR) to create Compute nodes from bare-metal machines that the Red Hat OpenShift Container Platform (RHOCP) manages. 
Tip Use the following commands to view the OpenStackBaremetalSet CRD definition and specification schema: Prerequisites You have used the OpenStackNetConfig CR to create a control plane network and any additional isolated networks. You have created a control plane with the OpenStackControlPlane CRD. You have created a BareMetalHost CR for each bare-metal node that you want to add as a Compute node to the overcloud. For information about how to create a BareMetalHost CR, see About the BareMetalHost resource in the Red Hat OpenShift Container Platform (RHOCP) Postinstallation configuration guide. Procedure Create a file named openstack-compute.yaml on your workstation. Include the resource specification for the Compute nodes. The following example defines a specification for 1 Compute node: 1 The name of the Compute node bare-metal set, for example, compute. 2 The OSPdO namespace, for example, openstack. 3 The configuration for the Compute nodes. 4 Optional: The Secret resource that provides root access on each node to users with the password. Save the openstack-compute.yaml file. Create the Compute nodes: Verification View the resource for the Compute nodes: View the bare-metal machines that RHOCP manages to verify the creation of the Compute nodes: 5.3. Creating a provisioning server with the OpenStackProvisionServer CRD Provisioning servers provide a specific Red Hat Enterprise Linux (RHEL) QCOW2 image for provisioning Compute nodes for the Red Hat OpenStack Platform (RHOSP). An OpenStackProvisionServer CR is automatically created for any OpenStackBaremetalSet CRs you create. Alternatively, you can create the OpenStackProvisionServer CR manually and provide its name to any OpenStackBaremetalSet CRs that you create. The OpenStackProvisionServer CRD creates an Apache server on the Red Hat OpenShift Container Platform (RHOCP) provisioning network for a specific RHEL QCOW2 image. Procedure Create a file named openstack-provision.yaml on your workstation. Include the resource specification for the Provisioning server. The following example defines a specification for a Provisioning server that uses a specific RHEL 9.2 QCOW2 image: 1 The name that identifies the OpenStackProvisionServer CR. 2 The OSPdO namespace, for example, openstack. 3 The initial source of the RHEL QCOW2 image for the Provisioning server. The image is downloaded from this remote source when the server is created. 4 The Provisioning server port, set to 8080 by default. You can change it for a specific port configuration. For further descriptions of the values you can use to configure your OpenStackProvisionServer CR, view the OpenStackProvisionServer CRD specification schema: Save the openstack-provision.yaml file. Create the Provisioning server: Verify that the resource for the Provisioning server is created:
[ "oc describe crd openstackcontrolplane oc explain openstackcontrolplane.spec", "apiVersion: osp-director.openstack.org/v1beta2 kind: OpenStackControlPlane metadata: name: overcloud 1 namespace: openstack 2 spec: 3 openStackClientNetworks: - ctlplane - internal_api - external openStackClientStorageClass: host-nfs-storageclass passwordSecret: userpassword 4 virtualMachineRoles: Controller: roleName: Controller roleCount: 3 networks: - ctlplane - internal_api - external - tenant - storage - storage_mgmt cores: 12 memory: 64 rootDisk: diskSize: 500 baseImageVolumeName: openstack-base-img 5 storageClass: host-nfs-storageclass 6 storageAccessMode: ReadWriteMany storageVolumeMode: Filesystem # optional configure additional discs to be attached to the VMs, # need to be configured manually inside the VMs where to be used. additionalDisks: - name: datadisk diskSize: 500 storageClass: host-nfs-storageclass storageAccessMode: ReadWriteMany storageVolumeMode: Filesystem openStackRelease: \"17.1\"", "oc create -f openstack-controller.yaml -n openstack", "oc get openstackcontrolplane/overcloud -n openstack", "oc get openstackvmsets -n openstack", "oc get virtualmachines -n openstack", "oc rsh -n openstack openstackclient", "oc describe crd openstackbaremetalset oc explain openstackbaremetalset.spec", "apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackBaremetalSet metadata: name: compute 1 namespace: openstack 2 spec: 3 count: 1 baseImageUrl: http://<source_host>/rhel-9.2-x86_64-kvm.qcow2 deploymentSSHSecret: osp-controlplane-ssh-keys # If you manually created an OpenStackProvisionServer, you can use it here, # otherwise director Operator will create one for you (with `baseImageUrl` as the image that it server) # to use with this OpenStackBaremetalSet # provisionServerName: openstack-provision-server ctlplaneInterface: enp2s0 networks: - ctlplane - internal_api - tenant - storage roleName: Compute passwordSecret: userpassword 4", "oc create -f openstack-compute.yaml -n openstack", "oc get openstackbaremetalset/compute -n openstack", "oc get baremetalhosts -n openshift-machine-api", "apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackProvisionServer metadata: name: openstack-provision-server 1 namespace: openstack 2 spec: baseImageUrl: http://<source_host>/rhel-9.2-x86_64-kvm.qcow2 3 port: 8080 4", "oc describe crd openstackprovisionserver", "oc create -f openstack-provision.yaml -n openstack", "oc get openstackprovisionserver/openstack-provision-server -n openstack" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/assembly_creating-overcloud-nodes-with-director-operator
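The note in this chapter states that RHOCP does not automatically recover worker nodes that host Controller VMs unless machine health checks are enabled, but no health check resource is shown. The following MachineHealthCheck is a minimal sketch only; the name, label selector, timeouts, and maxUnhealthy value are illustrative assumptions, not values taken from the chapter.

apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: worker-health-check            # assumed name
  namespace: openshift-machine-api
spec:
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machine-role: worker   # adjust to target the workers that host Controller VMs
  unhealthyConditions:
  - type: Ready
    status: "False"
    timeout: 300s
  - type: Ready
    status: "Unknown"
    timeout: 300s
  maxUnhealthy: 40%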
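Section 5.2 requires a BareMetalHost CR for each bare-metal node as a prerequisite but does not include one. The following is a minimal sketch assuming a node managed over IPMI; the name, MAC address, BMC address, and credentials Secret name are placeholders, not values from this chapter.

apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: compute-0                          # placeholder node name
  namespace: openshift-machine-api
spec:
  online: true
  bootMACAddress: 00:11:22:33:44:55        # placeholder MAC of the provisioning NIC
  bmc:
    address: ipmi://192.0.2.20             # placeholder BMC address
    credentialsName: compute-0-bmc-secret  # Secret containing username and password keys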
Chapter 2. Managing your cluster resources
Chapter 2. Managing your cluster resources You can apply global configuration options in OpenShift Container Platform. Operators apply these configuration settings across the cluster. 2.1. Interacting with your cluster resources You can interact with cluster resources by using the OpenShift CLI ( oc ) tool in OpenShift Container Platform. The cluster resources that you see after running the oc api-resources command can be edited. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the web console or you have installed the oc CLI tool. Procedure To see which configuration Operators have been applied, run the following command: $ oc api-resources -o name | grep config.openshift.io To see what cluster resources you can configure, run the following command: $ oc explain <resource_name>.config.openshift.io To see the configuration of custom resource definition (CRD) objects in the cluster, run the following command: $ oc get <resource_name>.config -o yaml To edit the cluster resource configuration, run the following command: $ oc edit <resource_name>.config -o yaml
[ "oc api-resources -o name | grep config.openshift.io", "oc explain <resource_name>.config.openshift.io", "oc get <resource_name>.config -o yaml", "oc edit <resource_name>.config -o yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/support/managing-cluster-resources
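As a concrete illustration of the procedure above, the following session substitutes proxy for <resource_name>, following the command pattern given on the page; any resource name returned by the first command can be used in the same way.

$ oc api-resources -o name | grep config.openshift.io
$ oc explain proxy.config.openshift.io
$ oc get proxy.config -o yaml
$ oc edit proxy.config -o yaml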
Chapter 8. Using the vSphere Problem Detector Operator
Chapter 8. Using the vSphere Problem Detector Operator 8.1. About the vSphere Problem Detector Operator The vSphere Problem Detector Operator checks clusters that are deployed on vSphere for common installation and misconfiguration issues that are related to storage. The Operator runs in the openshift-cluster-storage-operator namespace and is started by the Cluster Storage Operator when the Cluster Storage Operator detects that the cluster is deployed on vSphere. The vSphere Problem Detector Operator communicates with the vSphere vCenter Server to determine the virtual machines in the cluster, the default datastore, and other information about the vSphere vCenter Server configuration. The Operator uses the credentials from the Cloud Credential Operator to connect to vSphere. The Operator runs the checks according to the following schedule: The checks run every hour. If any check fails, the Operator runs the checks again in intervals of 1 minute, 2 minutes, 4, 8, and so on. The Operator doubles the interval up to a maximum interval of 8 hours. When all checks pass, the schedule returns to an hour interval. The Operator increases the frequency of the checks after a failure so that the Operator can report success quickly after the failure condition is remedied. You can run the Operator manually for immediate troubleshooting information. 8.2. Running the vSphere Problem Detector Operator checks You can override the schedule for running the vSphere Problem Detector Operator checks and run the checks immediately. The vSphere Problem Detector Operator automatically runs the checks every hour. However, when the Operator starts, it runs the checks immediately. The Operator is started by the Cluster Storage Operator when the Cluster Storage Operator starts and determines that the cluster is running on vSphere. To run the checks immediately, you can scale the vSphere Problem Detector Operator to 0 and back to 1 so that it restarts the vSphere Problem Detector Operator. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Scale the Operator to 0 : USD oc scale deployment/vsphere-problem-detector-operator --replicas=0 \ -n openshift-cluster-storage-operator Verification Verify that the pods have restarted by running the following command: USD oc -n openshift-cluster-storage-operator get pod -l name=vsphere-problem-detector-operator -w Example output NAME READY STATUS RESTARTS AGE vsphere-problem-detector-operator-77486bd645-9ntpb 1/1 Running 0 11s The AGE field must indicate that the pod is restarted. 8.3. Viewing the events from the vSphere Problem Detector Operator After the vSphere Problem Detector Operator runs and performs the configuration checks, it creates events that can be viewed from the command line or from the OpenShift Container Platform web console. Procedure To view the events by using the command line, run the following command: USD oc get event -n openshift-cluster-storage-operator \ --sort-by={.metadata.creationTimestamp} Example output 16m Normal Started pod/vsphere-problem-detector-operator-xxxxx Started container vsphere-problem-detector 16m Normal Created pod/vsphere-problem-detector-operator-xxxxx Created container vsphere-problem-detector 16m Normal LeaderElection configmap/vsphere-problem-detector-lock vsphere-problem-detector-operator-xxxxx became leader To view the events by using the OpenShift Container Platform web console, navigate to Home Events and select openshift-cluster-storage-operator from the Project menu. 8.4. 
Viewing the logs from the vSphere Problem Detector Operator After the vSphere Problem Detector Operator runs and performs the configuration checks, it creates log records that can be viewed from the command line or from the OpenShift Container Platform web console. Procedure To view the logs by using the command line, run the following command: USD oc logs deployment/vsphere-problem-detector-operator \ -n openshift-cluster-storage-operator Example output I0108 08:32:28.445696 1 operator.go:209] ClusterInfo passed I0108 08:32:28.451029 1 datastore.go:57] CheckStorageClasses checked 1 storage classes, 0 problems found I0108 08:32:28.451047 1 operator.go:209] CheckStorageClasses passed I0108 08:32:28.452160 1 operator.go:209] CheckDefaultDatastore passed I0108 08:32:28.480648 1 operator.go:271] CheckNodeDiskUUID:<host_name> passed I0108 08:32:28.480685 1 operator.go:271] CheckNodeProviderID:<host_name> passed To view the Operator logs with the OpenShift Container Platform web console, perform the following steps: Navigate to Workloads Pods . Select openshift-cluster-storage-operator from the Projects menu. Click the link for the vsphere-problem-detector-operator pod. Click the Logs tab on the Pod details page to view the logs. 8.5. Configuration checks run by the vSphere Problem Detector Operator The following tables identify the configuration checks that the vSphere Problem Detector Operator runs. Some checks verify the configuration of the cluster. Other checks verify the configuration of each node in the cluster. Table 8.1. Cluster configuration checks Name Description CheckDefaultDatastore Verifies that the default datastore name in the vSphere configuration is short enough for use with dynamic provisioning. If this check fails, you can expect the following: systemd logs errors to the journal such as Failed to set up mount unit: Invalid argument . systemd does not unmount volumes if the virtual machine is shut down or rebooted without draining all the pods from the node. If this check fails, reconfigure vSphere with a shorter name for the default datastore. CheckFolderPermissions Verifies the permission to list volumes in the default datastore. This permission is required to create volumes. The Operator verifies the permission by listing the / and /kubevols directories. The root directory must exist. It is acceptable if the /kubevols directory does not exist when the check runs. The /kubevols directory is created when the datastore is used with dynamic provisioning if the directory does not already exist. If this check fails, review the required permissions for the vCenter account that was specified during the OpenShift Container Platform installation. CheckStorageClasses Verifies the following: The fully qualified path to each persistent volume that is provisioned by this storage class is less than 255 characters. If a storage class uses a storage policy, the storage class must use one policy only and that policy must be defined. CheckTaskPermissions Verifies the permission to list recent tasks and datastores. ClusterInfo Collects the cluster version and UUID from vSphere vCenter. Table 8.2. Node configuration checks Name Description CheckNodeDiskUUID Verifies that all the vSphere virtual machines are configured with disk.enableUUID=TRUE . If this check fails, see the How to check 'disk.EnableUUID' parameter from VM in vSphere Red Hat Knowledgebase solution. CheckNodeProviderID Verifies that all nodes are configured with the ProviderID from vSphere vCenter. 
This check fails when the output from the following command does not include a provider ID for each node. USD oc get nodes -o custom-columns=NAME:.metadata.name,PROVIDER_ID:.spec.providerID,UUID:.status.nodeInfo.systemUUID If this check fails, refer to the vSphere product documentation for information about setting the provider ID for each node in the cluster. CollectNodeESXiVersion Reports the version of the ESXi hosts that run nodes. CollectNodeHWVersion Reports the virtual machine hardware version for a node. 8.6. About the storage class configuration check The names for persistent volumes that use vSphere storage are related to the datastore name and cluster ID. When a persistent volume is created, systemd creates a mount unit for the persistent volume. The systemd process has a 255 character limit for the length of the fully qualified path to the VDMK file that is used for the persistent volume. The fully qualified path is based on the naming conventions for systemd and vSphere. The naming conventions use the following pattern: /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[<datastore>] 00000000-0000-0000-0000-000000000000/<cluster_id>-dynamic-pvc-00000000-0000-0000-0000-000000000000.vmdk The naming conventions require 205 characters of the 255 character limit. The datastore name and the cluster ID are determined from the deployment. The datastore name and cluster ID are substituted into the preceding pattern. Then the path is processed with the systemd-escape command to escape special characters. For example, a hyphen character uses four characters after it is escaped. The escaped value is \x2d . After processing with systemd-escape to ensure that systemd can access the fully qualified path to the VDMK file, the length of the path must be less than 255 characters. 8.7. Metrics for the vSphere Problem Detector Operator The vSphere Problem Detector Operator exposes the following metrics for use by the OpenShift Container Platform monitoring stack. Table 8.3. Metrics exposed by the vSphere Problem Detector Operator Name Description vsphere_cluster_check_total Cumulative number of cluster-level checks that the vSphere Problem Detector Operator performed. This count includes both successes and failures. vsphere_cluster_check_errors Number of failed cluster-level checks that the vSphere Problem Detector Operator performed. For example, a value of 1 indicates that one cluster-level check failed. vsphere_esxi_version_total Number of ESXi hosts with a specific version. Be aware that if a host runs more than one node, the host is counted only once. vsphere_node_check_total Cumulative number of node-level checks that the vSphere Problem Detector Operator performed. This count includes both successes and failures. vsphere_node_check_errors Number of failed node-level checks that the vSphere Problem Detector Operator performed. For example, a value of 1 indicates that one node-level check failed. vsphere_node_hw_version_total Number of vSphere nodes with a specific hardware version. vsphere_vcenter_info Information about the vSphere vCenter Server. 8.8. Additional resources About OpenShift Container Platform monitoring
[ "oc scale deployment/vsphere-problem-detector-operator --replicas=0 -n openshift-cluster-storage-operator", "oc -n openshift-cluster-storage-operator get pod -l name=vsphere-problem-detector-operator -w", "NAME READY STATUS RESTARTS AGE vsphere-problem-detector-operator-77486bd645-9ntpb 1/1 Running 0 11s", "oc get event -n openshift-cluster-storage-operator --sort-by={.metadata.creationTimestamp}", "16m Normal Started pod/vsphere-problem-detector-operator-xxxxx Started container vsphere-problem-detector 16m Normal Created pod/vsphere-problem-detector-operator-xxxxx Created container vsphere-problem-detector 16m Normal LeaderElection configmap/vsphere-problem-detector-lock vsphere-problem-detector-operator-xxxxx became leader", "oc logs deployment/vsphere-problem-detector-operator -n openshift-cluster-storage-operator", "I0108 08:32:28.445696 1 operator.go:209] ClusterInfo passed I0108 08:32:28.451029 1 datastore.go:57] CheckStorageClasses checked 1 storage classes, 0 problems found I0108 08:32:28.451047 1 operator.go:209] CheckStorageClasses passed I0108 08:32:28.452160 1 operator.go:209] CheckDefaultDatastore passed I0108 08:32:28.480648 1 operator.go:271] CheckNodeDiskUUID:<host_name> passed I0108 08:32:28.480685 1 operator.go:271] CheckNodeProviderID:<host_name> passed", "oc get nodes -o custom-columns=NAME:.metadata.name,PROVIDER_ID:.spec.providerID,UUID:.status.nodeInfo.systemUUID", "/var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[<datastore>] 00000000-0000-0000-0000-000000000000/<cluster_id>-dynamic-pvc-00000000-0000-0000-0000-000000000000.vmdk" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_vsphere/using-vsphere-problem-detector-operator
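The restart procedure above scales the Operator deployment to 0, but the page text does not show the command that scales it back to 1. Based on the scale command that is shown, the complementary step would look like this (a sketch of the implied step, not copied from the page):

$ oc scale deployment/vsphere-problem-detector-operator --replicas=1 \
  -n openshift-cluster-storage-operator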
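Section 8.6 explains that the datastore name and cluster ID are processed with systemd-escape and that an escaped hyphen expands to the four-character sequence \x2d. As an illustration, with an assumed datastore name of datastore-1:

$ systemd-escape 'datastore-1'
datastore\x2d1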
probe::kprocess.exec_complete
probe::kprocess.exec_complete Name probe::kprocess.exec_complete - Return from exec to a new program Synopsis Values success A boolean indicating whether the exec was successful errno The error number resulting from the exec Context On success, the context of the new executable. On failure, remains in the context of the caller. Description Fires at the completion of an exec call.
[ "kprocess.exec_complete" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-kprocess-exec-complete
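A short SystemTap script makes the probe's values concrete. This is an illustrative sketch, not part of the reference entry; execname() and pid() are standard tapset functions and the message text is arbitrary.

probe kprocess.exec_complete {
  if (success)
    printf("exec completed: %s (pid %d)\n", execname(), pid())
  else
    printf("exec failed: %s (pid %d) errno=%d\n", execname(), pid(), errno)
}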
Chapter 30. Installation and Booting
Chapter 30. Installation and Booting The installer no longer crashes when you select an incomplete IMSM RAID array during manual partitioning Previously, if the system being installed had a storage drive that had previously been part of an Intel Matrix (IMSM) RAID array which was broken at the time of the installation, the disk was displayed as Unknown in the Installation Destination screen in the graphical installer. If you attempted to select this drive as an installation target, the installer crashed with the An unknown error has occurred message. This update adds proper handling for such drives, and allows you to use them as standard installation targets. (BZ#1465944) Installer now accepts additional time zone definitions in Kickstart files Starting with Red Hat Enterprise Linux 7.0, Anaconda switched to a different, more restrictive method of validating time zone selections. This caused some time zone definitions, such as Japan, to be no longer valid despite being acceptable in previous versions, and legacy Kickstart files with these definitions had to be updated or they would default to the America/New_York time zone. The list of valid time zones was previously taken from pytz.common_timezones in the pytz Python library. This update changes the validation settings for the timezone Kickstart command to use pytz.all_timezones, which is a superset of the common_timezones list and which allows significantly more time zones to be specified. This change ensures that old Kickstart files made for Red Hat Enterprise Linux 6 still specify valid time zones. Note that this change only applies to the timezone Kickstart command. The time zone selection in the graphical and text-based interactive interfaces remains unchanged. Existing Kickstart files for Red Hat Enterprise Linux 7 that had valid time zone selections do not require any updates. (BZ#1452873) Proxy configuration set up using a boot option now works correctly in Anaconda Previously, proxy configuration made in the boot menu command line using the proxy= option was not correctly applied when probing remote package repositories. This was caused by an attempt to avoid a refresh of the Installation Source screen if network settings were changed. This update improves the installer logic so that proxy configuration now applies at all times but still avoids blocking the user interface on settings changes. (BZ#1478970) FIPS mode now supports loading files over HTTPS during installation Previously, installation images did not support FIPS mode ( fips=1 ) during an installation where a Kickstart file is loaded from an HTTPS source ( inst.ks=https://<location>/ks.cfg ). This release implements support for this previously missing functionality, and loading files over HTTPS in FIPS mode works as expected. (BZ#1341280) Network scripts now correctly update /etc/resolv.conf Network scripts have been enhanced to update the /etc/resolv.conf file correctly.
Notably: The scripts now update the nameserver and search entries in the /etc/resolv.conf file after the DNS* and DOMAIN options, respectively, have been updated in the ifcfg-* files in the /etc/sysconfig/network-scripts/ directory The scripts now also update the order of nameserver entries after it has been updated in the ifcfg-* files in /etc/sysconfig/network-scripts/ Support for the DNS3 option has been added The scripts now correctly process duplicate and randomly omitted DNS* options (BZ# 1364895 ) Files with the .old extension are now ignored by network scripts Network scripts in Red Hat Enterprise Linux contain a regular expression which causes them to ignore ifcfg-* configuration files with certain extensions, such as .bak , .rpmnew or .rpmold . However, the .old extension was missing from this set, despite being used in documentation and in common practice. This update adds the .old extension into the list, which ensures that script files which use it will be ignored by network scripts as expected. (BZ# 1455419 ) Bridge devices no longer fail to obtain an IP address Previously, bridge devices sometimes failed to obtain an IP address from the DHCP server immediately after system startup. This was caused by a race condition where the ifup-eth script did not wait for the Spanning Tree Protocol (STP) to complete its startup. This bug has been fixed by adding a delay that causes ifup-eth to wait long enough for STP to finish starting. (BZ# 1380496 ) The rhel-dmesg service can now be disabled correctly Previously, even if the rhel-dmesg.service was explicitly disabled using systemd , it continued to run anyway. This bug has been fixed, and the service can now be disabled correctly. (BZ# 1395391 )
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/bug_fixes_installation_and_booting
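The resolv.conf fix above refers to the DNS* and DOMAIN options in the ifcfg-* files under /etc/sysconfig/network-scripts/. As an illustration only (the interface name, addresses, and domains are placeholders, not values from the release note), a static ifcfg-eth0 file carrying three name servers and two search domains might look like this:

DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.0.2.10
PREFIX=24
GATEWAY=192.0.2.1
DNS1=192.0.2.53
DNS2=192.0.2.54
DNS3=192.0.2.55
DOMAIN="example.com lab.example.com"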
Operators
Operators OpenShift Dedicated 4 OpenShift Dedicated Operators Red Hat OpenShift Documentation Team
[ "etcd β”œβ”€β”€ manifests β”‚ β”œβ”€β”€ etcdcluster.crd.yaml β”‚ └── etcdoperator.clusterserviceversion.yaml β”‚ └── secret.yaml β”‚ └── configmap.yaml └── metadata └── annotations.yaml └── dependencies.yaml", "annotations: operators.operatorframework.io.bundle.mediatype.v1: \"registry+v1\" 1 operators.operatorframework.io.bundle.manifests.v1: \"manifests/\" 2 operators.operatorframework.io.bundle.metadata.v1: \"metadata/\" 3 operators.operatorframework.io.bundle.package.v1: \"test-operator\" 4 operators.operatorframework.io.bundle.channels.v1: \"beta,stable\" 5 operators.operatorframework.io.bundle.channel.default.v1: \"stable\" 6", "dependencies: - type: olm.package value: packageName: prometheus version: \">0.27.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2", "Ignore everything except non-object .json and .yaml files **/* !*.json !*.yaml **/objects/*.json **/objects/*.yaml", "catalog β”œβ”€β”€ packageA β”‚ └── index.yaml β”œβ”€β”€ packageB β”‚ β”œβ”€β”€ .indexignore β”‚ β”œβ”€β”€ index.yaml β”‚ └── objects β”‚ └── packageB.v0.1.0.clusterserviceversion.yaml └── packageC └── index.json └── deprecations.yaml", "_Meta: { // schema is required and must be a non-empty string schema: string & !=\"\" // package is optional, but if it's defined, it must be a non-empty string package?: string & !=\"\" // properties is optional, but if it's defined, it must be a list of 0 or more properties properties?: [... #Property] } #Property: { // type is required type: string & !=\"\" // value is required, and it must not be null value: !=null }", "#Package: { schema: \"olm.package\" // Package name name: string & !=\"\" // A description of the package description?: string // The package's default channel defaultChannel: string & !=\"\" // An optional icon icon?: { base64data: string mediatype: string } }", "#Channel: { schema: \"olm.channel\" package: string & !=\"\" name: string & !=\"\" entries: [...#ChannelEntry] } #ChannelEntry: { // name is required. It is the name of an `olm.bundle` that // is present in the channel. name: string & !=\"\" // replaces is optional. It is the name of bundle that is replaced // by this entry. It does not have to be present in the entry list. replaces?: string & !=\"\" // skips is optional. It is a list of bundle names that are skipped by // this entry. The skipped bundles do not have to be present in the // entry list. skips?: [...string & !=\"\"] // skipRange is optional. It is the semver range of bundle versions // that are skipped by this entry. skipRange?: string & !=\"\" }", "#Bundle: { schema: \"olm.bundle\" package: string & !=\"\" name: string & !=\"\" image: string & !=\"\" properties: [...#Property] relatedImages?: [...#RelatedImage] } #Property: { // type is required type: string & !=\"\" // value is required, and it must not be null value: !=null } #RelatedImage: { // image is the image reference image: string & !=\"\" // name is an optional descriptive name for an image that // helps identify its purpose in the context of the bundle name?: string & !=\"\" }", "schema: olm.deprecations package: my-operator 1 entries: - reference: schema: olm.package 2 message: | 3 The 'my-operator' package is end of life. Please use the 'my-operator-new' package for support. - reference: schema: olm.channel name: alpha 4 message: | The 'alpha' channel is no longer supported. Please switch to the 'stable' channel. - reference: schema: olm.bundle name: my-operator.v1.68.0 5 message: | my-operator.v1.68.0 is deprecated. 
Uninstall my-operator.v1.68.0 and install my-operator.v1.72.0 for support.", "my-catalog └── my-operator β”œβ”€β”€ index.yaml └── deprecations.yaml", "#PropertyPackage: { type: \"olm.package\" value: { packageName: string & !=\"\" version: string & !=\"\" } }", "#PropertyGVK: { type: \"olm.gvk\" value: { group: string & !=\"\" version: string & !=\"\" kind: string & !=\"\" } }", "#PropertyPackageRequired: { type: \"olm.package.required\" value: { packageName: string & !=\"\" versionRange: string & !=\"\" } }", "#PropertyGVKRequired: { type: \"olm.gvk.required\" value: { group: string & !=\"\" version: string & !=\"\" kind: string & !=\"\" } }", "name: community-operators repo: quay.io/community-operators/catalog tag: latest references: - name: etcd-operator image: quay.io/etcd-operator/index@sha256:5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03 - name: prometheus-operator image: quay.io/prometheus-operator/index@sha256:e258d248fda94c63753607f7c4494ee0fcbe92f1a76bfdac795c9d84101eb317", "name=USD(yq eval '.name' catalog.yaml) mkdir \"USDname\" yq eval '.name + \"/\" + .references[].name' catalog.yaml | xargs mkdir for l in USD(yq e '.name as USDcatalog | .references[] | .image + \"|\" + USDcatalog + \"/\" + .name + \"/index.yaml\"' catalog.yaml); do image=USD(echo USDl | cut -d'|' -f1) file=USD(echo USDl | cut -d'|' -f2) opm render \"USDimage\" > \"USDfile\" done opm generate dockerfile \"USDname\" indexImage=USD(yq eval '.repo + \":\" + .tag' catalog.yaml) docker build -t \"USDindexImage\" -f \"USDname.Dockerfile\" . docker push \"USDindexImage\"", "\\ufeffapiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog 1 namespace: openshift-marketplace 2 annotations: olm.catalogImageTemplate: 3 \"quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}\" spec: displayName: Example Catalog 4 image: quay.io/example-org/example-catalog:v1 5 priority: -400 6 publisher: Example Org sourceType: grpc 7 grpcPodConfig: securityContextConfig: <security_mode> 8 nodeSelector: 9 custom_label: <label> priorityClassName: system-cluster-critical 10 tolerations: 11 - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\" updateStrategy: registryPoll: 12 interval: 30m0s status: connectionState: address: example-catalog.openshift-marketplace.svc:50051 lastConnect: 2021-08-26T18:14:31Z lastObservedState: READY 13 latestImageRegistryPoll: 2021-08-26T18:46:25Z 14 registryService: 15 createdAt: 2021-08-26T16:16:37Z port: 50051 protocol: grpc serviceName: example-catalog serviceNamespace: openshift-marketplace", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace", "registry.redhat.io/redhat/redhat-operator-index:v4.18", "registry.redhat.io/redhat/redhat-operator-index:v4.18", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog namespace: openshift-marketplace annotations: olm.catalogImageTemplate: \"quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}\" spec: displayName: Example Catalog image: quay.io/example-org/example-catalog:v1.31 priority: -400 publisher: Example Org", "quay.io/example-org/example-catalog:v1.31", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: 
example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace", "apiVersion: operators.coreos.com/v1alpha1 kind: InstallPlan metadata: name: install-abcde namespace: operators spec: approval: Automatic approved: true clusterServiceVersionNames: - my-operator.v1.0.1 generation: 1 status: catalogSources: [] conditions: - lastTransitionTime: '2021-01-01T20:17:27Z' lastUpdateTime: '2021-01-01T20:17:27Z' status: 'True' type: Installed phase: Complete plan: - resolving: my-operator.v1.0.1 resource: group: operators.coreos.com kind: ClusterServiceVersion manifest: >- name: my-operator.v1.0.1 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1alpha1 status: Created - resolving: my-operator.v1.0.1 resource: group: apiextensions.k8s.io kind: CustomResourceDefinition manifest: >- name: webservers.web.servers.org sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1beta1 status: Created - resolving: my-operator.v1.0.1 resource: group: '' kind: ServiceAccount manifest: >- name: my-operator sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: Role manifest: >- name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: RoleBinding manifest: >- name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created", "packageName: example channels: - name: alpha currentCSV: example.v0.1.2 - name: beta currentCSV: example.v0.1.3 defaultChannel: alpha", "apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: etcdoperator.v0.9.2 namespace: placeholder annotations: spec: displayName: etcd description: Etcd Operator replaces: etcdoperator.v0.9.0 skips: - etcdoperator.v0.9.1", "olm.skipRange: <semver_range>", "apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: elasticsearch-operator.v4.1.2 namespace: <namespace> annotations: olm.skipRange: '>=4.1.0 <4.1.2'", "properties: - type: olm.kubeversion value: version: \"1.16.0\"", "properties: - property: type: color value: red - property: type: shape value: square - property: type: olm.gvk value: group: olm.coreos.io version: v1alpha1 kind: myresource", "dependencies: - type: olm.package value: packageName: prometheus version: \">0.27.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2", "type: olm.constraint value: failureMessage: 'require to have \"certified\"' cel: rule: 'properties.exists(p, p.type == \"certified\")'", "type: olm.constraint value: failureMessage: 'require to have \"certified\" and \"stable\" properties' cel: rule: 'properties.exists(p, p.type == \"certified\") && properties.exists(p, p.type == \"stable\")'", "schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: All are required for Red because all: constraints: - failureMessage: Package blue is needed for package: name: blue versionRange: '>=1.0.0' - failureMessage: GVK Green/v1 is needed for gvk: group: greens.example.com version: v1 kind: Green", "schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Any are required for Red because any: 
constraints: - gvk: group: blues.example.com version: v1beta1 kind: Blue - gvk: group: blues.example.com version: v1beta2 kind: Blue - gvk: group: blues.example.com version: v1 kind: Blue", "schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: all: constraints: - failureMessage: Package blue is needed for package: name: blue versionRange: '>=1.0.0' - failureMessage: Cannot be required for Red because not: constraints: - gvk: group: greens.example.com version: v1alpha1 kind: greens", "schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Required for Red because any: constraints: - all: constraints: - package: name: blue versionRange: '>=1.0.0' - gvk: group: blues.example.com version: v1 kind: Blue - all: constraints: - package: name: blue versionRange: '<1.0.0' - gvk: group: blues.example.com version: v1beta1 kind: Blue", "apiVersion: \"operators.coreos.com/v1alpha1\" kind: \"CatalogSource\" metadata: name: \"my-operators\" namespace: \"operators\" spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 1 image: example.com/my/operator-index:v1 displayName: \"My Operators\" priority: 100", "dependencies: - type: olm.package value: packageName: etcd version: \">3.1.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: targetNamespaces: - my-namespace", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: selector: cool.io/prod: \"true\"", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: PackageManifest.v1alpha1.packages.apps.redhat.com name: olm-operators namespace: local spec: selector: {} serviceAccountName: metadata: creationTimestamp: null targetNamespaces: - local status: lastUpdated: 2019-02-19T16:18:28Z namespaces: - local", "cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: false EOF", "cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: true EOF", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-monitoring namespace: cluster-monitoring annotations: olm.providedAPIs: Alertmanager.v1.monitoring.coreos.com,Prometheus.v1.monitoring.coreos.com,PrometheusRule.v1.monitoring.coreos.com,ServiceMonitor.v1.monitoring.coreos.com spec: staticProvidedAPIs: true selector: matchLabels: something.cool.io/cluster-monitoring: \"true\"", "attenuated service account query failed - more than one operator group(s) are managing this namespace count=2", "apiVersion: operators.coreos.com/v1 kind: OperatorCondition metadata: name: my-operator namespace: operators spec: conditions: - type: Upgradeable 1 status: \"False\" 2 reason: \"migration\" message: \"The Operator is performing a migration.\" lastTransitionTime: \"2020-08-24T23:15:55Z\"", "registry.redhat.io/redhat/redhat-operator-index:v4.8", "registry.redhat.io/redhat/redhat-operator-index:v4.9", "apiVersion: \"stable.example.com/v1\" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: \"* * * * /5\" image: my-awesome-cron-image", "oc 
create -f <file_name>.yaml", "oc get <kind>", "oc get crontab", "NAME KIND my-new-cron-object CronTab.v1.stable.example.com", "oc get crontabs", "oc get crontab", "oc get ct", "oc get <kind> -o yaml", "oc get ct -o yaml", "apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: \"\" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: \"285\" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2", "oc get csv", "oc policy add-role-to-user edit <user> -n <target_project>", "oc get packagemanifests -n openshift-marketplace", "NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m", "oc describe packagemanifests <operator_name> -n openshift-marketplace", "Kind: PackageManifest Install Modes: 1 Supported: true Type: OwnNamespace Supported: true Type: SingleNamespace Supported: false Type: MultiNamespace Supported: true Type: AllNamespaces Entries: Name: example-operator.v3.7.11 Version: 3.7.11 Name: example-operator.v3.7.10 Version: 3.7.10 Name: stable-3.7 2 Entries: Name: example-operator.v3.8.5 Version: 3.8.5 Name: example-operator.v3.8.4 Version: 3.8.4 Name: stable-3.8 3 Default Channel: stable-3.8 4", "oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml", "oc get packagemanifest --selector=catalog=<catalogsource_name> --field-selector metadata.name=<operator_name> -n <catalog_namespace> -o yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> 1 spec: targetNamespaces: - <namespace> 2", "oc apply -f operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: <namespace_per_install_mode> 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: <catalog_name> 4 sourceNamespace: <catalog_source_namespace> 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-operator spec: channel: stable-3.7 installPlanApproval: Manual 1 name: example-operator source: custom-operators sourceNamespace: openshift-marketplace startingCSV: example-operator.v3.7.10 2", "kind: Subscription spec: installPlanApproval: Manual 1", "kind: Subscription spec: config: env: - name: ROLEARN value: \"<role_arn>\" 1", "kind: Subscription spec: config: env: - name: CLIENTID value: \"<client_id>\" 1 - name: TENANTID value: \"<tenant_id>\" 2 - name: SUBSCRIPTIONID value: \"<subscription_id>\" 3", "kind: Subscription spec: config: env: - name: AUDIENCE value: \"<audience_url>\" 1 - name: SERVICE_ACCOUNT_EMAIL value: 
\"<service_account_email>\" 2", "//iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/<pool_id>/providers/<provider_id>", "<service_account_name>@<project_id>.iam.gserviceaccount.com", "oc apply -f subscription.yaml", "oc describe subscription <subscription_name> -n <namespace>", "oc describe operatorgroup <operatorgroup_name> -n <namespace>", "oc new-project team1-operator", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: team1-operatorgroup namespace: team1-operator spec: targetNamespaces: - team1 1", "oc create -f team1-operatorgroup.yaml", "oc new-project global-operators", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: global-operatorgroup namespace: global-operators", "oc create -f global-operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-163-94.us-west-2.compute.internal #", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/arch operator: In values: - arm64 - key: kubernetes.io/os operator: In values: - linux #", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - test topologyKey: kubernetes.io/hostname #", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAntiAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: cpu operator: In values: - high topologyKey: kubernetes.io/hostname #", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: 1 nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-185-229.ec2.internal #", "oc get pods -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES custom-metrics-autoscaler-operator-5dcc45d656-bhshg 1/1 Running 0 50s 10.131.0.20 ip-10-0-185-229.ec2.internal <none> <none>", "oc get subscription.operators.coreos.com serverless-operator -n openshift-serverless -o yaml | grep currentCSV", "currentCSV: serverless-operator.v1.28.0", "oc delete subscription.operators.coreos.com serverless-operator -n openshift-serverless", "subscription.operators.coreos.com \"serverless-operator\" deleted", "oc 
delete clusterserviceversion serverless-operator.v1.28.0 -n openshift-serverless", "clusterserviceversion.operators.coreos.com \"serverless-operator.v1.28.0\" deleted", "ImagePullBackOff for Back-off pulling image \"example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e\"", "rpc error: code = Unknown desc = error pinging docker registry example.com: Get \"https://example.com/v2/\": dial tcp: lookup example.com on 10.0.0.1:53: no such host", "oc get sub,csv -n <namespace>", "NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded", "oc delete subscription <subscription_name> -n <namespace>", "oc delete csv <csv_name> -n <namespace>", "oc get job,configmap -n openshift-marketplace", "NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s", "oc delete job <job_name> -n openshift-marketplace", "oc delete configmap <configmap_name> -n openshift-marketplace", "oc get sub,csv,installplan -n <namespace>", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: etcd-config-test namespace: openshift-operators spec: config: env: - name: HTTP_PROXY value: test_http - name: HTTPS_PROXY value: test_https - name: NO_PROXY value: test channel: clusterwide-alpha installPlanApproval: Automatic name: etcd source: community-operators sourceNamespace: openshift-marketplace startingCSV: etcdoperator.v0.9.4-clusterwide", "oc get deployment -n openshift-operators etcd-operator -o yaml | grep -i \"PROXY\" -A 2", "- name: HTTP_PROXY value: test_http - name: HTTPS_PROXY value: test_https - name: NO_PROXY value: test image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21088a98b93838e284a6086b13917f96b0d9c", "apiVersion: v1 kind: ConfigMap metadata: name: trusted-ca 1 labels: config.openshift.io/inject-trusted-cabundle: \"true\" 2", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: my-operator spec: package: etcd channel: alpha config: 1 selector: matchLabels: <labels_for_pods> 2 volumes: 3 - name: trusted-ca configMap: name: trusted-ca items: - key: ca-bundle.crt 4 path: tls-ca-bundle.pem 5 volumeMounts: 6 - name: trusted-ca mountPath: /etc/pki/ca-trust/extracted/pem readOnly: true", "oc get subs -n <operator_namespace>", "oc describe sub <subscription_name> -n <operator_namespace>", "Name: cluster-logging Namespace: openshift-logging Labels: operators.coreos.com/cluster-logging.openshift-logging= Annotations: <none> API Version: operators.coreos.com/v1alpha1 Kind: Subscription Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy", "oc get catalogsources -n openshift-marketplace", "NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m", "oc describe catalogsource example-catalog -n openshift-marketplace", "Name: 
example-catalog Namespace: openshift-marketplace Labels: <none> Annotations: operatorframework.io/managed-by: marketplace-operator target.workload.openshift.io/management: {\"effect\": \"PreferredDuringScheduling\"} API Version: operators.coreos.com/v1alpha1 Kind: CatalogSource Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace", "oc get pods -n openshift-marketplace", "NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m", "oc describe pod example-catalog-bwt8z -n openshift-marketplace", "Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image \"quay.io/example-org/example-catalog:v1\": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull", "oc edit operatorcondition <name>", "apiVersion: operators.coreos.com/v2 kind: OperatorCondition metadata: name: my-operator namespace: operators spec: overrides: - type: Upgradeable 1 status: \"True\" reason: \"upgradeIsSafe\" message: \"This is a known issue with the Operator where it always reports that it cannot be upgraded.\" conditions: - type: Upgradeable status: \"False\" reason: \"migration\" message: \"The operator is performing a migration.\" lastTransitionTime: \"2020-08-24T23:15:55Z\"", "mkdir <catalog_dir>", "opm generate dockerfile <catalog_dir> -i registry.redhat.io/openshift4/ose-operator-registry-rhel9:v4 1", ". 1 β”œβ”€β”€ <catalog_dir> 2 └── <catalog_dir>.Dockerfile 3", "opm init <operator_name> \\ 1 --default-channel=preview \\ 2 --description=./README.md \\ 3 --icon=./operator-icon.svg \\ 4 --output yaml \\ 5 > <catalog_dir>/index.yaml 6", "opm render <registry>/<namespace>/<bundle_image_name>:<tag> \\ 1 --output=yaml >> <catalog_dir>/index.yaml 2", "--- schema: olm.channel package: <operator_name> name: preview entries: - name: <operator_name>.v0.1.0 1", "opm validate <catalog_dir>", "echo USD?", "0", "podman build . 
-f <catalog_dir>.Dockerfile -t <registry>/<namespace>/<catalog_image_name>:<tag>", "podman login <registry>", "podman push <registry>/<namespace>/<catalog_image_name>:<tag>", "opm render <registry>/<namespace>/<catalog_image_name>:<tag> -o yaml > <catalog_dir>/index.yaml", "--- defaultChannel: release-2.7 icon: base64data: <base64_string> mediatype: image/svg+xml name: example-operator schema: olm.package --- entries: - name: example-operator.v2.7.0 skipRange: '>=2.6.0 <2.7.0' - name: example-operator.v2.7.1 replaces: example-operator.v2.7.0 skipRange: '>=2.6.0 <2.7.1' - name: example-operator.v2.7.2 replaces: example-operator.v2.7.1 skipRange: '>=2.6.0 <2.7.2' - name: example-operator.v2.7.3 replaces: example-operator.v2.7.2 skipRange: '>=2.6.0 <2.7.3' - name: example-operator.v2.7.4 replaces: example-operator.v2.7.3 skipRange: '>=2.6.0 <2.7.4' name: release-2.7 package: example-operator schema: olm.channel --- image: example.com/example-inc/example-operator-bundle@sha256:<digest> name: example-operator.v2.7.0 package: example-operator properties: - type: olm.gvk value: group: example-group.example.io kind: MyObject version: v1alpha1 - type: olm.gvk value: group: example-group.example.io kind: MyOtherObject version: v1beta1 - type: olm.package value: packageName: example-operator version: 2.7.0 - type: olm.bundle.object value: data: <base64_string> - type: olm.bundle.object value: data: <base64_string> relatedImages: - image: example.com/example-inc/example-related-image@sha256:<digest> name: example-related-image schema: olm.bundle ---", "opm validate <catalog_dir>", "podman build . -f <catalog_dir>.Dockerfile -t <registry>/<namespace>/<catalog_image_name>:<tag>", "podman push <registry>/<namespace>/<catalog_image_name>:<tag>", "opm index add --bundles <registry>/<namespace>/<bundle_image_name>:<tag> \\ 1 --tag <registry>/<namespace>/<index_image_name>:<tag> \\ 2 [--binary-image <registry_base_image>] 3", "podman login <registry>", "podman push <registry>/<namespace>/<index_image_name>:<tag>", "opm index add --bundles <registry>/<namespace>/<new_bundle_image>@sha256:<digest> \\ 1 --from-index <registry>/<namespace>/<existing_index_image>:<existing_tag> \\ 2 --tag <registry>/<namespace>/<existing_index_image>:<updated_tag> \\ 3 --pull-tool podman 4", "opm index add --bundles quay.io/ocs-dev/ocs-operator@sha256:c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 --from-index mirror.example.com/abc/abc-redhat-operator-index:4 --tag mirror.example.com/abc/abc-redhat-operator-index:4.1 --pull-tool podman", "podman push <registry>/<namespace>/<existing_index_image>:<updated_tag>", "oc get packagemanifests -n openshift-marketplace", "podman login <target_registry>", "podman run -p50051:50051 -it registry.redhat.io/redhat/redhat-operator-index:v4", "Trying to pull registry.redhat.io/redhat/redhat-operator-index:v4 Getting image source signatures Copying blob ae8a0c23f5b1 done INFO[0000] serving registry database=/database/index.db port=50051", "grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out", "{ \"name\": \"advanced-cluster-management\" } { \"name\": \"jaeger-product\" } { { \"name\": \"quay-operator\" }", "opm index prune -f registry.redhat.io/redhat/redhat-operator-index:v4 \\ 1 -p advanced-cluster-management,jaeger-product,quay-operator \\ 2 [-i registry.redhat.io/openshift4/ose-operator-registry-rhel9:v4] \\ 3 -t <target_registry>:<port>/<namespace>/redhat-operator-index:v4 4", "podman push 
<target_registry>:<port>/<namespace>/redhat-operator-index:v4", "opm migrate <registry_image> <fbc_directory>", "opm generate dockerfile <fbc_directory> --binary-image registry.redhat.io/openshift4/ose-operator-registry-rhel9:v4", "opm index add --binary-image registry.redhat.io/openshift4/ose-operator-registry-rhel9:v4 --from-index <your_registry_image> --bundles \"\" -t \\<your_registry_image>", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-catsrc namespace: my-ns spec: sourceType: grpc grpcPodConfig: securityContextConfig: legacy image: my-image:latest", "apiVersion: v1 kind: Namespace metadata: labels: security.openshift.io/scc.podSecurityLabelSync: \"false\" 1 openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: baseline 2 name: \"<namespace_name>\"", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog namespace: openshift-marketplace 1 annotations: olm.catalogImageTemplate: 2 \"<registry>/<namespace>/<index_image_name>:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}\" spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 3 image: <registry>/<namespace>/<index_image_name>:<tag> 4 displayName: My Operator Catalog publisher: <publisher_name> 5 updateStrategy: registryPoll: 6 interval: 30m", "oc apply -f catalogSource.yaml", "oc get pods -n openshift-marketplace", "NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h", "oc get catalogsource -n openshift-marketplace", "NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s", "oc get packagemanifest -n openshift-marketplace", "NAME CATALOG AGE jaeger-product My Operator Catalog 93s", "oc patch operatorhub cluster -p '{\"spec\": {\"disableAllDefaultSources\": true}}' --type=merge", "grpcPodConfig: nodeSelector: custom_label: <label>", "grpcPodConfig: priorityClassName: <priority_class>", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: example-catalog namespace: openshift-marketplace annotations: operatorframework.io/priorityclass: system-cluster-critical", "grpcPodConfig: tolerations: - key: \"<key_name>\" operator: \"<operator_type>\" value: \"<value>\" effect: \"<effect>\"", "oc get subs -n <operator_namespace>", "oc describe sub <subscription_name> -n <operator_namespace>", "Name: cluster-logging Namespace: openshift-logging Labels: operators.coreos.com/cluster-logging.openshift-logging= Annotations: <none> API Version: operators.coreos.com/v1alpha1 Kind: Subscription Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy", "oc get catalogsources -n openshift-marketplace", "NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m", "oc describe catalogsource example-catalog -n openshift-marketplace", "Name: example-catalog Namespace: openshift-marketplace Labels: <none> Annotations: operatorframework.io/managed-by: marketplace-operator target.workload.openshift.io/management: {\"effect\": \"PreferredDuringScheduling\"} API Version: operators.coreos.com/v1alpha1 Kind: CatalogSource Status: 
Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace", "oc get pods -n openshift-marketplace", "NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m", "oc describe pod example-catalog-bwt8z -n openshift-marketplace", "Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image \"quay.io/example-org/example-catalog:v1\": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull", "oc get clusteroperators", "oc get pod -n <operator_namespace>", "oc describe pod <operator_pod_name> -n <operator_namespace>", "oc get pods -n <operator_namespace>", "oc logs pod/<pod_name> -n <operator_namespace>", "oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <operator_pod_id>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps --pod=<operator_pod_id>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>", "tar xvf operator-sdk-v1.38.0-ocp-linux-x86_64.tar.gz", "chmod +x operator-sdk", "echo USDPATH", "sudo mv ./operator-sdk /usr/local/bin/operator-sdk", "operator-sdk version", "operator-sdk version: \"v1.38.0-ocp\",", "tar xvf operator-sdk-v1.38.0-ocp-darwin-x86_64.tar.gz", "chmod +x operator-sdk", "echo USDPATH", "sudo mv ./operator-sdk /usr/local/bin/operator-sdk", "operator-sdk version", "operator-sdk version: \"v1.38.0-ocp\",", "mkdir -p USDHOME/projects/memcached-operator", "cd USDHOME/projects/memcached-operator", "export GO111MODULE=on", "operator-sdk init --domain=example.com --repo=github.com/example-inc/memcached-operator", "domain: example.com layout: - go.kubebuilder.io/v3 projectName: memcached-operator repo: github.com/example-inc/memcached-operator version: \"3\" plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {}", "mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: namespace})", "mgr, err := ctrl.NewManager(cfg, manager.Options{Namespace: \"\"})", "var namespaces []string 
1 mgr, err := ctrl.NewManager(cfg, manager.Options{ 2 NewCache: cache.MultiNamespacedCacheBuilder(namespaces), })", "operator-sdk edit --multigroup=true", "domain: example.com layout: go.kubebuilder.io/v3 multigroup: true", "operator-sdk create api --group=cache --version=v1 --kind=Memcached", "Create Resource [y/n] y Create Controller [y/n] y", "Writing scaffold for you to edit api/v1/memcached_types.go controllers/memcached_controller.go", "// MemcachedSpec defines the desired state of Memcached type MemcachedSpec struct { // +kubebuilder:validation:Minimum=0 // Size is the size of the memcached deployment Size int32 `json:\"size\"` } // MemcachedStatus defines the observed state of Memcached type MemcachedStatus struct { // Nodes are the names of the memcached pods Nodes []string `json:\"nodes\"` }", "make generate", "make manifests", "/* Copyright 2020. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package controllers import ( appsv1 \"k8s.io/api/apps/v1\" corev1 \"k8s.io/api/core/v1\" \"k8s.io/apimachinery/pkg/api/errors\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" \"k8s.io/apimachinery/pkg/types\" \"reflect\" \"context\" \"github.com/go-logr/logr\" \"k8s.io/apimachinery/pkg/runtime\" ctrl \"sigs.k8s.io/controller-runtime\" \"sigs.k8s.io/controller-runtime/pkg/client\" ctrllog \"sigs.k8s.io/controller-runtime/pkg/log\" cachev1 \"github.com/example-inc/memcached-operator/api/v1\" ) // MemcachedReconciler reconciles a Memcached object type MemcachedReconciler struct { client.Client Log logr.Logger Scheme *runtime.Scheme } // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list; // Reconcile is part of the main kubernetes reconciliation loop which aims to // move the current state of the cluster closer to the desired state. // TODO(user): Modify the Reconcile function to compare the state specified by // the Memcached object against the actual cluster state, and then // perform operations to make the cluster state reflect the state specified by // the user. // // For more details, check Reconcile and its Result here: // - https://pkg.go.dev/sigs.k8s.io/[email protected]/pkg/reconcile func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { //log := r.Log.WithValues(\"memcached\", req.NamespacedName) log := ctrllog.FromContext(ctx) // Fetch the Memcached instance memcached := &cachev1.Memcached{} err := r.Get(ctx, req.NamespacedName, memcached) if err != nil { if errors.IsNotFound(err) { // Request object not found, could have been deleted after reconcile request. // Owned objects are automatically garbage collected. For additional cleanup logic use finalizers. 
// Return and don't requeue log.Info(\"Memcached resource not found. Ignoring since object must be deleted\") return ctrl.Result{}, nil } // Error reading the object - requeue the request. log.Error(err, \"Failed to get Memcached\") return ctrl.Result{}, err } // Check if the deployment already exists, if not create a new one found := &appsv1.Deployment{} err = r.Get(ctx, types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, found) if err != nil && errors.IsNotFound(err) { // Define a new deployment dep := r.deploymentForMemcached(memcached) log.Info(\"Creating a new Deployment\", \"Deployment.Namespace\", dep.Namespace, \"Deployment.Name\", dep.Name) err = r.Create(ctx, dep) if err != nil { log.Error(err, \"Failed to create new Deployment\", \"Deployment.Namespace\", dep.Namespace, \"Deployment.Name\", dep.Name) return ctrl.Result{}, err } // Deployment created successfully - return and requeue return ctrl.Result{Requeue: true}, nil } else if err != nil { log.Error(err, \"Failed to get Deployment\") return ctrl.Result{}, err } // Ensure the deployment size is the same as the spec size := memcached.Spec.Size if *found.Spec.Replicas != size { found.Spec.Replicas = &size err = r.Update(ctx, found) if err != nil { log.Error(err, \"Failed to update Deployment\", \"Deployment.Namespace\", found.Namespace, \"Deployment.Name\", found.Name) return ctrl.Result{}, err } // Spec updated - return and requeue return ctrl.Result{Requeue: true}, nil } // Update the Memcached status with the pod names // List the pods for this memcached's deployment podList := &corev1.PodList{} listOpts := []client.ListOption{ client.InNamespace(memcached.Namespace), client.MatchingLabels(labelsForMemcached(memcached.Name)), } if err = r.List(ctx, podList, listOpts...); err != nil { log.Error(err, \"Failed to list pods\", \"Memcached.Namespace\", memcached.Namespace, \"Memcached.Name\", memcached.Name) return ctrl.Result{}, err } podNames := getPodNames(podList.Items) // Update status.Nodes if needed if !reflect.DeepEqual(podNames, memcached.Status.Nodes) { memcached.Status.Nodes = podNames err := r.Status().Update(ctx, memcached) if err != nil { log.Error(err, \"Failed to update Memcached status\") return ctrl.Result{}, err } } return ctrl.Result{}, nil } // deploymentForMemcached returns a memcached Deployment object func (r *MemcachedReconciler) deploymentForMemcached(m *cachev1.Memcached) *appsv1.Deployment { ls := labelsForMemcached(m.Name) replicas := m.Spec.Size dep := &appsv1.Deployment{ ObjectMeta: metav1.ObjectMeta{ Name: m.Name, Namespace: m.Namespace, }, Spec: appsv1.DeploymentSpec{ Replicas: &replicas, Selector: &metav1.LabelSelector{ MatchLabels: ls, }, Template: corev1.PodTemplateSpec{ ObjectMeta: metav1.ObjectMeta{ Labels: ls, }, Spec: corev1.PodSpec{ Containers: []corev1.Container{{ Image: \"memcached:1.4.36-alpine\", Name: \"memcached\", Command: []string{\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\"}, Ports: []corev1.ContainerPort{{ ContainerPort: 11211, Name: \"memcached\", }}, }}, }, }, }, } // Set Memcached instance as the owner and controller ctrl.SetControllerReference(m, dep, r.Scheme) return dep } // labelsForMemcached returns the labels for selecting the resources // belonging to the given memcached CR name. 
func labelsForMemcached(name string) map[string]string { return map[string]string{\"app\": \"memcached\", \"memcached_cr\": name} } // getPodNames returns the pod names of the array of pods passed in func getPodNames(pods []corev1.Pod) []string { var podNames []string for _, pod := range pods { podNames = append(podNames, pod.Name) } return podNames } // SetupWithManager sets up the controller with the Manager. func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). Complete(r) }", "import ( appsv1 \"k8s.io/api/apps/v1\" ) func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). Complete(r) }", "func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error { return ctrl.NewControllerManagedBy(mgr). For(&cachev1.Memcached{}). Owns(&appsv1.Deployment{}). WithOptions(controller.Options{ MaxConcurrentReconciles: 2, }). Complete(r) }", "import ( ctrl \"sigs.k8s.io/controller-runtime\" cachev1 \"github.com/example-inc/memcached-operator/api/v1\" ) func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { // Lookup the Memcached instance for this reconcile request memcached := &cachev1.Memcached{} err := r.Get(ctx, req.NamespacedName, memcached) }", "// Reconcile successful - don't requeue return ctrl.Result{}, nil // Reconcile failed due to error - requeue return ctrl.Result{}, err // Requeue for any reason other than an error return ctrl.Result{Requeue: true}, nil", "import \"time\" // Reconcile for any reason other than an error after 5 seconds return ctrl.Result{RequeueAfter: time.Second*5}, nil", "// +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/status,verbs=get;update;patch // +kubebuilder:rbac:groups=cache.example.com,resources=memcacheds/finalizers,verbs=update // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete // +kubebuilder:rbac:groups=core,resources=pods,verbs=get;list; func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) { }", "import ( \"github.com/operator-framework/operator-lib/proxy\" )", "for i, container := range dep.Spec.Template.Spec.Containers { dep.Spec.Template.Spec.Containers[i].Env = append(container.Env, proxy.ReadProxyVarsFromEnv()...) 
}", "containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"", "make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>", "docker push <registry>/<user>/<bundle_image_name>:<tag>", "operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3", "oc project memcached-operator-system", "apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3", "oc apply -f config/samples/cache_v1_memcached.yaml", "oc get deployments", "NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 8m memcached-sample 3/3 3 3 1m", "oc get pods", "NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m", "oc get memcached/memcached-sample -o yaml", "apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3 status: nodes: - memcached-sample-6fd7c98d8-7dqdr - memcached-sample-6fd7c98d8-g5k7v - memcached-sample-6fd7c98d8-m7vn7", "oc patch memcached memcached-sample -p '{\"spec\":{\"size\": 5}}' --type=merge", "oc get deployments", "NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 10m memcached-sample 5/5 5 5 3m", "oc delete -f config/samples/cache_v1_memcached.yaml", "make undeploy", "operator-sdk cleanup <project_name>", "Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. 
OPERATOR_SDK_VERSION ?= v1.38.0 1", "go 1.22.0 github.com/onsi/ginkgo/v2 v2.17.1 github.com/onsi/gomega v1.32.0 k8s.io/api v0.30.1 k8s.io/apimachinery v0.30.1 k8s.io/client-go v0.30.1 sigs.k8s.io/controller-runtime v0.18.4", "go mod tidy", "- ENVTEST_K8S_VERSION = 1.29.0 + ENVTEST_K8S_VERSION = 1.30.0", "- KUSTOMIZE ?= USD(LOCALBIN)/kustomize-USD(KUSTOMIZE_VERSION) - CONTROLLER_GEN ?= USD(LOCALBIN)/controller-gen-USD(CONTROLLER_TOOLS_VERSION) - ENVTEST ?= USD(LOCALBIN)/setup-envtest-USD(ENVTEST_VERSION) - GOLANGCI_LINT = USD(LOCALBIN)/golangci-lint-USD(GOLANGCI_LINT_VERSION) + KUSTOMIZE ?= USD(LOCALBIN)/kustomize + CONTROLLER_GEN ?= USD(LOCALBIN)/controller-gen + ENVTEST ?= USD(LOCALBIN)/setup-envtest + GOLANGCI_LINT = USD(LOCALBIN)/golangci-lint", "- KUSTOMIZE_VERSION ?= v5.3.0 - CONTROLLER_TOOLS_VERSION ?= v0.14.0 - ENVTEST_VERSION ?= release-0.17 - GOLANGCI_LINT_VERSION ?= v1.57.2 + KUSTOMIZE_VERSION ?= v5.4.2 + CONTROLLER_TOOLS_VERSION ?= v0.15.0 + ENVTEST_VERSION ?= release-0.18 + GOLANGCI_LINT_VERSION ?= v1.59.1", "- USD(call go-install-tool,USD(GOLANGCI_LINT),github.com/golangci/golangci-lint/cmd/golangci-lint,USD{GOLANGCI_LINT_VERSION}) + USD(call go-install-tool,USD(GOLANGCI_LINT),github.com/golangci/golangci-lint/cmd/golangci-lint,USD(GOLANGCI_LINT_VERSION))", "- USD(call go-install-tool,USD(GOLANGCI_LINT),github.com/golangci/golangci-lint/cmd/golangci-lint,USD{GOLANGCI_LINT_VERSION}) + USD(call go-install-tool,USD(GOLANGCI_LINT),github.com/golangci/golangci-lint/cmd/golangci-lint,USD(GOLANGCI_LINT_VERSION))", "- @[ -f USD(1) ] || { + @[ -f \"USD(1)-USD(3)\" ] || { echo \"Downloading USDUSD{package}\" ; + rm -f USD(1) || true ; - mv \"USDUSD(echo \"USD(1)\" | sed \"s/-USD(3)USDUSD//\")\" USD(1) ; - } + mv USD(1) USD(1)-USD(3) ; + } ; + ln -sf USD(1)-USD(3) USD(1)", "- exportloopref + - ginkgolinter - prealloc + - revive + + linters-settings: + revive: + rules: + - name: comment-spacings", "- FROM golang:1.21 AS builder + FROM golang:1.22 AS builder", "\"sigs.k8s.io/controller-runtime/pkg/log/zap\" + \"sigs.k8s.io/controller-runtime/pkg/metrics/filters\" var enableHTTP2 bool - flag.StringVar(&metricsAddr, \"metrics-bind-address\", \":8080\", \"The address the metric endpoint binds to.\") + var tlsOpts []func(*tls.Config) + flag.StringVar(&metricsAddr, \"metrics-bind-address\", \"0\", \"The address the metrics endpoint binds to. \"+ + \"Use :8443 for HTTPS or :8080 for HTTP, or leave as 0 to disable the metrics service.\") flag.StringVar(&probeAddr, \"health-probe-bind-address\", \":8081\", \"The address the probe endpoint binds to.\") flag.BoolVar(&enableLeaderElection, \"leader-elect\", false, \"Enable leader election for controller manager. \"+ \"Enabling this will ensure there is only one active controller manager.\") - flag.BoolVar(&secureMetrics, \"metrics-secure\", false, - \"If set the metrics endpoint is served securely\") + flag.BoolVar(&secureMetrics, \"metrics-secure\", true, + \"If set, the metrics endpoint is served securely via HTTPS. Use --metrics-secure=false to use HTTP instead.\") - tlsOpts := []func(*tls.Config){} + // Metrics endpoint is enabled in 'config/default/kustomization.yaml'. The Metrics options configure the server. 
+ // More info: + // - https://pkg.go.dev/sigs.k8s.io/[email protected]/pkg/metrics/server + // - https://book.kubebuilder.io/reference/metrics.html + metricsServerOptions := metricsserver.Options{ + BindAddress: metricsAddr, + SecureServing: secureMetrics, + // TODO(user): TLSOpts is used to allow configuring the TLS config used for the server. If certificates are + // not provided, self-signed certificates will be generated by default. This option is not recommended for + // production environments as self-signed certificates do not offer the same level of trust and security + // as certificates issued by a trusted Certificate Authority (CA). The primary risk is potentially allowing + // unauthorized access to sensitive metrics data. Consider replacing with CertDir, CertName, and KeyName + // to provide certificates, ensuring the server communicates using trusted and secure certificates. + TLSOpts: tlsOpts, + } + + if secureMetrics { + // FilterProvider is used to protect the metrics endpoint with authn/authz. + // These configurations ensure that only authorized users and service accounts + // can access the metrics endpoint. The RBAC are configured in 'config/rbac/kustomization.yaml'. More info: + // https://pkg.go.dev/sigs.k8s.io/[email protected]/pkg/metrics/filters#WithAuthenticationAndAuthorization + metricsServerOptions.FilterProvider = filters.WithAuthenticationAndAuthorization + } + mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{ - Scheme: scheme, - Metrics: metricsserver.Options{ - BindAddress: metricsAddr, - SecureServing: secureMetrics, - TLSOpts: tlsOpts, - }, + Scheme: scheme, + Metrics: metricsServerOptions,", "[PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'. #- ../prometheus + # [METRICS] Expose the controller manager metrics service. + - metrics_service.yaml + # Uncomment the patches line if you enable Metrics, and/or are using webhooks and cert-manager patches: - # Protect the /metrics endpoint by putting it behind auth. - # If you want your controller-manager to expose the /metrics - # endpoint w/o any authn/z, please comment the following line. - - path: manager_auth_proxy_patch.yaml + # [METRICS] The following patch will enable the metrics endpoint using HTTPS and the port :8443. + # More info: https://book.kubebuilder.io/reference/metrics + - path: manager_metrics_patch.yaml + target: + kind: Deployment", "This patch adds the args to allow exposing the metrics endpoint using HTTPS - op: add path: /spec/template/spec/containers/0/args/0 value: --metrics-bind-address=:8443", "apiVersion: v1 kind: Service metadata: labels: control-plane: controller-manager app.kubernetes.io/name: <operator-name> app.kubernetes.io/managed-by: kustomize name: controller-manager-metrics-service namespace: system spec: ports: - name: https port: 8443 protocol: TCP targetPort: 8443 selector: control-plane: controller-manager", "- --leader-elect + - --health-probe-bind-address=:8081", "- path: /metrics - port: https + port: https # Ensure this is the name of the port that exposes HTTPS metrics tlsConfig: + # TODO(user): The option insecureSkipVerify: true is not recommended for production since it disables + # certificate verification. This poses a significant security risk by making the system vulnerable to + # man-in-the-middle attacks, where an attacker could intercept and manipulate the communication between + # Prometheus and the monitored services. 
This could lead to unauthorized access to sensitive metrics data, + # compromising the integrity and confidentiality of the information. + # Please use the following options for secure configurations: + # caFile: /etc/metrics-certs/ca.crt + # certFile: /etc/metrics-certs/tls.crt + # keyFile: /etc/metrics-certs/tls.key insecureSkipVerify: true", "- leader_election_role_binding.yaml - # Comment the following 4 lines if you want to disable - # the auth proxy (https://github.com/brancz/kube-rbac-proxy) - # which protects your /metrics endpoint. - - auth_proxy_service.yaml - - auth_proxy_role.yaml - - auth_proxy_role_binding.yaml - - auth_proxy_client_clusterrole.yaml + # The following RBAC configurations are used to protect + # the metrics endpoint with authn/authz. These configurations + # ensure that only authorized users and service accounts + # can access the metrics endpoint. Comment the following + # permissions if you want to disable this protection. + # More info: https://book.kubebuilder.io/reference/metrics.html + - metrics_auth_role.yaml + - metrics_auth_role_binding.yaml + - metrics_reader_role.yaml", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: metrics-auth-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: metrics-auth-role subjects: - kind: ServiceAccount name: controller-manager namespace: system", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: metrics-reader rules: - nonResourceURLs: - \"/metrics\" verbs: - get", "mkdir -p USDHOME/projects/memcached-operator", "cd USDHOME/projects/memcached-operator", "operator-sdk init --plugins=ansible --domain=example.com", "domain: example.com layout: - ansible.sdk.operatorframework.io/v1 plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {} projectName: memcached-operator version: \"3\"", "operator-sdk create api --group cache --version v1 --kind Memcached --generate-role 1", "--- - name: start memcached k8s: definition: kind: Deployment apiVersion: apps/v1 metadata: name: '{{ ansible_operator_meta.name }}-memcached' namespace: '{{ ansible_operator_meta.namespace }}' spec: replicas: \"{{size}}\" selector: matchLabels: app: memcached template: metadata: labels: app: memcached spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v image: \"docker.io/memcached:1.4.36-alpine\" ports: - containerPort: 11211", "--- defaults file for Memcached size: 1", "apiVersion: cache.example.com/v1 kind: Memcached metadata: labels: app.kubernetes.io/name: memcached app.kubernetes.io/instance: memcached-sample app.kubernetes.io/part-of: memcached-operator app.kubernetes.io/managed-by: kustomize app.kubernetes.io/created-by: memcached-operator name: memcached-sample spec: size: 3", "env: - name: HTTP_PROXY value: '{{ lookup(\"env\", \"HTTP_PROXY\") | default(\"\", True) }}' - name: http_proxy value: '{{ lookup(\"env\", \"HTTP_PROXY\") | default(\"\", True) }}'", "containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"", "make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>", "docker push <registry>/<user>/<bundle_image_name>:<tag>", 
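The entries above list the bundle targets one at a time; the following is a minimal end-to-end sketch of that build-push-run sequence under assumed values — the quay.io image names, the v0.0.1 tag, and the my-operator-ns namespace are illustrative placeholders, not values taken from this document:

# build and push the operator manager image (illustrative names)
make docker-build docker-push IMG=quay.io/example/memcached-operator:v0.0.1
# generate the bundle manifests and metadata for that image
make bundle IMG=quay.io/example/memcached-operator:v0.0.1
# build and push the bundle image
make bundle-build bundle-push BUNDLE_IMG=quay.io/example/memcached-operator-bundle:v0.0.1
# install the bundle with OLM in an assumed target namespace
operator-sdk run bundle -n my-operator-ns quay.io/example/memcached-operator-bundle:v0.0.1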
"operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3", "oc project memcached-operator-system", "apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3", "oc apply -f config/samples/cache_v1_memcached.yaml", "oc get deployments", "NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 8m memcached-sample 3/3 3 3 1m", "oc get pods", "NAME READY STATUS RESTARTS AGE memcached-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m memcached-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m memcached-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m", "oc get memcached/memcached-sample -o yaml", "apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: size: 3 status: nodes: - memcached-sample-6fd7c98d8-7dqdr - memcached-sample-6fd7c98d8-g5k7v - memcached-sample-6fd7c98d8-m7vn7", "oc patch memcached memcached-sample -p '{\"spec\":{\"size\": 5}}' --type=merge", "oc get deployments", "NAME READY UP-TO-DATE AVAILABLE AGE memcached-operator-controller-manager 1/1 1 1 10m memcached-sample 5/5 5 5 3m", "oc delete -f config/samples/cache_v1_memcached.yaml", "make undeploy", "operator-sdk cleanup <project_name>", "Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. OPERATOR_SDK_VERSION ?= v1.38.0 1", "FROM registry.redhat.io/openshift4/ose-ansible-operator:v4", "- curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v5.3.0/kustomize_v5.3.0_USD(OS)_USD(ARCH).tar.gz | + curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v5.4.2/kustomize_v5.4.2_USD(OS)_USD(ARCH).tar.gz | \\", "[PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'. #- ../prometheus + # [METRICS] Expose the controller manager metrics service. + - metrics_service.yaml + # Uncomment the patches line if you enable Metrics, and/or are using webhooks and cert-manager patches: - # Protect the /metrics endpoint by putting it behind auth. - # If you want your controller-manager to expose the /metrics - # endpoint w/o any authn/z, please comment the following line. - - path: manager_auth_proxy_patch.yaml + # [METRICS] The following patch will enable the metrics endpoint using HTTPS and the port :8443. 
+ # More info: https://book.kubebuilder.io/reference/metrics + - path: manager_metrics_patch.yaml + target: + kind: Deployment", "This patch adds the args to allow exposing the metrics endpoint using HTTPS - op: add path: /spec/template/spec/containers/0/args/0 value: --metrics-bind-address=:8443 This patch adds the args to allow securing the metrics endpoint - op: add path: /spec/template/spec/containers/0/args/0 value: --metrics-secure This patch adds the args to allow RBAC-based authn/authz the metrics endpoint - op: add path: /spec/template/spec/containers/0/args/0 value: --metrics-require-rbac", "apiVersion: v1 kind: Service metadata: labels: control-plane: controller-manager app.kubernetes.io/name: <operator-name> app.kubernetes.io/managed-by: kustomize name: controller-manager-metrics-service namespace: system spec: ports: - name: https port: 8443 protocol: TCP targetPort: 8443 selector: control-plane: controller-manager", "- --leader-elect + - --health-probe-bind-address=:6789", "- path: /metrics - port: https + port: https # Ensure this is the name of the port that exposes HTTPS metrics tlsConfig: + # TODO(user): The option insecureSkipVerify: true is not recommended for production since it disables + # certificate verification. This poses a significant security risk by making the system vulnerable to + # man-in-the-middle attacks, where an attacker could intercept and manipulate the communication between + # Prometheus and the monitored services. This could lead to unauthorized access to sensitive metrics data, + # compromising the integrity and confidentiality of the information. + # Please use the following options for secure configurations: + # caFile: /etc/metrics-certs/ca.crt + # certFile: /etc/metrics-certs/tls.crt + # keyFile: /etc/metrics-certs/tls.key insecureSkipVerify: true", "- leader_election_role_binding.yaml - # Comment the following 4 lines if you want to disable - # the auth proxy (https://github.com/brancz/kube-rbac-proxy) - # which protects your /metrics endpoint. - - auth_proxy_service.yaml - - auth_proxy_role.yaml - - auth_proxy_role_binding.yaml - - auth_proxy_client_clusterrole.yaml + # The following RBAC configurations are used to protect + # the metrics endpoint with authn/authz. These configurations + # ensure that only authorized users and service accounts + # can access the metrics endpoint. Comment the following + # permissions if you want to disable this protection. 
+ # More info: https://book.kubebuilder.io/reference/metrics.html + - metrics_auth_role.yaml + - metrics_auth_role_binding.yaml + - metrics_reader_role.yaml", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: metrics-auth-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: metrics-auth-role subjects: - kind: ServiceAccount name: controller-manager namespace: system", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: metrics-reader rules: - nonResourceURLs: - \"/metrics\" verbs: - get", "apiVersion: \"test1.example.com/v1alpha1\" kind: \"Test1\" metadata: name: \"example\" annotations: ansible.operator-sdk/reconcile-period: \"30s\"", "- version: v1alpha1 1 group: test1.example.com kind: Test1 role: /opt/ansible/roles/Test1 - version: v1alpha1 2 group: test2.example.com kind: Test2 playbook: /opt/ansible/playbook.yml - version: v1alpha1 3 group: test3.example.com kind: Test3 playbook: /opt/ansible/test3.yml reconcilePeriod: 0 manageStatus: false", "- version: v1alpha1 group: app.example.com kind: AppService playbook: /opt/ansible/playbook.yml maxRunnerArtifacts: 30 reconcilePeriod: 5s manageStatus: False watchDependentResources: False", "apiVersion: \"app.example.com/v1alpha1\" kind: \"Database\" metadata: name: \"example\" spec: message: \"Hello world 2\" newParameter: \"newParam\"", "{ \"meta\": { \"name\": \"<cr_name>\", \"namespace\": \"<cr_namespace>\", }, \"message\": \"Hello world 2\", \"new_parameter\": \"newParam\", \"_app_example_com_database\": { <full_crd> }, }", "--- - debug: msg: \"name: {{ ansible_operator_meta.name }}, {{ ansible_operator_meta.namespace }}\"", "sudo dnf install ansible", "pip install kubernetes", "ansible-galaxy collection install community.kubernetes", "ansible-galaxy collection install -r requirements.yml", "--- - name: set ConfigMap example-config to {{ state }} community.kubernetes.k8s: api_version: v1 kind: ConfigMap name: example-config namespace: <operator_namespace> 1 state: \"{{ state }}\" ignore_errors: true 2", "--- state: present", "--- - hosts: localhost roles: - <kind>", "ansible-playbook playbook.yml", "[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' PLAY [localhost] ******************************************************************************** TASK [Gathering Facts] ******************************************************************************** ok: [localhost] TASK [memcached : set ConfigMap example-config to present] ******************************************************************************** changed: [localhost] PLAY RECAP ******************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0", "oc get configmaps", "NAME DATA AGE example-config 0 2m1s", "ansible-playbook playbook.yml --extra-vars state=absent", "[WARNING]: provided hosts list is empty, only localhost is available. 
Note that the implicit localhost does not match 'all' PLAY [localhost] ******************************************************************************** TASK [Gathering Facts] ******************************************************************************** ok: [localhost] TASK [memcached : set ConfigMap example-config to absent] ******************************************************************************** changed: [localhost] PLAY RECAP ******************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0", "oc get configmaps", "apiVersion: \"test1.example.com/v1alpha1\" kind: \"Test1\" metadata: name: \"example\" annotations: ansible.operator-sdk/reconcile-period: \"30s\"", "make install", "/usr/bin/kustomize build config/crd | kubectl apply -f - customresourcedefinition.apiextensions.k8s.io/memcacheds.cache.example.com created", "make run", "/home/user/memcached-operator/bin/ansible-operator run {\"level\":\"info\",\"ts\":1612739145.2871568,\"logger\":\"cmd\",\"msg\":\"Version\",\"Go Version\":\"go1.15.5\",\"GOOS\":\"linux\",\"GOARCH\":\"amd64\",\"ansible-operator\":\"v1.10.1\",\"commit\":\"1abf57985b43bf6a59dcd18147b3c574fa57d3f6\"} {\"level\":\"info\",\"ts\":1612739148.347306,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\":8080\"} {\"level\":\"info\",\"ts\":1612739148.3488882,\"logger\":\"watches\",\"msg\":\"Environment variable not set; using default value\",\"envVar\":\"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM\",\"default\":2} {\"level\":\"info\",\"ts\":1612739148.3490262,\"logger\":\"cmd\",\"msg\":\"Environment variable not set; using default value\",\"Namespace\":\"\",\"envVar\":\"ANSIBLE_DEBUG_LOGS\",\"ANSIBLE_DEBUG_LOGS\":false} {\"level\":\"info\",\"ts\":1612739148.3490646,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612739148.350217,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} {\"level\":\"info\",\"ts\":1612739148.3506632,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} {\"level\":\"info\",\"ts\":1612739148.350784,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612739148.5511978,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612739148.5512562,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":8}", "apiVersion: <group>.example.com/v1alpha1 kind: <kind> metadata: name: \"<kind>-sample\"", "oc apply -f config/samples/<gvk>.yaml", "oc get configmaps", "NAME STATUS AGE example-config Active 3s", "apiVersion: cache.example.com/v1 kind: Memcached metadata: name: memcached-sample spec: state: absent", "oc apply -f config/samples/<gvk>.yaml", "oc get configmap", "make docker-build IMG=<registry>/<user>/<image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<image_name>:<tag>", "make deploy IMG=<registry>/<user>/<image_name>:<tag>", "oc get deployment -n <project_name>-system", "NAME READY UP-TO-DATE AVAILABLE AGE <project_name>-controller-manager 1/1 1 1 8m", "oc logs 
deployment/<project_name>-controller-manager -c manager \\ 1 -n <namespace> 2", "{\"level\":\"info\",\"ts\":1612732105.0579333,\"logger\":\"cmd\",\"msg\":\"Version\",\"Go Version\":\"go1.15.5\",\"GOOS\":\"linux\",\"GOARCH\":\"amd64\",\"ansible-operator\":\"v1.10.1\",\"commit\":\"1abf57985b43bf6a59dcd18147b3c574fa57d3f6\"} {\"level\":\"info\",\"ts\":1612732105.0587437,\"logger\":\"cmd\",\"msg\":\"WATCH_NAMESPACE environment variable not set. Watching all namespaces.\",\"Namespace\":\"\"} I0207 21:08:26.110949 7 request.go:645] Throttling request took 1.035521578s, request: GET:https://172.30.0.1:443/apis/flowcontrol.apiserver.k8s.io/v1alpha1?timeout=32s {\"level\":\"info\",\"ts\":1612732107.768025,\"logger\":\"controller-runtime.metrics\",\"msg\":\"metrics server is starting to listen\",\"addr\":\"127.0.0.1:8080\"} {\"level\":\"info\",\"ts\":1612732107.768796,\"logger\":\"watches\",\"msg\":\"Environment variable not set; using default value\",\"envVar\":\"ANSIBLE_VERBOSITY_MEMCACHED_CACHE_EXAMPLE_COM\",\"default\":2} {\"level\":\"info\",\"ts\":1612732107.7688773,\"logger\":\"cmd\",\"msg\":\"Environment variable not set; using default value\",\"Namespace\":\"\",\"envVar\":\"ANSIBLE_DEBUG_LOGS\",\"ANSIBLE_DEBUG_LOGS\":false} {\"level\":\"info\",\"ts\":1612732107.7688901,\"logger\":\"ansible-controller\",\"msg\":\"Watching resource\",\"Options.Group\":\"cache.example.com\",\"Options.Version\":\"v1\",\"Options.Kind\":\"Memcached\"} {\"level\":\"info\",\"ts\":1612732107.770032,\"logger\":\"proxy\",\"msg\":\"Starting to serve\",\"Address\":\"127.0.0.1:8888\"} I0207 21:08:27.770185 7 leaderelection.go:243] attempting to acquire leader lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612732107.770202,\"logger\":\"controller-runtime.manager\",\"msg\":\"starting metrics server\",\"path\":\"/metrics\"} I0207 21:08:27.784854 7 leaderelection.go:253] successfully acquired lease memcached-operator-system/memcached-operator {\"level\":\"info\",\"ts\":1612732107.7850506,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting EventSource\",\"source\":\"kind source: cache.example.com/v1, Kind=Memcached\"} {\"level\":\"info\",\"ts\":1612732107.8853772,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting Controller\"} {\"level\":\"info\",\"ts\":1612732107.8854098,\"logger\":\"controller-runtime.manager.controller.memcached-controller\",\"msg\":\"Starting workers\",\"worker count\":4}", "containers: - name: manager env: - name: ANSIBLE_DEBUG_LOGS value: \"True\"", "apiVersion: \"cache.example.com/v1alpha1\" kind: \"Memcached\" metadata: name: \"example-memcached\" annotations: \"ansible.sdk.operatorframework.io/verbosity\": \"4\" spec: size: 4", "status: conditions: - ansibleResult: changed: 3 completion: 2018-12-03T13:45:57.13329 failures: 1 ok: 6 skipped: 0 lastTransitionTime: 2018-12-03T13:45:57Z message: 'Status code was -1 and not [200]: Request failed: <urlopen error [Errno 113] No route to host>' reason: Failed status: \"True\" type: Failure - lastTransitionTime: 2018-12-03T13:46:13Z message: Running reconciliation reason: Running status: \"True\" type: Running", "- version: v1 group: api.example.com kind: <kind> role: <role> manageStatus: false", "- operator_sdk.util.k8s_status: api_version: app.example.com/v1 kind: <kind> name: \"{{ ansible_operator_meta.name }}\" namespace: \"{{ ansible_operator_meta.namespace }}\" status: test: data", "collections: - operator_sdk.util", "k8s_status: status: 
key1: value1", "mkdir -p USDHOME/projects/nginx-operator", "cd USDHOME/projects/nginx-operator", "operator-sdk init --plugins=helm --domain=example.com --group=demo --version=v1 --kind=Nginx", "operator-sdk init --plugins helm --help", "domain: example.com layout: - helm.sdk.operatorframework.io/v1 plugins: manifests.sdk.operatorframework.io/v2: {} scorecard.sdk.operatorframework.io/v2: {} sdk.x-openshift.io/v1: {} projectName: nginx-operator resources: - api: crdVersion: v1 namespaced: true domain: example.com group: demo kind: Nginx version: v1 version: \"3\"", "Use the 'create api' subcommand to add watches to this file. - group: demo version: v1 kind: Nginx chart: helm-charts/nginx +kubebuilder:scaffold:watch", "apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 2", "apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 2 service: port: 8080", "- group: demo.example.com version: v1alpha1 kind: Nginx chart: helm-charts/nginx overrideValues: proxy.http: USDHTTP_PROXY", "proxy: http: \"\" https: \"\" no_proxy: \"\"", "containers: - name: {{ .Chart.Name }} securityContext: - toYaml {{ .Values.securityContext | nindent 12 }} image: \"{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}\" imagePullPolicy: {{ .Values.image.pullPolicy }} env: - name: http_proxy value: \"{{ .Values.proxy.http }}\"", "containers: - args: - --leader-elect - --leader-election-id=ansible-proxy-demo image: controller:latest name: manager env: - name: \"HTTP_PROXY\" value: \"http_proxy_test\"", "make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>", "docker push <registry>/<user>/<bundle_image_name>:<tag>", "operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3", "oc project nginx-operator-system", "apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 3", "oc adm policy add-scc-to-user anyuid system:serviceaccount:nginx-operator-system:nginx-sample", "oc apply -f config/samples/demo_v1_nginx.yaml", "oc get deployments", "NAME READY UP-TO-DATE AVAILABLE AGE nginx-operator-controller-manager 1/1 1 1 8m nginx-sample 3/3 3 3 1m", "oc get pods", "NAME READY STATUS RESTARTS AGE nginx-sample-6fd7c98d8-7dqdr 1/1 Running 0 1m nginx-sample-6fd7c98d8-g5k7v 1/1 Running 0 1m nginx-sample-6fd7c98d8-m7vn7 1/1 Running 0 1m", "oc get nginx/nginx-sample -o yaml", "apiVersion: demo.example.com/v1 kind: Nginx metadata: name: nginx-sample spec: replicaCount: 3 status: nodes: - nginx-sample-6fd7c98d8-7dqdr - nginx-sample-6fd7c98d8-g5k7v - nginx-sample-6fd7c98d8-m7vn7", "oc patch nginx nginx-sample -p '{\"spec\":{\"replicaCount\": 5}}' --type=merge", "oc get deployments", "NAME READY UP-TO-DATE AVAILABLE AGE nginx-operator-controller-manager 1/1 1 1 10m nginx-sample 5/5 5 5 3m", "oc delete -f config/samples/demo_v1_nginx.yaml", "make undeploy", "operator-sdk cleanup <project_name>", "Set the Operator SDK version to use. By default, what is installed on the system is used. This is useful for CI or a project to utilize a specific version of the operator-sdk toolkit. 
OPERATOR_SDK_VERSION ?= v1.38.0 1", "FROM registry.redhat.io/openshift4/ose-helm-rhel9-operator:v4", "- curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v5.3.0/kustomize_v5.3.0_USD(OS)_USD(ARCH).tar.gz | + curl -sSLo - https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v5.4.2/kustomize_v5.4.2_USD(OS)_USD(ARCH).tar.gz | \\", "[PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'. #- ../prometheus + # [METRICS] Expose the controller manager metrics service. + - metrics_service.yaml + # Uncomment the patches line if you enable Metrics, and/or are using webhooks and cert-manager patches: - # Protect the /metrics endpoint by putting it behind auth. - # If you want your controller-manager to expose the /metrics - # endpoint w/o any authn/z, please comment the following line. - - path: manager_auth_proxy_patch.yaml + # [METRICS] The following patch will enable the metrics endpoint using HTTPS and the port :8443. + # More info: https://book.kubebuilder.io/reference/metrics + - path: manager_metrics_patch.yaml + target: + kind: Deployment", "This patch adds the args to allow exposing the metrics endpoint using HTTPS - op: add path: /spec/template/spec/containers/0/args/0 value: --metrics-bind-address=:8443 This patch adds the args to allow securing the metrics endpoint - op: add path: /spec/template/spec/containers/0/args/0 value: --metrics-secure This patch adds the args to allow RBAC-based authn/authz the metrics endpoint - op: add path: /spec/template/spec/containers/0/args/0 value: --metrics-require-rbac", "apiVersion: v1 kind: Service metadata: labels: control-plane: controller-manager app.kubernetes.io/name: <operator-name> app.kubernetes.io/managed-by: kustomize name: controller-manager-metrics-service namespace: system spec: ports: - name: https port: 8443 protocol: TCP targetPort: 8443 selector: control-plane: controller-manager", "- --leader-elect + - --health-probe-bind-address=:8081", "- path: /metrics - port: https + port: https # Ensure this is the name of the port that exposes HTTPS metrics tlsConfig: + # TODO(user): The option insecureSkipVerify: true is not recommended for production since it disables + # certificate verification. This poses a significant security risk by making the system vulnerable to + # man-in-the-middle attacks, where an attacker could intercept and manipulate the communication between + # Prometheus and the monitored services. This could lead to unauthorized access to sensitive metrics data, + # compromising the integrity and confidentiality of the information. + # Please use the following options for secure configurations: + # caFile: /etc/metrics-certs/ca.crt + # certFile: /etc/metrics-certs/tls.crt + # keyFile: /etc/metrics-certs/tls.key insecureSkipVerify: true", "- leader_election_role_binding.yaml - # Comment the following 4 lines if you want to disable - # the auth proxy (https://github.com/brancz/kube-rbac-proxy) - # which protects your /metrics endpoint. - - auth_proxy_service.yaml - - auth_proxy_role.yaml - - auth_proxy_role_binding.yaml - - auth_proxy_client_clusterrole.yaml + # The following RBAC configurations are used to protect + # the metrics endpoint with authn/authz. These configurations + # ensure that only authorized users and service accounts + # can access the metrics endpoint. Comment the following + # permissions if you want to disable this protection. 
+ # More info: https://book.kubebuilder.io/reference/metrics.html + - metrics_auth_role.yaml + - metrics_auth_role_binding.yaml + - metrics_reader_role.yaml", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: metrics-auth-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: metrics-auth-role subjects: - kind: ServiceAccount name: controller-manager namespace: system", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: metrics-reader rules: - nonResourceURLs: - \"/metrics\" verbs: - get", "apiVersion: apache.org/v1alpha1 kind: Tomcat metadata: name: example-app spec: replicaCount: 2", "{{ .Values.replicaCount }}", "oc get Tomcats --all-namespaces", "apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: features.operators.openshift.io/disconnected: \"true\" features.operators.openshift.io/fips-compliant: \"false\" features.operators.openshift.io/proxy-aware: \"false\" features.operators.openshift.io/tls-profiles: \"false\" features.operators.openshift.io/token-auth-aws: \"false\" features.operators.openshift.io/token-auth-azure: \"false\" features.operators.openshift.io/token-auth-gcp: \"false\"", "apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: operators.openshift.io/infrastructure-features: '[\"disconnected\", \"proxy-aware\"]'", "apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: operators.openshift.io/valid-subscription: '[\"OpenShift Container Platform\"]'", "apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: operators.openshift.io/valid-subscription: '[\"3Scale Commercial License\", \"Red Hat Managed Integration\"]'", "spec: spec: containers: - command: - /manager env: - name: <related_image_environment_variable> 1 value: \"<related_image_reference_with_tag>\" 2", "// deploymentForMemcached returns a memcached Deployment object Spec: corev1.PodSpec{ Containers: []corev1.Container{{ - Image: \"memcached:1.4.36-alpine\", 1 + Image: os.Getenv(\"<related_image_environment_variable>\"), 2 Name: \"memcached\", Command: []string{\"memcached\", \"-m=64\", \"-o\", \"modern\", \"-v\"}, Ports: []corev1.ContainerPort{{", "spec: containers: - name: memcached command: - memcached - -m=64 - -o - modern - -v - image: \"docker.io/memcached:1.4.36-alpine\" 1 + image: \"{{ lookup('env', '<related_image_environment_variable>') }}\" 2 ports: - containerPort: 11211", "- group: demo.example.com version: v1alpha1 kind: Memcached chart: helm-charts/memcached overrideValues: 1 relatedImage: USD{<related_image_environment_variable>} 2", "relatedImage: \"\"", "containers: - name: {{ .Chart.Name }} securityContext: - toYaml {{ .Values.securityContext | nindent 12 }} image: \"{{ .Values.image.pullPolicy }} env: 1 - name: related_image 2 value: \"{{ .Values.relatedImage }}\" 3", "BUNDLE_GEN_FLAGS ?= -q --overwrite --version USD(VERSION) USD(BUNDLE_METADATA_OPTS) # USE_IMAGE_DIGESTS defines if images are resolved via tags or digests # You can enable this value if you would like to use SHA Based Digests # To enable set flag to true USE_IMAGE_DIGESTS ?= false ifeq (USD(USE_IMAGE_DIGESTS), true) BUNDLE_GEN_FLAGS += --use-image-digests endif - USD(KUSTOMIZE) build config/manifests | operator-sdk generate bundle -q --overwrite --version USD(VERSION) USD(BUNDLE_METADATA_OPTS) 1 + USD(KUSTOMIZE) build config/manifests | operator-sdk generate bundle USD(BUNDLE_GEN_FLAGS) 2", 
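The <related_image_environment_variable> placeholders shown earlier in this sequence can be made concrete as in the following sketch; the RELATED_IMAGE_MEMCACHED variable name, the file path, and the digest-pinned memcached reference are assumptions for illustration, not values from this document:

# Excerpt of the manager Deployment (typically config/manager/manager.yaml) with hypothetical values
spec:
  template:
    spec:
      containers:
      - command:
        - /manager
        env:
        - name: RELATED_IMAGE_MEMCACHED                         # assumed variable name
          value: "docker.io/library/memcached@sha256:<digest>"  # digest-pinned reference; <digest> left as a placeholder

Running make bundle USE_IMAGE_DIGESTS=true, as shown in the next entry, is intended to pin the image references in the generated CSV to digests.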
"make bundle USE_IMAGE_DIGESTS=true", "metadata: annotations: operators.openshift.io/infrastructure-features: '[\"disconnected\"]'", "labels: operatorframework.io/arch.<arch>: supported 1 operatorframework.io/os.<os>: supported 2", "labels: operatorframework.io/os.linux: supported", "labels: operatorframework.io/arch.amd64: supported", "labels: operatorframework.io/arch.s390x: supported operatorframework.io/os.zos: supported operatorframework.io/os.linux: supported 1 operatorframework.io/arch.amd64: supported 2", "metadata: annotations: operatorframework.io/suggested-namespace: <namespace> 1", "metadata: annotations: operatorframework.io/suggested-namespace-template: 1 { \"apiVersion\": \"v1\", \"kind\": \"Namespace\", \"metadata\": { \"name\": \"vertical-pod-autoscaler-suggested-template\", \"annotations\": { \"openshift.io/node-selector\": \"\" } } }", "module github.com/example-inc/memcached-operator go 1.19 require ( k8s.io/apimachinery v0.26.0 k8s.io/client-go v0.26.0 sigs.k8s.io/controller-runtime v0.14.1 operator-framework/operator-lib v0.11.0 )", "import ( apiv1 \"github.com/operator-framework/api/pkg/operators/v1\" ) func NewUpgradeable(cl client.Client) (Condition, error) { return NewCondition(cl, \"apiv1.OperatorUpgradeable\") } cond, err := NewUpgradeable(cl);", "apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: webhook-operator.v0.0.1 spec: customresourcedefinitions: owned: - kind: WebhookTest name: webhooktests.webhook.operators.coreos.io 1 version: v1 install: spec: deployments: - name: webhook-operator-webhook strategy: deployment installModes: - supported: false type: OwnNamespace - supported: false type: SingleNamespace - supported: false type: MultiNamespace - supported: true type: AllNamespaces webhookdefinitions: - type: ValidatingAdmissionWebhook 2 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook failurePolicy: Fail generateName: vwebhooktest.kb.io rules: - apiGroups: - webhook.operators.coreos.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - webhooktests sideEffects: None webhookPath: /validate-webhook-operators-coreos-io-v1-webhooktest - type: MutatingAdmissionWebhook 3 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook failurePolicy: Fail generateName: mwebhooktest.kb.io rules: - apiGroups: - webhook.operators.coreos.io apiVersions: - v1 operations: - CREATE - UPDATE resources: - webhooktests sideEffects: None webhookPath: /mutate-webhook-operators-coreos-io-v1-webhooktest - type: ConversionWebhook 4 admissionReviewVersions: - v1beta1 - v1 containerPort: 443 targetPort: 4343 deploymentName: webhook-operator-webhook generateName: cwebhooktest.kb.io sideEffects: None webhookPath: /convert conversionCRDs: - webhooktests.webhook.operators.coreos.io 5", "- displayName: MongoDB Standalone group: mongodb.com kind: MongoDbStandalone name: mongodbstandalones.mongodb.com resources: - kind: Service name: '' version: v1 - kind: StatefulSet name: '' version: v1beta2 - kind: Pod name: '' version: v1 - kind: ConfigMap name: '' version: v1 specDescriptors: - description: Credentials for Ops Manager or Cloud Manager. displayName: Credentials path: credentials x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:Secret' - description: Project this deployment belongs to. 
displayName: Project path: project x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:selector:core:v1:ConfigMap' - description: MongoDB version to be installed. displayName: Version path: version x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:label' statusDescriptors: - description: The status of each of the pods for the MongoDB cluster. displayName: Pod Status path: pods x-descriptors: - 'urn:alm:descriptor:com.tectonic.ui:podStatuses' version: v1 description: >- MongoDB Deployment consisting of only one host. No replication of data.", "required: - name: etcdclusters.etcd.database.coreos.com version: v1beta2 kind: EtcdCluster displayName: etcd Cluster description: Represents a cluster of etcd nodes.", "versions: - name: v1alpha1 served: true storage: false - name: v1beta1 1 served: true storage: true", "customresourcedefinitions: owned: - name: cluster.example.com version: v1beta1 1 kind: cluster displayName: Cluster", "versions: - name: v1alpha1 served: false 1 storage: true", "versions: - name: v1alpha1 served: false storage: false 1 - name: v1beta1 served: true storage: true 2", "versions: - name: v1beta1 served: true storage: true", "metadata: annotations: alm-examples: >- [{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdCluster\",\"metadata\":{\"name\":\"example\",\"namespace\":\"<operator_namespace>\"},\"spec\":{\"size\":3,\"version\":\"3.2.13\"}},{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdRestore\",\"metadata\":{\"name\":\"example-etcd-cluster\"},\"spec\":{\"etcdCluster\":{\"name\":\"example-etcd-cluster\"},\"backupStorageType\":\"S3\",\"s3\":{\"path\":\"<full-s3-path>\",\"awsSecret\":\"<aws-secret>\"}}},{\"apiVersion\":\"etcd.database.coreos.com/v1beta2\",\"kind\":\"EtcdBackup\",\"metadata\":{\"name\":\"example-etcd-cluster-backup\"},\"spec\":{\"etcdEndpoints\":[\"<etcd-cluster-endpoints>\"],\"storageType\":\"S3\",\"s3\":{\"path\":\"<full-s3-path>\",\"awsSecret\":\"<aws-secret>\"}}}]", "apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: my-operator-v1.2.3 annotations: operators.operatorframework.io/internal-objects: '[\"my.internal.crd1.io\",\"my.internal.crd2.io\"]' 1", "apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: my-operator-v1.2.3 annotations: operatorframework.io/initialization-resource: |- { \"apiVersion\": \"ocs.openshift.io/v1\", \"kind\": \"StorageCluster\", \"metadata\": { \"name\": \"example-storagecluster\" }, \"spec\": { \"manageNodes\": false, \"monPVCTemplate\": { \"spec\": { \"accessModes\": [ \"ReadWriteOnce\" ], \"resources\": { \"requests\": { \"storage\": \"10Gi\" } }, \"storageClassName\": \"gp2\" } }, \"storageDeviceSets\": [ { \"count\": 3, \"dataPVCTemplate\": { \"spec\": { \"accessModes\": [ \"ReadWriteOnce\" ], \"resources\": { \"requests\": { \"storage\": \"1Ti\" } }, \"storageClassName\": \"gp2\", \"volumeMode\": \"Block\" } }, \"name\": \"example-deviceset\", \"placement\": {}, \"portable\": true, \"resources\": {} } ] } }", "make docker-build IMG=<registry>/<user>/<operator_image_name>:<tag>", "make docker-push IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle IMG=<registry>/<user>/<operator_image_name>:<tag>", "make bundle-build BUNDLE_IMG=<registry>/<user>/<bundle_image_name>:<tag>", "docker push <registry>/<user>/<bundle_image_name>:<tag>", "operator-sdk run bundle \\ 1 -n <namespace> \\ 2 <registry>/<user>/<bundle_image_name>:<tag> 3", "make catalog-build 
CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>", "make catalog-push CATALOG_IMG=<registry>/<user>/<index_image_name>:<tag>", "make bundle-build bundle-push catalog-build catalog-push BUNDLE_IMG=<bundle_image_pull_spec> CATALOG_IMG=<index_image_pull_spec>", "IMAGE_TAG_BASE=quay.io/example/my-operator", "make bundle-build bundle-push catalog-build catalog-push", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: cs-memcached namespace: <operator_namespace> spec: displayName: My Test publisher: Company sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 1 image: quay.io/example/memcached-catalog:v0.0.1 2 updateStrategy: registryPoll: interval: 10m", "oc get catalogsource", "NAME DISPLAY TYPE PUBLISHER AGE cs-memcached My Test grpc Company 4h31m", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-test namespace: <operator_namespace> spec: targetNamespaces: - <operator_namespace>", "\\ufeffapiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: catalogtest namespace: <catalog_namespace> spec: channel: \"alpha\" installPlanApproval: Manual name: catalog source: cs-memcached sourceNamespace: <operator_namespace> startingCSV: memcached-operator.v0.0.1", "oc get og", "NAME AGE my-test 4h40m", "oc get csv", "NAME DISPLAY VERSION REPLACES PHASE memcached-operator.v0.0.1 Test 0.0.1 Succeeded", "oc get pods", "NAME READY STATUS RESTARTS AGE 9098d908802769fbde8bd45255e69710a9f8420a8f3d814abe88b68f8ervdj6 0/1 Completed 0 4h33m catalog-controller-manager-7fd5b7b987-69s4n 2/2 Running 0 4h32m cs-memcached-7622r 1/1 Running 0 4h33m", "operator-sdk run bundle <registry>/<user>/memcached-operator:v0.0.1", "INFO[0006] Creating a File-Based Catalog of the bundle \"quay.io/demo/memcached-operator:v0.0.1\" INFO[0008] Generated a valid File-Based Catalog INFO[0012] Created registry pod: quay-io-demo-memcached-operator-v1-0-1 INFO[0012] Created CatalogSource: memcached-operator-catalog INFO[0012] OperatorGroup \"operator-sdk-og\" created INFO[0012] Created Subscription: memcached-operator-v0-0-1-sub INFO[0015] Approved InstallPlan install-h9666 for the Subscription: memcached-operator-v0-0-1-sub INFO[0015] Waiting for ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" to reach 'Succeeded' phase INFO[0015] Waiting for ClusterServiceVersion \"\"my-project/memcached-operator.v0.0.1\" to appear INFO[0026] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Pending INFO[0028] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Installing INFO[0059] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.1\" phase: Succeeded INFO[0059] OLM has successfully installed \"memcached-operator.v0.0.1\"", "operator-sdk run bundle-upgrade <registry>/<user>/memcached-operator:v0.0.2", "INFO[0002] Found existing subscription with name memcached-operator-v0-0-1-sub and namespace my-project INFO[0002] Found existing catalog source with name memcached-operator-catalog and namespace my-project INFO[0008] Generated a valid Upgraded File-Based Catalog INFO[0009] Created registry pod: quay-io-demo-memcached-operator-v0-0-2 INFO[0009] Updated catalog source memcached-operator-catalog with address and annotations INFO[0010] Deleted previous registry pod with name \"quay-io-demo-memcached-operator-v0-0-1\" INFO[0041] Approved InstallPlan install-gvcjh for the Subscription: memcached-operator-v0-0-1-sub INFO[0042] Waiting for ClusterServiceVersion 
\"my-project/memcached-operator.v0.0.2\" to reach 'Succeeded' phase INFO[0019] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Pending INFO[0042] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: InstallReady INFO[0043] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Installing INFO[0044] Found ClusterServiceVersion \"my-project/memcached-operator.v0.0.2\" phase: Succeeded INFO[0044] Successfully upgraded to \"memcached-operator.v0.0.2\"", "operator-sdk cleanup memcached-operator", "apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: annotations: \"olm.properties\": '[{\"type\": \"olm.maxOpenShiftVersion\", \"value\": \"<cluster_version>\"}]' 1", "com.redhat.openshift.versions: \"v4.7-v4.9\" 1", "LABEL com.redhat.openshift.versions=\"<versions>\" 1", "spec: securityContext: seccompProfile: type: RuntimeDefault 1 runAsNonRoot: true containers: - name: <operator_workload_container> securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL", "spec: securityContext: 1 runAsNonRoot: true containers: - name: <operator_workload_container> securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL", "containers: - name: my-container securityContext: allowPrivilegeEscalation: false capabilities: add: - \"NET_ADMIN\"", "install: spec: clusterPermissions: - rules: - apiGroups: - security.openshift.io resourceNames: - privileged resources: - securitycontextconstraints verbs: - use serviceAccountName: default", "spec: apiservicedefinitions:{} description: The <operator_name> requires a privileged pod security admission label set on the Operator's namespace. The Operator's agents require escalated permissions to restart the node if the node needs remediation.", "operator-sdk scorecard <bundle_dir_or_image> [flags]", "operator-sdk scorecard -h", "./bundle └── tests └── scorecard └── config.yaml", "kind: Configuration apiversion: scorecard.operatorframework.io/v1alpha3 metadata: name: config stages: - parallel: true tests: - image: quay.io/operator-framework/scorecard-test:v1.38.0 entrypoint: - scorecard-test - basic-check-spec labels: suite: basic test: basic-check-spec-test - image: quay.io/operator-framework/scorecard-test:v1.38.0 entrypoint: - scorecard-test - olm-bundle-validation labels: suite: olm test: olm-bundle-validation-test", "make bundle", "operator-sdk scorecard <bundle_dir_or_image>", "{ \"apiVersion\": \"scorecard.operatorframework.io/v1alpha3\", \"kind\": \"TestList\", \"items\": [ { \"kind\": \"Test\", \"apiVersion\": \"scorecard.operatorframework.io/v1alpha3\", \"spec\": { \"image\": \"quay.io/operator-framework/scorecard-test:v1.38.0\", \"entrypoint\": [ \"scorecard-test\", \"olm-bundle-validation\" ], \"labels\": { \"suite\": \"olm\", \"test\": \"olm-bundle-validation-test\" } }, \"status\": { \"results\": [ { \"name\": \"olm-bundle-validation\", \"log\": \"time=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Found manifests directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Found metadata directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=debug msg=\\\"Getting mediaType info from manifests directory\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=info msg=\\\"Found annotations file\\\" name=bundle-test\\ntime=\\\"2020-06-10T19:02:49Z\\\" level=info msg=\\\"Could not find optional dependencies file\\\" name=bundle-test\\n\", \"state\": \"pass\" } ] } } ] }", 
"-------------------------------------------------------------------------------- Image: quay.io/operator-framework/scorecard-test:v1.38.0 Entrypoint: [scorecard-test olm-bundle-validation] Labels: \"suite\":\"olm\" \"test\":\"olm-bundle-validation-test\" Results: Name: olm-bundle-validation State: pass Log: time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Found manifests directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Found metadata directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=debug msg=\"Getting mediaType info from manifests directory\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=info msg=\"Found annotations file\" name=bundle-test time=\"2020-07-15T03:19:02Z\" level=info msg=\"Could not find optional dependencies file\" name=bundle-test", "operator-sdk scorecard <bundle_dir_or_image> -o text --selector=test=basic-check-spec-test", "operator-sdk scorecard <bundle_dir_or_image> -o text --selector=suite=olm", "operator-sdk scorecard <bundle_dir_or_image> -o text --selector='test in (basic-check-spec-test,olm-bundle-validation-test)'", "apiVersion: scorecard.operatorframework.io/v1alpha3 kind: Configuration metadata: name: config stages: - parallel: true 1 tests: - entrypoint: - scorecard-test - basic-check-spec image: quay.io/operator-framework/scorecard-test:v1.38.0 labels: suite: basic test: basic-check-spec-test - entrypoint: - scorecard-test - olm-bundle-validation image: quay.io/operator-framework/scorecard-test:v1.38.0 labels: suite: olm test: olm-bundle-validation-test", "// Copyright 2020 The Operator-SDK Authors // // Licensed under the Apache License, Version 2.0 (the \"License\"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an \"AS IS\" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. package main import ( \"encoding/json\" \"fmt\" \"log\" \"os\" scapiv1alpha3 \"github.com/operator-framework/api/pkg/apis/scorecard/v1alpha3\" apimanifests \"github.com/operator-framework/api/pkg/manifests\" ) // This is the custom scorecard test example binary // As with the Redhat scorecard test image, the bundle that is under // test is expected to be mounted so that tests can inspect the // bundle contents as part of their test implementations. // The actual test is to be run is named and that name is passed // as an argument to this binary. This argument mechanism allows // this binary to run various tests all from within a single // test image. const PodBundleRoot = \"/bundle\" func main() { entrypoint := os.Args[1:] if len(entrypoint) == 0 { log.Fatal(\"Test name argument is required\") } // Read the pod's untar'd bundle from a well-known path. cfg, err := apimanifests.GetBundleFromDir(PodBundleRoot) if err != nil { log.Fatal(err.Error()) } var result scapiv1alpha3.TestStatus // Names of the custom tests which would be passed in the // `operator-sdk` command. switch entrypoint[0] { case CustomTest1Name: result = CustomTest1(cfg) case CustomTest2Name: result = CustomTest2(cfg) default: result = printValidTests() } // Convert scapiv1alpha3.TestResult to json. 
prettyJSON, err := json.MarshalIndent(result, \"\", \" \") if err != nil { log.Fatal(\"Failed to generate json\", err) } fmt.Printf(\"%s\\n\", string(prettyJSON)) } // printValidTests will print out full list of test names to give a hint to the end user on what the valid tests are. func printValidTests() scapiv1alpha3.TestStatus { result := scapiv1alpha3.TestResult{} result.State = scapiv1alpha3.FailState result.Errors = make([]string, 0) result.Suggestions = make([]string, 0) str := fmt.Sprintf(\"Valid tests for this image include: %s %s\", CustomTest1Name, CustomTest2Name) result.Errors = append(result.Errors, str) return scapiv1alpha3.TestStatus{ Results: []scapiv1alpha3.TestResult{result}, } } const ( CustomTest1Name = \"customtest1\" CustomTest2Name = \"customtest2\" ) // Define any operator specific custom tests here. // CustomTest1 and CustomTest2 are example test functions. Relevant operator specific // test logic is to be implemented in similarly. func CustomTest1(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus { r := scapiv1alpha3.TestResult{} r.Name = CustomTest1Name r.State = scapiv1alpha3.PassState r.Errors = make([]string, 0) r.Suggestions = make([]string, 0) almExamples := bundle.CSV.GetAnnotations()[\"alm-examples\"] if almExamples == \"\" { fmt.Println(\"no alm-examples in the bundle CSV\") } return wrapResult(r) } func CustomTest2(bundle *apimanifests.Bundle) scapiv1alpha3.TestStatus { r := scapiv1alpha3.TestResult{} r.Name = CustomTest2Name r.State = scapiv1alpha3.PassState r.Errors = make([]string, 0) r.Suggestions = make([]string, 0) almExamples := bundle.CSV.GetAnnotations()[\"alm-examples\"] if almExamples == \"\" { fmt.Println(\"no alm-examples in the bundle CSV\") } return wrapResult(r) } func wrapResult(r scapiv1alpha3.TestResult) scapiv1alpha3.TestStatus { return scapiv1alpha3.TestStatus{ Results: []scapiv1alpha3.TestResult{r}, } }", "operator-sdk bundle validate <bundle_dir_or_image> <flags>", "./bundle β”œβ”€β”€ manifests β”‚ β”œβ”€β”€ cache.my.domain_memcacheds.yaml β”‚ └── memcached-operator.clusterserviceversion.yaml └── metadata └── annotations.yaml", "INFO[0000] All validation tests have completed successfully", "ERRO[0000] Error: Value cache.example.com/v1alpha1, Kind=Memcached: CRD \"cache.example.com/v1alpha1, Kind=Memcached\" is present in bundle \"\" but not defined in CSV", "WARN[0000] Warning: Value : (memcached-operator.v0.0.1) annotations not found INFO[0000] All validation tests have completed successfully", "operator-sdk bundle validate -h", "operator-sdk bundle validate <bundle_dir_or_image> --select-optional <test_label>", "operator-sdk bundle validate ./bundle", "operator-sdk bundle validate <bundle_registry>/<bundle_image_name>:<tag>", "operator-sdk bundle validate <bundle_dir_or_image> --select-optional <test_label>", "ERRO[0000] Error: Value apiextensions.k8s.io/v1, Kind=CustomResource: unsupported media type registry+v1 for bundle object WARN[0000] Warning: Value k8sevent.v0.0.1: owned CRD \"k8sevents.k8s.k8sevent.com\" has an empty description", "// Simple query nn := types.NamespacedName{ Name: \"cluster\", } infraConfig := &configv1.Infrastructure{} err = crClient.Get(context.Background(), nn, infraConfig) if err != nil { return err } fmt.Printf(\"using crclient: %v\\n\", infraConfig.Status.ControlPlaneTopology) fmt.Printf(\"using crclient: %v\\n\", infraConfig.Status.InfrastructureTopology)", "operatorConfigInformer := configinformer.NewSharedInformerFactoryWithOptions(configClient, 2*time.Second) infrastructureLister = 
operatorConfigInformer.Config().V1().Infrastructures().Lister() infraConfig, err := configClient.ConfigV1().Infrastructures().Get(context.Background(), \"cluster\", metav1.GetOptions{}) if err != nil { return err } // fmt.Printf(\"%v\\n\", infraConfig) fmt.Printf(\"%v\\n\", infraConfig.Status.ControlPlaneTopology) fmt.Printf(\"%v\\n\", infraConfig.Status.InfrastructureTopology)", "import ( \"github.com/operator-framework/operator-sdk/pkg/leader\" ) func main() { err = leader.Become(context.TODO(), \"memcached-operator-lock\") if err != nil { log.Error(err, \"Failed to retry for leader lock\") os.Exit(1) } }", "import ( \"sigs.k8s.io/controller-runtime/pkg/manager\" ) func main() { opts := manager.Options{ LeaderElection: true, LeaderElectionID: \"memcached-operator-lock\" } mgr, err := manager.New(cfg, opts) }", "cfg = Config{ log: logf.Log.WithName(\"prune\"), DryRun: false, Clientset: client, LabelSelector: \"app=<operator_name>\", Resources: []schema.GroupVersionKind{ {Group: \"\", Version: \"\", Kind: PodKind}, }, Namespaces: []string{\"<operator_namespace>\"}, Strategy: StrategyConfig{ Mode: MaxCountStrategy, MaxCountSetting: 1, }, PreDeleteHook: myhook, }", "err := cfg.Execute(ctx)", "packagemanifests/ └── etcd β”œβ”€β”€ 0.0.1 β”‚ β”œβ”€β”€ etcdcluster.crd.yaml β”‚ └── etcdoperator.clusterserviceversion.yaml β”œβ”€β”€ 0.0.2 β”‚ β”œβ”€β”€ etcdbackup.crd.yaml β”‚ β”œβ”€β”€ etcdcluster.crd.yaml β”‚ β”œβ”€β”€ etcdoperator.v0.0.2.clusterserviceversion.yaml β”‚ └── etcdrestore.crd.yaml └── etcd.package.yaml", "bundle/ β”œβ”€β”€ bundle-0.0.1 β”‚ β”œβ”€β”€ bundle.Dockerfile β”‚ β”œβ”€β”€ manifests β”‚ β”‚ β”œβ”€β”€ etcdcluster.crd.yaml β”‚ β”‚ β”œβ”€β”€ etcdoperator.clusterserviceversion.yaml β”‚ β”œβ”€β”€ metadata β”‚ β”‚ └── annotations.yaml β”‚ └── tests β”‚ └── scorecard β”‚ └── config.yaml └── bundle-0.0.2 β”œβ”€β”€ bundle.Dockerfile β”œβ”€β”€ manifests β”‚ β”œβ”€β”€ etcdbackup.crd.yaml β”‚ β”œβ”€β”€ etcdcluster.crd.yaml β”‚ β”œβ”€β”€ etcdoperator.v0.0.2.clusterserviceversion.yaml β”‚ β”œβ”€β”€ etcdrestore.crd.yaml β”œβ”€β”€ metadata β”‚ └── annotations.yaml └── tests └── scorecard └── config.yaml", "operator-sdk pkgman-to-bundle <package_manifests_dir> \\ 1 [--output-dir <directory>] \\ 2 --image-tag-base <image_name_base> 3", "operator-sdk run bundle <bundle_image_name>:<tag>", "INFO[0025] Successfully created registry pod: quay-io-my-etcd-0-9-4 INFO[0025] Created CatalogSource: etcd-catalog INFO[0026] OperatorGroup \"operator-sdk-og\" created INFO[0026] Created Subscription: etcdoperator-v0-9-4-sub INFO[0031] Approved InstallPlan install-5t58z for the Subscription: etcdoperator-v0-9-4-sub INFO[0031] Waiting for ClusterServiceVersion \"default/etcdoperator.v0.9.4\" to reach 'Succeeded' phase INFO[0032] Waiting for ClusterServiceVersion \"default/etcdoperator.v0.9.4\" to appear INFO[0048] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Pending INFO[0049] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Installing INFO[0064] Found ClusterServiceVersion \"default/etcdoperator.v0.9.4\" phase: Succeeded INFO[0065] OLM has successfully installed \"etcdoperator.v0.9.4\"", "operator-sdk <command> [<subcommand>] [<argument>] [<flags>]", "operator-sdk completion bash", "bash completion for operator-sdk -*- shell-script -*- ex: ts=4 sw=4 et filetype=sh", "operator-sdk --version operator-sdk version 0.1.0", "mkdir -p USDGOPATH/src/github.com/example-inc/ cd USDGOPATH/src/github.com/example-inc/ mv memcached-operator old-memcached-operator operator-sdk new 
memcached-operator --skip-git-init ls memcached-operator old-memcached-operator", "cp -rf old-memcached-operator/.git memcached-operator/.git", "cd memcached-operator operator-sdk add api --api-version=cache.example.com/v1alpha1 --kind=Memcached tree pkg/apis pkg/apis/ β”œβ”€β”€ addtoscheme_cache_v1alpha1.go β”œβ”€β”€ apis.go └── cache └── v1alpha1 β”œβ”€β”€ doc.go β”œβ”€β”€ memcached_types.go β”œβ”€β”€ register.go └── zz_generated.deepcopy.go", "func init() { SchemeBuilder.Register(&Memcached{}, &MemcachedList{})", "sdk.Watch(\"cache.example.com/v1alpha1\", \"Memcached\", \"default\", time.Duration(5)*time.Second)", "operator-sdk add controller --api-version=cache.example.com/v1alpha1 --kind=Memcached tree pkg/controller pkg/controller/ β”œβ”€β”€ add_memcached.go β”œβ”€β”€ controller.go └── memcached └── memcached_controller.go", "import ( cachev1alpha1 \"github.com/example-inc/memcached-operator/pkg/apis/cache/v1alpha1\" ) func add(mgr manager.Manager, r reconcile.Reconciler) error { c, err := controller.New(\"memcached-controller\", mgr, controller.Options{Reconciler: r}) // Watch for changes to the primary resource Memcached err = c.Watch(&source.Kind{Type: &cachev1alpha1.Memcached{}}, &handler.EnqueueRequestForObject{}) // Watch for changes to the secondary resource pods and enqueue reconcile requests for the owner Memcached err = c.Watch(&source.Kind{Type: &corev1.Pod{}}, &handler.EnqueueRequestForOwner{ IsController: true, OwnerType: &cachev1alpha1.Memcached{}, }) }", "// Watch for changes to the primary resource Memcached err = c.Watch(&source.Kind{Type: &cachev1alpha1.Memcached{}}, &handler.EnqueueRequestForObject{}) // Watch for changes to the secondary resource AppService and enqueue reconcile requests for the owner Memcached err = c.Watch(&source.Kind{Type: &appv1alpha1.AppService{}}, &handler.EnqueueRequestForOwner{ IsController: true, OwnerType: &cachev1alpha1.Memcached{}, })", "operator-sdk add controller --api-version=app.example.com/v1alpha1 --kind=AppService", "// Watch for changes to the primary resource AppService err = c.Watch(&source.Kind{Type: &appv1alpha1.AppService{}}, &handler.EnqueueRequestForObject{})", "func (r *ReconcileMemcached) Reconcile(request reconcile.Request) (reconcile.Result, error)", "func (h *Handler) Handle(ctx context.Context, event sdk.Event) error", "import ( apierrors \"k8s.io/apimachinery/pkg/api/errors\" cachev1alpha1 \"github.com/example-inc/memcached-operator/pkg/apis/cache/v1alpha1\" ) func (r *ReconcileMemcached) Reconcile(request reconcile.Request) (reconcile.Result, error) { // Fetch the Memcached instance instance := &cachev1alpha1.Memcached{} err := r.client.Get(context.TODO() request.NamespacedName, instance) if err != nil { if apierrors.IsNotFound(err) { // Request object not found, could have been deleted after reconcile request. // Owned objects are automatically garbage collected. // Return and don't requeue return reconcile.Result{}, nil } // Error reading the object - requeue the request. return reconcile.Result{}, err } // Rest of your reconcile code goes here. 
}", "reconcilePeriod := 30 * time.Second reconcileResult := reconcile.Result{RequeueAfter: reconcilePeriod} // Update the status err := r.client.Update(context.TODO(), memcached) if err != nil { log.Printf(\"failed to update memcached status: %v\", err) return reconcileResult, err } return reconcileResult, nil", "// Create dep := &appsv1.Deployment{...} err := sdk.Create(dep) // v0.0.1 err := r.client.Create(context.TODO(), dep) // Update err := sdk.Update(dep) // v0.0.1 err := r.client.Update(context.TODO(), dep) // Delete err := sdk.Delete(dep) // v0.0.1 err := r.client.Delete(context.TODO(), dep) // List podList := &corev1.PodList{} labelSelector := labels.SelectorFromSet(labelsForMemcached(memcached.Name)) listOps := &metav1.ListOptions{LabelSelector: labelSelector} err := sdk.List(memcached.Namespace, podList, sdk.WithListOptions(listOps)) // v0.1.0 listOps := &client.ListOptions{Namespace: memcached.Namespace, LabelSelector: labelSelector} err := r.client.List(context.TODO(), listOps, podList) // Get dep := &appsv1.Deployment{APIVersion: \"apps/v1\", Kind: \"Deployment\", Name: name, Namespace: namespace} err := sdk.Get(dep) // v0.1.0 dep := &appsv1.Deployment{} err = r.client.Get(context.TODO(), types.NamespacedName{Name: name, Namespace: namespace}, dep)", "// newReconciler returns a new reconcile.Reconciler func newReconciler(mgr manager.Manager) reconcile.Reconciler { return &ReconcileMemcached{client: mgr.GetClient(), scheme: mgr.GetScheme(), foo: \"bar\"} } // ReconcileMemcached reconciles a Memcached object type ReconcileMemcached struct { client client.Client scheme *runtime.Scheme // Other fields foo string }" ]
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html-single/operators/index
Chapter 10. Ceph performance counters
Chapter 10. Ceph performance counters As a storage administrator, you can gather performance metrics of the Red Hat Ceph Storage cluster. The Ceph performance counters are a collection of internal infrastructure metrics. The collection, aggregation, and graphing of this metric data can be done by an assortment of tools and can be useful for performance analytics. 10.1. Access to Ceph performance counters The performance counters are available through a socket interface for the Ceph Monitors and the OSDs. The socket file for each respective daemon is located under /var/run/ceph , by default. The performance counters are grouped together into collection names. These collection names represent a subsystem or an instance of a subsystem. Here is the full list of the Monitor, OSD, and RADOS Gateway collection name categories with a brief description for each : Monitor Collection Name Categories Cluster Metrics - Displays information about the storage cluster: Monitors, OSDs, Pools, and PGs Level Database Metrics - Displays information about the back-end KeyValueStore database Monitor Metrics - Displays general monitor information Paxos Metrics - Displays information on cluster quorum management Throttle Metrics - Displays the statistics on how the monitor is throttling OSD Collection Name Categories Write Back Throttle Metrics - Displays the statistics on how the write back throttle is tracking unflushed IO Level Database Metrics - Displays information about the back-end KeyValueStore database Objecter Metrics - Displays information on various object-based operations Read and Write Operations Metrics - Displays information on various read and write operations Recovery State Metrics - Displays latencies on various recovery states OSD Throttle Metrics - Displays the statistics on how the OSD is throttling RADOS Gateway Collection Name Categories Object Gateway Client Metrics - Displays statistics on GET and PUT requests Objecter Metrics - Displays information on various object-based operations Object Gateway Throttle Metrics - Displays the statistics on how the Object Gateway is throttling 10.2. Display the Ceph performance counters The ceph daemon DAEMON_NAME perf schema command outputs the available metrics. Each metric has an associated bit field value type. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To view the metric's schema: Syntax Note You must run the ceph daemon command from the node running the daemon. Executing the ceph daemon DAEMON_NAME perf schema command from the Monitor node: Example Executing the ceph daemon DAEMON_NAME perf schema command from the OSD node: Example Table 10.1. The bit field value definitions Bit Meaning 1 Floating point value 2 Unsigned 64-bit integer value 4 Average (Sum + Count) 8 Counter Each value will have bit 1 or 2 set to indicate the type, either a floating point or an integer value. When bit 4 is set, there will be two values to read, a sum and a count. When bit 8 is set, the average for the interval would be the sum delta, since the previous read, divided by the count delta. Alternatively, dividing the values outright would provide the lifetime average value. Typically these are used to measure latencies: the number of requests and a sum of request latencies. Some bit values are combined, for example 5, 6 and 10. A bit value of 5 is a combination of bit 1 and bit 4. This means the average will be a floating point value. A bit value of 6 is a combination of bit 2 and bit 4. This means the average value will be an integer. 
A bit value of 10 is a combination of bit 2 and bit 8. This means the counter value will be an integer value. Additional Resources See Average count and sum section in the Red Hat Ceph Storage Administration Guide for more details. 10.3. Dump the Ceph performance counters The ceph daemon .. perf dump command outputs the current values and groups the metrics under the collection name for each subsystem. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To view the current metric data: Syntax Note You must run the ceph daemon command from the node running the daemon. Executing ceph daemon .. perf dump command from the Monitor node: Executing the ceph daemon .. perf dump command from the OSD node: Additional Resources To view a short description of each Monitor metric available, please see the Ceph monitor metrics table . 10.4. Average count and sum All latency numbers have a bit field value of 5. This field contains floating point values for the average count and sum. The avgcount is the number of operations within this range and the sum is the total latency in seconds. When dividing the sum by the avgcount this will provide you with an idea of the latency per operation. Additional Resources To view a short description of each OSD metric available, please see the Ceph OSD table . 10.5. Ceph Monitor metrics Cluster Metrics Table Level Database Metrics Table General Monitor Metrics Table Paxos Metrics Table Throttle Metrics Table Table 10.2. Cluster Metrics Table Collection Name Metric Name Bit Field Value Short Description cluster num_mon 2 Number of monitors num_mon_quorum 2 Number of monitors in quorum num_osd 2 Total number of OSD num_osd_up 2 Number of OSDs that are up num_osd_in 2 Number of OSDs that are in cluster osd_epoch 2 Current epoch of OSD map osd_bytes 2 Total capacity of cluster in bytes osd_bytes_used 2 Number of used bytes on cluster osd_bytes_avail 2 Number of available bytes on cluster num_pool 2 Number of pools num_pg 2 Total number of placement groups num_pg_active_clean 2 Number of placement groups in active+clean state num_pg_active 2 Number of placement groups in active state num_pg_peering 2 Number of placement groups in peering state num_object 2 Total number of objects on cluster num_object_degraded 2 Number of degraded (missing replicas) objects num_object_misplaced 2 Number of misplaced (wrong location in the cluster) objects num_object_unfound 2 Number of unfound objects num_bytes 2 Total number of bytes of all objects num_mds_up 2 Number of MDSs that are up num_mds_in 2 Number of MDS that are in cluster num_mds_failed 2 Number of failed MDS mds_epoch 2 Current epoch of MDS map Table 10.3. Level Database Metrics Table Collection Name Metric Name Bit Field Value Short Description leveldb leveldb_get 10 Gets leveldb_transaction 10 Transactions leveldb_compact 10 Compactions leveldb_compact_range 10 Compactions by range leveldb_compact_queue_merge 10 Mergings of ranges in compaction queue leveldb_compact_queue_len 2 Length of compaction queue Table 10.4. 
General Monitor Metrics Table Collection Name Metric Name Bit Field Value Short Description mon num_sessions 2 Current number of opened monitor sessions session_add 10 Number of created monitor sessions session_rm 10 Number of remove_session calls in monitor session_trim 10 Number of trimed monitor sessions num_elections 10 Number of elections monitor took part in election_call 10 Number of elections started by monitor election_win 10 Number of elections won by monitor election_lose 10 Number of elections lost by monitor Table 10.5. Paxos Metrics Table Collection Name Metric Name Bit Field Value Short Description paxos start_leader 10 Starts in leader role start_peon 10 Starts in peon role restart 10 Restarts refresh 10 Refreshes refresh_latency 5 Refresh latency begin 10 Started and handled begins begin_keys 6 Keys in transaction on begin begin_bytes 6 Data in transaction on begin begin_latency 5 Latency of begin operation commit 10 Commits commit_keys 6 Keys in transaction on commit commit_bytes 6 Data in transaction on commit commit_latency 5 Commit latency collect 10 Peon collects collect_keys 6 Keys in transaction on peon collect collect_bytes 6 Data in transaction on peon collect collect_latency 5 Peon collect latency collect_uncommitted 10 Uncommitted values in started and handled collects collect_timeout 10 Collect timeouts accept_timeout 10 Accept timeouts lease_ack_timeout 10 Lease acknowledgement timeouts lease_timeout 10 Lease timeouts store_state 10 Store a shared state on disk store_state_keys 6 Keys in transaction in stored state store_state_bytes 6 Data in transaction in stored state store_state_latency 5 Storing state latency share_state 10 Sharings of state share_state_keys 6 Keys in shared state share_state_bytes 6 Data in shared state new_pn 10 New proposal number queries new_pn_latency 5 New proposal number getting latency Table 10.6. Throttle Metrics Table Collection Name Metric Name Bit Field Value Short Description throttle-* val 10 Currently available throttle max 10 Max value for throttle get 10 Gets get_sum 10 Got data get_or_fail_fail 10 Get blocked during get_or_fail get_or_fail_success 10 Successful get during get_or_fail take 10 Takes take_sum 10 Taken data put 10 Puts put_sum 10 Put data wait 5 Waiting latency 10.6. Ceph OSD metrics Write Back Throttle Metrics Table Level Database Metrics Table Objecter Metrics Table Read and Write Operations Metrics Table Recovery State Metrics Table OSD Throttle Metrics Table Table 10.7. Write Back Throttle Metrics Table Collection Name Metric Name Bit Field Value Short Description WBThrottle bytes_dirtied 2 Dirty data bytes_wb 2 Written data ios_dirtied 2 Dirty operations ios_wb 2 Written operations inodes_dirtied 2 Entries waiting for write inodes_wb 2 Written entries Table 10.8. Level Database Metrics Table Collection Name Metric Name Bit Field Value Short Description leveldb leveldb_get 10 Gets leveldb_transaction 10 Transactions leveldb_compact 10 Compactions leveldb_compact_range 10 Compactions by range leveldb_compact_queue_merge 10 Mergings of ranges in compaction queue leveldb_compact_queue_len 2 Length of compaction queue Table 10.9. 
Objecter Metrics Table Collection Name Metric Name Bit Field Value Short Description objecter op_active 2 Active operations op_laggy 2 Laggy operations op_send 10 Sent operations op_send_bytes 10 Sent data op_resend 10 Resent operations op_ack 10 Commit callbacks op_commit 10 Operation commits op 10 Operation op_r 10 Read operations op_w 10 Write operations op_rmw 10 Read-modify-write operations op_pg 10 PG operation osdop_stat 10 Stat operations osdop_create 10 Create object operations osdop_read 10 Read operations osdop_write 10 Write operations osdop_writefull 10 Write full object operations osdop_append 10 Append operation osdop_zero 10 Set object to zero operations osdop_truncate 10 Truncate object operations osdop_delete 10 Delete object operations osdop_mapext 10 Map extent operations osdop_sparse_read 10 Sparse read operations osdop_clonerange 10 Clone range operations osdop_getxattr 10 Get xattr operations osdop_setxattr 10 Set xattr operations osdop_cmpxattr 10 Xattr comparison operations osdop_rmxattr 10 Remove xattr operations osdop_resetxattrs 10 Reset xattr operations osdop_tmap_up 10 TMAP update operations osdop_tmap_put 10 TMAP put operations osdop_tmap_get 10 TMAP get operations osdop_call 10 Call (execute) operations osdop_watch 10 Watch by object operations osdop_notify 10 Notify about object operations osdop_src_cmpxattr 10 Extended attribute comparison in multi operations osdop_other 10 Other operations linger_active 2 Active lingering operations linger_send 10 Sent lingering operations linger_resend 10 Resent lingering operations linger_ping 10 Sent pings to lingering operations poolop_active 2 Active pool operations poolop_send 10 Sent pool operations poolop_resend 10 Resent pool operations poolstat_active 2 Active get pool stat operations poolstat_send 10 Pool stat operations sent poolstat_resend 10 Resent pool stats statfs_active 2 Statfs operations statfs_send 10 Sent FS stats statfs_resend 10 Resent FS stats command_active 2 Active commands command_send 10 Sent commands command_resend 10 Resent commands map_epoch 2 OSD map epoch map_full 10 Full OSD maps received map_inc 10 Incremental OSD maps received osd_sessions 2 Open sessions osd_session_open 10 Sessions opened osd_session_close 10 Sessions closed osd_laggy 2 Laggy OSD sessions Table 10.10. 
Read and Write Operations Metrics Table Collection Name Metric Name Bit Field Value Short Description osd op_wip 2 Replication operations currently being processed (primary) op_in_bytes 10 Client operations total write size op_out_bytes 10 Client operations total read size op_latency 5 Latency of client operations (including queue time) op_process_latency 5 Latency of client operations (excluding queue time) op_r 10 Client read operations op_r_out_bytes 10 Client data read op_r_latency 5 Latency of read operation (including queue time) op_r_process_latency 5 Latency of read operation (excluding queue time) op_w 10 Client write operations op_w_in_bytes 10 Client data written op_w_rlat 5 Client write operation readable/applied latency op_w_latency 5 Latency of write operation (including queue time) op_w_process_latency 5 Latency of write operation (excluding queue time) op_rw 10 Client read-modify-write operations op_rw_in_bytes 10 Client read-modify-write operations write in op_rw_out_bytes 10 Client read-modify-write operations read out op_rw_rlat 5 Client read-modify-write operation readable/applied latency op_rw_latency 5 Latency of read-modify-write operation (including queue time) op_rw_process_latency 5 Latency of read-modify-write operation (excluding queue time) subop 10 Suboperations subop_in_bytes 10 Suboperations total size subop_latency 5 Suboperations latency subop_w 10 Replicated writes subop_w_in_bytes 10 Replicated written data size subop_w_latency 5 Replicated writes latency subop_pull 10 Suboperations pull requests subop_pull_latency 5 Suboperations pull latency subop_push 10 Suboperations push messages subop_push_in_bytes 10 Suboperations pushed size subop_push_latency 5 Suboperations push latency pull 10 Pull requests sent push 10 Push messages sent push_out_bytes 10 Pushed size push_in 10 Inbound push messages push_in_bytes 10 Inbound pushed size recovery_ops 10 Started recovery operations loadavg 2 CPU load buffer_bytes 2 Total allocated buffer size numpg 2 Placement groups numpg_primary 2 Placement groups for which this osd is primary numpg_replica 2 Placement groups for which this osd is replica numpg_stray 2 Placement groups ready to be deleted from this osd heartbeat_to_peers 2 Heartbeat (ping) peers we send to heartbeat_from_peers 2 Heartbeat (ping) peers we recv from map_messages 10 OSD map messages map_message_epochs 10 OSD map epochs map_message_epoch_dups 10 OSD map duplicates stat_bytes 2 OSD size stat_bytes_used 2 Used space stat_bytes_avail 2 Available space copyfrom 10 Rados 'copy-from' operations tier_promote 10 Tier promotions tier_flush 10 Tier flushes tier_flush_fail 10 Failed tier flushes tier_try_flush 10 Tier flush attempts tier_try_flush_fail 10 Failed tier flush attempts tier_evict 10 Tier evictions tier_whiteout 10 Tier whiteouts tier_dirty 10 Dirty tier flag set tier_clean 10 Dirty tier flag cleaned tier_delay 10 Tier delays (agent waiting) tier_proxy_read 10 Tier proxy reads agent_wake 10 Tiering agent wake up agent_skip 10 Objects skipped by agent agent_flush 10 Tiering agent flushes agent_evict 10 Tiering agent evictions object_ctx_cache_hit 10 Object context cache hits object_ctx_cache_total 10 Object context cache lookups ceph_cluster_osd_blocklist_count 2 Number of clients blocklisted Table 10.11. 
Recovery State Metrics Table Collection Name Metric Name Bit Field Value Short Description recoverystate_perf initial_latency 5 Initial recovery state latency started_latency 5 Started recovery state latency reset_latency 5 Reset recovery state latency start_latency 5 Start recovery state latency primary_latency 5 Primary recovery state latency peering_latency 5 Peering recovery state latency backfilling_latency 5 Backfilling recovery state latency waitremotebackfillreserved_latency 5 Wait remote backfill reserved recovery state latency waitlocalbackfillreserved_latency 5 Wait local backfill reserved recovery state latency notbackfilling_latency 5 Notbackfilling recovery state latency repnotrecovering_latency 5 Repnotrecovering recovery state latency repwaitrecoveryreserved_latency 5 Rep wait recovery reserved recovery state latency repwaitbackfillreserved_latency 5 Rep wait backfill reserved recovery state latency RepRecovering_latency 5 RepRecovering recovery state latency activating_latency 5 Activating recovery state latency waitlocalrecoveryreserved_latency 5 Wait local recovery reserved recovery state latency waitremoterecoveryreserved_latency 5 Wait remote recovery reserved recovery state latency recovering_latency 5 Recovering recovery state latency recovered_latency 5 Recovered recovery state latency clean_latency 5 Clean recovery state latency active_latency 5 Active recovery state latency replicaactive_latency 5 Replicaactive recovery state latency stray_latency 5 Stray recovery state latency getinfo_latency 5 Getinfo recovery state latency getlog_latency 5 Getlog recovery state latency waitactingchange_latency 5 Waitactingchange recovery state latency incomplete_latency 5 Incomplete recovery state latency getmissing_latency 5 Getmissing recovery state latency waitupthru_latency 5 Waitupthru recovery state latency Table 10.12. OSD Throttle Metrics Table Collection Name Metric Name Bit Field Value Short Description throttle-* val 10 Currently available throttle max 10 Max value for throttle get 10 Gets get_sum 10 Got data get_or_fail_fail 10 Get blocked during get_or_fail get_or_fail_success 10 Successful get during get_or_fail take 10 Takes take_sum 10 Taken data put 10 Puts put_sum 10 Put data wait 5 Waiting latency 10.7. Ceph Object Gateway metrics Ceph Object Gateway Client Table Objecter Metrics Table Ceph Object Gateway Throttle Metrics Table Table 10.13. Ceph Object Gateway Client Metrics Table Collection Name Metric Name Bit Field Value Short Description client.rgw.<rgw_node_name> req 10 Requests failed_req 10 Aborted requests copy_obj_ops 10 Copy objects copy_obj_bytes 10 Size of copy objects copy_obj_lat 10 Copy object latency del_obj_ops 10 Delete objects del_obj_bytes 10 Size of delete objects del_obj_lat 10 Delete object latency del_bucket_ops 10 Delete Buckets del_bucket_lat 10 Delete bucket latency get 10 Gets get_b 10 Size of gets get_initial_lat 5 Get latency list_obj_ops 10 List objects list_obj_lat 10 List object latency list_buckets_ops 10 List buckets list_buckets_lat 10 List buckets latency put 10 Puts put_b 10 Size of puts put_initial_lat 5 Put latency qlen 2 Queue length qactive 2 Active requests queue cache_hit 10 Cache hits cache_miss 10 Cache miss keystone_token_cache_hit 10 Keystone token cache hits keystone_token_cache_miss 10 Keystone token cache miss Table 10.14. 
Objecter Metrics Table Collection Name Metric Name Bit Field Value Short Description objecter op_active 2 Active operations op_laggy 2 Laggy operations op_send 10 Sent operations op_send_bytes 10 Sent data op_resend 10 Resent operations op_ack 10 Commit callbacks op_commit 10 Operation commits op 10 Operation op_r 10 Read operations op_w 10 Write operations op_rmw 10 Read-modify-write operations op_pg 10 PG operation osdop_stat 10 Stat operations osdop_create 10 Create object operations osdop_read 10 Read operations osdop_write 10 Write operations osdop_writefull 10 Write full object operations osdop_append 10 Append operation osdop_zero 10 Set object to zero operations osdop_truncate 10 Truncate object operations osdop_delete 10 Delete object operations osdop_mapext 10 Map extent operations osdop_sparse_read 10 Sparse read operations osdop_clonerange 10 Clone range operations osdop_getxattr 10 Get xattr operations osdop_setxattr 10 Set xattr operations osdop_cmpxattr 10 Xattr comparison operations osdop_rmxattr 10 Remove xattr operations osdop_resetxattrs 10 Reset xattr operations osdop_tmap_up 10 TMAP update operations osdop_tmap_put 10 TMAP put operations osdop_tmap_get 10 TMAP get operations osdop_call 10 Call (execute) operations osdop_watch 10 Watch by object operations osdop_notify 10 Notify about object operations osdop_src_cmpxattr 10 Extended attribute comparison in multi operations osdop_other 10 Other operations linger_active 2 Active lingering operations linger_send 10 Sent lingering operations linger_resend 10 Resent lingering operations linger_ping 10 Sent pings to lingering operations poolop_active 2 Active pool operations poolop_send 10 Sent pool operations poolop_resend 10 Resent pool operations poolstat_active 2 Active get pool stat operations poolstat_send 10 Pool stat operations sent poolstat_resend 10 Resent pool stats statfs_active 2 Statfs operations statfs_send 10 Sent FS stats statfs_resend 10 Resent FS stats command_active 2 Active commands command_send 10 Sent commands command_resend 10 Resent commands map_epoch 2 OSD map epoch map_full 10 Full OSD maps received map_inc 10 Incremental OSD maps received osd_sessions 2 Open sessions osd_session_open 10 Sessions opened osd_session_close 10 Sessions closed osd_laggy 2 Laggy OSD sessions Table 10.15. Ceph Object Gateway Throttle Metrics Table Collection Name Metric Name Bit Field Value Short Description throttle-* val 10 Currently available throttle max 10 Max value for throttle get 10 Gets get_sum 10 Got data get_or_fail_fail 10 Get blocked during get_or_fail get_or_fail_success 10 Successful get during get_or_fail take 10 Takes take_sum 10 Taken data put 10 Puts put_sum 10 Put data wait 5 Waiting latency
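The avgcount and sum pair described in the Average count and sum section can be post-processed directly from the perf dump output. The following one-liner is a minimal sketch rather than part of the product tooling: it assumes the jq package is installed on the node, that the admin socket for osd.11 is reachable under /var/run/ceph , and it uses the op_r_latency metric from the osd collection in Table 10.10 purely as an illustration. Dividing sum by avgcount gives the lifetime average read latency in seconds:
# Minimal sketch, not from the original document: assumes jq is installed and osd.11 exposes op_r_latency
ceph daemon osd.11 perf dump | jq -r '.osd.op_r_latency | if .avgcount > 0 then "average read latency: \(.sum / .avgcount) seconds over \(.avgcount) operations" else "no read operations recorded yet" end'
You can apply the same division to any other latency metric with a bit field value of 5, such as op_w_latency or subop_latency .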
[ "ceph daemon DAEMON_NAME perf schema", "ceph daemon mon.host01 perf schema", "ceph daemon osd.11 perf schema", "ceph daemon DAEMON_NAME perf dump", "ceph daemon mon.host01 perf dump", "ceph daemon osd.11 perf dump" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/administration_guide/ceph-performance-counters
Chapter 77. Test scenarios (legacy) designer in Business Central
Chapter 77. Test scenarios (legacy) designer in Business Central Red Hat Decision Manager currently supports both the new Test Scenarios designer and the former Test Scenarios (Legacy) designer. The default designer is the new test scenarios designer, which supports testing of both rules and DMN models and provides an enhanced overall user experience with test scenarios. If required, you can continue to use the legacy test scenarios designer, which supports rule-based test scenarios only. 77.1. Creating and running a test scenario (legacy) You can create test scenarios in Business Central to test the functionality of business rule data before deployment. A basic test scenario must have at least the following data: Related data objects GIVEN facts EXPECT results Note The legacy test scenarios designer supports the LocalDate Java built-in data type. You can use the LocalDate data type in the dd-mmm-yyyy date format. For example, you can set a date as 17-Oct-2020. With this data, the test scenario can validate the expected and actual results for that rule instance based on the defined facts. You can also add a CALL METHOD and any available globals to a test scenario, but these scenario settings are optional. Procedure In Business Central, go to Menu Design Projects and click the project name. Click Add Asset Test Scenarios (Legacy) . Enter an informative Test Scenario name and select the appropriate Package . The package that you specify must be the same package where the required rule assets have been assigned or will be assigned. You can import data objects from any package into the asset's designer. Click Ok to create the test scenario. The new test scenario is now listed in the Test Scenarios panel of the Project Explorer . Click the Data Objects tab to verify that all data objects required for the rules that you want to test are listed. If not, click New item to import the needed data objects from other packages, or create data objects within your package. After all data objects are in place, return to the Model tab of the test scenarios designer and define the GIVEN and EXPECT data for the scenario, based on the available data objects. Figure 77.1. The test scenarios designer The GIVEN section defines the input facts for the test. For example, if an Underage rule in the project declines loan applications for applicants under the age of 21, then the GIVEN facts in the test scenario could be Applicant with age set to some integer less than 21. The EXPECT section defines the expected results based on the GIVEN input facts. That is, GIVEN the input facts, EXPECT these other facts to be valid or entire rules to be activated. For example, with the given facts of an applicant under the age of 21 in the scenario, the EXPECT results could be LoanApplication with approved set to false (as a result of the underage applicant), or could be the activation of the Underage rule as a whole. Optional: Add a CALL METHOD and any globals to the test scenario: CALL METHOD: Use this to invoke a method from another fact when the rule execution is initiated. Click CALL METHOD , select a fact, and click to select the method to invoke. You can invoke any Java class methods (such as methods from an ArrayList) from the Java library or from a JAR that was imported for the project (if applicable). globals: Use this to add any global variables in the project that you want to validate in the test scenario. 
Click globals to select the variable to be validated, and then in the test scenarios designer, click the global name and define field values to be applied to the global variable. If no global variables are available, then they must be created as new assets in Business Central. Global variables are named objects that are visible to the decision engine but are different from the objects for facts. Changes in the object of a global do not trigger the re-evaluation of rules. Click More at the bottom of the test scenarios designer to add other data blocks to the same scenario file as needed. After you have defined all GIVEN , EXPECT , and other data for the scenario, click Save in the test scenarios designer to save your work. Click Run scenario in the upper-right corner to run this .scenario file, or click Run all scenarios to run all saved .scenario files in the project package (if there are multiple). Although the Run scenario option does not require the individual .scenario file to be saved, the Run all scenarios option does require all .scenario files to be saved. If the test fails, address any problems described in the Alerts message at the bottom of the window, review all components in the scenario, and try again to validate the scenario until the scenario passes. Click Save in the test scenarios designer to save your work after all changes are complete. 77.1.1. Adding GIVEN facts in test scenarios (legacy) The GIVEN section defines input facts for the test. For example, if an Underage rule in the project declines loan applications for applicants under the age of 21, then the GIVEN facts in the test scenario could be Applicant with age set to some integer less than 21. Prerequisites All data objects required for your test scenario have been created or imported and are listed in the Data Objects tab of the Test Scenarios (Legacy) designer. Procedure In the Test Scenarios (Legacy) designer, click GIVEN to open the New input window with the available facts. Figure 77.2. Add GIVEN input to the test scenario The list includes the following options, depending on the data objects available in the Data Objects tab of the test scenarios designer: Insert a new fact: Use this to add a fact and modify its field values. Enter a variable for the fact as the Fact name . Modify an existing fact: (Appears only after another fact has been added.) Use this to specify a previously inserted fact to be modified in the decision engine between executions of the scenario. Delete an existing fact: (Appears only after another fact has been added.) Use this to specify a previously inserted fact to be deleted from the decision engine between executions of the scenario. Activate rule flow group: Use this to specify a rule flow group to be activated so that all rules within that group can be tested. Choose a fact for the desired input option and click Add . For example, set Insert a new fact: to Applicant and enter a or app or any other variable for the Fact name . Click the fact in the test scenarios designer and select the field to be modified. Figure 77.3. Modify a fact field Click the edit icon ( ) and select from the following field values: Literal value: Creates an open field in which you enter a specific literal value. Bound variable: Sets the value of the field to the fact bound to a selected variable. The field type must match the bound variable type. Create new fact: Enables you to create a new fact and assign it as a field value of the parent fact. 
Then you can click the child fact in the test scenarios designer and likewise assign field values or nest other facts similarly. Continue adding any other GIVEN input data for the scenario and click Save in the test scenarios designer to save your work. 77.1.2. Adding EXPECT results in test scenarios (legacy) The EXPECT section defines the expected results based on the GIVEN input facts. That is, GIVEN the input facts, EXPECT other specified facts to be valid or entire rules to be activated. For example, with the given facts of an applicant under the age of 21 in the scenario, the EXPECT results could be LoanApplication with approved set to false (as a result of the underage applicant), or could be the activation of the Underage rule as a whole. Prerequisites All data objects required for your test scenario have been created or imported and are listed in the Data Objects tab of the Test Scenarios (Legacy) designer. Procedure In the Test Scenarios (Legacy) designer, click EXPECT to open the New expectation window with the available facts. Figure 77.4. Add EXPECT results to the test scenario The list includes the following options, depending on the data in the GIVEN section and the data objects available in the Data Objects tab of the test scenarios designer: Rule: Use this to specify a particular rule in the project that is expected to be activated as a result of the GIVEN input. Type the name of a rule that is expected to be activated or select it from the list of rules, and then in the test scenarios designer, specify the number of times the rule should be activated. Fact value: Use this to select a fact and define values for it that are expected to be valid as a result of the facts defined in the GIVEN section. The facts are listed by the Fact name previously defined for the GIVEN input. Any fact that matches: Use this to validate that at least one fact with the specified values exists as a result of the GIVEN input. Choose a fact for the desired expectation (such as Fact value: application ) and click Add or OK . Click the fact in the test scenarios designer and select the field to be added and modified. Figure 77.5. Modify a fact field Set the field values to what is expected to be valid as a result of the GIVEN input (such as approved | equals | false ). Note In the legacy test scenarios designer, you can use ["value1", "value2"] string format in the EXPECT field to validate the list of strings. Continue adding any other EXPECT input data for the scenario and click Save in the test scenarios designer to save your work. After you have defined and saved all GIVEN , EXPECT , and other data for the scenario, click Run scenario in the upper-right corner to run this .scenario file, or click Run all scenarios to run all saved .scenario files in the project package (if there are multiple). Although the Run scenario option does not require the individual .scenario file to be saved, the Run all scenarios option does require all .scenario files to be saved. If the test fails, address any problems described in the Alerts message at the bottom of the window, review all components in the scenario, and try again to validate the scenario until the scenario passes. Click Save in the test scenarios designer to save your work after all changes are complete.
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/test-scenarios-legacy-designer-con
Chapter 7. OperatorCondition [operators.coreos.com/v2]
Chapter 7. OperatorCondition [operators.coreos.com/v2] Description OperatorCondition is a Custom Resource of type OperatorCondition which is used to convey information to OLM about the state of an operator. Type object Required metadata 7.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object OperatorConditionSpec allows an operator to report state to OLM and provides cluster admin with the ability to manually override state reported by the operator. status object OperatorConditionStatus allows OLM to convey which conditions have been observed. 7.1.1. .spec Description OperatorConditionSpec allows an operator to report state to OLM and provides cluster admin with the ability to manually override state reported by the operator. Type object Property Type Description conditions array conditions[] object Condition contains details for one aspect of the current state of this API Resource. deployments array (string) overrides array overrides[] object Condition contains details for one aspect of the current state of this API Resource. serviceAccounts array (string) 7.1.2. .spec.conditions Description Type array 7.1.3. .spec.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. 7.1.4. .spec.overrides Description Type array 7.1.5. .spec.overrides[] Description Condition contains details for one aspect of the current state of this API Resource. 
Type object Required message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. 7.1.6. .status Description OperatorConditionStatus allows OLM to convey which conditions have been observed. Type object Property Type Description conditions array conditions[] object Condition contains details for one aspect of the current state of this API Resource. 7.1.7. .status.conditions Description Type array 7.1.8. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. 7.2. 
API endpoints The following API endpoints are available: /apis/operators.coreos.com/v2/operatorconditions GET : list objects of kind OperatorCondition /apis/operators.coreos.com/v2/namespaces/{namespace}/operatorconditions DELETE : delete collection of OperatorCondition GET : list objects of kind OperatorCondition POST : create an OperatorCondition /apis/operators.coreos.com/v2/namespaces/{namespace}/operatorconditions/{name} DELETE : delete an OperatorCondition GET : read the specified OperatorCondition PATCH : partially update the specified OperatorCondition PUT : replace the specified OperatorCondition /apis/operators.coreos.com/v2/namespaces/{namespace}/operatorconditions/{name}/status GET : read status of the specified OperatorCondition PATCH : partially update status of the specified OperatorCondition PUT : replace status of the specified OperatorCondition 7.2.1. /apis/operators.coreos.com/v2/operatorconditions HTTP method GET Description list objects of kind OperatorCondition Table 7.1. HTTP responses HTTP code Response body 200 - OK OperatorConditionList schema 401 - Unauthorized Empty 7.2.2. /apis/operators.coreos.com/v2/namespaces/{namespace}/operatorconditions HTTP method DELETE Description delete collection of OperatorCondition Table 7.2. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OperatorCondition Table 7.3. HTTP responses HTTP code Response body 200 - OK OperatorConditionList schema 401 - Unauthorized Empty HTTP method POST Description create an OperatorCondition Table 7.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.5. Body parameters Parameter Type Description body OperatorCondition schema Table 7.6. HTTP responses HTTP code Response body 200 - OK OperatorCondition schema 201 - Created OperatorCondition schema 202 - Accepted OperatorCondition schema 401 - Unauthorized Empty 7.2.3. /apis/operators.coreos.com/v2/namespaces/{namespace}/operatorconditions/{name} Table 7.7. Global path parameters Parameter Type Description name string name of the OperatorCondition HTTP method DELETE Description delete an OperatorCondition Table 7.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 7.9. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OperatorCondition Table 7.10. HTTP responses HTTP code Response body 200 - OK OperatorCondition schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OperatorCondition Table 7.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.12. HTTP responses HTTP code Response body 200 - OK OperatorCondition schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OperatorCondition Table 7.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.14. Body parameters Parameter Type Description body OperatorCondition schema Table 7.15. HTTP responses HTTP code Response body 200 - OK OperatorCondition schema 201 - Created OperatorCondition schema 401 - Unauthorized Empty 7.2.4. 
/apis/operators.coreos.com/v2/namespaces/{namespace}/operatorconditions/{name}/status Table 7.16. Global path parameters Parameter Type Description name string name of the OperatorCondition HTTP method GET Description read status of the specified OperatorCondition Table 7.17. HTTP responses HTTP code Response body 200 - OK OperatorCondition schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified OperatorCondition Table 7.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.19. HTTP responses HTTP code Response body 200 - OK OperatorCondition schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified OperatorCondition Table 7.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.21. Body parameters Parameter Type Description body OperatorCondition schema Table 7.22. HTTP responses HTTP code Response body 200 - OK OperatorCondition schema 201 - Created OperatorCondition schema 401 - Unauthorized Empty
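To make the override mechanism described in .spec.overrides more concrete, the following is a minimal manifest sketch in which a cluster administrator forces the Upgradeable condition to True. The resource name, namespace, reason, and message are assumptions for the example; OLM normally creates the OperatorCondition resource itself (typically named after the operator's CSV), so in practice you would only add or patch the overrides entry on the existing resource.
apiVersion: operators.coreos.com/v2
kind: OperatorCondition
metadata:
  name: my-operator.v1.2.3            # example name; OLM typically names this after the operator's CSV
  namespace: my-operator-namespace    # example namespace
spec:
  overrides:
  - type: Upgradeable
    status: "True"
    reason: ApprovedByAdmin
    message: Upgrade manually approved by the cluster administrator
As listed in the required fields above, an overrides entry must set message, reason, status, and type; lastTransitionTime is optional for overrides. A change like this can be applied with oc apply -f <file> or with an equivalent PATCH request against the endpoint shown in section 7.2.3.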
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/operatorhub_apis/operatorcondition-operators-coreos-com-v2
Chapter 19. Configuring and integrating the RHACS plugin with Red Hat Developer Hub
Chapter 19. Configuring and integrating the RHACS plugin with Red Hat Developer Hub By configuring and integrating the Red Hat Advanced Cluster Security for Kubernetes (RHACS) plugin with Red Hat Developer Hub (RHDH), you can view the security information for your deployments in RHDH. Important Integration of vulnerability findings into the RHDH is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 19.1. Viewing security information in Red Hat Developer Hub By configuring and integrating the Red Hat Advanced Cluster Security for Kubernetes (RHACS) plugin with Red Hat Developer Hub (RHDH), you can access vulnerability data, assess risks, and take proactive security actions without leaving the RHDH environment. Review the upstream plugin progress and details by visiting Community plugins for Backstage . Prerequisites You have enabled the RHACS plugin installation in RHDH. For more information, see Installing dynamic plugins using the Helm chart (RHDH documentation). Procedure Create an app-config.yaml file that contains the proxy and acs stanzas by using the following content: # ... proxy: endpoints: /acs: target: USD{ACS_API_URL} headers: authorization: Bearer USD{ACS_API_KEY} acs: acsUrl: USD{ACS_API_URL} # ... To enable the RHACS plugin, perform the following steps: Navigate to the dynamic plugins configuration file in your RHDH setup. To include the RHACS plugin, add the following content to the configuration file, for example: # ... - package: https://github.com/RedHatInsights/backstage-plugin-advanced-cluster-security/releases/download/v0.1.1/redhatinsights-backstage-plugin-acs-dynamic-0.1.1.tgz integrity: sha256-9JeRK2jN/Jgenf9kHwuvTvwTuVpqrRYsTGL6cpYAzn4= disabled: false pluginConfig: dynamicPlugins: frontend: redhatinsights.backstage-plugin-acs: entityTabs: - path: /acs title: RHACS mountPoint: entity.page.acs mountPoints: - mountPoint: entity.page.acs/cards importName: EntityACSContent config: layout: gridColumnEnd: lg: span 12 md: span 12 xs: span 12 # ... To add annotations for entities in the RHDH catalog, perform the following steps: Note To display the vulnerability data, each component entity in the RHDH catalog must reference the RHACS deployments. The following values are associated with the entities in the RHDH catalog: API Component Domain Group Location Resource System Template User Navigate to the entity configuration file for your service in your RHDH setup. Add the following annotation to the configuration file, for example: apiVersion: backstage.io/v1alpha1 kind: Component metadata: name: test-service annotations: acs/deployment-name: test-deployment-1,test-deployment-2,test-deployment-3 # ... Verification In the RHDH portal, click Catalog . Click an entity and verify that the RHACS tab appears. To view the violations and vulnerability data, click the RHACS tab.
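For reference, the following is a slightly fuller catalog-info sketch that shows where the annotation sits in a complete Component entity. The spec values and the service and deployment names are placeholders for illustration; only the acs/deployment-name annotation is specific to the RHACS plugin, and it must list the exact names of your RHACS deployments.
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: test-service
  annotations:
    # comma-separated names of the RHACS deployments that back this component
    acs/deployment-name: test-deployment-1,test-deployment-2,test-deployment-3
spec:
  type: service          # placeholder values; align these with your catalog conventions
  lifecycle: production
  owner: team-a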
[ "proxy: endpoints: /acs: target: USD{ACS_API_URL} headers: authorization: Bearer USD{ACS_API_KEY} acs: acsUrl: USD{ACS_API_URL}", "- package: https://github.com/RedHatInsights/backstage-plugin-advanced-cluster-security/releases/download/v0.1.1/redhatinsights-backstage-plugin-acs-dynamic-0.1.1.tgz integrity: sha256-9JeRK2jN/Jgenf9kHwuvTvwTuVpqrRYsTGL6cpYAzn4= disabled: false pluginConfig: dynamicPlugins: frontend: redhatinsights.backstage-plugin-acs: entityTabs: - path: /acs title: RHACS mountPoint: entity.page.acs mountPoints: - mountPoint: entity.page.acs/cards importName: EntityACSContent config: layout: gridColumnEnd: lg: span 12 md: span 12 xs: span 12", "apiVersion: backstage.io/v1alpha1 kind: Component metadata: name: test-service annotations: acs/deployment-name: test-deployment-1,test-deployment-2,test-deployment-3" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/configuring/configuring-and-integrating-the-rhacs-plugin-with-red-hat-developer-hub
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/creating_and_consuming_execution_environments/providing-feedback
Appendix B. Template writing reference
Appendix B. Template writing reference Embedded Ruby (ERB) is a tool for generating text files based on templates that combine plain text with Ruby code. Red Hat Satellite uses ERB syntax in the following cases: Provisioning templates For more information, see Creating Provisioning Templates in Provisioning hosts . Remote execution job templates For more information, see Chapter 13, Configuring and setting up remote jobs . Report templates For more information, see Chapter 11, Using report templates to monitor hosts . Templates for partition tables For more information, see Creating Partition Tables in Provisioning hosts . Smart Class Parameters For more information, see Configuring Puppet Smart Class Parameters in Managing configurations by using Puppet integration . This section provides an overview of Satellite-specific macros and variables that can be used in ERB templates along with some usage examples. Note that the default templates provided by Red Hat Satellite ( Hosts > Templates > Provisioning Templates , Hosts > Templates > Job templates , Monitor > Reports > Report Templates ) also provide a good source of ERB syntax examples. When provisioning a host or running a remote job, the code in the ERB is executed and the variables are replaced with the host specific values. This process is referred to as rendering . Satellite Server has the safemode rendering option enabled by default, which prevents any harmful code being executed from templates. B.1. Accessing the template writing reference in the Satellite web UI You can access the template writing reference document in the Satellite web UI. Procedure Log in to the Satellite web UI. In the Satellite web UI, navigate to Administer > About . Click the Templates DSL link in the Support section. B.2. Using autocompletion in templates You can access a list of available macros and usage information in the template editor with the autocompletion option. This works for all templates within Satellite. Procedure In the Satellite web UI, navigate to either Hosts > Templates > Partition tables , Hosts > Templates > Provisioning Templates , or Hosts > Templates > Job templates . Click the settings icon at the top right corner of the template editor and select Autocompletion . Press Ctrl + Space in the template editor to access a list of all available macros. You can narrow down the list of macros by typing in more information about what you are looking for. For example, if you are looking for a method to list the ID of the content source for a host, you can type host and check the list of available macros for content source. A window to the dropdown provides a description of the macro, its usage, and the value it will return. When you find the method you are looking for, hit Enter to input the method. You can also enable Live Autocompletion in the settings menu to view a list of macros that match the pattern whenever you type something. However, this might input macros in unintended places, like package names in a provisioning template. B.3. Writing ERB templates The following tags are the most important and commonly used in ERB templates: <% %> All Ruby code is enclosed within <% %> in an ERB template. The code is executed when the template is rendered. It can contain Ruby control flow structures as well as Satellite-specific macros and variables. For example: Note that this template silently performs an action with a service and returns nothing at the output. 
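In contrast, the following minimal sketch (an illustrative assumption, not one of the default Satellite templates) combines <% %> control flow with literal template text. It assumes a host parameter named run_chrony exists and uses the host_param_true? macro described later in this appendix; the systemctl line is part of the template output and is rendered only when the parameter evaluates to true, while the Ruby code in the tags produces no output of its own:
<% if host_param_true?('run_chrony') %>
systemctl enable --now chronyd
<% end %>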
<%= %> This provides the same functionality as <% %> but when the template is executed, the code output is inserted into the template. This is useful for variable substitution, for example: Example input: Example rendering: Example input: Example rendering: Note that if you enter an incorrect variable, no output is returned. However, if you try to call a method on an incorrect variable, the following error message is returned: Example input: Example rendering: <% -%>, <%= -%> By default, a newline character is inserted after a Ruby block if it is closed at the end of a line: Example input: Example rendering: To change the default behavior, modify the enclosing mark with -%> : Example input: Example rendering: This is used to reduce the number of lines, where Ruby syntax permits, in rendered templates. White spaces in ERB tags are ignored. An example of how this would be used in a report template to remove unnecessary newlines between a FQDN and IP address: Example input: Example rendering: <%# %> Encloses a comment that is ignored during template rendering: Example input: This generates no output. Indentation in ERB templates Because of the varying lengths of the ERB tags, indenting the ERB syntax might seem messy. ERB syntax ignores white space. One method of handling the indentation is to declare the ERB tag at the beginning of each new line and then use white space within the ERB tag to outline the relationships within the syntax, for example: B.4. Troubleshooting ERB templates The Satellite web UI provides two ways to verify the template rendering for a specific host: Directly in the template editor - when editing a template (under Hosts > Templates > Partition tables , Hosts > Templates > Provisioning Templates , or Hosts > Templates > Job templates ), on the Template tab click Preview and select a host from the list. The template then renders in the text field using the selected host's parameters. Preview failures can help to identify issues in your template. At the host's details page - select a host at Hosts > All Hosts and click the Templates tab to list templates associated with the host. Select Review from the list next to the selected template to view its rendered version. B.5. Generic Satellite-specific macros This section lists Satellite-specific macros for ERB templates. You can use the macros listed in the following table across all kinds of templates. Table B.1. Generic macros Name Description indent(n) Indents the block of code by n spaces, useful when using a snippet template that is not indented. foreman_url(kind) Returns the full URL to host-rendered templates of the given kind. For example, templates of the "provision" type usually reside at http://HOST/unattended/provision . snippet(name) Renders the specified snippet template. Useful for nesting provisioning templates. snippets(file) Renders the specified snippet found in the Foreman database, attempts to load it from the unattended/snippets/ directory if it is not found in the database. snippet_if_exists(name) Renders the specified snippet, skips if no snippet with the specified name is found. B.6. Template macros If you want to write custom templates, you can use some of the following macros. Depending on the template type, some of the following macros have different requirements. For more information about the available macros for report templates, in the Satellite web UI, navigate to Monitor > Reports > Report Templates , and click Create Template . In the Create Template window, click the Help tab. 
For more information about the available macros for job templates, in the Satellite web UI, navigate to Hosts > Templates > Job templates , and click the New Job Template . In the New Job Template window, click the Help tab. input Using the input macro, you can customize the input data that the template can work with. You can define the input name, type, and the options that are available for users. For report templates, you can only use user inputs. When you define a new input and save the template, you can then reference the input in the ERB syntax of the template body. This loads the value from user input cpus . load_hosts Using the load_hosts macro, you can generate a complete list of hosts. Use the load_hosts macro with the each_record macro to load records in batches of 1000 to reduce memory consumption. If you want to filter the list of hosts for the report, you can add the option search: input(' Example_Host ') : In this example, you first create an input that you then use to refine the search criteria that the load_hosts macro retrieves. report_row Using the report_row macro, you can create a formatted report for ease of analysis. The report_row macro requires the report_render macro to generate the output. Example input: Example rendering: You can add extra columns to the report by adding another header. The following example adds IP addresses to the report: Example input: Example rendering: report_render This macro is available only for report templates. Using the report_render macro, you create the output for the report. During the template rendering process, you can select the format that you want for the report. YAML, JSON, HTML, and CSV formats are supported. render_template() This macro is available only for job templates. Using this macro, you can render a specific template. You can also enable and define arguments that you want to pass to the template. truthy Using the truthy macro, you can declare if the value passed is true or false, regardless of whether the value is an integer or boolean or string. This macro helps to avoid confusion when your template contains multiple value types. For example, the boolean value true is not the same as the string value "true" . With this macro, you can declare how you want the template to interpret the value and avoid confusion. You can use truthy to declare values as follows: falsy The falsy macro serves the same purpose as the truthy macro. Using the falsy macro, you can declare if the value passed in is true or false, regardless of whether the value is an integer or boolean or string. You can use falsy to declare values as follows: B.7. Host-specific variables The following variables enable using host data within templates. Note that job templates accept only @host variables. Table B.2. Host-specific variables and macros Name Description @host.architecture The architecture of the host. @host.bond_interfaces Returns an array of all bonded interfaces. See Section B.10, "Parsing arrays" . @host.capabilities The method of system provisioning, can be either build (for example kickstart) or image. @host.certname The SSL certificate name of the host. @host.diskLayout The disk layout of the host. Can be inherited from the operating system. @host.domain The domain of the host. @host.environment Deprecated Use the host_puppet_environment variable instead. The Puppet environment of the host. @host.facts Returns a Ruby hash of facts from Facter. For example to access the 'ipaddress' fact from the output, specify @host.facts['ipaddress']. 
@host.grub_pass Returns the host's bootloader password. @host.hostgroup The host group of the host. host_enc['parameters'] Returns a Ruby hash containing information on host parameters. For example, use host_enc['parameters']['lifecycle_environment'] to get the lifecycle environment of a host. @host.image_build? Returns true if the host is provisioned using an image. @host.interfaces Contains an array of all available host interfaces including the primary interface. See Section B.10, "Parsing arrays" . @host.interfaces_with_identifier('IDs') Returns array of interfaces with given identifier. You can pass an array of multiple identifiers as an input, for example @host.interfaces_with_identifier(['eth0', 'eth1']). See Section B.10, "Parsing arrays" . @host.ip The IP address of the host. @host.location The location of the host. @host.mac The MAC address of the host. @host.managed_interfaces Returns an array of managed interfaces (excluding BMC and bonded interfaces). See Section B.10, "Parsing arrays" . @host.medium The assigned operating system installation medium. @host.name The full name of the host. @host.operatingsystem.family The operating system family. @host.operatingsystem.major The major version number of the assigned operating system. @host.operatingsystem.minor The minor version number of the assigned operating system. @host.operatingsystem.name The assigned operating system name. @host.operatingsystem.boot_files_uri(medium_provider) Full path to the kernel and initrd, returns an array. @host.os.medium_uri(@host) The URI used for provisioning (path configured in installation media). host_param('parameter_name') Returns the value of the specified host parameter. host_param_false?('parameter_name') Returns false if the specified host parameter evaluates to false. host_param_true?('parameter_name') Returns true if the specified host parameter evaluates to true. @host.primary_interface Returns the primary interface of the host. @host.provider The compute resource provider. @host.provision_interface Returns the provisioning interface of the host. Returns an interface object. @host.ptable The partition table name. @host.puppet_ca_server Deprecated Use the host_puppet_ca_server variable instead. The Puppet CA server the host must use. @host.puppetmaster Deprecated Use the host_puppet_server variable instead. The Puppet server the host must use. @host.pxe_build? Returns true if the host is provisioned using the network or PXE. @host.shortname The short name of the host. @host.sp_ip The IP address of the BMC interface. @host.sp_mac The MAC address of the BMC interface. @host.sp_name The name of the BMC interface. @host.sp_subnet The subnet of the BMC network. @host.subnet.dhcp Returns true if a DHCP proxy is configured for this host. @host.subnet.dns_primary The primary DNS server of the host. @host.subnet.dns_secondary The secondary DNS server of the host. @host.subnet.gateway The gateway of the host. @host.subnet.mask The subnet mask of the host. @host.url_for_boot(:initrd) Full path to the initrd image associated with this host. Not recommended, as it does not interpolate variables. @host.url_for_boot(:kernel) Full path to the kernel associated with this host. Not recommended, as it does not interpolate variables, prefer boot_files_uri. @provisioning_type Equals to 'host' or 'hostgroup' depending on type of provisioning. @static Returns true if the network configuration is static. @template_name Name of the template being rendered. 
grub_pass Returns a bootloader argument to set the encrypted bootloader password, such as --md5pass=#{@host.grub_pass} . ks_console Returns a string assembled using the port and the baud rate of the host which can be added to a kernel line. For example console=ttyS1,9600 . root_pass Returns the root password configured for the system. The majority of common Ruby methods can be applied on host-specific variables. For example, to extract the last segment of the host's IP address, you can use: B.8. Kickstart-specific variables The following variables are designed to be used within Kickstart provisioning templates. Table B.3. Kickstart-specific variables Name Description @arch The host architecture name, same as @host.architecture.name. @dynamic Returns true if the partition table being used is a %pre script (has the #Dynamic option as the first line of the table). @epel A command which will automatically install the correct version of the epel-release RPM. Use in a %post script. @mediapath The full Kickstart line to provide the URL command. @osver The operating system major version number, same as @host.operatingsystem.major. B.9. Conditional statements In your templates, you might perform different actions depending on which value exists. To achieve this, you can use conditional statements in your ERB syntax. In the following example, the ERB syntax searches for a specific host name and returns an output depending on the value it finds: Example input Example rendering B.10. Parsing arrays While writing or modifying templates, you might encounter variables that return arrays. For example, host variables related to network interfaces, such as @host.interfaces or @host.bond_interfaces , return interface data grouped in an array. To extract a parameter value of a specific interface, use Ruby methods to parse the array. Finding the correct method to parse an array The following procedure is an example that you can use to find the relevant methods to parse arrays in your template. In this example, a report template is used, but the steps are applicable to other templates. To retrieve the NIC of a content host, in this example, using the @host.interfaces variable returns class values that you can then use to find methods to parse the array. Example input: Example rendering: In the Create Template window, click the Help tab and search for the ActiveRecord_Associations_CollectionProxy and Nic::Base classes. For ActiveRecord_Associations_CollectionProxy , in the Allowed methods or members column, you can view the following methods to parse the array: For Nic::Base , in the Allowed methods or members column, you can view the following method to parse the array: To iterate through an interface array, add the relevant methods to the ERB syntax: Example input: Example rendering: B.11. 
Example template snippets Checking if a host has Puppet and Puppetlabs enabled The following example checks if the host has the Puppet and Puppetlabs repositories enabled: Capturing major and minor versions of a host's operating system The following example shows how to capture the minor and major version of the host's operating system, which can be used for package related decisions: Importing snippets to a template The following example imports the subscription_manager_registration snippet to the template and indents it by four spaces: Conditionally importing a Kickstart snippet The following example imports the kickstart_networking_setup snippet if the host's subnet has the DHCP boot mode enabled: Parsing values from host custom facts You can use the host.facts variable to parse values from a host's facts and custom facts. In this example luks_stat is a custom fact that you can parse in the same manner as dmi::system::serial_number , which is a host fact: In this example, you can customize the Applicable Errata report template to parse for custom information about the kernel version of each host:
[ "<% if @host.operatingsystem.family == \"Redhat\" && @host.operatingsystem.major.to_i > 6 -%> systemctl <%= input(\"action\") %> <%= input(\"service\") %> <% else -%> service <%= input(\"service\") %> <%= input(\"action\") %> <% end -%>", "echo <%= @host.name %>", "host.example.com", "<% server_name = @host.fqdn %> <%= server_name %>", "host.example.com", "<%= @ example_incorrect_variable .fqdn -%>", "undefined method `fqdn' for nil:NilClass", "<%= \"line1\" %> <%= \"line2\" %>", "line1 line2", "<%= \"line1\" -%> <%= \"line2\" %>", "line1line2", "<%= @host.fqdn -%> <%= @host.ip -%>", "host.example.com10.10.181.216", "<%# A comment %>", "<%- load_hosts.each do |host| -%> <%- if host.build? %> <%= host.name %> build is in progress <%- end %> <%- end %>", "<%= input('cpus') %>", "<%- load_hosts().each_record do |host| -%> <%= host.name %>", "<% load_hosts(search: input(' Example_Host ')).each_record do |host| -%> <%= host.name %> <% end -%>", "<%- load_hosts(search: input(' Example_Host ')).each_record do |host| -%> <%- report_row( 'Server FQDN': host.name ) -%> <%- end -%> <%= report_render -%>", "Server FQDN host1.example.com host2.example.com host3.example.com host4.example.com host5.example.com host6.example.com", "<%- load_hosts(search: input('host')).each_record do |host| -%> <%- report_row( 'Server FQDN': host.name, 'IP': host.ip ) -%> <%- end -%> <%= report_render -%>", "Server FQDN,IP host1.example.com , 10.8.30.228 host2.example.com , 10.8.30.227 host3.example.com , 10.8.30.226 host4.example.com , 10.8.30.225 host5.example.com , 10.8.30.224 host6.example.com , 10.8.30.223", "<%= report_render -%>", "truthy?(\"true\") => true truthy?(1) => true truthy?(\"false\") => false truthy?(0) => false", "falsy?(\"true\") => false falsy?(1) => false falsy?(\"false\") => true falsy?(0) => true", "<% @host.ip.split('.').last %>", "<% load_hosts().each_record do |host| -%> <% if @host.name == \" host1.example.com \" -%> <% result=\"positive\" -%> <% else -%> <% result=\"negative\" -%> <% end -%> <%= result -%>", "host1.example.com positive", "<%= @host.interfaces -%>", "<Nic::Base::ActiveRecord_Associations_CollectionProxy:0x00007f734036fbe0>", "[] each find_in_batches first map size to_a", "alias? attached_devices attached_devices_identifiers attached_to bond_options children_mac_addresses domain fqdn identifier inheriting_mac ip ip6 link mac managed? mode mtu nic_delay physical? primary provision shortname subnet subnet6 tag virtual? vlanid", "<% load_hosts().each_record do |host| -%> <% host.interfaces.each do |iface| -%> iface.alias?: <%= iface.alias? %> iface.attached_to: <%= iface.attached_to %> iface.bond_options: <%= iface.bond_options %> iface.children_mac_addresses: <%= iface.children_mac_addresses %> iface.domain: <%= iface.domain %> iface.fqdn: <%= iface.fqdn %> iface.identifier: <%= iface.identifier %> iface.inheriting_mac: <%= iface.inheriting_mac %> iface.ip: <%= iface.ip %> iface.ip6: <%= iface.ip6 %> iface.link: <%= iface.link %> iface.mac: <%= iface.mac %> iface.managed?: <%= iface.managed? %> iface.mode: <%= iface.mode %> iface.mtu: <%= iface.mtu %> iface.physical?: <%= iface.physical? %> iface.primary: <%= iface.primary %> iface.provision: <%= iface.provision %> iface.shortname: <%= iface.shortname %> iface.subnet: <%= iface.subnet %> iface.subnet6: <%= iface.subnet6 %> iface.tag: <%= iface.tag %> iface.virtual?: <%= iface.virtual? 
%> iface.vlanid: <%= iface.vlanid %> <%- end -%>", "host1.example.com iface.alias?: false iface.attached_to: iface.bond_options: iface.children_mac_addresses: [] iface.domain: iface.fqdn: host1.example.com iface.identifier: ens192 iface.inheriting_mac: 00:50:56:8d:4c:cf iface.ip: 10.10.181.13 iface.ip6: iface.link: true iface.mac: 00:50:56:8d:4c:cf iface.managed?: true iface.mode: balance-rr iface.mtu: iface.physical?: true iface.primary: true iface.provision: true iface.shortname: host1.example.com iface.subnet: iface.subnet6: iface.tag: iface.virtual?: false iface.vlanid:", "<% pm_set = @host.puppetmaster.empty? ? false : true puppet_enabled = pm_set || host_param_true?('force-puppet') puppetlabs_enabled = host_param_true?('enable-puppetlabs-repo') %>", "<% os_major = @host.operatingsystem.major.to_i os_minor = @host.operatingsystem.minor.to_i %> <% if ((os_minor < 2) && (os_major < 14)) -%> <% end -%>", "<%= indent 4 do snippet 'subscription_manager_registration' end %>", "<% subnet = @host.subnet %> <% if subnet.respond_to?(:dhcp_boot_mode?) -%> <%= snippet 'kickstart_networking_setup' %> <% end -%>", "'Serial': host.facts['dmi::system::serial_number'], 'Encrypted': host.facts['luks_stat'],", "<%- report_row( 'Host': host.name, 'Operating System': host.operatingsystem, 'Kernel': host.facts['uname::release'], 'Environment': host.single_lifecycle_environment ? host.single_lifecycle_environment.name : nil, 'Erratum': erratum.errata_id, 'Type': erratum.errata_type, 'Published': erratum.issued, 'Applicable since': erratum.created_at, 'Severity': erratum.severity, 'Packages': erratum.package_names, 'CVEs': erratum.cves, 'Reboot suggested': erratum.reboot_suggested, ) -%>" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_hosts/Template_Writing_Reference_managing-hosts
Using single sign-on with JBoss EAP
Using single sign-on with JBoss EAP Red Hat JBoss Enterprise Application Platform 8.0 Guide to using single sign-on to add authentication to applications deployed on JBoss EAP Red Hat Customer Content Services
[ "mvn archetype:generate -DgroupId= USD{group-to-which-your-application-belongs} -DartifactId= USD{name-of-your-application} -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false", "mvn archetype:generate -DgroupId=com.example.app -DartifactId=simple-webapp-example -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false", "cd <name-of-your-application>", "cd simple-webapp-example", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\"> <modelVersion>4.0.0</modelVersion> <groupId>com.example.app</groupId> <artifactId>simple-webapp-example</artifactId> <version>1.0-SNAPSHOT</version> <packaging>war</packaging> <name>simple-webapp-example Maven Webapp</name> <!-- FIXME change it to the project's website --> <url>http://www.example.com</url> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <maven.compiler.source>11</maven.compiler.source> <maven.compiler.target>11</maven.compiler.target> <version.maven.war.plugin>3.4.0</version.maven.war.plugin> </properties> <dependencies> <dependency> <groupId>jakarta.servlet</groupId> <artifactId>jakarta.servlet-api</artifactId> <version>6.0.0</version> <scope>provided</scope> </dependency> </dependencies> <build> <finalName>USD{project.artifactId}</finalName> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-war-plugin</artifactId> <version>USD{version.maven.war.plugin}</version> </plugin> <plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-maven-plugin</artifactId> <version>4.2.2.Final</version> </plugin> </plugins> </build> </project>", "mvn install", "[INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 0.795 s [INFO] Finished at: 2022-04-28T17:39:48+05:30 [INFO] ------------------------------------------------------------------------", "mkdir -p src/main/java/<path_based_on_artifactID>", "mkdir -p src/main/java/com/example/app", "cd src/main/java/<path_based_on_artifactID>", "cd src/main/java/com/example/app", "package com.example.app; import java.io.IOException; import java.io.PrintWriter; import java.security.Principal; import jakarta.servlet.ServletException; import jakarta.servlet.annotation.WebServlet; import jakarta.servlet.http.HttpServlet; import jakarta.servlet.http.HttpServletRequest; import jakarta.servlet.http.HttpServletResponse; /** * A simple secured HTTP servlet. It returns the user name of obtained * from the logged-in user's Principal. If there is no logged-in user, * it returns the text \"NO AUTHENTICATED USER\". */ @WebServlet(\"/secured\") public class SecuredServlet extends HttpServlet { @Override protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { try (PrintWriter writer = resp.getWriter()) { writer.println(\"<html>\"); writer.println(\" <head><title>Secured Servlet</title></head>\"); writer.println(\" <body>\"); writer.println(\" <h1>Secured Servlet</h1>\"); writer.println(\" <p>\"); writer.print(\" Current Principal '\"); Principal user = req.getUserPrincipal(); writer.print(user != null ? 
user.getName() : \"NO AUTHENTICATED USER\"); writer.print(\"'\"); writer.println(\" </p>\"); writer.println(\" </body>\"); writer.println(\"</html>\"); } } }", "mvn package [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 1.015 s [INFO] Finished at: 2022-04-28T17:48:53+05:30 [INFO] ------------------------------------------------------------------------", "mvn wildfly:deploy", "Secured Servlet Current Principal 'NO AUTHENTICATED USER'", "<path_to_rhbk> /bin/kc.sh start-dev --http-port <offset-number>", "/home/servers/rhbk-22.0/bin/kc.sh start-dev --http-port 8180", "<login-config> <auth-method>OIDC</auth-method> </login-config>", "{ \"client-id\" : \"customer-portal\", 1 \"provider-url\" : \"http://localhost:8180/realms/demo\", 2 \"ssl-required\" : \"external\", 3 \"credentials\" : { \"secret\" : \"234234-234234-234234\" 4 } }", "<subsystem xmlns=\"urn:wildfly:elytron-oidc-client:1.0\"> <secure-deployment name=\"DEPLOYMENT_RUNTIME_NAME.war\"> 1 <client-id>customer-portal</client-id> 2 <provider-url>http://localhost:8180/realms/demo</provider-url> 3 <ssl-required>external</ssl-required> 4 <credential name=\"secret\" secret=\"0aa31d98-e0aa-404c-b6e0-e771dba1e798\" /> 5 </secure-deployment </subsystem>", "<subsystem xmlns=\"urn:wildfly:elytron-oidc-client:1.0\"> <provider name=\" USD{OpenID_provider_name} \"> <provider-url>http://localhost:8080/realms/demo</provider-url> <ssl-required>external</ssl-required> </provider> <secure-deployment name=\"customer-portal.war\"> 1 <provider> USD{OpenID_provider_name} </provider> <client-id>customer-portal</client-id> <credential name=\"secret\" secret=\"0aa31d98-e0aa-404c-b6e0-e771dba1e798\" /> </secure-deployment> <secure-deployment name=\"product-portal.war\"> 2 <provider> USD{OpenID_provider_name} </provider> <client-id>product-portal</client-id> <credential name=\"secret\" secret=\"0aa31d98-e0aa-404c-b6e0-e771dba1e798\" /> </secure-deployment> </subsystem>", "{ \"realm\": \"example_realm\", \"auth-server-url\": \"http://localhost:8180/\", \"ssl-required\": \"external\", \"resource\": \"jbeap-oidc\", \"public-client\": true, \"confidential-port\": 0 }", "\"provider-url\" : \"http://localhost:8180/realms/example_realm\", \"ssl-required\": \"external\", \"client-id\": \"jbeap-oidc\", \"public-client\": true, \"confidential-port\": 0", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <web-app version=\"2.5\" xmlns=\"http://java.sun.com/xml/ns/javaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd\" metadata-complete=\"false\"> <security-constraint> <web-resource-collection> <web-resource-name>secured</web-resource-name> <url-pattern>/secured</url-pattern> </web-resource-collection> <auth-constraint> <role-name>Admin</role-name> 1 </auth-constraint> </security-constraint> <security-role> <role-name>*</role-name> </security-role> </web-app>", "<web-app> <login-config> <auth-method>OIDC</auth-method> 1 </login-config> </web-app>", "{ \"provider-url\" : \"http://localhost:8180/realms/example_realm\", \"ssl-required\": \"external\", \"client-id\": \"jbeap-oidc\", \"public-client\": true, \"confidential-port\": 0 }", 
"/subsystem=elytron-oidc-client/secure-deployment=simple-oidc-example.war/:add(client-id=jbeap-oidc,provider-url=http://localhost:8180/realms/example_realm,public-client=true,ssl-required=external)", "mvn package", "mvn wildfly:deploy", "<login-config> <auth-method>SAML</auth-method> </login-config>", "<keycloak-saml-adapter> <SP entityID=\"\" sslPolicy=\"EXTERNAL\" logoutPage=\"SPECIFY YOUR LOGOUT PAGE!\"> <Keys> <Key signing=\"true\"> <PrivateKeyPem>PRIVATE KEY NOT SET UP OR KNOWN</PrivateKeyPem> <CertificatePem>...</CertificatePem> </Key> </Keys> <IDP entityID=\"idp\" signatureAlgorithm=\"RSA_SHA256\" signatureCanonicalizationMethod=\"http://www.w3.org/2001/10/xml-exc-c14n#\"> <SingleSignOnService signRequest=\"true\" validateResponseSignature=\"true\" validateAssertionSignature=\"false\" requestBinding=\"POST\" bindingUrl=\"http://localhost:8180/realms/example_saml_realm/protocol/saml\"/> <SingleLogoutService signRequest=\"true\" signResponse=\"true\" validateRequestSignature=\"true\" validateResponseSignature=\"true\" requestBinding=\"POST\" responseBinding=\"POST\" postBindingUrl=\"http://localhost:8180/realms/example_saml_realm/protocol/saml\" redirectBindingUrl=\"http://localhost:8180/realms/example_saml_realm/protocol/saml\"/> </IDP> </SP> </keycloak-saml-adapter>", "2023-05-17 19:54:31,586 WARN [org.keycloak.events] (executor-thread-0) type=LOGIN_ERROR, realmId=eba0f106-389f-4216-a676-05fcd0c0c72e, clientId=null, userId=null, ipAddress=127.0.0.1, error=client_not_found, reason=Cannot_match_source_hash", "Can't reset to root in the middle of the path @72", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <web-app version=\"2.5\" xmlns=\"http://java.sun.com/xml/ns/javaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd\" metadata-complete=\"false\"> <security-constraint> <web-resource-collection> <web-resource-name>secured</web-resource-name> <url-pattern>/secured</url-pattern> </web-resource-collection> <auth-constraint> <role-name>Admin</role-name> 1 </auth-constraint> </security-constraint> <security-role> <role-name>*</role-name> </security-role> </web-app>", "<web-app> <login-config> <auth-method>SAML</auth-method> 1 </login-config> </web-app>", "<keycloak-saml-adapter> <SP entityID=\"\" sslPolicy=\"EXTERNAL\" logoutPage=\"SPECIFY YOUR LOGOUT PAGE!\"> <Keys> <Key signing=\"true\"> <PrivateKeyPem>PRIVATE KEY NOT SET UP OR KNOWN</PrivateKeyPem> <CertificatePem>...</CertificatePem> </Key> </Keys> <IDP entityID=\"idp\" signatureAlgorithm=\"RSA_SHA256\" signatureCanonicalizationMethod=\"http://www.w3.org/2001/10/xml-exc-c14n#\"> <SingleSignOnService signRequest=\"true\" validateResponseSignature=\"true\" validateAssertionSignature=\"false\" requestBinding=\"POST\" bindingUrl=\"http://localhost:8180/realms/example_saml_realm/protocol/saml\"/> <SingleLogoutService signRequest=\"true\" signResponse=\"true\" validateRequestSignature=\"true\" validateResponseSignature=\"true\" requestBinding=\"POST\" responseBinding=\"POST\" postBindingUrl=\"http://localhost:8180/realms/example_saml_realm/protocol/saml\" redirectBindingUrl=\"http://localhost:8180/realms/example_saml_realm/protocol/saml\"/> </IDP> </SP> </keycloak-saml-adapter>", "/subsystem=keycloak-saml/secure-deployment=YOUR-WAR.war/:add /subsystem=keycloak-saml/secure-deployment=YOUR-WAR.war/SP=\"http://localhost:8080/simple-webapp-example/\"/:add(sslPolicy=EXTERNAL,logoutPage=\"SPECIFY YOUR LOGOUT PAGE!\" 
/subsystem=keycloak-saml/secure-deployment=YOUR-WAR.war/SP=\"http://localhost:8080/simple-webapp-example/\"/Key=KEY1:add(signing=true, PrivateKeyPem=\"...\", CertificatePem=\"...\") /subsystem=keycloak-saml/secure-deployment=YOUR-WAR.war/SP=\"http://localhost:8080/simple-webapp-example/\"/IDP=idp/:add( SingleSignOnService={ signRequest=true, validateResponseSignature=true, validateAssertionSignature=false, requestBinding=POST, bindingUrl=http://localhost:8180/realms/example-saml-realm/protocol/saml}, SingleLogoutService={ signRequest=true, signResponse=true, validateRequestSignature=true, validateResponseSignature=true, requestBinding=POST, responseBinding=POST, postBindingUrl=http://localhost:8180/realms/example-saml-realm/protocol/saml, redirectBindingUrl=http://localhost:8180/realms/example-saml-realm/protocol/saml} ) /subsystem=keycloak-saml/secure-deployment=YOUR-WAR.war/SP=\"http://localhost:8080/simple-webapp-example/\"/IDP=idp/:write-attribute(name=signatureAlgorithm,value=RSA_SHA256) /subsystem=keycloak-saml/secure-deployment=YOUR-WAR.war/SP=\"http://localhost:8080/simple-webapp-example/\"/IDP=idp/:write-attribute(name=signatureCanonicalizationMethod,value=http://www.w3.org/2001/10/xml-exc-c14n#)", "/subsystem=keycloak-saml/secure-deployment=YOUR-WAR.war/SP=\"\"/:add(sslPolicy=EXTERNAL,logoutPage=\"SPECIFY YOUR LOGOUT PAGE!\"", "<EAP_HOME> /bin/jboss-cli.sh -c --file=<path_to_the_file>/keycloak-saml-subsystem.cli", "mvn wildfly:deploy", "/subsystem=elytron/virtual-security-domain= <deployment_name> :add()", "/subsystem=elytron/virtual-security-domain=simple-ear-example.ear:add()", "@SecurityDomain(\" <deployment_name> \")", "@SecurityDomain(\"simple-ear-example.ear\") @Remote(RemoteHello.class) @Stateless public class RemoteHelloBean implements RemoteHello { @Resource private SessionContext context; @Override public String whoAmI() { return context.getCallerPrincipal().getName(); } }", "mvn wildfly:deploy", "/subsystem=elytron/virtual-security-domain= <deployment_name> :add(outflow-security-domains=[ <domain_to_propagate_to> ])", "/subsystem=elytron/virtual-security-domain=simple-ear-example.ear:add(outflow-security-domains=[exampleEJBSecurityDomain])", "/subsystem=elytron/security-domain= <security_domain_name> :write-attribute(name=trusted-virtual-security-domains,value=[ <deployment_name> ])", "/subsystem=elytron/security-domain=exampleEJBSecurityDomain:write-attribute(name=trusted-virtual-security-domains,value=[simple-ear-example.ear])", "reload", "mvn wildfly:deploy", "/subsystem=elytron-oidc-client/provider=keycloak:add(provider-url= <OIDC_provider_URL> )", "/subsystem=elytron-oidc-client/provider=keycloak:add(provider-url=http://localhost:8180/realms/example_jboss_infra)", "/subsystem=elytron-oidc-client/secure-deployment=wildfly-management:add(provider= <OIDC_provider_name> ,client-id= <OIDC_client_name> ,principal-attribute= <attribute_to_use_as_principal> ,bearer-only=true,ssl-required= <internal_or_external> )", "/subsystem=elytron-oidc-client/secure-deployment=wildfly-management:add(provider=keycloak,client-id=jboss-management,principal-attribute=preferred_username,bearer-only=true,ssl-required=EXTERNAL)", "/core-service=management/access=authorization:write-attribute(name=provider,value=rbac) /core-service=management/access=authorization:write-attribute(name=use-identity-roles,value=true)", "/subsystem=elytron-oidc-client/secure-server=wildfly-console:add(provider= <OIDC_provider_name> ,client-id= <OIDC_client_name> ,public-client=true)", 
"/subsystem=elytron-oidc-client/secure-server=wildfly-console:add(provider=keycloak,client-id=jboss-console,public-client=true)" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html-single/using_single_sign-on_with_jboss_eap/index
Chapter 8. Configuring your Logging deployment
Chapter 8. Configuring your Logging deployment 8.1. Configuring CPU and memory limits for logging components You can configure both the CPU and memory limits for each of the logging components as needed. 8.1.1. Configuring CPU and memory limits The logging components allow for adjustments to both the CPU and memory limits. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: $ oc -n openshift-logging edit ClusterLogging instance apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: openshift-logging ... spec: managementState: "Managed" logStore: type: "elasticsearch" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: "gp2" size: "200G" redundancyPolicy: "SingleRedundancy" visualization: type: "kibana" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi type: fluentd 1 Specify the CPU and memory limits and requests for the log store as needed. For Elasticsearch, you must adjust both the request value and the limit value. 2 3 Specify the CPU and memory limits and requests for the log visualizer as needed. 4 Specify the CPU and memory limits and requests for the log collector as needed.
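If you prefer not to open an interactive editor, you can apply the same limits non-interactively. The following is a minimal sketch, not taken from the product documentation: it assumes the ClusterLogging CR is named instance in the openshift-logging namespace (as in the example above) and that the collector pods carry the component=collector label; adjust the values and the label selector for your environment.

# Apply new collector limits and requests with a JSON merge patch instead of an editor.
oc -n openshift-logging patch clusterlogging/instance --type merge -p '
{"spec":{"collection":{"resources":{
  "limits":{"memory":"736Mi"},
  "requests":{"cpu":"200m","memory":"736Mi"}}}}}'

# Confirm the redeployed collector pods report the new resource settings.
oc -n openshift-logging get pods -l component=collector \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].resources}{"\n"}{end}'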
[ "oc -n openshift-logging edit ClusterLogging instance", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi type: fluentd" ]
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/logging/configuring-your-logging-deployment
Chapter 2. Creating a filtered AWS integration
Chapter 2. Creating a filtered AWS integration Note If you created an unfiltered AWS integration, do not complete the following steps. Your AWS integration is already complete. If you are using RHEL metering, after you integrate your data with cost management, go to Adding RHEL metering to an AWS integration to finish configuring your integration for RHEL metering. AWS is a third-party product and its UI and documentation can change. The instructions for configuring third-party integrations are correct at the time of publishing. For the most up-to-date information, see the AWS documentation . To share a subset of your billing data with Red Hat, you can configure a function script in AWS. This script will filter your billing data and export it to object storage so that cost management can then access and read the filtered data. Add your AWS integration to cost management from the Integrations page . Prerequisites You must have a Red Hat Hybrid Cloud Console service account . To add data integrations to cost management, you must have a Red Hat account with Cloud Administrator permissions. 2.1. Adding an AWS account as an integration Add an AWS integration so that cost management can process your AWS Cost and Usage Reports . You can configure cost management to filter the data that you send. Procedure From Red Hat Hybrid Cloud Console , click Settings > Integrations . On the Settings page, click Create Integration Cloud to enter the Add a cloud integration wizard. On the Select integration type step, select Amazon Web Services . Click . Enter a name for the integration and click . On the Select configuration step, select Manual configuration . Do not select the recommended configuration mode when you set up cost management integrations. The recommendation is for other workflows. In the Select application step, select Cost management . Click . 2.2. Creating an AWS S3 bucket to store your Athena billing data Create an Amazon S3 bucket to store Athena billing reports. Procedure Log in to your AWS account . In the AWS Billing Console , create a data export that will be delivered to your S3 bucket. Specify the following values and accept the defaults for any other values: Export type : Legacy CUR export Report name : <rh_cost_report> (save this name, you will use it later) Additional report details : Include resource IDs S3 bucket : Select an S3 bucket that you configured previously or create a new bucket and accept the default settings. Time granularity : Hourly Enable report data integration for : Amazon Athena which is required for lambda queries Compression type : Parquet Report path prefix : cost Note For more details about configuration, see the AWS Billing and Cost Management documentation. 2.3. Creating a bucket to store filtered data reporting To share your filtered data with Red Hat, you must create a second bucket to store the data. Procedure In your AWS account , navigate to Configure S3 Bucket and click Configure . Create a bucket and apply the default policy. Click Save . In the cost management Create an integration wizard: On the Create storage step, paste the name of your S3 bucket and select the region that it was created in. Click . On the Create cost and usage report step, select I wish to manually customize the CUR sent to Cost Management . Click . 2.4. Activating AWS tags Tags can help you organize your AWS resources in cost management. Activate your tags in AWS and then give cost management permissions to import them automatically. 
In the AWS Billing console : Click Cost Allocation Tags . Select the tags that you want to use in cost management. Click Activate . If your organization is converting systems from CentOS 7 to RHEL and using hourly billing, activate the com_redhat_rhel tag for your systems. If you are tagging instances of RHEL that you want to meter in AWS, select Include RHEL usage . Return to the Red Hat Hybrid Cloud Console Create an integration wizard and select Include RHEL usage . For more information about tagging, see Adding tags to an AWS resource . 2.5. Configuring an IAM policy to enable account access for AWS Cost and Usage Reports Cost management needs your AWS Cost and Usage Reports to display data. To provide access to only your stored information, create an Identity and Access Management (IAM) policy and role in AWS. In the cost management Add a cloud integration wizard: On the Tags, aliases, and organizational units step, select any additional data points that you want to include: Select Include AWS account aliases to display an AWS account alias rather than an account number. In the step of the wizard, this selection will populate iam:ListAccountAliases in your IAM JSON policy. Select Include AWS organization units if you are using consolidated billing rather than the account ID. In the step of the wizard, this selection will populate _organization:List*_ and _organizations:Describe*_ in your IAM JSON policy. Click Copy the IAM JSON policy that is generated based on your selections. In the AWS Identity and Access Management console : Create a new IAM policy for the S3 bucket that you configured. Select the JSON tab and enter the IAM JSON policy that you copied from the Red Hat Hybrid Cloud Console Add a cloud integration wizard. Example IAM JSON policy + { "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "s3:Get*", "s3:List*" ], "Resource": [ "arn:aws:s3:::<your_bucket_name>", 1 "arn:aws:s3:::<your_bucket_name>/*" ] }, { "Sid": "VisualEditor1", "Effect": "Allow", "Action": [ "s3:ListBucket", "cur:DescribeReportDefinitions" ], "Resource": "*" } ] } , create a new IAM role: Select Another AWS account as the type of trusted entity. Enter 589173575009 for the Account ID to give Red Hat Hybrid Cloud Console read access to the AWS account's cost data. In the cost management Add a cloud integration wizard: Click . Copy your External ID from the Create IAM role step. In the AWS Identity and Access Management console : Enter your External ID . Attach the IAM policy that you configured. Enter a role name and description. In Roles , open the summary screen for the role that you created. Copy the Role ARN . It is a string that starts with arn:aws: . In the cost management Add a cloud integration wizard: Click Enter your Role ARN and click . Review the details of your cloud integration and click Add . steps To customize your AWS data export, return to AWS and configure Athena and Lambda to filter your reports. 2.6. Enabling account access for Athena Create an IAM policy and role for cost management to use. This configuration provides access to the stored information and nothing else. Procedure From the AWS Identity and Access Management (IAM) console, create an IAM policy for the Athena Lambda functions you will configure. 
Select the JSON tab and paste the following content in the JSON policy text box: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "athena:*" ], "Resource": [ "*" ] }, { "Effect": "Allow", "Action": [ "glue:CreateDatabase", "glue:DeleteDatabase", "glue:GetDatabase", "glue:GetDatabases", "glue:UpdateDatabase", "glue:CreateTable", "glue:DeleteTable", "glue:BatchDeleteTable", "glue:UpdateTable", "glue:GetTable", "glue:GetTables", "glue:BatchCreatePartition", "glue:CreatePartition", "glue:DeletePartition", "glue:BatchDeletePartition", "glue:UpdatePartition", "glue:GetPartition", "glue:GetPartitions", "glue:BatchGetPartition" ], "Resource": [ "*" ] }, { "Effect": "Allow", "Action": [ "s3:GetBucketLocation", "s3:GetObject", "s3:ListBucket", "s3:ListBucketMultipartUploads", "s3:ListMultipartUploadParts", "s3:AbortMultipartUpload", "s3:CreateBucket", "s3:PutObject", "s3:PutBucketPublicAccessBlock" ], "Resource": [ "arn:aws:s3:::CHANGE-ME*" 1 ] }, { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::CHANGE-ME*" 2 ] }, { "Effect": "Allow", "Action": [ "s3:ListBucket", "s3:GetBucketLocation", "s3:ListAllMyBuckets" ], "Resource": [ "*" ] }, { "Effect": "Allow", "Action": [ "sns:ListTopics", "sns:GetTopicAttributes" ], "Resource": [ "*" ] }, { "Effect": "Allow", "Action": [ "cloudwatch:PutMetricAlarm", "cloudwatch:DescribeAlarms", "cloudwatch:DeleteAlarms", "cloudwatch:GetMetricData" ], "Resource": [ "*" ] }, { "Effect": "Allow", "Action": [ "lakeformation:GetDataAccess" ], "Resource": [ "*" ] }, { "Effect": "Allow", "Action": [ "logs:*" ], "Resource": "*" } ] } 1 1 1 2 Replace CHANGE-ME* in both locations with the ARN for the S3 bucket you configured in step 2.2. Name the policy and complete the creation of the policy. Keep the AWS IAM console open because you will need it for the step. In the AWS IAM console, create a new IAM role: For the type of trusted entity, select AWS service . Select Lambda. Attach the IAM policy you just configured. Enter a role name and description and finish creating the role. Store your login information in AWS Secrets Manager and add it to the role you created. Select Secret type: Other type of secret . Create a key for your Red Hat Hybrid Cloud Console client_id . Create a key for your Red Hat Hybrid Cloud Console client_secret . Add the values for your user name and password to the appropriate key. Click Continue , then name and store your secret. Update the role you created for your Lambda functions. Include the following code to reference the secret stored in AWS Secrets Manager: { "Sid": "VisualEditor3", "Effect": "Allow", "Action": [ "secretsmanager:GetSecretValue", "secretsmanager:DescribeSecret" ], "Resource": "*" } 2.6.1. Configuring Athena for report generation Configuring Athena to provide a filtered data export for cost management. The following configuration only provides access to additional stored information. It does not provide access to anything else: Procedure In the AWS S3 console, go to the S3 bucket you configured in step 2.2. Then, go to the crawler-cfn.yml file, which is in the path created by your data export you configured. For example: {bucket-name}/{S3_path_prefix}/{export_name}/crawler-cfn.yml . Copy the Object URL for the crawler-cfn.yml . From Cloudformation in the AWS console, create a stack with new resources: Choose an existing template. Select Specify Template . Select Template Source: Amazon S3 URL . Paste the object URL you copied before. Enter a name and click . 
Click I acknowledge that AWS Cloudformation might create IAM resources and then click Submit . 2.6.2. Building an Athena query Create an Athena query that queries the data export for your Red Hat expenses and creates a report of your filtered expenses. You might need just the query included with the example script, for example, if you are filtering for Red Hat spending. If you need something more advanced, create a custom query. If you are using RHEL metering, you must adjust the query to return data that is specific to your RHEL subscriptions. The following steps will guide you through creating a RHEL subscription query. Example Athena query for Red Hat spend SELECT * FROM <your_export_name> WHERE ( bill_billing_entity = 'AWS Marketplace' AND line_item_legal_entity like '%Red Hat%' ) OR ( line_item_legal_entity like '%Amazon Web Services%' AND line_item_line_item_description like '%Red Hat%' ) OR ( line_item_legal_entity like '%Amazon Web Services%' AND line_item_line_item_description like '%RHEL%' ) OR ( line_item_legal_entity like '%AWS%' AND line_item_line_item_description like '%Red Hat%' ) OR ( line_item_legal_entity like '%AWS%' AND line_item_line_item_description like '%RHEL%' ) OR ( line_item_legal_entity like '%AWS%' AND product_product_name like '%Red Hat%' ) OR ( line_item_legal_entity like '%Amazon Web Services%' AND product_product_name like '%Red Hat%' ) AND year = '2024' AND month = '07' In your AWS account : Go to Amazon Athena from the editor tab. From the Data source menu, select AwsDataCatalog . From the Database menu, select your data export. Your data export name is prepended with athenacurcfn_ followed by your data export name. For example, {your_export_name} . Paste the following example query into the Query field. Replace the your_export_name value with your data export name. SELECT column_name FROM information_schema.columns WHERE table_name = '<your_export_name>' AND column_name LIKE 'resource_tags_%'; Click Run . The results of this query returns all the tag related columns for your data set. Copy the tag column that matches the column used for your RHEL tags. Paste in the following example query. Replace the your_export_name , the tags column copied in the step before, and the year and month you want to query. The result returns EC2 instances appropriately tagged for RHEL subscriptions. Copy and save this query for use in the future Lambda function. SELECT * FROM <your_export_name> WHERE ( line_item_product_code = 'AmazonEC2' AND strpos(lower(<rhel_tag_column_name>), 'com_redhat_rhel') > 0 ) AND year = '<year>' AND month = '<month>' 2.6.3. Creating a Lambda function for Athena You must create a Lambda function that queries the data export for your Red Hat related expenses and creates a report of your filtered expenses. Procedure In the AWS console, go to Lambda and click Create function . Click Author from scratch . Enter a name your function. From the Runtime menu, select the latest version of Python available. From the Architecture menu, select x86_64 . Under Permissions select the Athena role you created. To add the query you built as part of the Lambda function, click Create function to save your progress. From the function Code tab, paste this script . Update the following lines: your_integration_external_id Enter the integration UUID you copied in the Enabling account access for cost and usage consumption step. 
bucket Enter the name of the S3 bucket you created to store filtered reports during the Creating a bucket for storing filtered data reporting step. database Enter the database name used in the Building your Athena query step. export_name Enter the name of your data export from when you created an AWS S3 bucket for storing your cost data. Update the default query with your custom one by replacing the where clause, for example: # Athena query query = f"SELECT * FROM {database}.{export_name} WHERE (line_item_product_code = 'AmazonEC2' AND strpos(lower(<rhel_tag_column_name>), 'com_redhat_rhel') > 0) AND year = '{year}' AND month = '{month}'" Click Deploy to test the function. 2.6.4. Creating a Lambda function to post the report files You must create a second Lambda function to post your filtered reports in a bucket that Red Hat can access. Procedure Go to Lambda in the AWS console and click Create function . Click Author from scratch . Enter a name for your function. From the Runtime menu, select the latest version of Python available. Select x86_64 as the Architecture. Under Permissions select the Athena role you created. Click Create function . Paste this script into the function and replace the following lines: secret_name = "CHANGEME" Enter your secret name. bucket = "<your_S3_Bucket_Name>" Enter the name of the S3 bucket you created to store filtered reports during the Creating a bucket for storing filtered data reporting step. Click Deploy to test the function. 2.7. Creating event bridge schedules You must trigger the Lambda functions you created by creating Amazon EventBridge schedules. Procedure Create two Amazon EventBridge schedules to trigger each of the functions that you created. You must trigger these functions at different cadences so that the Athena query is completed before it sends the reports: Add a name and description. In the Group field, select Default . In the Occurrence field, select Recurring schedule . In the Type field, select Cron-based . Set the cron-based schedules 12 hours apart. The following example triggers the function at 9AM and 9PM, 0 9 * * ? * and 0 21 * * ? * . Set a flexible time window. Click Next . Set the Target detail to AWS Lambda invoke to associate this schedule with the Lambda function: Select the Lambda function you created before. Click Next . Enable the schedule: Configure the retry logic. Ignore the encryption. Set the permissions to Create new role on the fly . Click Next . Review your selections and click Create . 2.8. Creating additional cloud functions to collect finalized data AWS sends final reports for the last month at the start of the following month. Send these finalized reports to Cost management, which will analyze the extra information. Procedure Create an Athena query for the Lambda function: Create a function for querying Athena. Select Author from scratch . Select the Python runtime. Select the x86_64 architecture. Select the role created before for permissions. Click Create . Click the Code tab to add a script to collect the finalized data. Copy the Athena query function and add it to the query. Update the <integration_uuid> with the integration_uuid from the integration you created on console.redhat.com , which you can find by going to the Integrations page and clicking your integration. Update the BUCKET and DATABASE variables with the bucket name and database you created. Then, update export_name with the name of the data export Athena query you created before.
Remove the comment from the following code: # last_month = now.replace(day=1) - timedelta(days=1) # year = last_month.strftime("%Y") # month = last_month.strftime("%m") # day = last_month.strftime("%d") # file_name = 'finalized-data.json' Click Deploy . Then click Test to see the execution results. Create a Lambda function to post the report files to cost management: Select Author from scratch . Name your function. Select the Python runtime. Select the x86_64 architecture. Select the role created before for permissions. Click Create . Click the Code tab to add a script to post the finalized data. Copy the post function and add it to the query. Update the secret_name with the name of your secret in AWS Secrets Manager. Update the bucket with the bucket name you created. Remove the comment from the following code: # file_name = 'finalized_data.json' Click Deploy . Then click Test to see the execution results. Create an EventBridge schedule to trigger the two functions. For more information, see Section 2.7, "Creating event bridge schedules" . Set the EventBridge schedule to run one time a month on or after the 15th of the month because your AWS bill for the earlier period is final by that date. For example, (0 9 15 * ? *) and (0 21 15 * ? *) . After completing these steps, cost management will begin collecting Cost and Usage data from your AWS account and any linked AWS accounts. Note The data can take a few days to populate before it shows on the cost management dashboard.
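Before relying on the scheduled runs, you can exercise the pipeline once by hand from the AWS CLI. This is a sketch only: the function names rh-filter-athena-query and rh-post-filtered-report are placeholders for whatever you named your two Lambda functions, and <your_filtered_report_bucket> is the bucket you created for filtered data.

# Run the Athena query function, then the posting function, and inspect their responses.
aws lambda invoke --function-name rh-filter-athena-query /tmp/query-out.json && cat /tmp/query-out.json
aws lambda invoke --function-name rh-post-filtered-report /tmp/post-out.json && cat /tmp/post-out.json

# Confirm that filtered report objects were written to the bucket that cost management reads.
aws s3 ls s3://<your_filtered_report_bucket>/ --recursive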
[ "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"VisualEditor0\", \"Effect\": \"Allow\", \"Action\": [ \"s3:Get*\", \"s3:List*\" ], \"Resource\": [ \"arn:aws:s3:::<your_bucket_name>\", 1 \"arn:aws:s3:::<your_bucket_name>/*\" ] }, { \"Sid\": \"VisualEditor1\", \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"cur:DescribeReportDefinitions\" ], \"Resource\": \"*\" } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"athena:*\" ], \"Resource\": [ \"*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"glue:CreateDatabase\", \"glue:DeleteDatabase\", \"glue:GetDatabase\", \"glue:GetDatabases\", \"glue:UpdateDatabase\", \"glue:CreateTable\", \"glue:DeleteTable\", \"glue:BatchDeleteTable\", \"glue:UpdateTable\", \"glue:GetTable\", \"glue:GetTables\", \"glue:BatchCreatePartition\", \"glue:CreatePartition\", \"glue:DeletePartition\", \"glue:BatchDeletePartition\", \"glue:UpdatePartition\", \"glue:GetPartition\", \"glue:GetPartitions\", \"glue:BatchGetPartition\" ], \"Resource\": [ \"*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetBucketLocation\", \"s3:GetObject\", \"s3:ListBucket\", \"s3:ListBucketMultipartUploads\", \"s3:ListMultipartUploadParts\", \"s3:AbortMultipartUpload\", \"s3:CreateBucket\", \"s3:PutObject\", \"s3:PutBucketPublicAccessBlock\" ], \"Resource\": [ \"arn:aws:s3:::CHANGE-ME*\" 1 ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"s3:ListBucket\" ], \"Resource\": [ \"arn:aws:s3:::CHANGE-ME*\" 2 ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"s3:GetBucketLocation\", \"s3:ListAllMyBuckets\" ], \"Resource\": [ \"*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"sns:ListTopics\", \"sns:GetTopicAttributes\" ], \"Resource\": [ \"*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"cloudwatch:PutMetricAlarm\", \"cloudwatch:DescribeAlarms\", \"cloudwatch:DeleteAlarms\", \"cloudwatch:GetMetricData\" ], \"Resource\": [ \"*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"lakeformation:GetDataAccess\" ], \"Resource\": [ \"*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"logs:*\" ], \"Resource\": \"*\" } ] }", "{ \"Sid\": \"VisualEditor3\", \"Effect\": \"Allow\", \"Action\": [ \"secretsmanager:GetSecretValue\", \"secretsmanager:DescribeSecret\" ], \"Resource\": \"*\" }", "SELECT * FROM <your_export_name> WHERE ( bill_billing_entity = 'AWS Marketplace' AND line_item_legal_entity like '%Red Hat%' ) OR ( line_item_legal_entity like '%Amazon Web Services%' AND line_item_line_item_description like '%Red Hat%' ) OR ( line_item_legal_entity like '%Amazon Web Services%' AND line_item_line_item_description like '%RHEL%' ) OR ( line_item_legal_entity like '%AWS%' AND line_item_line_item_description like '%Red Hat%' ) OR ( line_item_legal_entity like '%AWS%' AND line_item_line_item_description like '%RHEL%' ) OR ( line_item_legal_entity like '%AWS%' AND product_product_name like '%Red Hat%' ) OR ( line_item_legal_entity like '%Amazon Web Services%' AND product_product_name like '%Red Hat%' ) AND year = '2024' AND month = '07'", "SELECT column_name FROM information_schema.columns WHERE table_name = '<your_export_name>' AND column_name LIKE 'resource_tags_%';", "SELECT * FROM <your_export_name> WHERE ( line_item_product_code = 'AmazonEC2' AND strpos(lower(<rhel_tag_column_name>), 'com_redhat_rhel') > 0 ) AND year = '<year>' AND month = '<month>'", "Athena query query = f\"SELECT * FROM {database}.{export_name} WHERE (line_item_product_code = 'AmazonEC2' AND strpos(lower(<rhel_tag_column_name>), 
'com_redhat_rhel') > 0) AND year = '{year}' AND month = '{month}'\"", "last_month = now.replace(day=1) - timedelta(days=1) year = last_month.strftime(\"%Y\") month = last_month.strftime(\"%m\") day = last_month.strftime(\"%d\") file_name = 'finalized-data.json'", "file_name = 'finalized_data.json'" ]
https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/integrating_amazon_web_services_aws_data_into_cost_management/assembly-adding-filtered-aws-int
14.5.23. Creating a Virtual Machine XML Dump (Configuration File)
14.5.23. Creating a Virtual Machine XML Dump (Configuration File) Output a guest virtual machine's XML configuration file with virsh : This command outputs the guest virtual machine's XML configuration file to standard out ( stdout ). You can save the data by piping the output to a file. An example of piping the output to a file called guest.xml : You can use this file, guest.xml , to recreate the guest virtual machine (refer to Section 14.6, "Editing a Guest Virtual Machine's configuration file"). You can edit this XML configuration file to configure additional devices or to deploy additional guest virtual machines. An example of virsh dumpxml output: Note that the <shareable/> flag is set. This indicates that the device is expected to be shared between domains (assuming the hypervisor and OS support this), which means that caching should be deactivated for that device.
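A short round trip with the commands above, assuming the guest from the example output ( guest1-rhel6-64 ) and that you only adjust devices rather than the guest name or UUID:

# Dump the configuration, edit it, then re-register and start the guest from the file.
virsh dumpxml guest1-rhel6-64 > guest.xml
vi guest.xml                       # adjust devices as needed
virsh define guest.xml             # registers the edited configuration
virsh start guest1-rhel6-64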
[ "virsh dumpxml {guest-id, guestname or uuid}", "virsh dumpxml GuestID > guest.xml", "virsh dumpxml guest1-rhel6-64 <domain type='kvm'> <name>guest1-rhel6-64</name> <uuid>b8d7388a-bbf2-db3a-e962-b97ca6e514bd</uuid> <memory>2097152</memory> <currentMemory>2097152</currentMemory> <vcpu>2</vcpu> <os> <type arch='x86_64' machine='rhel6.2.0'>hvm</type> <boot dev='hd'/> </os> <features> <acpi/> <apic/> <pae/> </features> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/libexec/qemu-kvm</emulator> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='none' io='threads'/> <source file='/home/guest-images/guest1-rhel6-64.img'/> <target dev='vda' bus='virtio'/> <shareable/< <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> </disk> <interface type='bridge'> <mac address='52:54:00:b9:35:a9'/> <source bridge='br0'/> <model type='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </interface> <serial type='pty'> <target port='0'/> </serial> <console type='pty'> <target type='serial' port='0'/> </console> <input type='tablet' bus='usb'/> <input type='mouse' bus='ps2'/> <graphics type='vnc' port='-1' autoport='yes'/> <sound model='ich6'> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </sound> <video> <model type='cirrus' vram='9216' heads='1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </video> <memballoon model='virtio'> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> </memballoon> </devices> </domain>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-domain_commands-creating_a_virtual_machine_xml_dump_configuration_file
Chapter 2. Uploading current system data to Insights
Chapter 2. Uploading current system data to Insights Whether you are using the compliance service to view system compliance status, remediate issues, or report status to stakeholders, upload current data from your systems to see the most up-to-date information. Procedure Run the following command on each system to upload current data to Insights for Red Hat Enterprise Linux: [root@server ~]# insights-client --compliance
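To keep compliance data current without manual uploads, you can run the same command on a schedule. A minimal sketch, assuming the system is already registered with insights-client and that the client binary is installed at /usr/bin/insights-client:

# Check registration, upload once now, then schedule a daily upload at 02:30 via cron.
insights-client --status
insights-client --compliance
echo '30 2 * * * root /usr/bin/insights-client --compliance' > /etc/cron.d/insights-compliance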
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/generating_compliance_service_reports/assembly-compl-uploading-current-data-systems
Schedule and quota APIs
Schedule and quota APIs OpenShift Container Platform 4.16 Reference guide for schedule and quota APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/schedule_and_quota_apis/index
Chapter 2. Comprehensive guide to getting started with Red Hat OpenShift Service on AWS
Chapter 2. Comprehensive guide to getting started with Red Hat OpenShift Service on AWS Note If you are looking for a quickstart guide for ROSA, see Red Hat OpenShift Service on AWS quickstart guide . Follow this getting started document to create a Red Hat OpenShift Service on AWS (ROSA) cluster, grant user access, deploy your first application, and learn how to revoke user access and delete your cluster. You can create a ROSA cluster either with or without the AWS Security Token Service (STS). The procedures in this document enable you to create a cluster that uses AWS STS. For more information about using AWS STS with ROSA clusters, see Using the AWS Security Token Service . 2.1. Prerequisites You reviewed the introduction to Red Hat OpenShift Service on AWS (ROSA) , and the documentation on ROSA architecture models and architecture concepts . You have read the documentation on limits and scalability and the guidelines for planning your environment . You have reviewed the detailed AWS prerequisites for ROSA with STS . You have the AWS service quotas that are required to run a ROSA cluster . 2.2. Setting up the environment Before you create a Red Hat OpenShift Service on AWS (ROSA) cluster, you must set up your environment by completing the following tasks: Verify ROSA prerequisites against your AWS and Red Hat accounts. Install and configure the required command line interface (CLI) tools. Verify the configuration of the CLI tools. You can follow the procedures in this section to complete these setup requirements. 2.2.1. Verifying ROSA prerequisites Use the steps in this procedure to enable Red Hat OpenShift Service on AWS (ROSA) in your AWS account. Prerequisites You have a Red Hat account. You have an AWS account. Note Consider using a dedicated AWS account to run production clusters. If you are using AWS Organizations, you can use an AWS account within your organization or create a new one . Procedure Sign in to the AWS Management Console . Navigate to the ROSA service . Click Get started . The Verify ROSA prerequisites page opens. Under ROSA enablement , ensure that a green check mark and You previously enabled ROSA are displayed. If not, follow these steps: Select the checkbox beside I agree to share my contact information with Red Hat . Click Enable ROSA . After a short wait, a green check mark and You enabled ROSA message are displayed. Under Service Quotas , ensure that a green check and Your quotas meet the requirements for ROSA are displayed. If you see Your quotas don't meet the minimum requirements , take note of the quota type and the minimum listed in the error message. See Amazon's documentation on requesting a quota increase for guidance. It may take several hours for Amazon to approve your quota request. Under ELB service-linked role , ensure that a green check mark and AWSServiceRoleForElasticLoadBalancing already exists are displayed. Click Continue to Red Hat . The Get started with Red Hat OpenShift Service on AWS (ROSA) page opens in a new tab. You have already completed Step 1 on this page, and can now continue with Step 2. Additional resources Troubleshoot ROSA enablement errors 2.2.2. Installing and configuring the required CLI tools Several command line interface (CLI) tools are required to deploy and work with your cluster. Prerequisites You have an AWS account. You have a Red Hat account. Procedure Log in to your Red Hat and AWS accounts to access the download page for each required tool. Log in to your Red Hat account at console.redhat.com . 
Log in to your AWS account at aws.amazon.com . Install and configure the latest AWS CLI ( aws ). Install the AWS CLI by following the AWS Command Line Interface documentation appropriate for your workstation. Configure the AWS CLI by specifying your aws_access_key_id , aws_secret_access_key , and region in the .aws/credentials file. For more information, see AWS Configuration basics in the AWS documentation. Note You can optionally use the AWS_DEFAULT_REGION environment variable to set the default AWS region. Query the AWS API to verify if the AWS CLI is installed and configured correctly: USD aws sts get-caller-identity --output text Example output <aws_account_id> arn:aws:iam::<aws_account_id>:user/<username> <aws_user_id> Install and configure the latest ROSA CLI ( rosa ). Navigate to Downloads . Find Red Hat OpenShift Service on AWS command line interface (`rosa) in the list of tools and click Download . The rosa-linux.tar.gz file is downloaded to your default download location. Extract the rosa binary file from the downloaded archive. The following example extracts the binary from a Linux tar archive: USD tar xvf rosa-linux.tar.gz Move the rosa binary file to a directory in your execution path. In the following example, the /usr/local/bin directory is included in the path of the user: USD sudo mv rosa /usr/local/bin/rosa Verify that the ROSA CLI is installed correctly by querying the rosa version: USD rosa version Example output 1.2.47 Your ROSA CLI is up to date. Log in to the ROSA CLI using an offline access token. Run the login command: USD rosa login Example output To login to your Red Hat account, get an offline access token at https://console.redhat.com/openshift/token/rosa ? Copy the token and paste it here: Navigate to the URL listed in the command output to view your offline access token. Enter the offline access token at the command line prompt to log in. ? Copy the token and paste it here: ******************* [full token length omitted] Note In the future you can specify the offline access token by using the --token="<offline_access_token>" argument when you run the rosa login command. Verify that you are logged in and confirm that your credentials are correct before proceeding: USD rosa whoami Example output AWS Account ID: <aws_account_number> AWS Default Region: us-east-1 AWS ARN: arn:aws:iam::<aws_account_number>:user/<aws_user_name> OCM API: https://api.openshift.com OCM Account ID: <red_hat_account_id> OCM Account Name: Your Name OCM Account Username: [email protected] OCM Account Email: [email protected] OCM Organization ID: <org_id> OCM Organization Name: Your organization OCM Organization External ID: <external_org_id> Install and configure the latest OpenShift CLI ( oc ). Use the ROSA CLI to download the oc CLI. The following command downloads the latest version of the CLI to the current working directory: USD rosa download openshift-client Extract the oc binary file from the downloaded archive. The following example extracts the files from a Linux tar archive: USD tar xvf openshift-client-linux.tar.gz Move the oc binary to a directory in your execution path. In the following example, the /usr/local/bin directory is included in the path of the user: USD sudo mv oc /usr/local/bin/oc Verify that the oc CLI is installed correctly: USD rosa verify openshift-client Example output I: Verifying whether OpenShift command-line tool is available... I: Current OpenShift Client Version: 4.17.3 2.3. 
Creating a ROSA cluster with STS Choose from one of the following methods to deploy a Red Hat OpenShift Service on AWS (ROSA) cluster that uses the AWS Security Token Service (STS). In each scenario, you can deploy your cluster by using Red Hat OpenShift Cluster Manager or the ROSA CLI ( rosa ): Creating a ROSA cluster with STS using the default options : You can create a ROSA cluster with STS quickly by using the default options and automatic STS resource creation. Creating a ROSA cluster with STS using customizations : You can create a ROSA cluster with STS using customizations. You can also choose between the auto and manual modes when creating the required STS resources. Additional resources For detailed steps to deploy a ROSA cluster without STS, see Creating a ROSA cluster without AWS STS and Creating an AWS PrivateLink cluster on ROSA . For information about the account-wide IAM roles and policies that are required for ROSA deployments that use STS, see Account-wide IAM role and policy reference . For details about using the auto and manual modes to create the required STS resources, see Understanding the auto and manual deployment modes . For information about the update life cycle for ROSA, see Red Hat OpenShift Service on AWS update life cycle . 2.4. Creating a cluster administrator user for quick cluster access Before configuring an identity provider, you can create a user with cluster-admin privileges for immediate access to your Red Hat OpenShift Service on AWS (ROSA) cluster. Note The cluster administrator user is useful when you need quick access to a newly deployed cluster. However, consider configuring an identity provider and granting cluster administrator privileges to the identity provider users as required. For more information about setting up an identity provider for your ROSA cluster, see Configuring an identity provider and granting cluster access . Prerequisites You have an AWS account. You installed and configured the latest Red Hat OpenShift Service on AWS (ROSA) CLI, rosa , on your workstation. You logged in to your Red Hat account using the ROSA CLI ( rosa ). You created a ROSA cluster. Procedure Create a cluster administrator user: USD rosa create admin --cluster=<cluster_name> 1 1 Replace <cluster_name> with the name of your cluster. Example output W: It is recommended to add an identity provider to login to this cluster. See 'rosa create idp --help' for more information. I: Admin account has been added to cluster '<cluster_name>'. I: Please securely store this generated password. If you lose this password you can delete and recreate the cluster admin user. I: To login, run the following command: oc login https://api.example-cluster.wxyz.p1.openshiftapps.com:6443 --username cluster-admin --password d7Rca-Ba4jy-YeXhs-WU42J I: It may take up to a minute for the account to become active. Note It might take approximately one minute for the cluster-admin user to become active. Log in to the cluster through the CLI: Run the command provided in the output of the preceding step to log in: USD oc login <api_url> --username cluster-admin --password <cluster_admin_password> 1 1 Replace <api_url> and <cluster_admin_password> with the API URL and cluster administrator password for your environment. Verify if you are logged in to the ROSA cluster as the cluster-admin user: USD oc whoami Example output cluster-admin Additional resource For steps to log in to the ROSA web console, see Accessing a cluster through the web console 2.5. 
Configuring an identity provider and granting cluster access Red Hat OpenShift Service on AWS (ROSA) includes a built-in OAuth server. After your ROSA cluster is created, you must configure OAuth to use an identity provider. You can then add members to your configured identity provider to grant them access to your cluster. You can also grant the identity provider users with cluster-admin or dedicated-admin privileges as required. 2.5.1. Configuring an identity provider You can configure different identity provider types for your Red Hat OpenShift Service on AWS (ROSA) cluster. Supported types include GitHub, GitHub Enterprise, GitLab, Google, LDAP, OpenID Connect and htpasswd identity providers. Important The htpasswd identity provider option is included only to enable the creation of a single, static administration user. htpasswd is not supported as a general-use identity provider for Red Hat OpenShift Service on AWS. The following procedure configures a GitHub identity provider as an example. Prerequisites You have an AWS account. You installed and configured the latest Red Hat OpenShift Service on AWS (ROSA) CLI, rosa , on your workstation. You logged in to your Red Hat account using the ROSA CLI ( rosa ). You created a ROSA cluster. You have a GitHub user account. Procedure Go to github.com and log in to your GitHub account. If you do not have an existing GitHub organization to use for identity provisioning for your ROSA cluster, create one. Follow the steps in the GitHub documentation . Configure a GitHub identity provider for your cluster that is restricted to the members of your GitHub organization. Configure an identity provider using the interactive mode: USD rosa create idp --cluster=<cluster_name> --interactive 1 1 Replace <cluster_name> with the name of your cluster. Example output I: Interactive mode enabled. Any optional fields can be left empty and a default will be selected. ? Type of identity provider: github ? Identity provider name: github-1 ? Restrict to members of: organizations ? GitHub organizations: <github_org_name> 1 ? To use GitHub as an identity provider, you must first register the application: - Open the following URL: https://github.com/organizations/<github_org_name>/settings/applications/new?oauth_application%5Bcallback_url%5D=https%3A%2F%2Foauth-openshift.apps.<cluster_name>/<random_string>.p1.openshiftapps.com%2Foauth2callback%2Fgithub-1&oauth_application%5Bname%5D=<cluster_name>&oauth_application%5Burl%5D=https%3A%2F%2Fconsole-openshift-console.apps.<cluster_name>/<random_string>.p1.openshiftapps.com - Click on 'Register application' ... 1 Replace <github_org_name> with the name of your GitHub organization. Follow the URL in the output and select Register application to register a new OAuth application in your GitHub organization. By registering the application, you enable the OAuth server that is built into ROSA to authenticate members of your GitHub organization into your cluster. Note The fields in the Register a new OAuth application GitHub form are automatically filled with the required values through the URL defined by the ROSA CLI. Use the information from your GitHub OAuth application page to populate the remaining rosa create idp interactive prompts. Continued example output ... ? Client ID: <github_client_id> 1 ? Client Secret: [? for help] <github_client_secret> 2 ? GitHub Enterprise Hostname (optional): ? Mapping method: claim 3 I: Configuring IDP for cluster '<cluster_name>' I: Identity Provider 'github-1' has been created. 
It will take up to 1 minute for this configuration to be enabled. To add cluster administrators, see 'rosa grant user --help'. To login into the console, open https://console-openshift-console.apps.<cluster_name>.<random_string>.p1.openshiftapps.com and click on github-1. 1 Replace <github_client_id> with the client ID for your GitHub OAuth application. 2 Replace <github_client_secret> with a client secret for your GitHub OAuth application. 3 Specify claim as the mapping method. Note It might take approximately two minutes for the identity provider configuration to become active. If you have configured a cluster-admin user, you can watch the OAuth pods redeploy with the updated configuration by running oc get pods -n openshift-authentication --watch . Enter the following command to verify that the identity provider has been configured correctly: USD rosa list idps --cluster=<cluster_name> Example output NAME TYPE AUTH URL github-1 GitHub https://oauth-openshift.apps.<cluster_name>.<random_string>.p1.openshiftapps.com/oauth2callback/github-1 Additional resource For detailed steps to configure each of the supported identity provider types, see Configuring identity providers for STS 2.5.2. Granting user access to a cluster You can grant a user access to your Red Hat OpenShift Service on AWS (ROSA) cluster by adding them to your configured identity provider. You can configure different types of identity providers for your ROSA cluster. The following example procedure adds a user to a GitHub organization that is configured for identity provision to the cluster. Prerequisites You have an AWS account. You installed and configured the latest Red Hat OpenShift Service on AWS (ROSA) CLI, rosa , on your workstation. You logged in to your Red Hat account using the ROSA CLI ( rosa ). You created a ROSA cluster. You have a GitHub user account. You have configured a GitHub identity provider for your cluster. Procedure Navigate to github.com and log in to your GitHub account. Invite users that require access to the ROSA cluster to your GitHub organization. Follow the steps in Inviting users to join your organization in the GitHub documentation. 2.5.3. Granting administrator privileges to a user After you have added a user to your configured identity provider, you can grant the user cluster-admin or dedicated-admin privileges for your Red Hat OpenShift Service on AWS (ROSA) cluster. Prerequisites You have an AWS account. You installed and configured the latest Red Hat OpenShift Service on AWS (ROSA) CLI, rosa , on your workstation. You logged in to your Red Hat account using the ROSA CLI ( rosa ). You created a ROSA cluster. You have configured a GitHub identity provider for your cluster and added identity provider users. Procedure To configure cluster-admin privileges for an identity provider user: Grant the user cluster-admin privileges: USD rosa grant user cluster-admin --user=<idp_user_name> --cluster=<cluster_name> 1 1 Replace <idp_user_name> and <cluster_name> with the name of the identity provider user and your cluster name. 
Example output I: Granted role 'cluster-admins' to user '<idp_user_name>' on cluster '<cluster_name>' Verify if the user is listed as a member of the cluster-admins group: USD rosa list users --cluster=<cluster_name> Example output ID GROUPS <idp_user_name> cluster-admins To configure dedicated-admin privileges for an identity provider user: Grant the user dedicated-admin privileges: USD rosa grant user dedicated-admin --user=<idp_user_name> --cluster=<cluster_name> Example output I: Granted role 'dedicated-admins' to user '<idp_user_name>' on cluster '<cluster_name>' Verify if the user is listed as a member of the dedicated-admins group: USD rosa list users --cluster=<cluster_name> Example output ID GROUPS <idp_user_name> dedicated-admins Additional resources Cluster administration role Customer administrator user 2.6. Accessing a cluster through the web console After you have created a cluster administrator user or added a user to your configured identity provider, you can log into your Red Hat OpenShift Service on AWS (ROSA) cluster through the web console. Prerequisites You have an AWS account. You installed and configured the latest Red Hat OpenShift Service on AWS (ROSA) CLI, rosa , on your workstation. You logged in to your Red Hat account using the ROSA CLI ( rosa ). You created a ROSA cluster. You have created a cluster administrator user or added your user account to the configured identity provider. Procedure Obtain the console URL for your cluster: USD rosa describe cluster -c <cluster_name> | grep Console 1 1 Replace <cluster_name> with the name of your cluster. Example output Console URL: https://console-openshift-console.apps.example-cluster.wxyz.p1.openshiftapps.com Go to the console URL in the output of the preceding step and log in. If you created a cluster-admin user, log in by using the provided credentials. If you configured an identity provider for your cluster, select the identity provider name in the Log in with... dialog and complete any authorization requests that are presented by your provider. 2.7. Deploying an application from the Developer Catalog From the Red Hat OpenShift Service on AWS web console, you can deploy a test application from the Developer Catalog and expose it with a route. Prerequisites You logged in to the Red Hat Hybrid Cloud Console . You created a Red Hat OpenShift Service on AWS cluster. You configured an identity provider for your cluster. You added your user account to the configured identity provider. Procedure Go to the Cluster List page in OpenShift Cluster Manager . Click the options icon (...) to the cluster you want to view. Click Open console . Your cluster console opens in a new browser window. Log in to your Red Hat account with your configured identity provider credentials. In the Administrator perspective, select Home Projects Create Project . Enter a name for your project and optionally add a Display Name and Description . Click Create to create the project. Switch to the Developer perspective and select +Add . Verify that the selected Project is the one that you just created. In the Developer Catalog dialog, select All services . In the Developer Catalog page, select Languages JavaScript from the menu. Click Node.js , and then click Create to open the Create Source-to-Image application page. Note You might need to click Clear All Filters to display the Node.js option. In the Git section, click Try sample . Add a unique name in the Name field. The value will be used to name the associated resources. 
Confirm that Deployment and Create a route are selected. Click Create to deploy the application. It will take a few minutes for the pods to deploy. Optional: Check the status of the pods in the Topology pane by selecting your Node.js app and reviewing its sidebar. You must wait for the nodejs build to complete and for the nodejs pod to be in a Running state before continuing. When the deployment is complete, click the route URL for the application, which has a format similar to the following: A new tab in your browser opens with a message similar to the following: Optional: Delete the application and clean up the resources that you created: In the Administrator perspective, navigate to Home Projects . Click the action menu for your project and select Delete Project . 2.8. Revoking administrator privileges and user access You can revoke cluster-admin or dedicated-admin privileges from a user by using the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa . To revoke cluster access from a user, you must remove the user from your configured identity provider. Follow the procedures in this section to revoke administrator privileges or cluster access from a user. 2.8.1. Revoking administrator privileges from a user Follow the steps in this section to revoke cluster-admin or dedicated-admin privileges from a user. Prerequisites You installed and configured the latest Red Hat OpenShift Service on AWS (ROSA) CLI, rosa , on your workstation. You logged in to your Red Hat account using the ROSA CLI ( rosa ). You created a ROSA cluster. You have configured a GitHub identity provider for your cluster and added an identity provider user. You granted cluster-admin or dedicated-admin privileges to a user. Procedure To revoke cluster-admin privileges from an identity provider user: Revoke the cluster-admin privilege: USD rosa revoke user cluster-admin --user=<idp_user_name> --cluster=<cluster_name> 1 1 Replace <idp_user_name> and <cluster_name> with the name of the identity provider user and your cluster name. Example output ? Are you sure you want to revoke role cluster-admins from user <idp_user_name> in cluster <cluster_name>? Yes I: Revoked role 'cluster-admins' from user '<idp_user_name>' on cluster '<cluster_name>' Verify that the user is not listed as a member of the cluster-admins group: USD rosa list users --cluster=<cluster_name> Example output W: There are no users configured for cluster '<cluster_name>' To revoke dedicated-admin privileges from an identity provider user: Revoke the dedicated-admin privilege: USD rosa revoke user dedicated-admin --user=<idp_user_name> --cluster=<cluster_name> Example output ? Are you sure you want to revoke role dedicated-admins from user <idp_user_name> in cluster <cluster_name>? Yes I: Revoked role 'dedicated-admins' from user '<idp_user_name>' on cluster '<cluster_name>' Verify that the user is not listed as a member of the dedicated-admins group: USD rosa list users --cluster=<cluster_name> Example output W: There are no users configured for cluster '<cluster_name>' 2.8.2. Revoking user access to a cluster You can revoke cluster access for an identity provider user by removing them from your configured identity provider. You can configure different types of identity providers for your ROSA cluster. The following example procedure revokes cluster access for a member of a GitHub organization that is configured for identity provision to the cluster. Prerequisites You have a ROSA cluster. You have a GitHub user account. 
You have configured a GitHub identity provider for your cluster and added an identity provider user. Procedure Navigate to github.com and log in to your GitHub account. Remove the user from your GitHub organization. Follow the steps in Removing a member from your organization in the GitHub documentation. 2.9. Deleting a ROSA cluster and the AWS STS resources You can delete a ROSA cluster that uses the AWS Security Token Service (STS) by using the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa . You can also use the ROSA CLI to delete the AWS Identity and Access Management (IAM) account-wide roles, the cluster-specific Operator roles, and the OpenID Connect (OIDC) provider. To delete the account-wide inline and Operator policies, you can use the AWS IAM Console. Important Account-wide IAM roles and policies might be used by other ROSA clusters in the same AWS account. You must only remove the resources if they are not required by other clusters. Prerequisites You installed and configured the latest Red Hat OpenShift Service on AWS (ROSA) CLI, rosa , on your workstation. You logged in to your Red Hat account using the ROSA CLI ( rosa ). You created a ROSA cluster. Procedure Delete a cluster and watch the logs, replacing <cluster_name> with the name or ID of your cluster: USD rosa delete cluster --cluster=<cluster_name> --watch Important You must wait for the cluster deletion to complete before you remove the IAM roles, policies, and OIDC provider. The account-wide roles are required to delete the resources created by the installer. The cluster-specific Operator roles are required to clean-up the resources created by the OpenShift Operators. The Operators use the OIDC provider to authenticate. Delete the OIDC provider that the cluster Operators use to authenticate: USD rosa delete oidc-provider -c <cluster_id> --mode auto 1 1 Replace <cluster_id> with the ID of the cluster. Note You can use the -y option to automatically answer yes to the prompts. Delete the cluster-specific Operator IAM roles: USD rosa delete operator-roles -c <cluster_id> --mode auto 1 1 Replace <cluster_id> with the ID of the cluster. Delete the account-wide roles: Important Account-wide IAM roles and policies might be used by other ROSA clusters in the same AWS account. You must only remove the resources if they are not required by other clusters. USD rosa delete account-roles --prefix <prefix> --mode auto 1 1 You must include the --<prefix> argument. Replace <prefix> with the prefix of the account-wide roles to delete. If you did not specify a custom prefix when you created the account-wide roles, specify the default prefix, ManagedOpenShift . Delete the account-wide inline and Operator IAM policies that you created for ROSA deployments that use STS: Log in to the AWS IAM Console . Navigate to Access management Policies and select the checkbox for one of the account-wide policies. With the policy selected, click on Actions Delete to open the delete policy dialog. Enter the policy name to confirm the deletion and select Delete to delete the policy. Repeat this step to delete each of the account-wide inline and Operator policies for the cluster. 2.10. steps Adding services to a cluster using the OpenShift Cluster Manager console Managing compute nodes Configuring the monitoring stack 2.11. 
Additional resources For more information about setting up accounts and ROSA clusters using AWS STS, see Understanding the ROSA with STS deployment workflow For more information about setting up accounts and ROSA clusters without using AWS STS, see Understanding the ROSA deployment workflow For more information about upgrading your cluster, see Upgrading ROSA Classic clusters
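For reference, the CLI-only path through the same workflow looks roughly like the following sketch. It is not a substitute for the steps above: the cluster name, GitHub organization, and OAuth client values are placeholders, and the flags shown assume an STS cluster with automatic creation of the required IAM resources.

# Create the account-wide STS roles, then the cluster, and follow the installation logs.
rosa create account-roles --mode auto --yes
rosa create cluster --cluster-name my-rosa-cluster --sts --mode auto --yes
rosa logs install --cluster my-rosa-cluster --watch

# Grant access: a temporary cluster-admin user, then a GitHub identity provider.
rosa create admin --cluster my-rosa-cluster
rosa create idp --cluster my-rosa-cluster --type github \
  --client-id <github_client_id> --client-secret <github_client_secret> \
  --organizations <github_org_name>
rosa grant user dedicated-admin --user <idp_user_name> --cluster my-rosa-cluster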
[ "aws sts get-caller-identity --output text", "<aws_account_id> arn:aws:iam::<aws_account_id>:user/<username> <aws_user_id>", "tar xvf rosa-linux.tar.gz", "sudo mv rosa /usr/local/bin/rosa", "rosa version", "1.2.47 Your ROSA CLI is up to date.", "rosa login", "To login to your Red Hat account, get an offline access token at https://console.redhat.com/openshift/token/rosa ? Copy the token and paste it here:", "? Copy the token and paste it here: ******************* [full token length omitted]", "rosa whoami", "AWS Account ID: <aws_account_number> AWS Default Region: us-east-1 AWS ARN: arn:aws:iam::<aws_account_number>:user/<aws_user_name> OCM API: https://api.openshift.com OCM Account ID: <red_hat_account_id> OCM Account Name: Your Name OCM Account Username: [email protected] OCM Account Email: [email protected] OCM Organization ID: <org_id> OCM Organization Name: Your organization OCM Organization External ID: <external_org_id>", "rosa download openshift-client", "tar xvf openshift-client-linux.tar.gz", "sudo mv oc /usr/local/bin/oc", "rosa verify openshift-client", "I: Verifying whether OpenShift command-line tool is available I: Current OpenShift Client Version: 4.17.3", "rosa create admin --cluster=<cluster_name> 1", "W: It is recommended to add an identity provider to login to this cluster. See 'rosa create idp --help' for more information. I: Admin account has been added to cluster '<cluster_name>'. I: Please securely store this generated password. If you lose this password you can delete and recreate the cluster admin user. I: To login, run the following command: oc login https://api.example-cluster.wxyz.p1.openshiftapps.com:6443 --username cluster-admin --password d7Rca-Ba4jy-YeXhs-WU42J I: It may take up to a minute for the account to become active.", "oc login <api_url> --username cluster-admin --password <cluster_admin_password> 1", "oc whoami", "cluster-admin", "rosa create idp --cluster=<cluster_name> --interactive 1", "I: Interactive mode enabled. Any optional fields can be left empty and a default will be selected. ? Type of identity provider: github ? Identity provider name: github-1 ? Restrict to members of: organizations ? GitHub organizations: <github_org_name> 1 ? To use GitHub as an identity provider, you must first register the application: - Open the following URL: https://github.com/organizations/<github_org_name>/settings/applications/new?oauth_application%5Bcallback_url%5D=https%3A%2F%2Foauth-openshift.apps.<cluster_name>/<random_string>.p1.openshiftapps.com%2Foauth2callback%2Fgithub-1&oauth_application%5Bname%5D=<cluster_name>&oauth_application%5Burl%5D=https%3A%2F%2Fconsole-openshift-console.apps.<cluster_name>/<random_string>.p1.openshiftapps.com - Click on 'Register application'", "? Client ID: <github_client_id> 1 ? Client Secret: [? for help] <github_client_secret> 2 ? GitHub Enterprise Hostname (optional): ? Mapping method: claim 3 I: Configuring IDP for cluster '<cluster_name>' I: Identity Provider 'github-1' has been created. It will take up to 1 minute for this configuration to be enabled. To add cluster administrators, see 'rosa grant user --help'. 
To login into the console, open https://console-openshift-console.apps.<cluster_name>.<random_string>.p1.openshiftapps.com and click on github-1.", "rosa list idps --cluster=<cluster_name>", "NAME TYPE AUTH URL github-1 GitHub https://oauth-openshift.apps.<cluster_name>.<random_string>.p1.openshiftapps.com/oauth2callback/github-1", "rosa grant user cluster-admin --user=<idp_user_name> --cluster=<cluster_name> 1", "I: Granted role 'cluster-admins' to user '<idp_user_name>' on cluster '<cluster_name>'", "rosa list users --cluster=<cluster_name>", "ID GROUPS <idp_user_name> cluster-admins", "rosa grant user dedicated-admin --user=<idp_user_name> --cluster=<cluster_name>", "I: Granted role 'dedicated-admins' to user '<idp_user_name>' on cluster '<cluster_name>'", "rosa list users --cluster=<cluster_name>", "ID GROUPS <idp_user_name> dedicated-admins", "rosa describe cluster -c <cluster_name> | grep Console 1", "Console URL: https://console-openshift-console.apps.example-cluster.wxyz.p1.openshiftapps.com", "https://nodejs-<project>.<cluster_name>.<hash>.<region>.openshiftapps.com/", "Welcome to your Node.js application on OpenShift", "rosa revoke user cluster-admin --user=<idp_user_name> --cluster=<cluster_name> 1", "? Are you sure you want to revoke role cluster-admins from user <idp_user_name> in cluster <cluster_name>? Yes I: Revoked role 'cluster-admins' from user '<idp_user_name>' on cluster '<cluster_name>'", "rosa list users --cluster=<cluster_name>", "W: There are no users configured for cluster '<cluster_name>'", "rosa revoke user dedicated-admin --user=<idp_user_name> --cluster=<cluster_name>", "? Are you sure you want to revoke role dedicated-admins from user <idp_user_name> in cluster <cluster_name>? Yes I: Revoked role 'dedicated-admins' from user '<idp_user_name>' on cluster '<cluster_name>'", "rosa list users --cluster=<cluster_name>", "W: There are no users configured for cluster '<cluster_name>'", "rosa delete cluster --cluster=<cluster_name> --watch", "rosa delete oidc-provider -c <cluster_id> --mode auto 1", "rosa delete operator-roles -c <cluster_id> --mode auto 1", "rosa delete account-roles --prefix <prefix> --mode auto 1" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/getting_started/rosa-getting-started
Chapter 17. Creating test entries
Chapter 17. Creating test entries The dsctl ldifgen command creates LDIF files with different types of test entries. For example, you can use this LDIF file to populate a test instance or a sub-tree to test the performance of Directory Server with the example entries. 17.1. Overview of testing entries you can create You can pass one of the following entry type arguments to dsctl ldifgen : users : Creates an LDIF file that contains user entries. groups : Creates an LDIF file that contains static group and member entries. cos-def : Creates an LDIF file that either contains a classic pointer or an indirect Class of Service (CoS) definition. cos-template : Creates an LDIF file that contains a CoS template. roles : Creates an LDIF file that contains managed, filtered, or indirect role entries. mod-load : Creates an LDIF file that contains modify operations. Use the ldapmodify utility to load the file into the directory. nested : Creates an LDIF file that contains heavily nested entries in a cascading or fractal tree design. Note The dsctl ldifgen command creates only the LDIF file. To load the file into your Directory Server instance, use the: ldapmodify utility after you created an LDIF file using the mod-load option ldapadd utility for all other options Except for the nested entry type, if you do not provide any command line options, the dsctl ldifgen command uses an interactive mode: # dsctl instance_name ldifgen entry_type 17.2. Creating an LDIF file with example user entries Use the dsctl ldifgen users command to create an LDIF file with example user entries. Procedure For example, to create an LDIF file named /tmp/users.ldif that adds 100,000 generic users to the dc=example,dc=com suffix, enter: # dsctl instance_name ldifgen users --suffix " dc=example,dc=com " --number 100000 --generic --ldif-file= /tmp/users.ldif Note that the command creates the following organizational units (OU) and randomly assigns the users to these OUs: ou=accounting ou=product development ou=product testing ou=human resources ou=payroll ou=people ou=groups For further details and other options you can use to create the LDIF file, enter: # dsctl instance_name ldifgen users --help Optional: Add the test entries to the directory: # ldapadd -D " cn=Directory Manager " -W -H ldap://server.example.com -x -c -f /tmp/users.ldif 17.3. Creating an LDIF file with example group entries Use the dsctl ldifgen groups command to create an LDIF file with example user entries. Procedure For example, to create an LDIF file named /tmp/groups.ldif that adds 500 groups to the ou=groups,dc=example,dc=com entry, and each group has 100 members, enter: # dsctl instance_name ldifgen groups --number 500 --suffix " dc=example,dc=com " --parent " ou=groups,dc=example,dc=com " --num-members 100 --create-members --member-parent " ou=People,dc=example,dc=com " --ldif-file /tmp/groups.ldif example_group__ Note that the command also creates LDIF statements to add the user entries in ou=People,dc=example,dc=com . For further details and other options you can use to create the LDIF file, enter: # dsctl instance_name ldifgen groups --help Optional: Add the test entries to the directory: # ldapadd -D " cn=Directory Manager " -W -H ldap://server.example.com -x -c -f /tmp/groups.ldif 17.4. Creating an LDIF file with an example CoS definition Use the dsctl ldifgen cos-def command to create an LDIF file with a Class of Service (CoS) definition. 
Procedure For example, to create an LDIF file named /tmp/cos-definition.ldif that adds a classic CoS definition to the ou=cos-definitions,dc=example,dc=com entry, enter: # dsctl instance_name ldifgen cos-def Postal_Def --type classic --parent " ou=cos definitions,dc=example,dc=com " --cos-specifier businessCatagory --cos-template " cn=sales,cn=classicCoS,dc=example,dc=com " --cos-attr postalcode telephonenumber --ldif-file /tmp/cos-definition.ldif For further details and other options you can use to create the LDIF file, enter: # dsctl instance_name ldifgen cos-def --help Optional: Add the test entries to the directory: # ldapadd -D " cn=Directory Manager " -W -H ldap://server.example.com -x -c -f /tmp/cos-definition.ldif 17.5. Creating an LDIF file with example modification statements Use the dsctl ldifgen mod-load command to create an LDIF file that contains update operations. Procedure For example, to create an LDIF file named /tmp/modifications.ldif : # dsctl instance_name ldifgen mod-load --num-users 1000 --create-users --parent=" ou=People,dc=example,dc=com " --mod-attrs=" sn " --add-users 10 --modrdn-users 100 --del-users 100 --delete-users --ldif-file= /tmp/modifications.ldif This command creates a file named /tmp/modifications.ldif file with the statements that do the following: Create an LDIF file with 1000 ADD operations to create user entries in ou=People,dc=example,dc=com . Modify all entries by changing their sn attributes. Add additional 10 user entries. Perform 100 MODRDN operations. Delete 100 entries Delete all remaining entries at the end For further details and other options you can use to create the LDIF file, enter: # dsctl instance_name ldifgen mod-load --help Optional: Add the test entries to the directory: # ldapadd -D " cn=Directory Manager " -W -H ldap://server.example.com -x -c -f /tmp/modifications.ldif 17.6. Creating an LDIF file with nested example entries Use the dsctl ldifgen nested command to create an LDIF file that contains a heavily nested cascading fractal structure. Procedure For example, to create an LDIF file named /tmp/nested.ldif that adds 600 users in total in different organizational units (OU) under the dc=example,dc=com entry, with each OU containing a maximum number of 100 users, enter: # dsctl instance_name ldifgen nested --num-users 600 --node-limit 100 --suffix " dc=example,dc=com " --ldif-file /tmp/nested.ldif For further details and other options you can use to create the LDIF file, enter: # dsctl instance_name ldifgen nested --help Optional: Add the test entries to the directory: # ldapadd -D " cn=Directory Manager " -W -H ldap://server.example.com -x -c -f /tmp/nested.ldif
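As a compact illustration of the generate-then-load pattern used throughout this chapter, the following shell sketch combines the user-generation and ldapadd steps from section 17.2. The instance name, suffix, bind DN, and LDAP URL are placeholders taken from the examples above.

# Minimal sketch: generate 100,000 generic test users and load them into a
# test instance. Replace instance_name, the suffix, and the server URL.
dsctl instance_name ldifgen users \
    --suffix "dc=example,dc=com" \
    --number 100000 \
    --generic \
    --ldif-file=/tmp/users.ldif

# Load the generated entries; -c continues past individual add errors.
ldapadd -D "cn=Directory Manager" -W -H ldap://server.example.com \
    -x -c -f /tmp/users.ldif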
[ "dsctl instance_name ldifgen entry_type", "dsctl instance_name ldifgen users --suffix \" dc=example,dc=com \" --number 100000 --generic --ldif-file= /tmp/users.ldif", "dsctl instance_name ldifgen users --help", "ldapadd -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x -c -f /tmp/users.ldif", "dsctl instance_name ldifgen groups --number 500 --suffix \" dc=example,dc=com \" --parent \" ou=groups,dc=example,dc=com \" --num-members 100 --create-members --member-parent \" ou=People,dc=example,dc=com \" --ldif-file /tmp/groups.ldif example_group__", "dsctl instance_name ldifgen groups --help", "ldapadd -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x -c -f /tmp/groups.ldif", "dsctl instance_name ldifgen cos-def Postal_Def --type classic --parent \" ou=cos definitions,dc=example,dc=com \" --cos-specifier businessCatagory --cos-template \" cn=sales,cn=classicCoS,dc=example,dc=com \" --cos-attr postalcode telephonenumber --ldif-file /tmp/cos-definition.ldif", "dsctl instance_name ldifgen cos-def --help", "ldapadd -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x -c -f /tmp/cos-definition.ldif", "dsctl instance_name ldifgen mod-load --num-users 1000 --create-users --parent=\" ou=People,dc=example,dc=com \" --mod-attrs=\" sn \" --add-users 10 --modrdn-users 100 --del-users 100 --delete-users --ldif-file= /tmp/modifications.ldif", "dsctl instance_name ldifgen mod-load --help", "ldapadd -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x -c -f /tmp/modifications.ldif", "dsctl instance_name ldifgen nested --num-users 600 --node-limit 100 --suffix \" dc=example,dc=com \" --ldif-file /tmp/nested.ldif", "dsctl instance_name ldifgen nested --help", "ldapadd -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x -c -f /tmp/nested.ldif" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/installing_red_hat_directory_server/assembly_creating-test-entries_installing-rhds
Chapter 12. Hardware enablement
Chapter 12. Hardware enablement The following chapters contain the most notable changes to hardware enablement between RHEL 8 and RHEL 9. 12.1. Unmaintained hardware support The following devices (drivers, adapters) are available but are no longer tested or updated on a routine basis in Red Hat Enterprise Linux 9. Red Hat may fix serious bugs, including security bugs, at its discretion. These devices should no longer be used in production, and it is possible they will be disabled in the major release. PCI device IDs are in the format of vendor:device:subvendor:subdevice . If no device ID is listed, all devices associated with the corresponding driver are unmaintained. To check the PCI IDs of the hardware on your system, run the lspci -nn command. Device ID Driver Device name arp_tables bnx2 QLogic BCM5706/5708/5709/5716 Driver 0x10df:0xe220 be2net Emulex Corporation: OneConnect NIC (Lancer) dl2k e1000 Intel(R) PRO/1000 Network Driver hpsa Hewlett-Packard Company: Smart Array Controllers hdlc_fr hfi1 ip_set ip_tables ip6_tables 0x10df:0x0724 lpfc Emulex Corporation: OneConnect FCoE Initiator (Skyhawk) 0x10df:0xe200 lpfc Emulex Corporation: LPe15000/LPe16000 Series 8Gb/16Gb Fibre Channel Adapter 0x10df:0xf011 lpfc Emulex Corporation: Saturn: LightPulse Fibre Channel Host Adapter 0x10df:0xf015 lpfc Emulex Corporation: Saturn: LightPulse Fibre Channel Host Adapter 0x10df:0xf100 lpfc Emulex Corporation: LPe12000 Series 8Gb Fibre Channel Adapter 0x10df:0xfc40 lpfc Emulex Corporation: Saturn-X: LightPulse Fibre Channel Host Adapter 0x1000:0x0071 megaraid_sas Broadcom / LSI: MR SAS HBA 2004 0x1000:0x0073 megaraid_sas Broadcom / LSI: MegaRAID SAS 2008 [Falcon] 0x1000:0x0079 megaraid_sas Broadcom / LSI: MegaRAID SAS 2108 [Liberator] 0x1000:0x005b megaraid_sas Broadcom / LSI: MegaRAID SAS 2208 [Thunderbolt] 0x1000:0x006E mpt3sas Broadcom / LSI: SAS2308 PCI-Express Fusion-MPT SAS-2 0x1000:0x0080 mpt3sas Broadcom / LSI: SAS2208 PCI-Express Fusion-MPT SAS-2 0x1000:0x0081 mpt3sas Broadcom / LSI: SAS2208 PCI-Express Fusion-MPT SAS-2 0x1000:0x0082 mpt3sas Broadcom / LSI: SAS2208 PCI-Express Fusion-MPT SAS-2 0x1000:0x0083 mpt3sas Broadcom / LSI: SAS2208 PCI-Express Fusion-MPT SAS-2 0x1000:0x0084 mpt3sas Broadcom / LSI: SAS2208 PCI-Express Fusion-MPT SAS-2 0x1000:0x0085 mpt3sas Broadcom / LSI: SAS2208 PCI-Express Fusion-MPT SAS-2 0x1000:0x0086 mpt3sas Broadcom / LSI: SAS2308 PCI-Express Fusion-MPT SAS-2 0x1000:0x0087 mpt3sas Broadcom / LSI: SAS2308 PCI-Express Fusion-MPT SAS-2 mptbase Fusion MPT SAS Host driver mptsas Fusion MPT SAS Host driver mptscsih Fusion MPT SCSI Host driver mptspi Fusion MPT SAS Host driver myri10ge Myricom 10G driver (10GbE) netxen_nic QLogic/NetXen (1/10) GbE Intelligent Ethernet Driver 0x177d:0xa01e nicpf Cavium ThunderX NIC PF driver 0x177d:0xa034 nicvf Cavium ThunderX NIC VF driver 0x177d:0x0011 nicvf Cavium ThunderX NIC VF driver nft_compat nvmet-fc NVMe/Fabrics FC target driver nvmet-tcp NVMe/TCP target driver nfp 0x1077:0x2031 qla2xxx QLogic Corp.: ISP8324-based 16Gb Fibre Channel to PCI Express Adapter 0x1077:0x2532 qla2xxx QLogic Corp.: ISP2532-based 8Gb Fibre Channel to PCI Express HBA 0x1077:0x8031 qla2xxx QLogic Corp.: 8300 Series 10GbE Converged Network Adapter (FCoE) qla3xxx QLogic ISP3XXX Network Driver v2.03.00-k5 rdma_rxe 0x1924:0x0803 sfc Solarflare Communications: SFC9020 10G Ethernet Controller 0x1924:0x0813 sfc Solarflare Communications: SFL9021 10GBASE-T Ethernet Controller siw usnic_verbs vmw_pvrdma 12.2. 
Removed hardware support The following devices (drivers, adapters) have been removed from RHEL 9. PCI device IDs are in the format of vendor:device:subvendor:subdevice . If no device ID is listed, all devices associated with the corresponding driver have been removed. To check the PCI IDs of the hardware on your system, run the lspci -nn command. Device ID Driver Device name HNS-RoCE HNS GE/10GE/25GE/50GE/100GE RDMA Network Controller liquidio Cavium LiquidIO Intelligent Server Adapter Driver liquidio_vf Cavium LiquidIO Intelligent Server Adapter Virtual Function Driver aarch64:Ampere:Potenza Ampere eMAG aarch64:APM:Potenza Applied Micro X-Gene ppc64le:ibm:4d:* Power8 ppc64le:ibm:4b:* Power8E ppc64le:ibm:4c:* Power8NVL s390x:ibm:2964:* z13 s390x:ibm:2965:* z13s v4l/dvb television and video capture devices
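To check whether a system contains any of the devices listed above, you can filter the lspci -nn output for the relevant vendor:device pairs. The following is a minimal sketch; the grep pattern contains only a few example IDs from the tables and must be extended with the IDs that matter for your hardware.

# Minimal sketch: lspci -nn prints numeric IDs in lower-case hex, for example
# [10df:e220]. The pattern below covers only a handful of IDs from the tables.
lspci -nn | grep -iE '10df:e220|1000:005b|1077:2532|1924:0803' \
    && echo "Listed device ID detected - check the tables above" \
    || echo "No listed device IDs detected"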
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/considerations_in_adopting_rhel_9/assembly_hardware-enablement_considerations-in-adopting-rhel-9
Chapter 10. Refreshing external user groups for Identity Management or AD
Chapter 10. Refreshing external user groups for Identity Management or AD External user groups based on Identity Management or AD are refreshed only when a group member logs in to Satellite. It is not possible to alter user membership of external user groups in the Satellite web UI; any such changes are overwritten when the group is refreshed.
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/configuring_authentication_for_red_hat_satellite_users/refreshing_external_user_groups_for_freeipa_or_ad_authentication
Chapter 15. KIE Server system properties
Chapter 15. KIE Server system properties KIE Server accepts the following system properties (bootstrap switches) to configure the behavior of the server: Table 15.1. System properties for disabling KIE Server extensions Property Values Default Description org.drools.server.ext.disabled true , false false If set to true , disables the Business Rule Management (BRM) support (for example, rules support). org.optaplanner.server.ext.disabled true , false false If set to true , disables the Red Hat build of OptaPlanner support. org.kie.prometheus.server.ext.disabled true , false true If set to true , disables the Prometheus Server extension. org.kie.scenariosimulation.server.ext.disabled true , false true If set to true , disables the Test scenario Server extension. org.kie.dmn.server.ext.disabled true , false false If set to true , disables the KIE Server DMN support. org.kie.swagger.server.ext.disabled true , false false If set to true , disables the KIE Server swagger documentation support Note Some Process Automation Manager controller properties listed in the following table are marked as required. Set these properties when you create or remove KIE Server containers in Business Central. If you use KIE Server separately without any interaction with Business Central, you do not need to set the required properties. Table 15.2. System properties required for Process Automation Manager controller Property Values Default Description org.kie.server.id String N/A An arbitrary ID to be assigned to the server. If a headless Process Automation Manager controller is configured outside of Business Central, this is the ID under which the server connects to the headless Process Automation Manager controller to fetch the KIE container configurations. If not provided, the ID is automatically generated. org.kie.server.user String kieserver The user name used to connect with KIE Server from the Process Automation Manager controller, required when running in managed mode. Set this property in Business Central system properties. Set this property when using a Process Automation Manager controller. org.kie.server.pwd String kieserver1! The password used to connect with KIE Server from the Process Automation Manager controller, required when running in managed mode. Set this property in Business Central system properties. Set this property when using a Process Automation Manager controller. org.kie.server.token String N/A A property that enables you to use token-based authentication between the Process Automation Manager controller and KIE Server instead of the basic user name and password authentication. The Process Automation Manager controller sends the token as a parameter in the request header. The server requires long-lived access tokens because the tokens are not refreshed. org.kie.server.location URL N/A The URL of the KIE Server instance used by the Process Automation Manager controller to call back on this server, for example, http://localhost:8230/kie-server/services/rest/server . Setting this property is required when using a Process Automation Manager controller. org.kie.server.controller Comma-separated list N/A A comma-separated list of URLs to the Process Automation Manager controller REST endpoints, for example, http://localhost:8080/business-central/rest/controller . Setting this property is required when using a Process Automation Manager controller. org.kie.server.controller.user String kieserver The user name to connect to the Process Automation Manager controller REST API. 
Setting this property is required when using a Process Automation Manager controller. org.kie.server.controller.pwd String kieserver1! The password to connect to the Process Automation Manager controller REST API. Setting this property is required when using a Process Automation Manager controller. org.kie.server.controller.token String N/A A property that enables you to use token-based authentication between KIE Server and the Process Automation Manager controller instead of the basic user name and password authentication. The server sends the token as a parameter in the request header. The server requires long-lived access tokens because the tokens are not refreshed. org.kie.server.controller.connect Long 10000 The waiting time in milliseconds between repeated attempts to connect KIE Server to the Process Automation Manager controller when the server starts. Table 15.3. System properties for loading keystore Property Values Default Description kie.keystore.keyStoreURL URL N/A The URL is used to load a Java Cryptography Extension KeyStore (JCEKS). For example, file:///home/kie/keystores/keystore.jceks . kie.keystore.keyStorePwd String N/A The password is used for the JCEKS. kie.keystore.key.server.alias String N/A The alias name of the key for REST services where the password is stored. kie.keystore.key.server.pwd String N/A The password of an alias for REST services. kie.keystore.key.ctrl.alias String N/A The alias of the key for default REST Process Automation Manager controller. kie.keystore.key.ctrl.pwd String N/A The password of an alias for default REST Process Automation Manager controller. Table 15.4. System properties for retrying committing transactions Property Values Default Description org.kie.optlock.retries Integer 5 This property describes how many times the process engine retries a transaction before failing permanently. org.kie.optlock.delay Integer 50 The delay time before the first retry, in milliseconds. org.kie.optlock.delayFactor Integer 4 The multiplier for increasing the delay time for each subsequent retry. With the default values, the process engine waits 50 milliseconds before the first retry, 200 milliseconds before the second retry, 800 milliseconds before the third retry, and so on. Table 15.5. Other system properties Property Values Default Description kie.maven.settings.custom Path N/A The location of a custom settings.xml file for Maven configuration. kie.server.jms.queues.response String queue/KIE.SERVER.RESPONSE The response queue JNDI name for JMS. org.drools.server.filter.classes true , false false When set to true , the Drools KIE Server extension accepts custom classes annotated by the XmlRootElement or Remotable annotations only. org.kie.server.domain String N/A The JAAS LoginContext domain used to authenticate users when using JMS. org.kie.server.repo Path . The location where KIE Server state files are stored. org.kie.server.sync.deploy true , false false A property that instructs KIE Server to hold the deployment until the Process Automation Manager controller provides the container deployment configuration. This property only affects servers running in managed mode. The following options are available: * false : The connection to the Process Automation Manager controller is asynchronous. The application starts, connects to the Process Automation Manager controller, and once successful, deploys the containers. The application accepts requests even before the containers are available. 
* true : The deployment of the server application joins the Process Automation Manager controller connection thread with the main deployment and awaits its completion. This option can lead to a potential deadlock in case more applications are on the same server. Use only one application on one server instance. org.kie.server.startup.strategy ControllerBasedStartupStrategy , LocalContainersStartupStrategy ControllerBasedStartupStrategy The Startup strategy of KIE Server used to control the KIE containers that are deployed and the order in which they are deployed. org.kie.server.mgmt.api.disabled true , false false When set to true , disables KIE Server management API. org.kie.server.xstream.enabled.packages Java packages like org.kie.example . You can also specify wildcard expressions like org.kie.example.* . N/A A property that specifies additional packages to allowlist for marshalling using XStream. org.kie.store.services.class String org.drools.persistence.jpa.KnowledgeStoreServiceImpl Fully qualified name of the class that implements KieStoreServices that are responsible for bootstrapping KieSession instances. org.kie.server.strict.id.format true , false false While using JSON marshalling, if the property is set to true , it will always return a response in the proper JSON format. For example, if the original response contains only a single number, then the response is wrapped in a JSON format. For example, {"value" : 1} . org.kie.server.json.customObjectDeserializerCNFEBehavior IGNORE , WARN , EXCEPTION IGNORE While using JSON unmarshalling, when a class in a payload is not found, the behavior can be changed using this property as follows: If the property is set to IGNORE , the payload is converted to a HashMap If the property is set to WARN , the payload is converted to a HashMap and a warning is logged If the property is set to EXCEPTION , KIE Server throws an exception org.kie.server.strict.jaxb.format true , false false When the value of this property is set to true , KIE Server validates the data type of the data in the REST API payload. For example, if a data field has the number data type and contains something other than a number, you will receive an error.
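The bootstrap switches above are ordinary Java system properties, so they are typically passed as -D options when the server is started. The following is a minimal sketch that assumes KIE Server is deployed on Red Hat JBoss EAP and started with standalone.sh; the server ID, credentials, and URLs are illustrative placeholders based on the defaults and examples in the tables.

# Minimal sketch: start a managed KIE Server with the Process Automation
# Manager controller properties supplied as -D system properties.
./standalone.sh \
    -Dorg.kie.server.id=production-kieserver \
    -Dorg.kie.server.user=kieserver \
    -Dorg.kie.server.pwd='kieserver1!' \
    -Dorg.kie.server.location=http://localhost:8230/kie-server/services/rest/server \
    -Dorg.kie.server.controller=http://localhost:8080/business-central/rest/controller \
    -Dorg.kie.server.controller.user=kieserver \
    -Dorg.kie.server.controller.pwd='kieserver1!'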
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/managing_red_hat_decision_manager_and_kie_server_settings/kie-server-system-properties-ref_execution-server
Chapter 4. Practical tutorial: Using the Dashboard Builder for the first time
Chapter 4. Practical tutorial: Using the Dashboard Builder for the first time Click the Create Workspace icon to create new workspace properties. Give the workspace a name and then click Create Workspace . Click on the new workspace to edit the details. Set the User Home Search to be the current page and press Save to set the URL. From the tree of workspaces on the left, expand to Workspaces/workspace and click on Pages (which will be empty on the right). On the far right, click on the Page icon to create a new page. Enter the page name and press Create New Page . Click on Workspace (found in the top drop-down list). Select the page you just created. To add a tree menu panel, click on the puzzle icon to create a panel, expand Navigation to select the Tree menu, drag the Create Panel to the page (you will see the page locations appear) and drop it on one of the left-hand locations. To add a log-off panel, click on the puzzle icon to create a panel, expand Navigation to select the Logout panel, drag the Create Panel to the page and drop it on the top right-hand side. To add a graphic panel, click on the puzzle icon to create a panel, expand Dashboard to select Key Performance Indicator (KPI) and drag the Create Panel to one of the central locations. A dialog box will appear, asking you to choose an instance. (You should see your data provider.) Go into the KPI editor, enter the KPI name and select the graph type. There is no Save button on this page; simply close the window to complete the task.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/using_the_dashboard_builder/practical_tutorial_using_the_dashboard_builder_for_the_first_time
Chapter 13. Managing OpenID Connect and SAML Clients
Chapter 13. Managing OpenID Connect and SAML Clients Clients are entities that can request authentication of a user. Clients come in two forms. The first type of client is an application that wants to participate in single sign-on. These clients just want Red Hat build of Keycloak to provide security for them. The other type of client is one that is requesting an access token so that it can invoke other services on behalf of the authenticated user. This section discusses various aspects around configuring clients and various ways to do it. 13.1. Managing OpenID Connect clients OpenID Connect is the recommended protocol to secure applications. It was designed from the ground up to be web friendly and it works best with HTML5/JavaScript applications. 13.1.1. Creating an OpenID Connect client To protect an application that uses the OpenID connect protocol, you create a client. Procedure Click Clients in the menu. Click Create client . Create client Leave Client type set to OpenID Connect . Enter a Client ID. This ID is an alphanumeric string that is used in OIDC requests and in the Red Hat build of Keycloak database to identify the client. Supply a Name for the client. If you plan to localize this name, set up a replacement string value. For example, a string value such as USD{myapp}. See the Server Developer Guide for more information. Click Save . This action creates the client and bring you to the Settings tab, where you can perform Basic configuration . 13.1.2. Basic configuration The Settings tab includes many options to configure this client. Settings tab 13.1.2.1. General Settings Client ID The alphanumeric ID string that is used in OIDC requests and in the Red Hat build of Keycloak database to identify the client. Name The name for the client in Red Hat build of Keycloak UI screen. To localize the name, set up a replacement string value. For example, a string value such as USD{myapp}. See the Server Developer Guide for more information. Description The description of the client. This setting can also be localized. Always Display in Console Always list this client in the Account Console even if this user does not have an active session. 13.1.2.2. Access Settings Root URL If Red Hat build of Keycloak uses any configured relative URLs, this value is prepended to them. Home URL Provides the default URL for when the auth server needs to redirect or link back to the client. Valid Redirect URIs Required field. Enter a URL pattern and click + to add and - to remove existing URLs and click Save . Exact (case sensitive) string matching is used to compare valid redirect URIs. You can use wildcards at the end of the URL pattern. For example http://host.com/path/* . To avoid security issues, if the passed redirect URI contains the userinfo part or its path manages access to parent directory ( /../ ) no wildcard comparison is performed but the standard and secure exact string matching. The full wildcard * valid redirect URI can also be configured to allow any http or https redirect URI. Please do not use it in production environments. Exclusive redirect URI patterns are typically more secure. See Unspecific Redirect URIs for more information. Web Origins Enter a URL pattern and click + to add and - to remove existing URLs. Click Save. This option handles Cross-Origin Resource Sharing (CORS) . If browser JavaScript attempts an AJAX HTTP request to a server whose domain is different from the one that the JavaScript code came from, the request must use CORS. 
The server must handle CORS requests, otherwise the browser will not display or allow the request to be processed. This protocol protects against XSS, CSRF, and other JavaScript-based attacks. Domain URLs listed here are embedded within the access token sent to the client application. The client application uses this information to decide whether to allow a CORS request to be invoked on it. Only Red Hat build of Keycloak client adapters support this feature. See Securing applications and Services guide for more information. Admin URL Callback endpoint for a client. The server uses this URL to make callbacks like pushing revocation policies, performing backchannel logout, and other administrative operations. For Red Hat build of Keycloak servlet adapters, this URL can be the root URL of the servlet application. For more information, see Securing applications and Services guide . 13.1.2.3. Capability Config Client authentication The type of OIDC client. ON For server-side clients that perform browser logins and require client secrets when making an Access Token Request. This setting should be used for server-side applications. OFF For client-side clients that perform browser logins. As it is not possible to ensure that secrets can be kept safe with client-side clients, it is important to restrict access by configuring correct redirect URIs. Authorization Enables or disables fine-grained authorization support for this client. Standard Flow If enabled, this client can use the OIDC Authorization Code Flow . Direct Access Grants If enabled, this client can use the OIDC Direct Access Grants . Implicit Flow If enabled, this client can use the OIDC Implicit Flow . Service account roles If enabled, this client can authenticate to Red Hat build of Keycloak and retrieve access token dedicated to this client. In terms of OAuth2 specification, this enables support of Client Credentials Grant for this client. Auth 2.0 Device Authorization Grant If enabled, this client can use the OIDC Device Authorization Grant . OIDC CIBA Grant If enabled, this client can use the OIDC Client Initiated Backchannel Authentication Grant . 13.1.2.4. Login settings Login theme A theme to use for login, OTP, grant registration, and forgotten password pages. Consent required If enabled, users have to consent to client access. For client-side clients that perform browser logins. As it is not possible to ensure that secrets can be kept safe with client-side clients, it is important to restrict access by configuring correct redirect URIs. Display client on screen This switch applies if Consent Required is Off . Off The consent screen will contain only the consents corresponding to configured client scopes. On There will be also one item on the consent screen about this client itself. Client consent screen text Applies if Consent required and Display client on screen are enabled. Contains the text that will be on the consent screen about permissions for this client. 13.1.2.5. Logout settings Front channel logout If Front Channel Logout is enabled, the application should be able to log out users through the front channel as per OpenID Connect Front-Channel Logout specification. If enabled, you should also provide the Front-Channel Logout URL . Front-channel logout URL URL that will be used by Red Hat build of Keycloak to send logout requests to clients through the front-channel. Backchannel logout URL URL that will cause the client to log itself out when a logout request is sent to this realm (via end_session_endpoint). 
If omitted, no logout requests are sent to the client. Backchannel logout session required Specifies whether a session ID Claim is included in the Logout Token when the Backchannel Logout URL is used. Backchannel logout revoke offline sessions Specifies whether a revoke_offline_access event is included in the Logout Token when the Backchannel Logout URL is used. Red Hat build of Keycloak will revoke offline sessions when receiving a Logout Token with this event. 13.1.3. Advanced configuration After completing the fields on the Settings tab, you can use the other tabs to perform advanced configuration. 13.1.3.1. Advanced tab When you click the Advanced tab, additional fields are displayed. For details on a specific field, click the question mark icon for that field. However, certain fields are described in detail in this section. 13.1.3.2. Fine grain OpenID Connect configuration Logo URL URL that references a logo for the Client application. Policy URL URL that the Relying Party Client provides to the End-User to read about how the profile data will be used. Terms of Service URL URL that the Relying Party Client provides to the End-User to read about the Relying Party's terms of service. Signed and Encrypted ID Token Support Red Hat build of Keycloak can encrypt ID tokens according to the Json Web Encryption (JWE) specification. The administrator determines if ID tokens are encrypted for each client. The key used for encrypting the ID token is the Content Encryption Key (CEK). Red Hat build of Keycloak and a client must negotiate which CEK is used and how it is delivered. The method used to determine the CEK is the Key Management Mode. The Key Management Mode that Red Hat build of Keycloak supports is Key Encryption. In Key Encryption: The client generates an asymmetric cryptographic key pair. The public key is used to encrypt the CEK. Red Hat build of Keycloak generates a CEK per ID token Red Hat build of Keycloak encrypts the ID token using this generated CEK Red Hat build of Keycloak encrypts the CEK using the client's public key. The client decrypts this encrypted CEK using their private key The client decrypts the ID token using the decrypted CEK. No party, other than the client, can decrypt the ID token. The client must pass its public key for encrypting CEK to Red Hat build of Keycloak. Red Hat build of Keycloak supports downloading public keys from a URL provided by the client. The client must provide public keys according to the Json Web Keys (JWK) specification. The procedure is: Open the client's Keys tab. Toggle JWKS URL to ON. Input the client's public key URL in the JWKS URL textbox. Key Encryption's algorithms are defined in the Json Web Algorithm (JWA) specification. Red Hat build of Keycloak supports: RSAES-PKCS1-v1_5(RSA1_5) RSAES OAEP using default parameters (RSA-OAEP) RSAES OAEP 256 using SHA-256 and MFG1 (RSA-OAEP-256) The procedure to select the algorithm is: Open the client's Advanced tab. Open Fine Grain OpenID Connect Configuration . Select the algorithm from ID Token Encryption Content Encryption Algorithm pulldown menu. 13.1.3.3. OpenID Connect Compatibility Modes This section exists for backward compatibility. Click the question mark icons for details on each field. OAuth 2.0 Mutual TLS Certificate Bound Access Tokens Enabled Mutual TLS binds an access token and a refresh token together with a client certificate, which is exchanged during a TLS handshake. This binding prevents an attacker from using stolen tokens. This type of token is a holder-of-key token. 
Unlike bearer tokens, the recipient of a holder-of-key token can verify if the sender of the token is legitimate. If this setting is on, the workflow is: A token request is sent to the token endpoint in an authorization code flow or hybrid flow. Red Hat build of Keycloak requests a client certificate. Red Hat build of Keycloak receives the client certificate. Red Hat build of Keycloak successfully verifies the client certificate. If verification fails, Red Hat build of Keycloak rejects the token. In the following cases, Red Hat build of Keycloak will verify the client sending the access token or the refresh token: A token refresh request is sent to the token endpoint with a holder-of-key refresh token. A UserInfo request is sent to UserInfo endpoint with a holder-of-key access token. A logout request is sent to non-OIDC compliant Red Hat build of Keycloak proprietary Logout endpoint with a holder-of-key refresh token. See Mutual TLS Client Certificate Bound Access Tokens in the OAuth 2.0 Mutual TLS Client Authentication and Certificate Bound Access Tokens for more details. Note Red Hat build of Keycloak client adapters do not support holder-of-key token verification. Red Hat build of Keycloak adapters treat access and refresh tokens as bearer tokens. OAuth 2.0 Demonstrating Proof-of-Possession at the Application Layer (DPoP) DPoP binds an access token and a refresh token together with the public part of a client's key pair. This binding prevents an attacker from using stolen tokens. This type of token is a holder-of-key token. Unlike bearer tokens, the recipient of a holder-of-key token can verify if the sender of the token is legitimate. If the client switch OAuth 2.0 DPoP Bound Access Tokens Enabled is on, the workflow is: A token request is sent to the token endpoint in an authorization code flow or hybrid flow. Red Hat build of Keycloak requests a DPoP proof. Red Hat build of Keycloak receives the DPoP proof. Red Hat build of Keycloak successfully verifies the DPoP proof. If verification fails, Red Hat build of Keycloak rejects the token. If the switch OAuth 2.0 DPoP Bound Access Tokens Enabled is off, the client can still send DPoP proof in the token request. In that case, Red Hat build of Keycloak will verify DPoP proof and will add the thumbprint to the token. But if the switch is off, DPoP binding is not enforced by the Red Hat build of Keycloak server for this client. It is recommended to have this switch on if you want to make sure that particular client always uses DPoP binding. In the following cases, Red Hat build of Keycloak will verify the client sending the access token or the refresh token: A token refresh request is sent to the token endpoint with a holder-of-key refresh token. This verification is done only for public clients as described in the DPoP specification. For confidential clients, the verification is not done as client authentication with proper client credentials is in place to ensure that request comes from the legitimate client. For public clients, both access tokens and refresh tokens are DPoP bound. For confidential clients, only access tokens are DPoP bound. A UserInfo request is sent to UserInfo endpoint with a holder-of-key access token. A logout request is sent to a non-OIDC compliant Red Hat build of Keycloak proprietary logout endpoint Logout endpoint with a holder-of-key refresh token. This verification is done only for public clients as described above. See OAuth 2.0 Demonstrating Proof of Possession (DPoP) for more details. 
Note Red Hat build of Keycloak client adapters do not support DPoP holder-of-key token verification. Red Hat build of Keycloak adapters treat access and refresh tokens as bearer tokens. Note DPoP is Technology Preview and is not fully supported. This feature is disabled by default. To enable start the server with --features=preview or --features=dpop Advanced Settings for OIDC The Advanced Settings for OpenID Connect allows you to configure overrides at the client level for session and token timeouts . Configuration Description Access Token Lifespan The value overrides the realm option with same name. Client Session Idle The value overrides the realm option with same name. The value should be shorter than the global SSO Session Idle . Client Session Max The value overrides the realm option with same name. The value should be shorter than the global SSO Session Max . Client Offline Session Idle This setting allows you to configure a shorter offline session idle timeout for the client. The timeout is amount of time the session remains idle before Red Hat build of Keycloak revokes its offline token. If not set, realm Offline Session Idle is used. Client Offline Session Max This setting allows you to configure a shorter offline session max lifespan for the client. The lifespan is the maximum time before Red Hat build of Keycloak revokes the corresponding offline token. This option needs Offline Session Max Limited enabled globally in the realm, and defaults to Offline Session Max . Proof Key for Code Exchange Code Challenge Method If an attacker steals an authorization code of a legitimate client, Proof Key for Code Exchange (PKCE) prevents the attacker from receiving the tokens that apply to the code. An administrator can select one of these options: (blank) Red Hat build of Keycloak does not apply PKCE unless the client sends appropriate PKCE parameters to Red Hat build of Keycloaks authorization endpoint. S256 Red Hat build of Keycloak applies to the client PKCE whose code challenge method is S256. plain Red Hat build of Keycloak applies to the client PKCE whose code challenge method is plain. See RFC 7636 Proof Key for Code Exchange by OAuth Public Clients for more details. ACR to Level of Authentication (LoA) Mapping In the advanced settings of a client, you can define which Authentication Context Class Reference (ACR) value is mapped to which Level of Authentication (LoA) . This mapping can be specified also at the realm as mentioned in the ACR to LoA Mapping . A best practice is to configure this mapping at the realm level, which allows to share the same settings across multiple clients. The Default ACR Values can be used to specify the default values when the login request is sent from this client to Red Hat build of Keycloak without acr_values parameter and without a claims parameter that has an acr claim attached. See official OIDC dynamic client registration specification . Warning Note that default ACR values are used as the default level, however it cannot be reliably used to enforce login with the particular level. For example, assume that you configure the Default ACR Values to level 2. Then by default, users will be required to authenticate with level 2. However when the user explicitly attaches the parameter into login request such as acr_values=1 , then the level 1 will be used. As a result, if the client really requires level 2, the client is encouraged to check the presence of the acr claim inside ID Token and double-check that it contains the requested level 2. 
For further details see Step-up Authentication and the official OIDC specification . 13.1.4. Confidential client credentials If the Client authentication of the client is set to ON , the credentials of the client must be configured under the Credentials tab. Credentials tab The Client Authenticator drop-down list specifies the type of credential to use for your client. Client ID and Secret This choice is the default setting. The secret is automatically generated. Click Regenerate to recreate the secret if necessary. Signed JWT Signed JWT is "Signed JSON Web Token". When choosing this credential type you will have to also generate a private key and certificate for the client in the tab Keys . The private key will be used to sign the JWT, while the certificate is used by the server to verify the signature. Keys tab Click on the Generate new keys button to start this process. Generate keys Select the archive format you want to use. Enter a key password . Enter a store password . Click Generate . When you generate the keys, Red Hat build of Keycloak will store the certificate and you download the private key and certificate for your client. You can also generate keys using an external tool and then import the client's certificate by clicking Import Certificate . Import certificate Select the archive format of the certificate. Enter the store password. Select the certificate file by clicking Import File . Click Import . Importing a certificate is unnecessary if you click Use JWKS URL . In this case, you can provide the URL where the public key is published in JWK format. With this option, if the key is ever changed, Red Hat build of Keycloak reimports the key. If you are using a client secured by Red Hat build of Keycloak adapter, you can configure the JWKS URL in this format, assuming that https://myhost.com/myapp is the root URL of your client application: https://myhost.com/myapp/k_jwks See Server Developer Guide for more details. Signed JWT with Client Secret If you select this option, you can use a JWT signed by client secret instead of the private key. The client secret will be used to sign the JWT by the client. X509 Certificate Red Hat build of Keycloak will validate if the client uses proper X509 certificate during the TLS Handshake. X509 certificate The validator also checks the Subject DN field of the certificate with a configured regexp validation expression. For some use cases, it is sufficient to accept all certificates. In that case, you can use (.*?)(?:USD) expression. Two ways exist for Red Hat build of Keycloak to obtain the Client ID from the request: The client_id parameter in the query (described in Section 2.2 of the OAuth 2.0 Specification ). Supply client_id as a form parameter. 13.1.5. Client Secret Rotation Important Please note that Client Secret Rotation support is in development. Use this feature experimentally. For a client with Confidential Client authentication Red Hat build of Keycloak supports the functionality of rotating client secrets through Client Policies . The client secrets rotation policy provides greater security in order to alleviate problems such as secret leakage. Once enabled, Red Hat build of Keycloak supports up to two concurrently active secrets for each client. The policy manages rotations according to the following settings: Secret expiration: [seconds] - When the secret is rotated, this is the expiration of time of the new secret. The amount, in seconds , added to the secret creation date. Calculated at policy execution time. 
Rotated secret expiration: [seconds] - When the secret is rotated, this value is the remaining expiration time for the old secret. This value should be always smaller than Secret expiration. When the value is 0, the old secret will be immediately removed during client rotation. The amount, in seconds , added to the secret rotation date. Calculated at policy execution time. Remaining expiration time for rotation during update: [seconds] - Time period when an update to a dynamic client should perform client secret rotation. Calculated at policy execution time. When a client secret rotation occurs, a new main secret is generated and the old client main secret becomes the secondary secret with a new expiration date. 13.1.5.1. Rules for client secret rotation Rotations do not occur automatically or through a background process. In order to perform the rotation, an update action is required on the client, either through the Red Hat build of Keycloak Admin Console through the function of Regenerate Secret , in the client's credentials tab or Admin REST API. When invoking a client update action, secret rotation occurs according to the rules: When the value of Secret expiration is less than the current date. During dynamic client registration client-update request, the client secret will be automatically rotated if the value of Remaining expiration time for rotation during update match the period between the current date and the Secret expiration . Additionally it is possible through Admin REST API to force a client secret rotation at any time. Note During the creation of new clients, if the client secret rotation policy is active, the behavior will be applied automatically. Warning To apply the secret rotation behavior to an existing client, update that client after you define the policy so that the behavior is applied. 13.1.6. Creating an OIDC Client Secret Rotation Policy The following is an example of defining a secret rotation policy: Procedure Click Realm Settings in the menu. Click Client Policies tab. On the Profiles page, click Create client profile . Create a profile Enter any name for Name . Enter a description that helps you identify the purpose of the profile for Description . Click Save . This action creates the profile and enables you to configure executors. Click Add executor to configure an executor for this profile. Create a profile executor Select secret-rotation for Executor Type . Enter the maximum duration time of each secret, in seconds, for Secret Expiration . Enter the maximum duration time of each rotated secret, in seconds, for Rotated Secret Expiration . Warning Remember that the Rotated Secret Expiration value must always be less than Secret Expiration . Enter the amount of time, in seconds, after which any update action will update the client for Remain Expiration Time . Click Add . In the example above: Each secret is valid for one week. The rotated secret expires after two days. The window for updating dynamic clients starts one day before the secret expires. Return to the Client Policies tab. Click Policies . Click Create client policy . Create the Client Secret Rotation Policy Enter any name for Name . Enter a description that helps you identify the purpose of the policy for Description . Click Save . This action creates the policy and enables you to associate policies with profiles. It also allows you to configure the conditions for policy execution. Under Conditions, click Add condition . 
Create the Client Secret Rotation Policy Condition To apply the behavior to all confidential clients select client-access-type in the Condition Type field Note To apply to a specific group of clients, another approach would be to select the client-roles type in the Condition Type field. In this way, you could create specific roles and assign a custom rotation configuration to each role. Add confidential to the field Client Access Type . Click Add . Back in the policy setting, under Client Profiles , click Add client profile and then select Weekly Client Secret Rotation Profile from the list and then click Add . Client Secret Rotation Policy Note To apply the secret rotation behavior to an existing client, follow the following steps: Using the Admin Console Click Clients in the menu. Click a client. Click the Credentials tab. Click Re-generate of the client secret. Using client REST services it can be executed in two ways: Through an update operation on a client Through the regenerate client secret endpoint 13.1.7. Using a service account Each OIDC client has a built-in service account . Use this service account to obtain an access token. Procedure Click Clients in the menu. Select your client. Click the Settings tab. Toggle Client authentication to On . Select Service accounts roles . Click Save . Configure your client credentials . Click the Scope tab. Verify that you have roles or toggle Full Scope Allowed to ON . Click the Service Account Roles tab Configure the roles available to this service account for your client. Roles from access tokens are the intersection of: Role scope mappings of a client combined with the role scope mappings inherited from linked client scopes. Service account roles. The REST URL to invoke is /realms/{realm-name}/protocol/openid-connect/token . This URL must be invoked as a POST request and requires that you post the client credentials with the request. By default, client credentials are represented by the clientId and clientSecret of the client in the Authorization: Basic header but you can also authenticate the client with a signed JWT assertion or any other custom mechanism for client authentication. You also need to set the grant_type parameter to "client_credentials" as per the OAuth2 specification. For example, the POST invocation to retrieve a service account can look like this: The response would be similar to this Access Token Response from the OAuth 2.0 specification. Only the access token is returned by default. No refresh token is returned and no user session is created on the Red Hat build of Keycloak side upon successful authentication by default. Due to the lack of a refresh token, re-authentication is required when the access token expires. However, this situation does not mean any additional overhead for the Red Hat build of Keycloak server because sessions are not created by default. In this situation, logout is unnecessary. However, issued access tokens can be revoked by sending requests to the OAuth2 Revocation Endpoint as described in the OpenID Connect Endpoints section. Additional resources For more details, see Client Credentials Grant . 13.1.8. Audience support Typically, the environment where Red Hat build of Keycloak is deployed consists of a set of confidential or public client applications that use Red Hat build of Keycloak for authentication. Services ( Resource Servers in the OAuth 2 specification ) are also available that serve requests from client applications and provide resources to these applications. 
These services require an Access token (Bearer token) to be sent to them to authenticate a request. This token is obtained by the frontend application upon login to Red Hat build of Keycloak. In the environment where trust among services is low, you may encounter this scenario: A frontend client application requires authentication against Red Hat build of Keycloak. Red Hat build of Keycloak authenticates a user. Red Hat build of Keycloak issues a token to the application. The application uses the token to invoke an untrusted service. The untrusted service returns the response to the application. However, it keeps the applications token. The untrusted service then invokes a trusted service using the applications token. This results in broken security as the untrusted service misuses the token to access other services on behalf of the client application. This scenario is unlikely in environments with a high level of trust between services but not in environments where trust is low. In some environments, this workflow may be correct as the untrusted service may have to retrieve data from a trusted service to return data to the original client application. An unlimited audience is useful when a high level of trust exists between services. Otherwise, the audience should be limited. You can limit the audience and, at the same time, allow untrusted services to retrieve data from trusted services. In this case, ensure that the untrusted service and the trusted service are added as audiences to the token. To prevent any misuse of the access token, limit the audience on the token and configure your services to verify the audience on the token. The flow will change as follows: A frontend application authenticates against Red Hat build of Keycloak. Red Hat build of Keycloak authenticates a user. Red Hat build of Keycloak issues a token to the application. The application knows that it will need to invoke an untrusted service so it places scope=<untrusted service> in the authentication request sent to Red Hat build of Keycloak (see Client Scopes section for more details about the scope parameter). The token issued to the application contains a reference to the untrusted service in its audience ( "audience": [ "<untrusted service>" ] ) which declares that the client uses this access token to invoke the untrusted service. The untrusted service invokes a trusted service with the token. Invocation is not successful because the trusted service checks the audience on the token and find that its audience is only for the untrusted service. This behavior is expected and security is not broken. If the client wants to invoke the trusted service later, it must obtain another token by reissuing the SSO login with scope=<trusted service> . The returned token will then contain the trusted service as an audience: "audience": [ "<trusted service>" ] Use this value to invoke the <trusted service> . 13.1.8.1. Setup When setting up audience checking: Ensure that services are configured to check audience on the access token sent to them. This may be done in a way specific to your client OIDC adapter, which you are using to secure your OIDC client application. Ensure that access tokens issued by Red Hat build of Keycloak contain all necessary audiences. Audiences can be added using the client roles as described in the section or hardcoded. See Hardcoded audience . 13.1.8.2. Automatically add audience An Audience Resolve protocol mapper is defined in the default client scope roles . 
The mapper checks for clients that have at least one client role available for the current token. The client ID of each client is then added as an audience, which is useful if your service clients rely on client roles. Service client could be usually a client without any flows enabled, which may not have any tokens issued directly to itself. It represents an OAuth 2 Resource Server . For example, for a service client and a confidential client, you can use the access token issued for the confidential client to invoke the service client REST service. The service client will be automatically added as an audience to the access token issued for the confidential client if the following are true: The service client has any client roles defined on itself. Target user has at least one of those client roles assigned. Confidential client has the role scope mappings for the assigned role. Note If you want to ensure that the audience is not added automatically, do not configure role scope mappings directly on the confidential client. Instead, you can create a dedicated client scope that contains the role scope mappings for the client roles of your dedicated client scope. Assuming that the client scope is added as an optional client scope to the confidential client, the client roles and the audience will be added to the token if explicitly requested by the scope=<trusted service> parameter. Note The frontend client itself is not automatically added to the access token audience, therefore allowing easy differentiation between the access token and the ID token, since the access token will not contain the client for which the token is issued as an audience. If you need the client itself as an audience, see the hardcoded audience option. However, using the same client as both frontend and REST service is not recommended. 13.1.8.3. Hardcoded audience When your service relies on realm roles or does not rely on the roles in the token at all, it can be useful to use a hardcoded audience. A hardcoded audience is a protocol mapper, that will add the client ID of the specified service client as an audience to the token. You can use any custom value, for example a URL, if you want to use a different audience than the client ID. You can add the protocol mapper directly to the frontend client. If the protocol mapper is added directly, the audience will always be added as well. For more control over the protocol mapper, you can create the protocol mapper on the dedicated client scope, which will be called for example good-service . Audience protocol mapper From the Client details tab of the good-service client, you can generate the adapter configuration and confirm that verify-token-audience is set to true . This action forces the adapter to verify the audience if you use this configuration. You need to ensure that the confidential client is able to request good-service as an audience in its tokens. On the confidential client: Click the Client Scopes tab. Assign good-service as an optional (or default) client scope. See Client Scopes Linking section for more details. You can optionally Evaluate Client Scopes and generate an example access token. good-service will be added to the audience of the generated access token if good-service is included in the scope parameter, when you assigned it as an optional client scope. In your confidential client application, ensure that the scope parameter is used. The value good-service must be included when you want to issue the token for accessing good-service . 
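For illustration only, assuming a realm named myrealm, a confidential client named confidential-client, and a redirect URI of https://app.example.com/callback (all placeholder values, not taken from this guide), an authorization request that asks for the good-service audience could look like this:
https://keycloak.example.com/realms/myrealm/protocol/openid-connect/auth?client_id=confidential-client&response_type=code&redirect_uri=https%3A%2F%2Fapp.example.com%2Fcallback&scope=openid%20good-service
Provided good-service is linked to the confidential client as described above, the access token obtained by exchanging the returned code contains good-service in its audience, and the good-service resource server should verify that claim before serving the request.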
See: Keycloak JavaScript adapter in the securing apps section if your application uses the javascript adapter. Note Both the Audience and Audience Resolve protocol mappers add the audiences to the access token only, by default. The ID Token typically contains only a single audience, the client ID for which the token was issued, a requirement of the OpenID Connect specification. However, the access token does not necessarily have the client ID, which was the token issued for, unless the audience mappers added it. 13.2. Creating a SAML client Red Hat build of Keycloak supports SAML 2.0 for registered applications. POST and Redirect bindings are supported. You can choose to require client signature validation. You can have the server sign and/or encrypt responses as well. Procedure Click Clients in the menu. Click Create client to go to the Create client page. Set Client type to SAML . Create client Enter the Client ID of the client. This is often a URL and is the expected issuer value in SAML requests sent by the application. Click Save . This action creates the client and brings you to the Settings tab. The following sections describe each setting on this tab. 13.2.1. Settings tab The Settings tab includes many options to configure this client. Client settings 13.2.1.1. General settings Client ID The alphanumeric ID string that is used in OIDC requests and in the Red Hat build of Keycloak database to identify the client. This value must match the issuer value sent with AuthNRequests. Red Hat build of Keycloak pulls the issuer from the Authn SAML request and match it to a client by this value. Name The name for the client in a Red Hat build of Keycloak UI screen. To localize the name, set up a replacement string value. For example, a string value such as USD{myapp}. See the Server Developer Guide for more information. Description The description of the client. This setting can also be localized. Always Display in Console Always list this client in the Account Console even if this user does not have an active session. 13.2.1.2. Access Settings Root URL When Red Hat build of Keycloak uses a configured relative URL, this value is prepended to the URL. Home URL If Red Hat build of Keycloak needs to link to a client, this URL is used. Valid Redirect URIs Enter a URL pattern and click the + sign to add. Click the - sign to remove. Click Save to save these changes. Wildcards values are allowed only at the end of a URL. For example, http://host.com/*USDUSD . This field is used when the exact SAML endpoints are not registered and Red Hat build of Keycloak pulls the Assertion Consumer URL from a request. IDP-Initiated SSO URL name URL fragment name to reference client when you want to do IDP Initiated SSO. Leaving this empty will disable IDP Initiated SSO. The URL you will reference from your browser will be: server-root /realms/{realm}/protocol/saml/clients/{client-url-name} IDP Initiated SSO Relay State Relay state you want to send with SAML request when you want to do IDP Initiated SSO. Master SAML Processing URL This URL is used for all SAML requests and the response is directed to the SP. It is used as the Assertion Consumer Service URL and the Single Logout Service URL. If login requests contain the Assertion Consumer Service URL then those login requests will take precedence. This URL must be validated by a registered Valid Redirect URI pattern. 13.2.1.3. SAML capabilities Name ID Format The Name ID Format for the subject. 
This format is used if no name ID policy is specified in a request, or if the Force Name ID Format attribute is set to ON. Force Name ID Format If a request has a name ID policy, ignore it and use the value configured in the Admin Console under Name ID Format . Force POST Binding By default, Red Hat build of Keycloak responds using the initial SAML binding of the original request. By enabling Force POST Binding , Red Hat build of Keycloak responds using the SAML POST binding even if the original request used the redirect binding. Force artifact binding If enabled, response messages are returned to the client through the SAML ARTIFACT binding system. Include AuthnStatement SAML login responses may specify the authentication method used, such as password, as well as timestamps of the login and the session expiration. Include AuthnStatement is enabled by default, so that the AuthnStatement element will be included in login responses. Setting this to OFF prevents clients from determining the maximum session length, which can create client sessions that do not expire. Include OneTimeUse Condition If enable, a OneTimeUse Condition is included in login responses. Optimize REDIRECT signing key lookup When set to ON, the SAML protocol messages include the Red Hat build of Keycloak native extension. This extension contains a hint with the signing key ID. The SP uses the extension for signature validation instead of attempting to validate the signature using keys. This option applies to REDIRECT bindings where the signature is transferred in query parameters and this information is not found in the signature information. This is contrary to POST binding messages where key ID is always included in document signature. This option is used when Red Hat build of Keycloak server and adapter provide the IDP and SP. This option is only relevant when Sign Documents is set to ON. 13.2.1.4. Signature and Encryption Sign Documents When set to ON, Red Hat build of Keycloak signs the document using the realms private key. Sign Assertions The assertion is signed and embedded in the SAML XML Auth response. Signature Algorithm The algorithm used in signing SAML documents. Note that SHA1 based algorithms are deprecated and may be removed in a future release. We recommend the use of some more secure algorithm instead of *_SHA1 . Also, with *_SHA1 algorithms, verifying signatures do not work if the SAML client runs on Java 17 or higher. SAML Signature Key Name Signed SAML documents sent using POST binding contain the identification of the signing key in the KeyName element. This action can be controlled by the SAML Signature Key Name option. This option controls the contents of the Keyname . KEY_ID The KeyName contains the key ID. This option is the default option. CERT_SUBJECT The KeyName contains the subject from the certificate corresponding to the realm key. This option is expected by Microsoft Active Directory Federation Services. NONE The KeyName hint is completely omitted from the SAML message. Canonicalization Method The canonicalization method for XML signatures. 13.2.1.5. Login settings Login theme A theme to use for login, OTP, grant registration, and forgotten password pages. Consent required If enabled, users have to consent to client access. For client-side clients that perform browser logins. As it is not possible to ensure that secrets can be kept safe with client-side clients, it is important to restrict access by configuring correct redirect URIs. 
Display client on screen This switch applies if Consent Required is Off . Off The consent screen will contain only the consents corresponding to configured client scopes. On There will be also one item on the consent screen about this client itself. Client consent screen text Applies if Consent required and Display client on screen are enabled. Contains the text that will be on the consent screen about permissions for this client. 13.2.1.6. Logout settings Front channel logout If Front Channel Logout is enabled, the application requires a browser redirect to perform a logout. For example, the application may require a cookie to be reset which could only be done via a redirect. If Front Channel Logout is disabled, Red Hat build of Keycloak invokes a background SAML request to log out of the application. 13.2.2. Keys tab Encrypt Assertions Encrypts the assertions in SAML documents with the realms private key. The AES algorithm uses a key size of 128 bits. Client Signature Required If Client Signature Required is enabled, documents coming from a client are expected to be signed. Red Hat build of Keycloak will validate this signature using the client public key or cert set up in the Keys tab. Allow ECP Flow If true, this application is allowed to use SAML ECP profile for authentication. 13.2.3. Advanced tab This tab has many fields for specific situations. Some fields are covered in other topics. For details on other fields, click the question mark icon. 13.2.3.1. Fine Grain SAML Endpoint Configuration Logo URL URL that references a logo for the Client application. Policy URL URL that the Relying Party Client provides to the End-User to read about how the profile data will be used. Terms of Service URL URL that the Relying Party Client provides to the End-User to read about the Relying Party's terms of service. Assertion Consumer Service POST Binding URL POST Binding URL for the Assertion Consumer Service. Assertion Consumer Service Redirect Binding URL Redirect Binding URL for the Assertion Consumer Service. Logout Service POST Binding URL POST Binding URL for the Logout Service. Logout Service Redirect Binding URL Redirect Binding URL for the Logout Service. Logout Service Artifact Binding URL Artifact Binding URL for the Logout Service. When set together with the Force Artifact Binding option, Artifact binding is forced for both login and logout flows. Artifact binding is not used for logout unless this property is set. Logout Service SOAP Binding URL Redirect Binding URL for the Logout Service. Only applicable if back channel logout is used. Artifact Binding URL URL to send the HTTP artifact messages to. Artifact Resolution Service URL of the client SOAP endpoint where to send the ArtifactResolve messages to. 13.2.4. IDP Initiated login IDP Initiated Login is a feature that allows you to set up an endpoint on the Red Hat build of Keycloak server that will log you into a specific application/client. In the Settings tab for your client, you need to specify the IDP Initiated SSO URL Name . This is a simple string with no whitespace in it. After this you can reference your client at the following URL: root/realms/{realm}/protocol/saml/clients/{url-name} The IDP initiated login implementation prefers POST over REDIRECT binding (check saml bindings for more information). 
Therefore the final binding and SP URL are selected in the following way: If the specific Assertion Consumer Service POST Binding URL is defined (inside Fine Grain SAML Endpoint Configuration section of the client settings) POST binding is used through that URL. If the general Master SAML Processing URL is specified then POST binding is used again throughout this general URL. As the last resort, if the Assertion Consumer Service Redirect Binding URL is configured (inside Fine Grain SAML Endpoint Configuration ) REDIRECT binding is used with this URL. If your client requires a special relay state, you can also configure this on the Settings tab in the IDP Initiated SSO Relay State field. Alternatively, browsers can specify the relay state in a RelayState query parameter, i.e. root/realms/{realm}/protocol/saml/clients/{url-name}?RelayState=thestate . When using identity brokering , it is possible to set up an IDP Initiated Login for a client from an external IDP. The actual client is set up for IDP Initiated Login at broker IDP as described above. The external IDP has to set up the client for application IDP Initiated Login that will point to a special URL pointing to the broker and representing IDP Initiated Login endpoint for a selected client at the brokering IDP. This means that in client settings at the external IDP: IDP Initiated SSO URL Name is set to a name that will be published as IDP Initiated Login initial point, Assertion Consumer Service POST Binding URL in the Fine Grain SAML Endpoint Configuration section has to be set to the following URL: broker-root/realms/{broker-realm}/broker/{idp-name}/endpoint/clients/{client-id} , where: broker-root is base broker URL broker-realm is name of the realm at broker where external IDP is declared idp-name is name of the external IDP at broker client-id is the value of IDP Initiated SSO URL Name attribute of the SAML client defined at broker. It is this client, which will be made available for IDP Initiated Login from the external IDP. Please note that you can import basic client settings from the brokering IDP into client settings of the external IDP - just use SP Descriptor available from the settings of the identity provider in the brokering IDP, and add clients/ client-id to the endpoint URL. 13.2.5. Using an entity descriptor to create a client Instead of registering a SAML 2.0 client manually, you can import the client using a standard SAML Entity Descriptor XML file. The Client page includes an Import client option. Add client Procedure Click Browse . Load the file that contains the XML entity descriptor information. Review the information to ensure everything is set up correctly. Some SAML client adapters, such as mod-auth-mellon , need the XML Entity Descriptor for the IDP. You can find this descriptor by going to this URL: where realm is the realm of your client. 13.3. Client links To link from one client to another, Red Hat build of Keycloak provides a redirect endpoint: /realms/realm_name/clients/{client-id}/redirect . If a client accesses this endpoint using a HTTP GET request, Red Hat build of Keycloak returns the configured base URL for the provided Client and Realm in the form of an HTTP 307 (Temporary Redirect) in the response's Location header. As a result of this, a client needs only to know the Realm name and the Client ID to link to them. This indirection avoids hard-coding client base URLs. 
As an example, given the realm master and the client-id account : This URL temporarily redirects to: http://host:port/realms/master/account 13.4. OIDC token and SAML assertion mappings Applications receiving ID tokens, access tokens, or SAML assertions may require different roles and user metadata. You can use Red Hat build of Keycloak to: Hardcode roles, claims and custom attributes. Pull user metadata into a token or assertion. Rename roles. You perform these actions in the Mappers tab in the Admin Console. Mappers tab New clients do not have built-in mappers, but they can inherit some mappers from client scopes. See the client scopes section for more details. Protocol mappers map items (such as an email address, for example) to a specific claim in the identity and access token. The function of a mapper should be self-explanatory from its name. You add pre-configured mappers by clicking Add Builtin . Each mapper has a set of common settings. Additional settings are available, depending on the mapper type. Click Edit to a mapper to access the configuration screen to adjust these settings. Mapper config Details on each option can be viewed by hovering over its tooltip. You can use most OIDC mappers to control where the claim gets placed. You opt to include or exclude the claim from the id and access tokens by adjusting the Add to ID token and Add to access token switches. You can add mapper types as follows: Procedure Go to the Mappers tab. Click Configure a new mapper . Add mapper Select a Mapper Type from the list box. 13.4.1. Priority order Mapper implementations have priority order . Priority order is not the configuration property of the mapper. It is the property of the concrete implementation of the mapper. Mappers are sorted by the order in the list of mappers. The changes in the token or assertion are applied in that order with the lowest applying first. Therefore, the implementations that are dependent on other implementations are processed in the necessary order. For example, to compute the roles which will be included with a token: Resolve audiences based on those roles. Process a JavaScript script that uses the roles and audiences already available in the token. 13.4.2. OIDC user session note mappers User session details are defined using mappers and are automatically included when you use or enable a feature on a client. Click Add builtin to include session details. Impersonated user sessions provide the following details: IMPERSONATOR_ID : The ID of an impersonating user. IMPERSONATOR_USERNAME : The username of an impersonating user. Service account sessions provide the following details: clientId : The client ID of the service account. client_id : The client ID of the service account. clientAddress : The remote host IP of the service account's authenticated device. clientHost : The remote host name of the service account's authenticated device. 13.4.3. Script mapper Use the Script Mapper to map claims to tokens by running user-defined JavaScript code. For more details about deploying scripts to the server, see JavaScript Providers . When scripts deploy, you should be able to select the deployed scripts from the list of available mappers. 13.4.4. Pairwise subject identifier mapper Subject claim sub is mapped by default by Subject (sub) protocol mapper in the default client scope basic . To use a pairwise subject identifier by using a protocol mapper such as Pairwise subject identifier , you can remove the Subject (sub) protocol mapper from the basic client scope. 
However it is not strictly needed as the Subject (sub) protocol mapper is executed before the Pairwise subject identifier mapper and hence the pairwise value will override the value added by the Subject mapper. This is due to the priority of the Subject mapper. So the only advantage of removing the built-in Subject (sub) mapper might be to save a little bit of performance by avoiding the use of the protocol mapper, which may not have any effect. 13.4.5. Using lightweight access token The access token in Red Hat build of Keycloak contains sensitive information, including Personal Identifiable Information (PII). Therefore, if the resource server does not want to disclose this type of information to third party entities such as clients, Red Hat build of Keycloak supports lightweight access tokens that remove PII from access tokens. Further, when the resource server acquires the PII removed from the access token, it can acquire the PII by sending the access token to Red Hat build of Keycloak's token introspection endpoint. Information that cannot be removed from a lightweight access token Protocol mappers can controls which information is put onto an access token and the lightweight access token use the protocol mappers. Therefore, the following information cannot be removed from the lightweight access. exp , iat , jti , iss , typ , azp , sid , scope , cnf Using a lightweight access token in Red Hat build of Keycloak By applying use-lightweight-access-token executor of client policies to a client, the client can receive a lightweight access token instead of an access token. The lightweight access token contains a claim controlled by a protocol mapper where its setting Add to lightweight access token (default OFF) is turned ON. Also, by turning ON its setting Add to token introspection of the protocol mapper, the client can obtain the claim by sending the access token to Red Hat build of Keycloak's token introspection endpoint. Introspection endpoint In some cases, it might be useful to trigger the token introspection endpoint with the HTTP header Accept: application/jwt instead of Accept: application/json , which can be useful especially for lightweight access tokens. See the details of Token Introspection endpoint in the securing apps section. 13.5. Generating client adapter config Red Hat build of Keycloak can generate configuration files that you can use to install a client adapter in your application's deployment environment. A number of adapter types are supported for OIDC and SAML. Click on the Action menu and select the Download adapter config option Select the Format Option you want configuration generated for. All Red Hat build of Keycloak client adapters for OIDC and SAML are supported. The mod-auth-mellon Apache HTTPD adapter for SAML is supported as well as standard SAML entity descriptor files. 13.6. Client scopes Use Red Hat build of Keycloak to define a shared client configuration in an entity called a client scope . A client scope configures protocol mappers and role scope mappings for multiple clients. Client scopes also support the OAuth 2 scope parameter. Client applications use this parameter to request claims or roles in the access token, depending on the requirement of the application. To create a client scope, follow these steps: Click Client Scopes in the menu. Client scopes list Click Create . Name your client scope. Click Save . A client scope has similar tabs to regular clients. You can define protocol mappers and role scope mappings . 
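If you prefer the command line, the same client scope can be created with the Admin CLI; the server URL, realm name, and scope name below are placeholder values used only for this sketch:
# Log the CLI in against the master realm first (example server URL)
bin/kcadm.sh config credentials --server http://localhost:8080 --realm master --user admin
# Create an OpenID Connect client scope named my-shared-scope in the realm myrealm
bin/kcadm.sh create client-scopes -r myrealm -s name=my-shared-scope -s protocol=openid-connect
You can then open the new scope in the Admin Console and define its protocol mappers and role scope mappings in the same way.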
These mappings can be inherited by other clients and are configured to inherit from this client scope. 13.6.1. Protocol When you create a client scope, choose the Protocol . Clients linked in the same scope must have the same protocol. Each realm has a set of pre-defined built-in client scopes in the menu. SAML protocol: The role_list . This scope contains one protocol mapper for the roles list in the SAML assertion. OpenID Connect protocol: Several client scopes are available: roles This scope is not defined in the OpenID Connect specification and is not added automatically to the scope claim in the access token. This scope has mappers, which are used to add the roles of the user to the access token and add audiences for clients that have at least one client role. These mappers are described in more detail in the Audience section . web-origins This scope is also not defined in the OpenID Connect specification and is not added to the scope claim in the access token. This scope is used to add allowed web origins to the access token allowed-origins claim. microprofile-jwt This scope handles claims defined in the MicroProfile/JWT Auth Specification . This scope defines a user property mapper for the upn claim and a realm role mapper for the groups claim. These mappers can be changed so different properties can be used to create the MicroProfile/JWT specific claims. offline_access This scope is used in cases when clients need to obtain offline tokens. More details on offline tokens are available in the Offline Access section and in the OpenID Connect specification . profile email address phone The client scopes profile , email , address and phone are defined in the OpenID Connect specification . These scopes do not have any role scope mappings defined, but they do have protocol mappers defined. These mappers correspond to the claims defined in the OpenID Connect specification. For example, when you open the phone client scope and open the Mappers tab, you will see the protocol mappers which correspond to the claims defined in the specification for the scope phone . Client scope mappers When the phone client scope is linked to a client, the client automatically inherits all the protocol mappers defined in the phone client scope. Access tokens issued for this client contain the phone number information about the user, assuming that the user has a defined phone number. Built-in client scopes contain the protocol mappers as defined in the specification. You are free to edit client scopes and create, update, or remove any protocol mappers or role scope mappings. 13.6.2. Consent related settings Client scopes contain options related to the consent screen. Those options are useful if Consent Required is enabled on the linked client. Display On Consent Screen If Display On Consent Screen is enabled, and the scope is added to a client that requires consent, the text specified in Consent Screen Text will be displayed on the consent screen. This text is shown when the user is authenticated and before the user is redirected from Red Hat build of Keycloak to the client. If Display On Consent Screen is disabled, this client scope will not be displayed on the consent screen. Consent Screen Text The text displayed on the consent screen when this client scope is added to a client that requires consent. It defaults to the name of the client scope. The value for this text can be customised by specifying a substitution variable with ${var-name} strings.
The customised value is configured within the property files in your theme. See the Server Developer Guide for more information on customisation. 13.6.3. Link client scope with the client Linking between a client scope and a client is configured in the Client Scopes tab of the client. Two ways of linking between client scope and client are available. Default Client Scopes This setting is applicable to the OpenID Connect and SAML clients. Default client scopes are applied when issuing OpenID Connect tokens or SAML assertions for a client. The client will inherit Protocol Mappers and Role Scope Mappings that are defined on the client scope. For the OpenID Connect Protocol, the Mappers and Role Scope Mappings are always applied, regardless of the value used for the scope parameter in the OpenID Connect authorization request. Optional Client Scopes This setting is applicable only for OpenID Connect clients. Optional client scopes are applied when issuing tokens for this client but only when requested by the scope parameter in the OpenID Connect authorization request. 13.6.3.1. Example For this example, assume the client has profile and email linked as default client scopes, and phone and address linked as optional client scopes. The client uses the value of the scope parameter when sending a request to the OpenID Connect authorization endpoint. scope=openid phone The scope parameter contains the string, with the scope values divided by spaces. The value openid is the meta-value used for all OpenID Connect requests. The token will contain mappers and role scope mappings from the default client scopes profile and email as well as phone , an optional client scope requested by the scope parameter. 13.6.4. Evaluating Client Scopes The Mappers tab contains the protocol mappers and the Scope tab contains the role scope mappings declared for this client. They do not contain the mappers and scope mappings inherited from client scopes. It is possible to see the effective protocol mappers (that is the protocol mappers defined on the client itself as well as inherited from the linked client scopes) and the effective role scope mappings used when generating a token for a client. Procedure Click the Client Scopes tab for the client. Open the sub-tab Evaluate . Select the optional client scopes that you want to apply. This will also show you the value of the scope parameter. This parameter needs to be sent from the application to the Red Hat build of Keycloak OpenID Connect authorization endpoint. Evaluating client scopes Note To send a custom value for a scope parameter from your application, see the Keycloak JavaScript adapter in the securing apps section, for javascript adapters. All examples are generated for the particular user and issued for the particular client, with the specified value of the scope parameter. The examples include all of the claims and role mappings used. 13.6.5. Client scopes permissions When issuing tokens to a user, the client scope applies only if the user is permitted to use it. When a client scope does not have any role scope mappings defined, each user is permitted to use this client scope. However, when a client scope has role scope mappings defined, the user must be a member of at least one of the roles. There must be an intersection between the user roles and the roles of the client scope. Composite roles are factored into evaluating this intersection. 
If a user is not permitted to use the client scope, no protocol mappers or role scope mappings will be used when generating tokens. The client scope will not appear in the scope value in the token. 13.6.6. Realm default client scopes Use Realm Default Client Scopes to define sets of client scopes that are automatically linked to newly created clients. Procedure Click the Client Scopes tab for the client. Click Default Client Scopes . From here, select the client scopes that you want to add as Default Client Scopes to newly created clients and Optional Client Scopes . Default client scopes When a client is created, you can unlink the default client scopes, if needed. This is similar to removing Default Roles . 13.6.7. Scopes explained Client scope Client scopes are entities in Red Hat build of Keycloak that are configured at the realm level and can be linked to clients. Client scopes are referenced by their name when a request is sent to the Red Hat build of Keycloak authorization endpoint with a corresponding value of the scope parameter. See the client scopes linking section for more details. Role scope mapping This is available under the Scope tab of a client or client scope. Use Role scope mapping to limit the roles that can be used in the access tokens. See the Role Scope Mappings section for more details. 13.7. Client Policies To make it easy to secure client applications, it is beneficial to realize the following points in a unified way. Setting policies on what configuration a client can have Validation of client configurations Conformance to a required security standards and profiles such as Financial-grade API (FAPI) and OAuth 2.1 To realize these points in a unified way, Client Policies concept is introduced. 13.7.1. Use-cases Client Policies realize the following points mentioned as follows. Setting policies on what configuration a client can have Configuration settings on the client can be enforced by client policies during client creation/update, but also during OpenID Connect requests to Red Hat build of Keycloak server, which are related to particular client. Red Hat build of Keycloak supports similar thing also through the Client Registration Policies described in the Client registration service in the Securing applications and Services guide . However, Client Registration Policies can only cover OIDC Dynamic Client Registration. Client Policies cover not only what Client Registration Policies can do, but other client registration and configuration ways. The current plans are for Client Registration to be replaced by Client Policies. Validation of client configurations Red Hat build of Keycloak supports validation whether the client follows settings like Proof Key for Code Exchange, Request Object Signing Algorithm, Holder-of-Key Token, and so on some endpoints like Authorization Endpoint, Token Endpoint, and so on. These can be specified by each setting item (on Admin Console, switch, pull-down menu and so on). To make the client application secure, the administrator needs to set many settings in the appropriate way, which makes it difficult for the administrator to secure the client application. Client Policies can do these validation of client configurations mentioned just above and they can also be used to autoconfigure some client configuration switches to meet the advanced security requirements. In the future, individual client configuration settings may be replaced by Client Policies directly performing required validations. 
Conformance to a required security standards and profiles such as FAPI and OAuth 2.1 The Global client profiles are client profiles pre-configured in Red Hat build of Keycloak by default. They are pre-configured to be compliant with standard security profiles like FAPI and OAuth 2.1 in the securing apps section, which makes it easy for the administrator to secure their client application to be compliant with the particular security profile. At this moment, Red Hat build of Keycloak has global profiles for the support of FAPI and OAuth 2.1 specifications. The administrator will just need to configure the client policies to specify which clients should be compliant with the FAPI and OAuth 2.1. The administrator can configure client profiles and client policies, so that Red Hat build of Keycloak clients can be easily made compliant with various other security profiles like SPA, Native App, Open Banking and so on. 13.7.2. Protocol The client policy concept is independent of any specific protocol. Red Hat build of Keycloak currently supports especially client profiles for the OpenID Connect (OIDC) protocol , but there is also a client profile available for the SAML protocol . 13.7.3. Architecture Client Policies consists of the four building blocks: Condition, Executor, Profile and Policy. 13.7.3.1. Condition A condition determines to which client a policy is adopted and when it is adopted. Some conditions are checked at the time of client create/update when some other conditions are checked during client requests (OIDC Authorization request, Token endpoint request and so on). The condition checks whether one specified criteria is satisfied. For example, some condition checks whether the access type of the client is confidential. The condition can not be used solely by itself. It can be used in a policy that is described afterwards. A condition can be configurable the same as other configurable providers. What can be configured depends on each condition's nature. The following conditions are provided: The way of creating/updating a client Dynamic Client Registration (Anonymous or Authenticated with Initial access token or Registration access token) Admin REST API (Admin Console and so on) So for example when creating a client, a condition can be configured to evaluate to true when this client is created by OIDC Dynamic Client Registration without initial access token (Anonymous Dynamic Client Registration). So this condition can be used for example to ensure that all clients registered through OIDC Dynamic Client Registration are FAPI or OAuth 2.1 compliant. Author of a client (Checked by presence to the particular role or group) On OpenID Connect dynamic client registration, an author of a client is the end user who was authenticated to get an access token for generating a new client, not Service Account of the existing client that actually accesses the registration endpoint with the access token. On registration by Admin REST API, an author of a client is the end user like the administrator of the Red Hat build of Keycloak. Client Access Type (confidential, public, bearer-only) For example when a client sends an authorization request, a policy is adopted if this client is confidential. Confidential client has enabled client authentication when public client has disabled client authentication. Bearer-only is a deprecated client type. Client Scope Evaluates to true if the client has a particular client scope (either as default or as an optional scope used in current request). 
This can be used for example to ensure that OIDC authorization requests with scope fapi-example-scope need to be FAPI compliant. Client Role Applies for clients with the client role of the specified name. Typically you can create a client role of specified name to requested clients and use it as a "marker role" to make sure that specified client policy will be applied for requested clients. Note A use-case often exists for requiring the application of a particular client policy for the specified clients such as my-client-1 and my-client-2 . The best way to achieve this result is to use a Client Role condition in your policy and then a create client role of specified name to requested clients. This client role can be used as a "marker role" used solely for marking that particular client policy for particular clients. Client Domain Name, Host or IP Address Applied for specific domain names of client. Or for the cases when the administrator registers/updates client from particular Host or IP Address. Client Attribute Applies to clients with the client attribute of the specified name and value. If you specify multiple client attributes, they will be evaluated using AND conditions. If you want to evaluate using OR conditions, set this condition multiple times. Any Client This condition always evaluates to true. It can be used for example to ensure that all clients in the particular realm are FAPI compliant. 13.7.3.2. Executor An executor specifies what action is executed on a client to which a policy is adopted. The executor executes one or several specified actions. For example, some executor checks whether the value of the parameter redirect_uri in the authorization request matches exactly with one of the pre-registered redirect URIs on Authorization Endpoint and rejects this request if not. The executor can not be used solely by itself. It can be used in a profile that is described afterwards. An executor can be configurable the same as other configurable providers. What can be configured depends on the nature of each executor. An executor acts on various events. An executor implementation can ignore certain types of events (For example, executor for checking OIDC request object acts just on the OIDC authorization request). Events are: Creating a client (including creation through dynamic client registration) Updating a client Sending an authorization request Sending a token request Sending a token refresh request Sending a token revocation request Sending a token introspection request Sending a userinfo request Sending a logout request with a refresh token (note that logout with refresh token is proprietary Red Hat build of Keycloak functionality unsupported by any specification. It is rather recommended to rely on the official OIDC logout ). On each event, an executor can work in multiple phases. For example, on creating/updating a client, the executor can modify the client configuration by autoconfigure specific client settings. After that, the executor validates this configuration in validation phase. One of several purposes for this executor is to realize the security requirements of client conformance profiles like FAPI and OAuth 2.1. 
To do so, the following executors are needed: Enforce secure Client Authentication method is used for the client Enforce Holder-of-key tokens are used Enforce Proof Key for Code Exchange (PKCE) is used Enforce secure signature algorithm for Signed JWT client authentication (private-key-jwt) is used Enforce HTTPS redirect URI and make sure that configured redirect URI does not contain wildcards Enforce OIDC request object satisfying high security level Enforce Response Type of OIDC Hybrid Flow including ID Token used as detached signature as described in the FAPI 1 specification, which means that ID Token returned from Authorization response won't contain user profile data Enforce more secure state and nonce parameters treatment for preventing CSRF Enforce more secure signature algorithm when client registration Enforce binding_message parameter is used for CIBA requests Enforce Client Secret Rotation Enforce Client Registration Access Token Enforce checking if a client is the one to which an intent was issued in a use case where an intent is issued before starting an authorization code flow to get an access token like UK OpenBanking Enforce prohibiting implicit and hybrid flow Enforce checking if a PAR request includes necessary parameters included by an authorization request Enforce DPoP-binding tokens is used (available when dpop feature is enabled) Enforce using lightweight access token Enforce that refresh token rotation is skipped and there is no refresh token returned from the refresh token response Enforce a valid redirect URI that the OAuth 2.1 specification requires Enforce SAML Redirect binding cannot be used or SAML requests and assertions are signed 13.7.3.3. Profile A profile consists of several executors, which can realize a security profile like FAPI and OAuth 2.1. Profile can be configured by the Admin REST API (Admin Console) together with its executors. Three global profiles exist and they are configured in Red Hat build of Keycloak by default with pre-configured executors compliant with the FAPI 1 Baseline, FAPI 1 Advanced, FAPI CIBA, FAPI 2 and OAuth 2.1 specifications. More details exist in the FAPI and OAuth 2.1 in the securing apps section. 13.7.3.4. Policy A policy consists of several conditions and profiles. The policy can be adopted to clients satisfying all conditions of this policy. The policy refers several profiles and all executors of these profiles execute their task against the client that this policy is adopted to. 13.7.4. Configuration Policies, profiles, conditions, executors can be configured by Admin REST API, which means also the Admin Console. To do so, there is a tab Realm Realm Settings Client Policies , which means the administrator can have client policies per realm. The Global Client Profiles are automatically available in each realm. However there are no client policies configured by default. This means that the administrator is always required to create any client policy if they want for example the clients of his realm to be FAPI compliant. Global profiles cannot be updated, but the administrator can easily use them as a template and create their own profile if they want to do some slight changes in the global profile configurations. There is JSON Editor available in the Admin Console, which simplifies the creation of new profile based on some global profile. 13.7.5. Backward Compatibility Client Policies can replace Client Registration Policies described in the Client registration service from Securing applications and Services guide . 
However, Client Registration Policies still co-exist with Client Policies. This means that, for example, during a Dynamic Client Registration request to create or update a client, both client policies and client registration policies are applied. The current plan is for the Client Registration Policies feature to be removed and for existing client registration policies to be migrated into new client policies automatically. 13.7.6. Client Secret Rotation Example See an example configuration for client secret rotation.
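As a rough sketch of what such a configuration can look like in the JSON editor of the Client Policies tab, the snippet below defines a weekly rotation profile and a policy that applies it to all confidential clients. The profile and policy names are arbitrary, and the configuration key names are assumptions; generate the JSON from the Admin Console forms on your own server and compare before relying on it. The profiles and policies parts are edited on their respective sub-tabs.
{
  "profiles": [
    {
      "name": "weekly-client-secret-rotation-profile",
      "description": "Rotate client secrets weekly",
      "executors": [
        {
          "executor": "secret-rotation",
          "configuration": {
            "expiration-period": 604800,
            "rotated-expiration-period": 172800,
            "remain-expiration-period": 86400
          }
        }
      ]
    }
  ],
  "policies": [
    {
      "name": "confidential-client-secret-rotation-policy",
      "description": "Apply weekly secret rotation to all confidential clients",
      "enabled": true,
      "conditions": [
        {
          "condition": "client-access-type",
          "configuration": { "type": [ "confidential" ] }
        }
      ],
      "profiles": [ "weekly-client-secret-rotation-profile" ]
    }
  ]
}
The three durations mirror the earlier example: secrets are valid for one week, rotated secrets are kept for two days, and the update window opens one day before a secret expires.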
[ "https://myhost.com/myapp/k_jwks", "POST /realms/demo/protocol/openid-connect/token Authorization: Basic cHJvZHVjdC1zYS1jbGllbnQ6cGFzc3dvcmQ= Content-Type: application/x-www-form-urlencoded grant_type=client_credentials", "HTTP/1.1 200 OK Content-Type: application/json;charset=UTF-8 Cache-Control: no-store Pragma: no-cache { \"access_token\":\"2YotnFZFEjr1zCsicMWpAA\", \"token_type\":\"bearer\", \"expires_in\":60 }", "\"audience\": [ \"<trusted service>\" ]", "root/realms/{realm}/protocol/saml/descriptor", "http://host:port/realms/master/clients/account/redirect", "scope=openid phone" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/server_administration_guide/assembly-managing-clients_server_administration_guide
Chapter 132. Spring LDAP
Chapter 132. Spring LDAP Since Camel 2.11 Only producer is supported The Spring LDAP component provides a Camel wrapper for Spring LDAP . 132.1. Dependencies When using spring-ldap with Red Hat build of Camel Spring Boot, use the following Maven dependency to enable support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-spring-ldap-starter</artifactId> </dependency> 132.2. URI format Where springLdapTemplate is the name of the Spring LDAP Template bean . In this bean, you configure the URL and the credentials for your LDAP access. 132.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 132.3.1. Configuring Component Options At the component level, you set general and shared configurations that are, then, inherited by the endpoints. It is the highest configuration level. For example, a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. You can configure components using: the Component DSL . in a configuration file (application.properties, *.yaml files, etc). directly in the Java code. 132.3.2. Configuring Endpoint Options You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders . Property placeholders provide a few benefits: They help prevent using hardcoded urls, port numbers, sensitive information, and other settings. They allow externalizing the configuration from the code. They help the code to become more flexible and reusable. The following two sections list all the options, firstly for the component followed by the endpoint. 132.4. Component Options The Spring LDAP component supports 2 options, which are listed below. Name Description Default Type lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 132.5. 
Endpoint Options The Spring LDAP endpoint is configured using URI syntax: Following are the path and query parameters: 132.5.1. Path Parameters (1 parameters) Name Description Default Type templateName (producer) Required Name of the Spring LDAP Template bean. String 132.5.2. Query Parameters (3 parameters) Name Description Default Type operation (producer) Required The LDAP operation to be performed. Enum values: SEARCH BIND UNBIND AUTHENTICATE MODIFY_ATTRIBUTES FUNCTION_DRIVEN LdapOperation scope (producer) The scope of the search operation. Enum values: object onelevel subtree subtree String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean 132.6. Usage The component supports producer endpoints only. An attempt to create a consumer endpoint can result in an UnsupportedOperationException . The body of the message must be a map (an instance of java.util.Map ). Unless a base DN is specified in the configuration of your ContextSource, this map must contain at least an entry with the key dn (not needed for function_driven operation) that specifies the root node for the LDAP operation to be performed. Other entries of the map are operation-specific. The body of the message remains unchanged for the bind and unbind operations. For the search and function_driven operations, the body is set to the result of the search, see http://static.springsource.org/spring-ldap/site/apidocs/org/springframework/ldap/core/LdapTemplate.html#search%28java.lang.String,%20java.lang.String,%20int,%20org.springframework.ldap.core.AttributesMapper%29 . 132.6.1. Search The message body must have an entry with the key filter . The value must be a String representing a valid LDAP filter, see http://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol#Search_and_Compare . 132.6.2. Bind The message body must have an entry with the key attributes . The value must be an instance of javax.naming.directory.Attributes This entry specifies the LDAP node to be created. 132.6.3. Unbind No further entries are necessary, the node with the specified dn is deleted. 132.6.4. Authenticate The message body must have entries with the keys filter and password . The values must be an instance of String representing a valid LDAP filter and a user password, respectively. 132.6.5. Modify Attributes The message body must have an entry with the key modificationItems . The value must be an instance of any array of type javax.naming.directory.ModificationItem 132.6.6. Function-Driven The message body must have entries with the keys function and request . The function value must be of type java.util.function.BiFunction<L, Q, S> . The L type parameter must be of type org.springframework.ldap.core.LdapOperations . The request value must be the same type as the Q type parameter in the function and it must encapsulate the parameters expected by the LdapTemplate method being invoked within the function . 
The S type parameter represents the response type as returned by the LdapTemplate method being invoked. This operation allows dynamic invocation of LdapTemplate methods that are not covered by the operations mentioned above. Key definitions In order to avoid spelling errors, the following constants are defined in org.apache.camel.springldap.SpringLdapProducer : public static final String DN = "dn" public static final String FILTER = "filter" public static final String ATTRIBUTES = "attributes" public static final String PASSWORD = "password"; public static final String MODIFICATION_ITEMS = "modificationItems"; public static final String FUNCTION = "function"; public static final String REQUEST = "request"; Following is an example of createMap function: Here, createMap function returns Map object that contains information about attributes and domain name of ldap server. You must also configure ldap connection using Spring Boot auto-configuration or LdapTemplate Bean for the above example. Example for Spring Boot auto-configuration: 132.7. Spring Boot Auto-Configuration The component supports 3 options that are listed below. Name Description Default Type camel.component.spring-ldap.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.spring-ldap.enabled Whether to enable auto configuration of the spring-ldap component. This is enabled by default. Boolean camel.component.spring-ldap.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
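To complement the bind example shown in the code listings for this chapter, the following sketch shows a SEARCH operation; the endpoint bean name ldapTemplate, the base DN, and the filter are illustrative values only:
import java.util.HashMap;
import java.util.Map;
import org.apache.camel.builder.RouteBuilder;

public class LdapSearchRoute extends RouteBuilder {
    @Override
    public void configure() {
        // The message body is a Map holding the search root ("dn") and an LDAP "filter"
        from("direct:search")
            .process(exchange -> {
                Map<String, Object> body = new HashMap<>();
                body.put("dn", "ou=people,dc=example,dc=org");
                body.put("filter", "(objectClass=person)");
                exchange.getIn().setBody(body);
            })
            .to("spring-ldap:ldapTemplate?operation=SEARCH&scope=subtree")
            // After the call, the body holds the search result returned by LdapTemplate
            .log("Search returned: ${body}");
    }
}
As with the bind example, the LDAP connection itself comes from the Spring Boot auto-configuration properties or from an LdapTemplate bean registered under the name used in the endpoint URI.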
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-spring-ldap-starter</artifactId> </dependency>", "spring-ldap:springLdapTemplate[?options]", "spring-ldap:templateName", "from(\"direct:start\") .setBody(constant(createMap())) .to(\"spring-ldap:ldapTemplate?operation=BIND\");", "private static Map<String, Object> createMap() { BasicAttributes basicAttributes = new BasicAttributes(); basicAttributes.put(\"cn\", \"Name Surname\"); basicAttributes.put(\"sn\", \"Surname\"); basicAttributes.put(\"objectClass\", \"person\"); Map<String, Object> map = new HashMap<>(); map.put(SpringLdapProducer.DN, \"cn=LdapDN,dc=example,dc=org\"); map.put(SpringLdapProducer.ATTRIBUTES, basicAttributes); return map; }", "spring.ldap.password=passwordforldapserver spring.ldap.urls=urlForLdapServer spring.ldap.username=usernameForLdapServer" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-spring-ldap-component-starter
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of the documentation. Click Submit Bug .
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/troubleshooting_openshift_data_foundation/providing-feedback-on-red-hat-documentation_rhodf
Chapter 12. Managing control plane machines
Chapter 12. Managing control plane machines 12.1. About control plane machine sets With control plane machine sets, you can automate management of the control plane machine resources within your OpenShift Container Platform cluster. Important Control plane machine sets cannot manage compute machines, and compute machine sets cannot manage control plane machines. Control plane machine sets provide management capabilities for control plane machines that are similar to what compute machine sets provide for compute machines. However, these two types of machine sets are separate custom resources defined within the Machine API and have several fundamental differences in their architecture and functionality. 12.1.1. Control Plane Machine Set Operator overview The Control Plane Machine Set Operator uses the ControlPlaneMachineSet custom resource (CR) to automate management of the control plane machine resources within your OpenShift Container Platform cluster. When the state of the cluster control plane machine set is set to Active , the Operator ensures that the cluster has the correct number of control plane machines with the specified configuration. This allows the automated replacement of degraded control plane machines and rollout of changes to the control plane. A cluster has only one control plane machine set, and the Operator only manages objects in the openshift-machine-api namespace. 12.1.2. Control Plane Machine Set Operator limitations The Control Plane Machine Set Operator has the following limitations: Only Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and VMware vSphere clusters are supported. Clusters that do not have preexisting machines that represent the control plane nodes cannot use a control plane machine set or enable the use of a control plane machine set after installation. Generally, preexisting control plane machines are only present if a cluster was installed using infrastructure provisioned by the installation program. To determine if a cluster has the required preexisting control plane machines, run the following command as a user with administrator privileges: USD oc get machine \ -n openshift-machine-api \ -l machine.openshift.io/cluster-api-machine-role=master Example output showing preexisting control plane machines NAME PHASE TYPE REGION ZONE AGE <infrastructure_id>-master-0 Running m6i.xlarge us-west-1 us-west-1a 5h19m <infrastructure_id>-master-1 Running m6i.xlarge us-west-1 us-west-1b 5h19m <infrastructure_id>-master-2 Running m6i.xlarge us-west-1 us-west-1a 5h19m Example output missing preexisting control plane machines No resources found in openshift-machine-api namespace. The Operator requires the Machine API Operator to be operational and is therefore not supported on clusters with manually provisioned machines. When installing an OpenShift Container Platform cluster with manually provisioned machines for a platform that creates an active generated ControlPlaneMachineSet custom resource (CR), you must remove the Kubernetes manifest files that define the control plane machine set as instructed in the installation process. Only clusters with three control plane machines are supported. Horizontal scaling of the control plane is not supported. Deploying Azure control plane machines on Ephemeral OS disks increases the risk of data loss and is not supported. Deploying control plane machines as AWS Spot Instances, GCP preemptible VMs, or Azure Spot VMs is not supported.
Important Attempting to deploy control plane machines as AWS Spot Instances, GCP preemptible VMs, or Azure Spot VMs might cause the cluster to lose etcd quorum. A cluster that loses all control plane machines simultaneously is unrecoverable. Making changes to the control plane machine set during or prior to installation is not supported. You must make any changes to the control plane machine set only after installation. 12.1.3. Additional resources Control Plane Machine Set Operator reference ControlPlaneMachineSet custom resource 12.2. Getting started with control plane machine sets The process for getting started with control plane machine sets depends on the state of the ControlPlaneMachineSet custom resource (CR) in your cluster. Clusters with an active generated CR Clusters that have a generated CR with an active state use the control plane machine set by default. No administrator action is required. Clusters with an inactive generated CR For clusters that include an inactive generated CR, you must review the CR configuration and activate the CR . Clusters without a generated CR For clusters that do not include a generated CR, you must create and activate a CR with the appropriate configuration for your cluster. If you are uncertain about the state of the ControlPlaneMachineSet CR in your cluster, you can verify the CR status . 12.2.1. Supported cloud providers In OpenShift Container Platform 4.13, the control plane machine sets are supported for Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and VMware vSphere clusters. The status of the control plane machine set after installation depends on your cloud provider and the version of OpenShift Container Platform that you installed on your cluster. Table 12.1. Control plane machine set implementation for OpenShift Container Platform 4.13 Cloud provider Active by default Generated CR Manual CR required Amazon Web Services (AWS) X [1] X Google Cloud Platform (GCP) X [2] X Microsoft Azure X [2] X VMware vSphere X AWS clusters that are upgraded from version 4.11 or earlier require CR activation . GCP and Azure clusters that are upgraded from version 4.12 or earlier require CR activation . 12.2.2. Checking the control plane machine set custom resource state You can verify the existence and state of the ControlPlaneMachineSet custom resource (CR). Procedure Determine the state of the CR by running the following command: USD oc get controlplanemachineset.machine.openshift.io cluster \ --namespace openshift-machine-api A result of Active indicates that the ControlPlaneMachineSet CR exists and is activated. No administrator action is required. A result of Inactive indicates that a ControlPlaneMachineSet CR exists but is not activated. A result of NotFound indicates that there is no existing ControlPlaneMachineSet CR. steps To use the control plane machine set, you must ensure that a ControlPlaneMachineSet CR with the correct settings for your cluster exists. If your cluster has an existing CR, you must verify that the configuration in the CR is correct for your cluster. If your cluster does not have an existing CR, you must create one with the correct configuration for your cluster. 12.2.3. Activating the control plane machine set custom resource To use the control plane machine set, you must ensure that a ControlPlaneMachineSet custom resource (CR) with the correct settings for your cluster exists. 
On a cluster with a generated CR, you must verify that the configuration in the CR is correct for your cluster and activate it. Note For more information about the parameters in the CR, see "Control plane machine set configuration". Procedure View the configuration of the CR by running the following command: USD oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster Change the values of any fields that are incorrect for your cluster configuration. When the configuration is correct, activate the CR by setting the .spec.state field to Active and saving your changes. Important To activate the CR, you must change the .spec.state field to Active in the same oc edit session that you use to update the CR configuration. If the CR is saved with the state left as Inactive , the control plane machine set generator resets the CR to its original settings. Additional resources Control Plane Machine Set Operator configuration 12.2.4. Creating a control plane machine set custom resource To use the control plane machine set, you must ensure that a ControlPlaneMachineSet custom resource (CR) with the correct settings for your cluster exists. On a cluster without a generated CR, you must create the CR manually and activate it. Note For more information about the structure and parameters of the CR, see "Control plane machine set configuration". Procedure Create a YAML file using the following template: Control plane machine set CR YAML file template apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <cluster_id> 1 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master state: Active 2 strategy: type: RollingUpdate 3 template: machineType: machines_v1beta1_machine_openshift_io machines_v1beta1_machine_openshift_io: failureDomains: platform: <platform> 4 <platform_failure_domains> 5 metadata: labels: machine.openshift.io/cluster-api-cluster: <cluster_id> 6 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master spec: providerSpec: value: <platform_provider_spec> 7 1 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You must specify this value when you create a ControlPlaneMachineSet CR. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 Specify the state of the Operator. When the state is Inactive , the Operator is not operational. You can activate the Operator by setting the value to Active . Important Before you activate the CR, you must ensure that its configuration is correct for your cluster requirements. 3 Specify the update strategy for the cluster. The allowed values are OnDelete and RollingUpdate . The default value is RollingUpdate . 4 Specify your cloud provider platform name. The allowed values are AWS , Azure , GCP , and VSphere . 5 Add the <platform_failure_domains> configuration for the cluster. The format and values of this section are provider-specific. For more information, see the sample failure domain configuration for your cloud provider. Note VMware vSphere does not support failure domains. For vSphere clusters, replace <platform_failure_domains> with an empty failureDomains: parameter. 
6 Specify the infrastructure ID. 7 Add the <platform_provider_spec> configuration for the cluster. The format and values of this section are provider-specific. For more information, see the sample provider specification for your cloud provider. Refer to the sample YAML for a control plane machine set CR and populate your file with values that are appropriate for your cluster configuration. Refer to the sample failure domain configuration and sample provider specification for your cloud provider and update those sections of your file with the appropriate values. When the configuration is correct, activate the CR by setting the .spec.state field to Active and saving your changes. Create the CR from your YAML file by running the following command: USD oc create -f <control_plane_machine_set>.yaml where <control_plane_machine_set> is the name of the YAML file that contains the CR configuration. Additional resources Control Plane Machine Set Operator configuration Sample YAML for configuring Amazon Web Services clusters Sample YAML for configuring Google Cloud Platform clusters Sample YAML for configuring Microsoft Azure clusters Sample YAML for configuring VMware vSphere clusters 12.3. Control plane machine set configuration These example YAML file and snippets demonstrate the base structure for a control plane machine set custom resource (CR) and platform-specific samples for provider specification and failure domain configurations. 12.3.1. Sample YAML for a control plane machine set custom resource The base of the ControlPlaneMachineSet CR is structured the same way for all platforms. Sample ControlPlaneMachineSet CR YAML file apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster 1 namespace: openshift-machine-api spec: replicas: 3 2 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <cluster_id> 3 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master state: Active 4 strategy: type: RollingUpdate 5 template: machineType: machines_v1beta1_machine_openshift_io machines_v1beta1_machine_openshift_io: failureDomains: platform: <platform> 6 <platform_failure_domains> 7 metadata: labels: machine.openshift.io/cluster-api-cluster: <cluster_id> machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master spec: providerSpec: value: <platform_provider_spec> 8 1 Specifies the name of the ControlPlaneMachineSet CR, which is cluster . Do not change this value. 2 Specifies the number of control plane machines. Only clusters with three control plane machines are supported, so the replicas value is 3 . Horizontal scaling is not supported. Do not change this value. 3 Specifies the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You must specify this value when you create a ControlPlaneMachineSet CR. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 4 Specifies the state of the Operator. When the state is Inactive , the Operator is not operational. You can activate the Operator by setting the value to Active . Important Before you activate the Operator, you must ensure that the ControlPlaneMachineSet CR configuration is correct for your cluster requirements. 
For more information about activating the Control Plane Machine Set Operator, see "Getting started with control plane machine sets". 5 Specifies the update strategy for the cluster. The allowed values are OnDelete and RollingUpdate . The default value is RollingUpdate . For more information about update strategies, see "Updating the control plane configuration". 6 Specifies the cloud provider platform name. Do not change this value. 7 Specifies the <platform_failure_domains> configuration for the cluster. The format and values of this section are provider-specific. For more information, see the sample failure domain configuration for your cloud provider. Note VMware vSphere does not support failure domains. 8 Specifies the <platform_provider_spec> configuration for the cluster. The format and values of this section are provider-specific. For more information, see the sample provider specification for your cloud provider. Additional resources Getting started with control plane machine sets Updating the control plane configuration Provider-specific configuration The <platform_provider_spec> and <platform_failure_domains> sections of the control plane machine set resources are provider-specific. Refer to the example YAML for your cluster: Sample YAML snippets for configuring Amazon Web Services clusters Sample YAML snippets for configuring Google Cloud Platform clusters Sample YAML snippets for configuring Microsoft Azure clusters Sample YAML snippets for configuring VMware vSphere clusters 12.3.2. Sample YAML for configuring Amazon Web Services clusters Some sections of the control plane machine set CR are provider-specific. The example YAML in this section show provider specification and failure domain configurations for an Amazon Web Services (AWS) cluster. 12.3.2.1. Sample AWS provider specification When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine CR that is created by the installation program. You can omit any field that is set in the failure domain section of the CR. In the following example, <cluster_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster Sample AWS providerSpec values providerSpec: value: ami: id: ami-<ami_id_string> 1 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: 2 encrypted: true iops: 0 kmsKey: arn: "" volumeSize: 120 volumeType: gp3 credentialsSecret: name: aws-cloud-credentials 3 deviceIndex: 0 iamInstanceProfile: id: <cluster_id>-master-profile 4 instanceType: m6i.xlarge 5 kind: AWSMachineProviderConfig 6 loadBalancers: 7 - name: <cluster_id>-int type: network - name: <cluster_id>-ext type: network metadata: creationTimestamp: null metadataServiceOptions: {} placement: 8 region: <region> 9 securityGroups: - filters: - name: tag:Name values: - <cluster_id>-master-sg 10 subnet: {} 11 userDataSecret: name: master-user-data 12 1 Specifies the Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Images (AMI) ID for the cluster. The AMI must belong to the same region as the cluster. If you want to use an AWS Marketplace image, you must complete the OpenShift Container Platform subscription from the AWS Marketplace to obtain an AMI ID for your region. 
2 Specifies the configuration of an encrypted EBS volume. 3 Specifies the secret name for the cluster. Do not change this value. 4 Specifies the AWS Identity and Access Management (IAM) instance profile. Do not change this value. 5 Specifies the AWS instance type for the control plane. 6 Specifies the cloud provider platform type. Do not change this value. 7 Specifies the internal ( int ) and external ( ext ) load balancers for the cluster. Note You can omit the external ( ext ) load balancer parameters on private OpenShift Container Platform clusters. 8 This parameter is configured in the failure domain, and is shown with an empty value here. If a value specified for this parameter differs from the value in the failure domain, the Operator overwrites it with the value in the failure domain. 9 Specifies the AWS region for the cluster. 10 Specifies the security group for the control plane machines. 11 This parameter is configured in the failure domain, and is shown with an empty value here. If a value specified for this parameter differs from the value in the failure domain, the Operator overwrites it with the value in the failure domain. 12 Specifies the control plane user data secret. Do not change this value. 12.3.2.2. Sample AWS failure domain configuration The control plane machine set concept of a failure domain is analogous to the existing AWS concept of an Availability Zone (AZ) . The ControlPlaneMachineSet CR spreads control plane machines across multiple failure domains when possible. When configuring AWS failure domains in the control plane machine set, you must specify the availability zone name and the subnet to use. Sample AWS failure domain values failureDomains: aws: - placement: availabilityZone: <aws_zone_a> 1 subnet: 2 filters: - name: tag:Name values: - <cluster_id>-private-<aws_zone_a> 3 type: Filters 4 - placement: availabilityZone: <aws_zone_b> 5 subnet: filters: - name: tag:Name values: - <cluster_id>-private-<aws_zone_b> 6 type: Filters platform: AWS 7 1 Specifies an AWS availability zone for the first failure domain. 2 Specifies a subnet configuration. In this example, the subnet type is Filters , so there is a filters stanza. 3 Specifies the subnet name for the first failure domain, using the infrastructure ID and the AWS availability zone. 4 Specifies the subnet type. The allowed values are ARN , Filters , and ID . The default value is Filters . 5 Specifies an AWS availability zone for an additional failure domain. 6 Specifies the subnet name for the additional failure domain, using the infrastructure ID and the AWS availability zone. 7 Specifies the cloud provider platform name. Do not change this value. Additional resources Enabling Amazon Web Services features for control plane machines 12.3.3. Sample YAML for configuring Google Cloud Platform clusters Some sections of the control plane machine set CR are provider-specific. The example YAML snippets in this section show provider specification and failure domain configurations for a Google Cloud Platform (GCP) cluster. 12.3.3.1. Sample GCP provider specification When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine custom resource (CR) that is created by the installation program. You can omit any field that is set in the failure domain section of the CR.
Values obtained by using the OpenShift CLI In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI. Infrastructure ID The <cluster_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster Image path The <path_to_image> string is the path to the image that was used to create the disk. If you have the OpenShift CLI installed, you can obtain the path to the image by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.machines_v1beta1_machine_openshift_io.spec.providerSpec.value.disks[0].image}{"\n"}' \ get ControlPlaneMachineSet/cluster Sample GCP providerSpec values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ... spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials 1 deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 2 labels: null sizeGb: 200 type: pd-ssd kind: GCPMachineProviderSpec 3 machineType: e2-standard-4 metadata: creationTimestamp: null metadataServiceOptions: {} networkInterfaces: - network: <cluster_id>-network subnetwork: <cluster_id>-master-subnet projectID: <project_name> 4 region: <region> 5 serviceAccounts: 6 - email: <cluster_id>-m@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform shieldedInstanceConfig: {} tags: - <cluster_id>-master targetPools: - <cluster_id>-api userDataSecret: name: master-user-data 7 zone: "" 8 1 Specifies the secret name for the cluster. Do not change this value. 2 Specifies the path to the image that was used to create the disk. To use a GCP Marketplace image, specify the offer to use: OpenShift Container Platform: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-413-x86-64-202305021736 OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-413-x86-64-202305021736 OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-413-x86-64-202305021736 3 Specifies the cloud provider platform type. Do not change this value. 4 Specifies the name of the GCP project that you use for your cluster. 5 Specifies the GCP region for the cluster. 6 Specifies a single service account. Multiple service accounts are not supported. 7 Specifies the control plane user data secret. Do not change this value. 8 This parameter is configured in the failure domain, and is shown with an empty value here. If a value specified for this parameter differs from the value in the failure domain, the Operator overwrites it with the value in the failure domain. 12.3.3.2. Sample GCP failure domain configuration The control plane machine set concept of a failure domain is analogous to the existing GCP concept of a zone . The ControlPlaneMachineSet CR spreads control plane machines across multiple failure domains when possible. When configuring GCP failure domains in the control plane machine set, you must specify the zone name to use. 
Sample GCP failure domain values failureDomains: gcp: - zone: <gcp_zone_a> 1 - zone: <gcp_zone_b> 2 - zone: <gcp_zone_c> - zone: <gcp_zone_d> platform: GCP 3 1 Specifies a GCP zone for the first failure domain. 2 Specifies an additional failure domain. Further failure domains are added the same way. 3 Specifies the cloud provider platform name. Do not change this value. 12.3.4. Sample YAML for configuring Microsoft Azure clusters Some sections of the control plane machine set CR are provider-specific. The example YAML in this section show provider specification and failure domain configurations for an Azure cluster. 12.3.4.1. Sample Azure provider specification When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane Machine CR that is created by the installation program. You can omit any field that is set in the failure domain section of the CR. In the following example, <cluster_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster Sample Azure providerSpec values providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials 1 namespace: openshift-machine-api diagnostics: {} image: 2 offer: "" publisher: "" resourceID: /resourceGroups/<cluster_id>-rg/providers/Microsoft.Compute/galleries/gallery_<cluster_id>/images/<cluster_id>-gen2/versions/412.86.20220930 3 sku: "" version: "" internalLoadBalancer: <cluster_id>-internal 4 kind: AzureMachineProviderSpec 5 location: <region> 6 managedIdentity: <cluster_id>-identity metadata: creationTimestamp: null name: <cluster_id> networkResourceGroup: <cluster_id>-rg osDisk: 7 diskSettings: {} diskSizeGB: 1024 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: <cluster_id> 8 resourceGroup: <cluster_id>-rg subnet: <cluster_id>-master-subnet 9 userDataSecret: name: master-user-data 10 vmSize: Standard_D8s_v3 vnet: <cluster_id>-vnet zone: "" 11 1 Specifies the secret name for the cluster. Do not change this value. 2 Specifies the image details for your control plane machine set. 3 Specifies an image that is compatible with your instance type. The Hyper-V generation V2 images created by the installation program have a -gen2 suffix, while V1 images have the same name without the suffix. 4 Specifies the internal load balancer for the control plane. This field might not be preconfigured but is required in both the ControlPlaneMachineSet and control plane Machine CRs. 5 Specifies the cloud provider platform type. Do not change this value. 6 Specifies the region to place control plane machines on. 7 Specifies the disk configuration for the control plane. 8 Specifies the public load balancer for the control plane. Note You can omit the publicLoadBalancer parameter on private OpenShift Container Platform clusters that have user-defined outbound routing. 9 Specifies the subnet for the control plane. 10 Specifies the control plane user data secret. Do not change this value. 11 This parameter is configured in the failure domain, and is shown with an empty value here. 
If a value specified for this parameter differs from the value in the failure domain, the Operator overwrites it with the value in the failure domain. 12.3.4.2. Sample Azure failure domain configuration The control plane machine set concept of a failure domain is analogous to existing Azure concept of an Azure availability zone . The ControlPlaneMachineSet CR spreads control plane machines across multiple failure domains when possible. When configuring Azure failure domains in the control plane machine set, you must specify the availability zone name. Sample Azure failure domain values failureDomains: azure: 1 - zone: "1" - zone: "2" - zone: "3" platform: Azure 2 1 Each instance of zone specifies an Azure availability zone for a failure domain. 2 Specifies the cloud provider platform name. Do not change this value. Additional resources Enabling Microsoft Azure features for control plane machines 12.3.5. Sample YAML for configuring VMware vSphere clusters Some sections of the control plane machine set CR are provider-specific. The example YAML in this section shows a provider specification configuration for a VMware vSphere cluster. 12.3.5.1. Sample vSphere provider specification When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine CR that is created by the installation program. Sample vSphere providerSpec values providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials 1 diskGiB: 120 2 kind: VSphereMachineProviderSpec 3 memoryMiB: 16384 4 metadata: creationTimestamp: null network: 5 devices: - networkName: <vm_network_name> numCPUs: 4 6 numCoresPerSocket: 4 7 snapshot: "" template: <vm_template_name> 8 userDataSecret: name: master-user-data 9 workspace: datacenter: <vcenter_datacenter_name> 10 datastore: <vcenter_datastore_name> 11 folder: <path_to_vcenter_vm_folder> 12 resourcePool: <vsphere_resource_pool> 13 server: <vcenter_server_ip> 14 1 Specifies the secret name for the cluster. Do not change this value. 2 Specifies the VM disk size for the control plane machines. 3 Specifies the cloud provider platform type. Do not change this value. 4 Specifies the memory allocated for the control plane machines. 5 Specifies the network on which the control plane is deployed. 6 Specifies the number of CPUs allocated for the control plane machines. 7 Specifies the number of cores for each control plane CPU. 8 Specifies the vSphere VM template to use, such as user-5ddjd-rhcos . 9 Specifies the control plane user data secret. Do not change this value. 10 Specifies the vCenter Datacenter for the control plane. 11 Specifies the vCenter Datastore for the control plane. 12 Specifies the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd . 13 Specifies the vSphere resource pool for your VMs. 14 Specifies the vCenter server IP or fully qualified domain name. 12.4. Managing control plane machines with control plane machine sets Control plane machine sets automate several essential aspects of control plane management. 12.4.1. Replacing a control plane machine To replace a control plane machine in a cluster that has a control plane machine set, you delete the machine manually. The control plane machine set replaces the deleted machine with one using the specification in the control plane machine set custom resource (CR). 
Procedure List the control plane machines in your cluster by running the following command: USD oc get machines \ -l machine.openshift.io/cluster-api-machine-role==master \ -n openshift-machine-api Delete a control plane machine by running the following command: USD oc delete machine \ -n openshift-machine-api \ <control_plane_machine_name> 1 1 Specify the name of the control plane machine to delete. Note If you delete multiple control plane machines, the control plane machine set replaces them according to the configured update strategy: For clusters that use the default RollingUpdate update strategy, the Operator replaces one machine at a time until each machine is replaced. For clusters that are configured to use the OnDelete update strategy, the Operator creates all of the required replacement machines simultaneously. Both strategies maintain etcd health during control plane machine replacement. 12.4.2. Updating the control plane configuration You can make changes to the configuration of the machines in the control plane by updating the specification in the control plane machine set custom resource (CR). The Control Plane Machine Set Operator monitors the control plane machines and compares their configuration with the specification in the control plane machine set CR. When there is a discrepancy between the specification in the CR and the configuration of a control plane machine, the Operator marks that control plane machine for replacement. Note For more information about the parameters in the CR, see "Control plane machine set configuration". Prerequisites Your cluster has an activated and functioning Control Plane Machine Set Operator. Procedure Edit your control plane machine set CR by running the following command: USD oc edit controlplanemachineset.machine.openshift.io cluster \ -n openshift-machine-api Change the values of any fields that you want to update in your cluster configuration. Save your changes. steps For clusters that use the default RollingUpdate update strategy, the control plane machine set propagates changes to your control plane configuration automatically. For clusters that are configured to use the OnDelete update strategy, you must replace your control plane machines manually. 12.4.2.1. Automatic updates to the control plane configuration The RollingUpdate update strategy automatically propagates changes to your control plane configuration. This update strategy is the default configuration for the control plane machine set. For clusters that use the RollingUpdate update strategy, the Operator creates a replacement control plane machine with the configuration that is specified in the CR. When the replacement control plane machine is ready, the Operator deletes the control plane machine that is marked for replacement. The replacement machine then joins the control plane. If multiple control plane machines are marked for replacement, the Operator protects etcd health during replacement by repeating this replacement process one machine at a time until it has replaced each machine. 12.4.2.2. Manual updates to the control plane configuration You can use the OnDelete update strategy to propagate changes to your control plane configuration by replacing machines manually. Manually replacing machines allows you to test changes to your configuration on a single machine before applying the changes more broadly. 
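The update strategy itself is set in the strategy stanza of the ControlPlaneMachineSet CR. The following is a minimal sketch of selecting the OnDelete strategy; all other fields are omitted here and remain as shown in the earlier CR template:

apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
  # ... other fields unchanged ...
  strategy:
    type: OnDelete   # the default is RollingUpdate; OnDelete waits for you to delete machines manually

You can apply this change with the same oc edit workflow described in the preceding procedure.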
For clusters that are configured to use the OnDelete update strategy, the Operator creates a replacement control plane machine when you delete an existing machine. When the replacement control plane machine is ready, the etcd Operator allows the existing machine to be deleted. The replacement machine then joins the control plane. If multiple control plane machines are deleted, the Operator creates all of the required replacement machines simultaneously. The Operator maintains etcd health by preventing more than one machine being removed from the control plane at once. 12.4.3. Enabling Amazon Web Services features for control plane machines You can enable Amazon Web Services (AWS) features on control plane machines by changing the configuration of your control plane machine set. When you save an update to the control plane machine set, the Control Plane Machine Set Operator updates the control plane machines according to your configured update strategy. 12.4.3.1. Restricting the API server to private After you deploy a cluster to Amazon Web Services (AWS), you can reconfigure the API server to use only the private zone. Prerequisites Install the OpenShift CLI ( oc ). Have access to the web console as a user with admin privileges. Procedure In the web portal or console for your cloud provider, take the following actions: Locate and delete the appropriate load balancer component: For AWS, delete the external load balancer. The API DNS entry in the private zone already points to the internal load balancer, which uses an identical configuration, so you do not need to modify the internal load balancer. Delete the api.USDclustername.USDyourdomain DNS entry in the public zone. Remove the external load balancers by deleting the following lines in the control plane machine set custom resource: providerSpec: value: loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network 1 2 Delete this line. 12.4.3.2. Changing the Amazon Web Services instance type by using a control plane machine set You can change the Amazon Web Services (AWS) instance type that your control plane machines use by updating the specification in the control plane machine set custom resource (CR). Prerequisites Your AWS cluster uses a control plane machine set. Procedure Edit the following line under the providerSpec field: providerSpec: value: ... instanceType: <compatible_aws_instance_type> 1 1 Specify a larger AWS instance type with the same base as the selection. For example, you can change m6i.xlarge to m6i.2xlarge or m6i.4xlarge . Save your changes. 12.4.3.3. Machine set options for the Amazon EC2 Instance Metadata Service You can use machine sets to create machines that use a specific version of the Amazon EC2 Instance Metadata Service (IMDS). Machine sets can create machines that allow the use of both IMDSv1 and IMDSv2 or machines that require the use of IMDSv2. Note Using IMDSv2 is only supported on AWS clusters that were created with OpenShift Container Platform version 4.7 or later. Important Before configuring a machine set to create machines that require IMDSv2, ensure that any workloads that interact with the AWS metadata service support IMDSv2. 12.4.3.3.1. Configuring IMDS by using machine sets You can specify whether to require the use of IMDSv2 by adding or editing the value of metadataServiceOptions.authentication in the machine set YAML file for your machines. Prerequisites To use IMDSv2, your AWS cluster must have been created with OpenShift Container Platform version 4.7 or later. 
Procedure Add or edit the following lines under the providerSpec field: providerSpec: value: metadataServiceOptions: authentication: Required 1 1 To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. 12.4.3.4. Machine sets that deploy machines as Dedicated Instances You can create a machine set running on AWS that deploys machines as Dedicated Instances. Dedicated Instances run in a virtual private cloud (VPC) on hardware that is dedicated to a single customer. These Amazon EC2 instances are physically isolated at the host hardware level. The isolation of Dedicated Instances occurs even if the instances belong to different AWS accounts that are linked to a single payer account. However, other instances that are not dedicated can share hardware with Dedicated Instances if they belong to the same AWS account. Instances with either public or dedicated tenancy are supported by the Machine API. Instances with public tenancy run on shared hardware. Public tenancy is the default tenancy. Instances with dedicated tenancy run on single-tenant hardware. 12.4.3.4.1. Creating Dedicated Instances by using machine sets You can run a machine that is backed by a Dedicated Instance by using Machine API integration. Set the tenancy field in your machine set YAML file to launch a Dedicated Instance on AWS. Procedure Specify a dedicated tenancy under the providerSpec field: providerSpec: placement: tenancy: dedicated 12.4.4. Enabling Microsoft Azure features for control plane machines You can enable Microsoft Azure features on control plane machines by changing the configuration of your control plane machine set. When you save an update to the control plane machine set, the Control Plane Machine Set Operator updates the control plane machines according to your configured update strategy. 12.4.4.1. Restricting the API server to private After you deploy a cluster to Microsoft Azure, you can reconfigure the API server to use only the private zone. Prerequisites Install the OpenShift CLI ( oc ). Have access to the web console as a user with admin privileges. Procedure In the web portal or console for your cloud provider, take the following actions: Locate and delete the appropriate load balancer component: For Azure, delete the api-internal rule for the load balancer. Delete the api.USDclustername.USDyourdomain DNS entry in the public zone. Remove the external load balancers by deleting the following lines in the control plane machine set custom resource: providerSpec: value: loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network 1 2 Delete this line. 12.4.4.2. Selecting an Azure Marketplace image You can create a machine set running on Azure that deploys machines that use the Azure Marketplace offering. To use this offering, you must first obtain the Azure Marketplace image. When obtaining your image, consider the following: While the images are the same, the Azure Marketplace publisher is different depending on your region. If you are located in North America, specify redhat as the publisher. If you are located in EMEA, specify redhat-limited as the publisher. The offer includes a rh-ocp-worker SKU and a rh-ocp-worker-gen1 SKU. The rh-ocp-worker SKU represents a Hyper-V generation version 2 VM image. The default instance types used in OpenShift Container Platform are version 2 compatible. 
If you plan to use an instance type that is only version 1 compatible, use the image associated with the rh-ocp-worker-gen1 SKU. The rh-ocp-worker-gen1 SKU represents a Hyper-V version 1 VM image. Important Installing images with the Azure marketplace is not supported on clusters with 64-bit ARM instances. Prerequisites You have installed the Azure CLI client (az) . Your Azure account is entitled for the offer and you have logged into this account with the Azure CLI client. Procedure Display all of the available OpenShift Container Platform images by running one of the following commands: North America: USD az vm image list --all --offer rh-ocp-worker --publisher redhat -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocpworker:4.8.2021122100 4.8.2021122100 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100 EMEA: USD az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.8.2021122100 4.8.2021122100 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100 Note Regardless of the version of OpenShift Container Platform that you install, the correct version of the Azure Marketplace image to use is 4.8. If required, your VMs are automatically upgraded as part of the installation process. Inspect the image for your offer by running one of the following commands: North America: USD az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Review the terms of the offer by running one of the following commands: North America: USD az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Accept the terms of the offering by running one of the following commands: North America: USD az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Record the image details of your offer, specifically the values for publisher , offer , sku , and version . Add the following parameters to the providerSpec section of your machine set YAML file using the image details for your offer: Sample providerSpec image values for Azure Marketplace machines providerSpec: value: image: offer: rh-ocp-worker publisher: redhat resourceID: "" sku: rh-ocp-worker type: MarketplaceWithPlan version: 4.8.2021122100 12.4.4.3. Enabling Azure boot diagnostics You can enable boot diagnostics on Azure machines that your machine set creates. Prerequisites Have an existing Microsoft Azure cluster. Procedure Add the diagnostics configuration that is applicable to your storage type to the providerSpec field in your machine set YAML file: For an Azure Managed storage account: providerSpec: diagnostics: boot: storageAccountType: AzureManaged 1 1 Specifies an Azure Managed storage account. 
For an Azure Unmanaged storage account: providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2 1 Specifies an Azure Unmanaged storage account. 2 Replace <storage-account> with the name of your storage account. Note Only the Azure Blob Storage data service is supported. Verification On the Microsoft Azure portal, review the Boot diagnostics page for a machine deployed by the machine set, and verify that you can see the serial logs for the machine. 12.4.4.4. Machine sets that deploy machines with ultra disks as data disks You can create a machine set running on Azure that deploys machines with ultra disks. Ultra disks are high-performance storage that are intended for use with the most demanding data workloads. Additional resources Microsoft Azure ultra disks documentation 12.4.4.4.1. Creating machines with ultra disks by using machine sets You can deploy machines with ultra disks on Azure by editing your machine set YAML file. Prerequisites Have an existing Microsoft Azure cluster. Procedure Create a custom secret in the openshift-machine-api namespace using the master data secret by running the following command: USD oc -n openshift-machine-api \ get secret <role>-user-data \ 1 --template='{{index .data.userData | base64decode}}' | jq > userData.txt 2 1 Replace <role> with master . 2 Specify userData.txt as the name of the new custom secret. In a text editor, open the userData.txt file and locate the final } character in the file. On the immediately preceding line, add a , . Create a new line after the , and add the following configuration details: "storage": { "disks": [ 1 { "device": "/dev/disk/azure/scsi1/lun0", 2 "partitions": [ 3 { "label": "lun0p1", 4 "sizeMiB": 1024, 5 "startMiB": 0 } ] } ], "filesystems": [ 6 { "device": "/dev/disk/by-partlabel/lun0p1", "format": "xfs", "path": "/var/lib/lun0p1" } ] }, "systemd": { "units": [ 7 { "contents": "[Unit]\nBefore=local-fs.target\n[Mount]\nWhere=/var/lib/lun0p1\nWhat=/dev/disk/by-partlabel/lun0p1\nOptions=defaults,pquota\n[Install]\nWantedBy=local-fs.target\n", 8 "enabled": true, "name": "var-lib-lun0p1.mount" } ] } 1 The configuration details for the disk that you want to attach to a node as an ultra disk. 2 Specify the lun value that is defined in the dataDisks stanza of the machine set you are using. For example, if the machine set contains lun: 0 , specify lun0 . You can initialize multiple data disks by specifying multiple "disks" entries in this configuration file. If you specify multiple "disks" entries, ensure that the lun value for each matches the value in the machine set. 3 The configuration details for a new partition on the disk. 4 Specify a label for the partition. You might find it helpful to use hierarchical names, such as lun0p1 for the first partition of lun0 . 5 Specify the total size in MiB of the partition. 6 Specify the filesystem to use when formatting a partition. Use the partition label to specify the partition. 7 Specify a systemd unit to mount the partition at boot. Use the partition label to specify the partition. You can create multiple partitions by specifying multiple "partitions" entries in this configuration file. If you specify multiple "partitions" entries, you must specify a systemd unit for each. 8 For Where , specify the value of storage.filesystems.path . For What , specify the value of storage.filesystems.device . 
Extract the disabling template value to a file called disableTemplating.txt by running the following command: USD oc -n openshift-machine-api get secret <role>-user-data \ 1 --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt 1 Replace <role> with master . Combine the userData.txt file and disableTemplating.txt file to create a data secret file by running the following command: USD oc -n openshift-machine-api create secret generic <role>-user-data-x5 \ 1 --from-file=userData=userData.txt \ --from-file=disableTemplating=disableTemplating.txt 1 For <role>-user-data-x5 , specify the name of the secret. Replace <role> with master . Edit your control plane machine set CR by running the following command: USD oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster Add the following lines in the positions indicated: apiVersion: machine.openshift.io/v1beta1 kind: ControlPlaneMachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2 dataDisks: 3 - nameSuffix: ultrassd lun: 0 diskSizeGB: 4 deletionPolicy: Delete cachingType: None managedDisk: storageAccountType: UltraSSD_LRS userDataSecret: name: <role>-user-data-x5 4 1 Specify a label to use to select a node that is created by this machine set. This procedure uses disk.ultrassd for this value. 2 3 These lines enable the use of ultra disks. For dataDisks , include the entire stanza. 4 Specify the user data secret created earlier. Replace <role> with master . Save your changes. For clusters that use the default RollingUpdate update strategy, the Operator automatically propagates the changes to your control plane configuration. For clusters that are configured to use the OnDelete update strategy, you must replace your control plane machines manually. Verification Validate that the machines are created by running the following command: USD oc get machines The machines should be in the Running state. For a machine that is running and has a node attached, validate the partition by running the following command: USD oc debug node/<node-name> -- chroot /host lsblk In this command, oc debug node/<node-name> starts a debugging shell on the node <node-name> and passes a command with -- . The passed command chroot /host provides access to the underlying host OS binaries, and lsblk shows the block devices that are attached to the host OS machine. steps To use an ultra disk on the control plane, reconfigure your workload to use the control plane's ultra disk mount point. 12.4.4.4.2. Troubleshooting resources for machine sets that enable ultra disks Use the information in this section to understand and recover from issues you might encounter. 12.4.4.4.2.1. Incorrect ultra disk configuration If an incorrect configuration of the ultraSSDCapability parameter is specified in the machine set, the machine provisioning fails. For example, if the ultraSSDCapability parameter is set to Disabled , but an ultra disk is specified in the dataDisks parameter, the following error message appears: StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set. To resolve this issue, verify that your machine set configuration is correct. 12.4.4.4.2.2. Unsupported disk parameters If a region, availability zone, or instance size that is not compatible with ultra disks is specified in the machine set, the machine provisioning fails. 
Check the logs for the following error message: failed to create vm <machine_name>: failure sending request for machine <machine_name>: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="BadRequest" Message="Storage Account type 'UltraSSD_LRS' is not supported <more_information_about_why>." To resolve this issue, verify that you are using this feature in a supported environment and that your machine set configuration is correct. 12.4.4.4.2.3. Unable to delete disks If the deletion of ultra disks as data disks is not working as expected, the machines are deleted and the data disks are orphaned. You must delete the orphaned disks manually if desired. 12.4.4.5. Enabling customer-managed encryption keys for a machine set You can supply an encryption key to Azure to encrypt data on managed disks at rest. You can enable server-side encryption with customer-managed keys by using the Machine API. An Azure Key Vault, a disk encryption set, and an encryption key are required to use a customer-managed key. The disk encryption set must be in a resource group where the Cloud Credential Operator (CCO) has granted permissions. If not, an additional reader role is required to be granted on the disk encryption set. Prerequisites Create an Azure Key Vault instance . Create an instance of a disk encryption set . Grant the disk encryption set access to key vault . Procedure Configure the disk encryption set under the providerSpec field in your machine set YAML file. For example: providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS Additional resources Azure documentation about customer-managed keys 12.4.4.6. Accelerated Networking for Microsoft Azure VMs Accelerated Networking uses single root I/O virtualization (SR-IOV) to provide Microsoft Azure VMs with a more direct path to the switch. This enhances network performance. This feature can be enabled after installation. 12.4.4.6.1. Limitations Consider the following limitations when deciding whether to use Accelerated Networking: Accelerated Networking is only supported on clusters where the Machine API is operational. Accelerated Networking requires an Azure VM size that includes at least four vCPUs. To satisfy this requirement, you can change the value of vmSize in your machine set. For information about Azure VM sizes, see Microsoft Azure documentation . 12.4.4.6.2. Enabling Accelerated Networking on an existing Microsoft Azure cluster You can enable Accelerated Networking on Azure by adding acceleratedNetworking to your machine set YAML file. Prerequisites Have an existing Microsoft Azure cluster where the Machine API is operational. Procedure Add the following to the providerSpec field: providerSpec: value: acceleratedNetworking: true 1 vmSize: <azure-vm-size> 2 1 This line enables Accelerated Networking. 2 Specify an Azure VM size that includes at least four vCPUs. For information about VM sizes, see Microsoft Azure documentation . Verification On the Microsoft Azure portal, review the Networking settings page for a machine provisioned by the machine set, and verify that the Accelerated networking field is set to Enabled . 12.4.5. 
Enabling Google Cloud Platform features for control plane machines You can enable Google Cloud Platform (GCP) features on control plane machines by changing the configuration of your control plane machine set. When you save an update to the control plane machine set, the Control Plane Machine Set Operator updates the control plane machines according to your configured update strategy. 12.4.5.1. Configuring persistent disk types by using machine sets You can configure the type of persistent disk that a machine set deploys machines on by editing the machine set YAML file. For more information about persistent disk types, compatibility, regional availability, and limitations, see the GCP Compute Engine documentation about persistent disks . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following line under the providerSpec field: apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet ... spec: template: spec: providerSpec: value: disks: type: pd-ssd 1 1 Control plane nodes must use the pd-ssd disk type. Verification Using the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Type field matches the configured disk type. 12.4.5.2. Configuring Confidential VM by using machine sets By editing the machine set YAML file, you can configure the Confidential VM options that a machine set uses for machines that it deploys. For more information about Confidential Compute features, functionality, and compatibility, see the GCP Compute Engine documentation about Confidential VM . Important Confidential Computing is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field: apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet ... spec: template: spec: providerSpec: value: confidentialCompute: Enabled 1 onHostMaintenance: Terminate 2 machineType: n2d-standard-8 3 ... 1 Specify whether Confidential VM is enabled. Valid values are Disabled or Enabled . 2 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VM does not support live VM migration. 3 Specify a machine type that supports Confidential VM. Confidential VM supports the N2D and C2D series of machine types. Verification On the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Confidential VM options match the values that you configured. 12.4.5.3. Configuring Shielded VM options by using machine sets By editing the machine set YAML file, you can configure the Shielded VM options that a machine set uses for machines that it deploys. For more information about Shielded VM features and functionality, see the GCP Compute Engine documentation about Shielded VM . 
Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field: apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet ... spec: template: spec: providerSpec: value: shieldedInstanceConfig: 1 integrityMonitoring: Enabled 2 secureBoot: Disabled 3 virtualizedTrustedPlatformModule: Enabled 4 ... 1 In this section, specify any Shielded VM options that you want. 2 Specify whether integrity monitoring is enabled. Valid values are Disabled or Enabled . Note When integrity monitoring is enabled, you must not disable virtual trusted platform module (vTPM). 3 Specify whether UEFI Secure Boot is enabled. Valid values are Disabled or Enabled . 4 Specify whether vTPM is enabled. Valid values are Disabled or Enabled . Verification Using the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Shielded VM options match the values that you configured. Additional resources What is Shielded VM? Secure Boot Virtual Trusted Platform Module (vTPM) Integrity monitoring 12.4.5.4. Enabling customer-managed encryption keys for a machine set Google Cloud Platform (GCP) Compute Engine allows users to supply an encryption key to encrypt data on disks at rest. The key is used to encrypt the data encryption key, not to encrypt the customer's data. By default, Compute Engine encrypts this data by using Compute Engine keys. You can enable encryption with a customer-managed key in clusters that use the Machine API. You must first create a KMS key and assign the correct permissions to a service account. The KMS key name, key ring name, and location are required to allow a service account to use your key. Note If you do not want to use a dedicated service account for the KMS encryption, the Compute Engine default service account is used instead. You must grant the default service account permission to access the keys if you do not use a dedicated service account. The Compute Engine default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. Procedure To allow a specific service account to use your KMS key and to grant the service account the correct IAM role, run the following command with your KMS key name, key ring name, and location: USD gcloud kms keys add-iam-policy-binding <key_name> \ --keyring <key_ring_name> \ --location <key_ring_location> \ --member "serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com" \ --role roles/cloudkms.cryptoKeyEncrypterDecrypter Configure the encryption key under the providerSpec field in your machine set YAML file. For example: apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet ... spec: template: spec: providerSpec: value: disks: - type: encryptionKey: kmsKey: name: machine-encryption-key 1 keyRing: openshift-encrpytion-ring 2 location: global 3 projectID: openshift-gcp-project 4 kmsKeyServiceAccount: openshift-service-account@openshift-gcp-project.iam.gserviceaccount.com 5 1 The name of the customer-managed encryption key that is used for the disk encryption. 2 The name of the KMS key ring that the KMS key belongs to. 3 The GCP location in which the KMS key ring exists. 4 Optional: The ID of the project in which the KMS key ring exists. If a project ID is not set, the machine set projectID in which the machine set was created is used. 5 Optional: The service account that is used for the encryption request for the given KMS key.
If a service account is not set, the Compute Engine default service account is used. When a new machine is created by using the updated providerSpec object configuration, the disk encryption key is encrypted with the KMS key. 12.5. Control plane resiliency and recovery You can use the control plane machine set to improve the resiliency of the control plane for your OpenShift Container Platform cluster. 12.5.1. High availability and fault tolerance with failure domains When possible, the control plane machine set spreads the control plane machines across multiple failure domains. This configuration provides high availability and fault tolerance within the control plane. This strategy can help protect the control plane when issues arise within the infrastructure provider. 12.5.1.1. Failure domain platform support and configuration The control plane machine set concept of a failure domain is analogous to existing concepts on cloud providers. Not all platforms support the use of failure domains. Table 12.2. Failure domain support matrix Cloud provider Support for failure domains Provider nomenclature Amazon Web Services (AWS) X Availability Zone (AZ) Google Cloud Platform (GCP) X zone Microsoft Azure X Azure availability zone VMware vSphere Not applicable The failure domain configuration in the control plane machine set custom resource (CR) is platform-specific. For more information about failure domain parameters in the CR, see the sample failure domain configuration for your provider. Additional resources Sample Amazon Web Services failure domain configuration Sample Google Cloud Platform failure domain configuration Sample Microsoft Azure failure domain configuration 12.5.1.2. Balancing control plane machines The control plane machine set balances control plane machines across the failure domains that are specified in the custom resource (CR). When possible, the control plane machine set uses each failure domain equally to ensure appropriate fault tolerance. If there are fewer failure domains than control plane machines, failure domains are selected for reuse alphabetically by name. For clusters with no failure domains specified, all control plane machines are placed within a single failure domain. Some changes to the failure domain configuration cause the control plane machine set to rebalance the control plane machines. For example, if you add failure domains to a cluster with fewer failure domains than control plane machines, the control plane machine set rebalances the machines across all available failure domains. 12.5.2. Recovery of failed control plane machines The Control Plane Machine Set Operator automates the recovery of control plane machines. When a control plane machine is deleted, the Operator creates a replacement with the configuration that is specified in the ControlPlaneMachineSet custom resource (CR). For clusters that use control plane machine sets, you can configure a machine health check. The machine health check deletes unhealthy control plane machines so that they are replaced. Important If you configure a MachineHealthCheck resource for the control plane, set the value of maxUnhealthy to 1 . This configuration ensures that the machine health check takes no action when multiple control plane machines appear to be unhealthy. Multiple unhealthy control plane machines can indicate that the etcd cluster is degraded or that a scaling operation to replace a failed machine is in progress. If the etcd cluster is degraded, manual intervention might be required. 
If a scaling operation is in progress, the machine health check should allow it to finish. Additional resources Deploying machine health checks 12.5.3. Quorum protection with machine lifecycle hooks For OpenShift Container Platform clusters that use the Machine API Operator, the etcd Operator uses lifecycle hooks for the machine deletion phase to implement a quorum protection mechanism. By using a preDrain lifecycle hook, the etcd Operator can control when the pods on a control plane machine are drained and removed. To protect etcd quorum, the etcd Operator prevents the removal of an etcd member until it migrates that member onto a new node within the cluster. This mechanism allows the etcd Operator precise control over the members of the etcd quorum and allows the Machine API Operator to safely create and remove control plane machines without specific operational knowledge of the etcd cluster. 12.5.3.1. Control plane deletion with quorum protection processing order When a control plane machine is replaced on a cluster that uses a control plane machine set, the cluster temporarily has four control plane machines. When the fourth control plane node joins the cluster, the etcd Operator starts a new etcd member on the replacement node. When the etcd Operator observes that the old control plane machine is marked for deletion, it stops the etcd member on the old node and promotes the replacement etcd member to join the quorum of the cluster. The control plane machine Deleting phase proceeds in the following order: A control plane machine is slated for deletion. The control plane machine enters the Deleting phase. To satisfy the preDrain lifecycle hook, the etcd Operator takes the following actions: The etcd Operator waits until a fourth control plane machine is added to the cluster as an etcd member. This new etcd member has a state of Running but not ready until it receives the full database update from the etcd leader. When the new etcd member receives the full database update, the etcd Operator promotes the new etcd member to a voting member and removes the old etcd member from the cluster. After this transition is complete, it is safe for the old etcd pod and its data to be removed, so the preDrain lifecycle hook is removed. The control plane machine status condition Drainable is set to True . The machine controller attempts to drain the node that is backed by the control plane machine. If draining fails, Drained is set to False and the machine controller attempts to drain the node again. If draining succeeds, Drained is set to True . The control plane machine status condition Drained is set to True . If no other Operators have added a preTerminate lifecycle hook, the control plane machine status condition Terminable is set to True . The machine controller removes the instance from the infrastructure provider. The machine controller deletes the Node object. YAML snippet demonstrating the etcd quorum protection preDrain lifecycle hook apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: ... spec: lifecycleHooks: preDrain: - name: EtcdQuorumOperator 1 owner: clusteroperator/etcd 2 ... 1 The name of the preDrain lifecycle hook. 2 The hook-implementing controller that manages the preDrain lifecycle hook. Additional resources Lifecycle hooks for the machine deletion phase 12.6. Troubleshooting the control plane machine set Use the information in this section to understand and recover from issues you might encounter. 12.6.1. 
Checking the control plane machine set custom resource state You can verify the existence and state of the ControlPlaneMachineSet custom resource (CR). Procedure Determine the state of the CR by running the following command: USD oc get controlplanemachineset.machine.openshift.io cluster \ --namespace openshift-machine-api A result of Active indicates that the ControlPlaneMachineSet CR exists and is activated. No administrator action is required. A result of Inactive indicates that a ControlPlaneMachineSet CR exists but is not activated. A result of NotFound indicates that there is no existing ControlPlaneMachineSet CR. steps To use the control plane machine set, you must ensure that a ControlPlaneMachineSet CR with the correct settings for your cluster exists. If your cluster has an existing CR, you must verify that the configuration in the CR is correct for your cluster. If your cluster does not have an existing CR, you must create one with the correct configuration for your cluster. Additional resources Activating the control plane machine set custom resource Creating a control plane machine set custom resource 12.6.2. Adding a missing Azure internal load balancer The internalLoadBalancer parameter is required in both the ControlPlaneMachineSet and control plane Machine custom resources (CRs) for Azure. If this parameter is not preconfigured on your cluster, you must add it to both CRs. For more information about where this parameter is located in the Azure provider specification, see the sample Azure provider specification. The placement in the control plane Machine CR is similar. Procedure List the control plane machines in your cluster by running the following command: USD oc get machines \ -l machine.openshift.io/cluster-api-machine-role==master \ -n openshift-machine-api For each control plane machine, edit the CR by running the following command: USD oc edit machine <control_plane_machine_name> Add the internalLoadBalancer parameter with the correct details for your cluster and save your changes. Edit your control plane machine set CR by running the following command: USD oc edit controlplanemachineset.machine.openshift.io cluster \ -n openshift-machine-api Add the internalLoadBalancer parameter with the correct details for your cluster and save your changes. steps For clusters that use the default RollingUpdate update strategy, the Operator automatically propagates the changes to your control plane configuration. For clusters that are configured to use the OnDelete update strategy, you must replace your control plane machines manually. Additional resources Sample Azure provider specification 12.6.3. Recovering a degraded etcd Operator Certain situations can cause the etcd Operator to become degraded. For example, while performing remediation, the machine health check might delete a control plane machine that is hosting etcd. If the etcd member is not reachable at that time, the etcd Operator becomes degraded. When the etcd Operator is degraded, manual intervention is required to force the Operator to remove the failed member and restore the cluster state. Procedure List the control plane machines in your cluster by running the following command: USD oc get machines \ -l machine.openshift.io/cluster-api-machine-role==master \ -n openshift-machine-api \ -o wide Any of the following conditions might indicate a failed control plane machine: The STATE value is stopped . The PHASE value is Failed . The PHASE value is Deleting for more than ten minutes. 
Important Before continuing, ensure that your cluster has two healthy control plane machines. Performing the actions in this procedure on more than one control plane machine risks losing etcd quorum and can cause data loss. If you have lost the majority of your control plane hosts, leading to etcd quorum loss, then you must follow the disaster recovery procedure "Restoring to a cluster state" instead of this procedure. Edit the machine CR for the failed control plane machine by running the following command: USD oc edit machine <control_plane_machine_name> Remove the contents of the lifecycleHooks parameter from the failed control plane machine and save your changes. The etcd Operator removes the failed machine from the cluster and can then safely add new etcd members. Additional resources Restoring to a cluster state 12.7. Disabling the control plane machine set The .spec.state field in an activated ControlPlaneMachineSet custom resource (CR) cannot be changed from Active to Inactive . To disable the control plane machine set, you must delete the CR so that it is removed from the cluster. When you delete the CR, the Control Plane Machine Set Operator performs cleanup operations and disables the control plane machine set. The Operator then removes the CR from the cluster and creates an inactive control plane machine set with default settings. 12.7.1. Deleting the control plane machine set To stop managing control plane machines with the control plane machine set on your cluster, you must delete the ControlPlaneMachineSet custom resource (CR). Procedure Delete the control plane machine set CR by running the following command: USD oc delete controlplanemachineset.machine.openshift.io cluster \ -n openshift-machine-api Verification Check the control plane machine set custom resource state. A result of Inactive indicates that the removal and replacement process is successful. A ControlPlaneMachineSet CR exists but is not activated. 12.7.2. Checking the control plane machine set custom resource state You can verify the existence and state of the ControlPlaneMachineSet custom resource (CR). Procedure Determine the state of the CR by running the following command: USD oc get controlplanemachineset.machine.openshift.io cluster \ --namespace openshift-machine-api A result of Active indicates that the ControlPlaneMachineSet CR exists and is activated. No administrator action is required. A result of Inactive indicates that a ControlPlaneMachineSet CR exists but is not activated. A result of NotFound indicates that there is no existing ControlPlaneMachineSet CR. 12.7.3. Re-enabling the control plane machine set To re-enable the control plane machine set, you must ensure that the configuration in the CR is correct for your cluster and activate it. Additional resources Activating the control plane machine set custom resource
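As a practical complement to the machine health check guidance earlier in this chapter, the following MachineHealthCheck sketch shows one way to apply the maxUnhealthy: 1 recommendation to the control plane. The resource name, condition types, and timeouts are illustrative assumptions rather than values taken from this documentation; only the label selector reuses the control plane machine labels shown in the control plane machine set CR.
apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: control-plane-health        # hypothetical name
  namespace: openshift-machine-api
spec:
  maxUnhealthy: 1                   # never remediate more than one control plane machine at a time
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machine-role: master
      machine.openshift.io/cluster-api-machine-type: master
  unhealthyConditions:              # example thresholds; tune them for your environment
  - type: Ready
    status: "False"
    timeout: 300s
  - type: Ready
    status: Unknown
    timeout: 300s
You might apply the file with oc apply -f <file> and review the result with oc get machinehealthcheck -n openshift-machine-api.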
[ "oc get machine -n openshift-machine-api -l machine.openshift.io/cluster-api-machine-role=master", "NAME PHASE TYPE REGION ZONE AGE <infrastructure_id>-master-0 Running m6i.xlarge us-west-1 us-west-1a 5h19m <infrastructure_id>-master-1 Running m6i.xlarge us-west-1 us-west-1b 5h19m <infrastructure_id>-master-2 Running m6i.xlarge us-west-1 us-west-1a 5h19m", "No resources found in openshift-machine-api namespace.", "oc get controlplanemachineset.machine.openshift.io cluster --namespace openshift-machine-api", "oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <cluster_id> 1 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master state: Active 2 strategy: type: RollingUpdate 3 template: machineType: machines_v1beta1_machine_openshift_io machines_v1beta1_machine_openshift_io: failureDomains: platform: <platform> 4 <platform_failure_domains> 5 metadata: labels: machine.openshift.io/cluster-api-cluster: <cluster_id> 6 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master spec: providerSpec: value: <platform_provider_spec> 7", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc create -f <control_plane_machine_set>.yaml", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster 1 namespace: openshift-machine-api spec: replicas: 3 2 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <cluster_id> 3 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master state: Active 4 strategy: type: RollingUpdate 5 template: machineType: machines_v1beta1_machine_openshift_io machines_v1beta1_machine_openshift_io: failureDomains: platform: <platform> 6 <platform_failure_domains> 7 metadata: labels: machine.openshift.io/cluster-api-cluster: <cluster_id> machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master spec: providerSpec: value: <platform_provider_spec> 8", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "providerSpec: value: ami: id: ami-<ami_id_string> 1 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: 2 encrypted: true iops: 0 kmsKey: arn: \"\" volumeSize: 120 volumeType: gp3 credentialsSecret: name: aws-cloud-credentials 3 deviceIndex: 0 iamInstanceProfile: id: <cluster_id>-master-profile 4 instanceType: m6i.xlarge 5 kind: AWSMachineProviderConfig 6 loadBalancers: 7 - name: <cluster_id>-int type: network - name: <cluster_id>-ext type: network metadata: creationTimestamp: null metadataServiceOptions: {} placement: 8 region: <region> 9 securityGroups: - filters: - name: tag:Name values: - <cluster_id>-master-sg 10 subnet: {} 11 userDataSecret: name: master-user-data 12", "failureDomains: aws: - placement: availabilityZone: <aws_zone_a> 1 subnet: 2 filters: - name: tag:Name values: - <cluster_id>-private-<aws_zone_a> 3 type: Filters 4 - placement: availabilityZone: <aws_zone_b> 5 subnet: filters: - name: tag:Name values: - <cluster_id>-private-<aws_zone_b> 6 type: Filters platform: AWS 7", "oc get -o 
jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.machines_v1beta1_machine_openshift_io.spec.providerSpec.value.disks[0].image}{\"\\n\"}' get ControlPlaneMachineSet/cluster", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials 1 deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 2 labels: null sizeGb: 200 type: pd-ssd kind: GCPMachineProviderSpec 3 machineType: e2-standard-4 metadata: creationTimestamp: null metadataServiceOptions: {} networkInterfaces: - network: <cluster_id>-network subnetwork: <cluster_id>-master-subnet projectID: <project_name> 4 region: <region> 5 serviceAccounts: 6 - email: <cluster_id>-m@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform shieldedInstanceConfig: {} tags: - <cluster_id>-master targetPools: - <cluster_id>-api userDataSecret: name: master-user-data 7 zone: \"\" 8", "failureDomains: gcp: - zone: <gcp_zone_a> 1 - zone: <gcp_zone_b> 2 - zone: <gcp_zone_c> - zone: <gcp_zone_d> platform: GCP 3", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials 1 namespace: openshift-machine-api diagnostics: {} image: 2 offer: \"\" publisher: \"\" resourceID: /resourceGroups/<cluster_id>-rg/providers/Microsoft.Compute/galleries/gallery_<cluster_id>/images/<cluster_id>-gen2/versions/412.86.20220930 3 sku: \"\" version: \"\" internalLoadBalancer: <cluster_id>-internal 4 kind: AzureMachineProviderSpec 5 location: <region> 6 managedIdentity: <cluster_id>-identity metadata: creationTimestamp: null name: <cluster_id> networkResourceGroup: <cluster_id>-rg osDisk: 7 diskSettings: {} diskSizeGB: 1024 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: <cluster_id> 8 resourceGroup: <cluster_id>-rg subnet: <cluster_id>-master-subnet 9 userDataSecret: name: master-user-data 10 vmSize: Standard_D8s_v3 vnet: <cluster_id>-vnet zone: \"\" 11", "failureDomains: azure: 1 - zone: \"1\" - zone: \"2\" - zone: \"3\" platform: Azure 2", "providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials 1 diskGiB: 120 2 kind: VSphereMachineProviderSpec 3 memoryMiB: 16384 4 metadata: creationTimestamp: null network: 5 devices: - networkName: <vm_network_name> numCPUs: 4 6 numCoresPerSocket: 4 7 snapshot: \"\" template: <vm_template_name> 8 userDataSecret: name: master-user-data 9 workspace: datacenter: <vcenter_datacenter_name> 10 datastore: <vcenter_datastore_name> 11 folder: <path_to_vcenter_vm_folder> 12 resourcePool: <vsphere_resource_pool> 13 server: <vcenter_server_ip> 14", "oc get machines -l machine.openshift.io/cluster-api-machine-role==master -n openshift-machine-api", "oc delete machine -n openshift-machine-api <control_plane_machine_name> 1", "oc edit controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api", "providerSpec: value: loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network", "providerSpec: value: instanceType: <compatible_aws_instance_type> 1", "providerSpec: value: metadataServiceOptions: 
authentication: Required 1", "providerSpec: placement: tenancy: dedicated", "providerSpec: value: loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network", "az vm image list --all --offer rh-ocp-worker --publisher redhat -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocpworker:4.8.2021122100 4.8.2021122100 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100", "az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.8.2021122100 4.8.2021122100 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100", "az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "providerSpec: value: image: offer: rh-ocp-worker publisher: redhat resourceID: \"\" sku: rh-ocp-worker type: MarketplaceWithPlan version: 4.8.2021122100", "providerSpec: diagnostics: boot: storageAccountType: AzureManaged 1", "providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2", "oc -n openshift-machine-api get secret <role>-user-data \\ 1 --template='{{index .data.userData | base64decode}}' | jq > userData.txt 2", "\"storage\": { \"disks\": [ 1 { \"device\": \"/dev/disk/azure/scsi1/lun0\", 2 \"partitions\": [ 3 { \"label\": \"lun0p1\", 4 \"sizeMiB\": 1024, 5 \"startMiB\": 0 } ] } ], \"filesystems\": [ 6 { \"device\": \"/dev/disk/by-partlabel/lun0p1\", \"format\": \"xfs\", \"path\": \"/var/lib/lun0p1\" } ] }, \"systemd\": { \"units\": [ 7 { \"contents\": \"[Unit]\\nBefore=local-fs.target\\n[Mount]\\nWhere=/var/lib/lun0p1\\nWhat=/dev/disk/by-partlabel/lun0p1\\nOptions=defaults,pquota\\n[Install]\\nWantedBy=local-fs.target\\n\", 8 \"enabled\": true, \"name\": \"var-lib-lun0p1.mount\" } ] }", "oc -n openshift-machine-api get secret <role>-user-data \\ 1 --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt", "oc -n openshift-machine-api create secret generic <role>-user-data-x5 \\ 1 --from-file=userData=userData.txt --from-file=disableTemplating=disableTemplating.txt", "oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster", "apiVersion: machine.openshift.io/v1beta1 kind: ControlPlaneMachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2 dataDisks: 3 - nameSuffix: ultrassd lun: 0 diskSizeGB: 4 deletionPolicy: Delete cachingType: None managedDisk: storageAccountType: UltraSSD_LRS userDataSecret: name: <role>-user-data-x5 4", "oc get machines", "oc debug node/<node-name> -- chroot /host 
lsblk", "StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.", "failed to create vm <machine_name>: failure sending request for machine <machine_name>: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code=\"BadRequest\" Message=\"Storage Account type 'UltraSSD_LRS' is not supported <more_information_about_why>.\"", "providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS", "providerSpec: value: acceleratedNetworking: true 1 vmSize: <azure-vm-size> 2", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: disks: type: pd-ssd 1", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: confidentialCompute: Enabled 1 onHostMaintenance: Terminate 2 machineType: n2d-standard-8 3", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: shieldedInstanceConfig: 1 integrityMonitoring: Enabled 2 secureBoot: Disabled 3 virtualizedTrustedPlatformModule: Enabled 4", "gcloud kms keys add-iam-policy-binding <key_name> --keyring <key_ring_name> --location <key_ring_location> --member \"serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com\" --role roles/cloudkms.cryptoKeyEncrypterDecrypter", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: disks: - type: encryptionKey: kmsKey: name: machine-encryption-key 1 keyRing: openshift-encrpytion-ring 2 location: global 3 projectID: openshift-gcp-project 4 kmsKeyServiceAccount: openshift-service-account@openshift-gcp-project.iam.gserviceaccount.com 5", "apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: spec: lifecycleHooks: preDrain: - name: EtcdQuorumOperator 1 owner: clusteroperator/etcd 2", "oc get controlplanemachineset.machine.openshift.io cluster --namespace openshift-machine-api", "oc get machines -l machine.openshift.io/cluster-api-machine-role==master -n openshift-machine-api", "oc edit machine <control_plane_machine_name>", "oc edit controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api", "oc get machines -l machine.openshift.io/cluster-api-machine-role==master -n openshift-machine-api -o wide", "oc edit machine <control_plane_machine_name>", "oc delete controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api", "oc get controlplanemachineset.machine.openshift.io cluster --namespace openshift-machine-api" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/machine_management/managing-control-plane-machines
Chapter 2. What are the benefits of the subscriptions service?
Chapter 2. What are the benefits of the subscriptions service? The subscriptions service provides these benefits: Tracks selected Red Hat product usage and capacity at the fleet or account level in a unified inventory and provides a daily snapshot of that data in a digestible, filterable dashboard at console.redhat.com. Tracks data over time for self-governance and analytics that can inform purchasing and renewal decisions, ongoing capacity planning, and mitigation for high-risk scenarios. Helps procurement officers make data-driven choices with portfolio-centered reporting dashboards that show both inventory-occupying subscriptions and current subscription limits across the entire organization. With its robust reporting capabilities, enables the transition to simple content access tooling that features broader, organizational-level subscription enforcement instead of system-level quantity enforcement.
null
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/getting_started_with_the_subscriptions_service/con-benefits-of-subscriptionwatch_assembly-about-subscriptionwatch-ctxt
Chapter 2. Architectures
Chapter 2. Architectures Red Hat Enterprise Linux 7.6 is distributed with the kernel version 3.10.0-957, which provides support for the following architectures: [1] 64-bit AMD 64-bit Intel IBM POWER7+ (big endian) IBM POWER8 (big endian) [2] IBM POWER8 (little endian) [3] IBM POWER9 (little endian) [4] [5] IBM Z [4] [6] 64-bit ARM [4] [1] Note that the Red Hat Enterprise Linux 7.6 installation is supported only on 64-bit hardware. Red Hat Enterprise Linux 7.6 is able to run 32-bit operating systems, including versions of Red Hat Enterprise Linux, as virtual machines. [2] Red Hat Enterprise Linux 7.6 POWER8 (big endian) are currently supported as KVM guests on Red Hat Enterprise Linux 7.6 POWER8 systems that run the KVM hypervisor, and on PowerVM. [3] Red Hat Enterprise Linux 7.6 POWER8 (little endian) is currently supported as a KVM guest on Red Hat Enterprise Linux 7.6 POWER8 systems that run the KVM hypervisor, and on PowerVM. In addition, Red Hat Enterprise Linux 7.6 POWER8 (little endian) guests are supported on Red Hat Enterprise Linux 7.6 POWER9 systems that run the KVM hypervisor in POWER8-compatibility mode on version 4.14 kernel using the kernel-alt package. [4] This architecture is supported with the kernel version 4.14, provided by the kernel-alt packages. For details, see the Red Hat Enterprise Linux 7.5 . [5] Red Hat Enterprise Linux 7.6 POWER9 (little endian) is currently supported as a KVM guest on Red Hat Enterprise Linux 7.6 POWER9 systems that run the KVM hypervisor on version 4.14 kernel using the kernel-alt package, and on PowerVM. [6] Red Hat Enterprise Linux 7.6 for IBM Z (both the 3.10 kernel version and the 4.14 kernel version) is currently supported as a KVM guest on Red Hat Enterprise Linux 7.6 for IBM Z hosts that run the KVM hypervisor on version 4.14 kernel using the kernel-alt package.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/chap-red_hat_enterprise_linux-7.6_release_notes-architectures
3.4. CPU Power Saving Policies
3.4. CPU Power Saving Policies cpupower provides ways to regulate your processor's power saving policies. Use the following options with the cpupower set command: --perf-bias <0-15> Allows software on supported Intel processors to more actively contribute to determining the balance between optimum performance and saving power. This does not override other power saving policies. Assigned values range from 0 to 15, where 0 is optimum performance and 15 is optimum power efficiency. By default, this option applies to all cores. To apply it only to individual cores, add the --cpu <cpulist> option. --sched-mc <0|1|2> Restricts the use of power by system processes to the cores in one CPU package before other CPU packages are drawn from. 0 sets no restrictions, 1 initially employs only a single CPU package, and 2 does this in addition to favouring semi-idle CPU packages for handling task wakeups. --sched-smt <0|1|2> Restricts the use of power by system processes to the thread siblings of one CPU core before drawing on other cores. 0 sets no restrictions, 1 initially employs only a single CPU package, and 2 does this in addition to favouring semi-idle CPU packages for handling task wakeups.
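As an illustration of these options, the following commands are a minimal sketch; the chosen values and the CPU list are arbitrary examples rather than recommendations, and several options can be combined in a single cpupower set invocation.
# Lean toward power efficiency on all cores (0 = optimum performance, 15 = optimum power efficiency)
cpupower set --perf-bias 10

# Apply a different bias only to cores 0-3
cpupower --cpu 0-3 set --perf-bias 4

# Keep work on a single CPU package before waking others
cpupower set --sched-mc 1 --sched-smt 1

# Review the current settings
cpupower info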
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/power_management_guide/cpu_power_saving
9.3. External Provider Networks
9.3. External Provider Networks 9.3.1. Importing Networks From External Providers To use networks from an external network provider (OpenStack Networking or any third-party provider that implements the OpenStack Neutron REST API), register the provider with the Manager. See Adding an OpenStack Network Service Neutron for Network Provisioning or Adding an External Network Provider for more information. Then, use the following procedure to import the networks provided by that provider into the Manager so the networks can be used by virtual machines. Importing a Network From an External Provider Click Network Networks . Click Import . From the Network Provider drop-down list, select an external provider. The networks offered by that provider are automatically discovered and listed in the Provider Networks list. Using the check boxes, select the networks to import in the Provider Networks list and click the down arrow to move those networks into the Networks to Import list. You can customize the name of the network that you are importing. To customize the name, click the network's name in the Name column, and change the text. From the Data Center drop-down list, select the data center into which the networks will be imported. Optional: Clear the Allow All check box to prevent that network from being available to all users. Click Import . The selected networks are imported into the target data center and can be attached to virtual machines. See Adding a New Network Interface in the Virtual Machine Management Guide for more information. 9.3.2. Limitations to Using External Provider Networks The following limitations apply to using logical networks imported from an external provider in a Red Hat Virtualization environment. Logical networks offered by external providers must be used as virtual machine networks, and cannot be used as display networks. The same logical network can be imported more than once, but only to different data centers. You cannot edit logical networks offered by external providers in the Manager. To edit the details of a logical network offered by an external provider, you must edit the logical network directly from the external provider that provides that logical network. Port mirroring is not available for virtual network interface cards connected to logical networks offered by external providers. If a virtual machine uses a logical network offered by an external provider, that provider cannot be deleted from the Manager while the logical network is still in use by the virtual machine. Networks offered by external providers are non-required. As such, scheduling for clusters in which such logical networks have been imported will not take those logical networks into account during host selection. Moreover, it is the responsibility of the user to ensure the availability of the logical network on hosts in clusters in which such logical networks have been imported. 9.3.3. Configuring Subnets on External Provider Logical Networks A logical network provided by an external provider can only assign IP addresses to virtual machines if one or more subnets have been defined on that logical network. If no subnets are defined, virtual machines will not be assigned IP addresses. If there is one subnet, virtual machines will be assigned an IP address from that subnet, and if there are multiple subnets, virtual machines will be assigned an IP address from any of the available subnets. 
The DHCP service provided by the external network provider on which the logical network is hosted is responsible for assigning these IP addresses. While the Red Hat Virtualization Manager automatically discovers predefined subnets on imported logical networks, you can also add or remove subnets to or from logical networks from within the Manager. If you add Open Virtual Network (OVN) (ovirt-provider-ovn) as an external network provider, multiple subnets can be connected to each other by routers. To manage these routers, you can use the OpenStack Networking API v2.0 . Please note, however, that ovirt-provider-ovn has a limitation: Source NAT (enable_snat in the OpenStack API) is not implemented. 9.3.4. Adding Subnets to External Provider Logical Networks Create a subnet on a logical network provided by an external provider. Adding Subnets to External Provider Logical Networks Click Network Networks . Click the logical network's name to open the details view. Click the Subnets tab. Click New . Enter a Name and CIDR for the new subnet. From the IP Version drop-down list, select either IPv4 or IPv6 . Click OK . Note For IPv6, Red Hat Virtualization supports only static addressing. 9.3.5. Removing Subnets from External Provider Logical Networks Remove a subnet from a logical network provided by an external provider. Removing Subnets from External Provider Logical Networks Click Network Networks . Click the logical network's name to open the details view. Click the Subnets tab. Select a subnet and click Remove . Click OK . 9.3.6. Assigning Security Groups to Logical Networks and Ports Note This feature is only available when Open Virtual Network (OVN) is added as an external network provider (as ovirt-provider-ovn). Security groups cannot be created through the Red Hat Virtualization Manager. You must create security groups through OpenStack Networking API v2.0 or Ansible. A security group is a collection of strictly enforced rules that allow you to filter inbound and outbound traffic over a network. You can also use security groups to filter traffic at the port level. In Red Hat Virtualization 4.2.7, security groups are disabled by default. Assigning Security Groups to Logical Networks Click Compute Clusters . Click the cluster name to open the details view. Click the Logical Networks tab. Click Add Network and define the properties, ensuring that you select ovirt-provider-ovn from the External Providers drop-down list. For more information, see Section 9.1.2, "Creating a New Logical Network in a Data Center or Cluster" . Select Enabled from the Security Group drop-down list. For more details see Section 9.1.7, "Logical Network General Settings Explained" . Click OK . Create security groups using either OpenStack Networking API v2.0 or Ansible . Create security group rules using either OpenStack Networking API v2.0 or Ansible . Update the ports with the security groups that you defined using either OpenStack Networking API v2.0 or Ansible . Optional. Define whether the security feature is enabled at the port level. Currently, this is only possible using the OpenStack Networking API . If the port_security_enabled attribute is not set, it will default to the value specified in the network to which it belongs.
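Because security groups and their rules must be created through the OpenStack Networking API v2.0 (or Ansible), a request sequence along the following lines can be used. This is only a sketch: the endpoint URL, port, token variable, and group name are assumptions about your environment, not values defined by Red Hat Virtualization.
# Create a security group on the ovirt-provider-ovn Networking API endpoint (illustrative URL and token)
curl -s -X POST https://ovn-provider.example.com:9696/v2.0/security-groups \
    -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
    -d '{"security_group": {"name": "web-sg", "description": "Allow inbound HTTP"}}'

# Add a rule to the group, referencing the UUID returned by the previous call
curl -s -X POST https://ovn-provider.example.com:9696/v2.0/security-group-rules \
    -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
    -d '{"security_group_rule": {"security_group_id": "<security_group_uuid>", "direction": "ingress", "protocol": "tcp", "port_range_min": 80, "port_range_max": 80, "ethertype": "IPv4"}}'
You can then update ports with the returned security group ID through the same API, as described above.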
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-external_provider_networks
Chapter 4. Storage
Chapter 4. Storage LVM Cache As of Red Hat Enterprise Linux 7.1, LVM cache is fully supported. This feature allows users to create logical volumes with a small fast device performing as a cache to larger slower devices. Please refer to the lvm(7) manual page for information on creating cache logical volumes. Note that the following restrictions on the use of cache logical volumes (LV): The cache LV must be a top-level device. It cannot be used as a thin-pool LV, an image of a RAID LV, or any other sub-LV type. The cache LV sub-LVs (the origin LV, metadata LV, and data LV) can only be of linear, stripe, or RAID type. The properties of the cache LV cannot be changed after creation. To change cache properties, remove the cache and recreate it with the desired properties. Storage Array Management with libStorageMgmt API Since Red Hat Enterprise Linux 7.1, storage array management with libStorageMgmt , a storage array independent API, is fully supported. The provided API is stable, consistent, and allows developers to programmatically manage different storage arrays and utilize the hardware-accelerated features provided. System administrators can also use libStorageMgmt to manually configure storage and to automate storage management tasks with the included command-line interface. Please note that the Targetd plug-in is not fully supported and remains a Technology Preview. Supported hardware: NetApp Filer (ontap 7-Mode) Nexenta (nstor 3.1.x only) SMI-S, for the following vendors: HP 3PAR OS release 3.2.1 or later EMC VMAX and VNX Solutions Enabler V7.6.2.48 or later SMI-S Provider V4.6.2.18 hotfix kit or later HDS VSP Array non-embedded provider Hitachi Command Suite v8.0 or later For more information on libStorageMgmt , refer to the relevant chapter in the Storage Administration Guide . Support for LSI Syncro Red Hat Enterprise Linux 7.1 includes code in the megaraid_sas driver to enable LSI Syncro CS high-availability direct-attached storage (HA-DAS) adapters. While the megaraid_sas driver is fully supported for previously enabled adapters, the use of this driver for Syncro CS is available as a Technology Preview. Support for this adapter will be provided directly by LSI, your system integrator, or system vendor. Users deploying Syncro CS on Red Hat Enterprise Linux 7.1 are encouraged to provide feedback to Red Hat and LSI. For more information on LSI Syncro CS solutions, please visit http://www.lsi.com/products/shared-das/pages/default.aspx . DIF/DIX Support DIF/DIX is a new addition to the SCSI Standard and a Technology Preview in Red Hat Enterprise Linux 7.1. DIF/DIX increases the size of the commonly used 512-byte disk block from 512 to 520 bytes, adding the Data Integrity Field (DIF). The DIF stores a checksum value for the data block that is calculated by the Host Bus Adapter (HBA) when a write occurs. The storage device then confirms the checksum on receive, and stores both the data and the checksum. Conversely, when a read occurs, the checksum can be verified by the storage device, and by the receiving HBA. For more information, refer to the section Block Devices with DIF/DIX Enabled in the Storage Administration Guide . Enhanced device-mapper-multipath Syntax Error Checking and Output The device-mapper-multipath tool has been enhanced to verify the multipath.conf file more reliably. As a result, if multipath.conf contains any lines that cannot be parsed, device-mapper-multipath reports an error and ignores these lines to avoid incorrect parsing. 
In addition, the following wildcard expressions have been added for the multipathd show paths format command: %N and %n for the host and target Fibre Channel World Wide Node Names, respectively. %R and %r for the host and target Fibre Channel World Wide Port Names, respectively. Now, it is easier to associate multipaths with specific Fibre Channel hosts, targets, and their ports, which allows users to manage their storage configuration more effectively.
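To illustrate the new wildcards, the interactive multipathd command below prints each path together with the Fibre Channel node and port names of its host and target; the format string is only an example, and you can combine whichever wildcards you need. The LVM cache commands that follow are likewise a minimal sketch in which the volume group, device names, and sizes are placeholders.
# Show paths with host/target WWNNs (%N, %n) and WWPNs (%R, %r)
multipathd -k'show paths format "%d %N %n %R %r"'

# Minimal LVM cache setup: origin LV on slow storage, cache pool on a fast device
lvcreate -n origin_lv -L 100G vg /dev/slow_pv
lvcreate --type cache-pool -n cache_pool -L 10G vg /dev/fast_pv
lvconvert --type cache --cachepool vg/cache_pool vg/origin_lv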
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.1_release_notes/chap-red_hat_enterprise_linux-7.1_release_notes-storage
Chapter 31. Introduction to NetworkManager Debugging
Chapter 31. Introduction to NetworkManager Debugging Increasing the log levels for all or certain domains helps to log more details of the operations that NetworkManager performs. You can use this information to troubleshoot problems. NetworkManager provides different levels and domains to produce logging information. The /etc/NetworkManager/NetworkManager.conf file is the main configuration file for NetworkManager. The logs are stored in the journal. 31.1. Introduction to NetworkManager reapply method The NetworkManager service uses a profile to manage the connection settings of a device. Desktop Bus (D-Bus) API can create, modify, and delete these connection settings. For any changes in a profile, D-Bus API clones the existing settings to the modified settings of a connection. Despite cloning, changes do not apply to the modified settings. To make it effective, reactivate the existing settings of a connection or use the reapply() method. The reapply() method has the following features: Updating modified connection settings without deactivation or restart of a network interface. Removing pending changes from the modified connection settings. As NetworkManager does not revert the manual changes, you can reconfigure the device and revert external or manual parameters. Creating different modified connection settings than that of the existing connection settings. Also, reapply() method supports the following attributes: bridge.ageing-time bridge.forward-delay bridge.group-address bridge.group-forward-mask bridge.hello-time bridge.max-age bridge.multicast-hash-max bridge.multicast-last-member-count bridge.multicast-last-member-interval bridge.multicast-membership-interval bridge.multicast-querier bridge.multicast-querier-interval bridge.multicast-query-interval bridge.multicast-query-response-interval bridge.multicast-query-use-ifaddr bridge.multicast-router bridge.multicast-snooping bridge.multicast-startup-query-count bridge.multicast-startup-query-interval bridge.priority bridge.stp bridge.VLAN-filtering bridge.VLAN-protocol bridge.VLANs 802-3-ethernet.accept-all-mac-addresses 802-3-ethernet.cloned-mac-address IPv4.addresses IPv4.dhcp-client-id IPv4.dhcp-iaid IPv4.dhcp-timeout IPv4.DNS IPv4.DNS-priority IPv4.DNS-search IPv4.gateway IPv4.ignore-auto-DNS IPv4.ignore-auto-routes IPv4.may-fail IPv4.method IPv4.never-default IPv4.route-table IPv4.routes IPv4.routing-rules IPv6.addr-gen-mode IPv6.addresses IPv6.dhcp-duid IPv6.dhcp-iaid IPv6.dhcp-timeout IPv6.DNS IPv6.DNS-priority IPv6.DNS-search IPv6.gateway IPv6.ignore-auto-DNS IPv6.may-fail IPv6.method IPv6.never-default IPv6.ra-timeout IPv6.route-metric IPv6.route-table IPv6.routes IPv6.routing-rules Additional resources nm-settings-nmcli(5) man page on your system 31.2. Setting the NetworkManager log level By default, all the log domains are set to record the INFO log level. Disable rate-limiting before collecting debug logs. With rate-limiting, systemd-journald drops messages if there are too many of them in a short time. This can occur when the log level is TRACE . This procedure disables rate-limiting and enables recording debug logs for the all (ALL) domains. Procedure To disable rate-limiting, edit the /etc/systemd/journald.conf file, uncomment the RateLimitBurst parameter in the [Journal] section, and set its value as 0 : Restart the systemd-journald service. Create the /etc/NetworkManager/conf.d/95-nm-debug.conf file with the following content: The domains parameter can contain multiple comma-separated domain:level pairs. 
Restart the NetworkManager service. Verification Query the systemd journal to display the journal entries of the NetworkManager unit: 31.3. Temporarily setting log levels at run time using nmcli You can change the log level at run time using nmcli . Procedure Optional: Display the current logging settings: To modify the logging level and domains, use the following options: To set the log level for all domains to the same LEVEL , enter: To change the level for specific domains, enter: Note that updating the logging level using this command disables logging for all the other domains. To change the level of specific domains and preserve the level of all other domains, enter: 31.4. Viewing NetworkManager logs You can view the NetworkManager logs for troubleshooting. Procedure To view the logs, enter: Additional resources NetworkManager.conf(5) and journalctl(1) man pages on your system 31.5. Debugging levels and domains You can use the levels and domains parameters to manage the debugging for NetworkManager. The level defines the verbosity level, whereas the domains define the category of the messages to record the logs with given severity ( level ). Log levels Description OFF Does not log any messages about NetworkManager ERR Logs only critical errors WARN Logs warnings that can reflect the operation INFO Logs various informational messages that are useful for tracking state and operations DEBUG Enables verbose logging for debugging purposes TRACE Enables more verbose logging than the DEBUG level Note that subsequent levels log all messages from earlier levels. For example, setting the log level to INFO also logs messages contained in the ERR and WARN log level. Additional resources NetworkManager.conf(5) man page on your system
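Building on the levels and domains above, a drop-in file can raise verbosity for selected domains only while leaving everything else at the default level; the file name and the domains chosen here are illustrative.
# /etc/NetworkManager/conf.d/96-nm-domain-debug.conf  (example file name)
[logging]
level=INFO
domains=DHCP4:DEBUG,DHCP6:DEBUG,IP4:TRACE
Restart the NetworkManager service afterwards, or run nmcli general logging level KEEP domains DHCP4:DEBUG,DHCP6:DEBUG,IP4:TRACE to make an equivalent change at run time.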
[ "RateLimitBurst=0", "systemctl restart systemd-journald", "[logging] domains=ALL:TRACE", "systemctl restart NetworkManager", "journalctl -u NetworkManager Jun 30 15:24:32 server NetworkManager[164187]: <debug> [1656595472.4939] active-connection[0x5565143c80a0]: update activation type from assume to managed Jun 30 15:24:32 server NetworkManager[164187]: <trace> [1656595472.4939] device[55b33c3bdb72840c] (enp1s0): sys-iface-state: assume -> managed Jun 30 15:24:32 server NetworkManager[164187]: <trace> [1656595472.4939] l3cfg[4281fdf43e356454,ifindex=3]: commit type register (type \"update\", source \"device\", existing a369f23014b9ede3) -> a369f23014b9ede3 Jun 30 15:24:32 server NetworkManager[164187]: <info> [1656595472.4940] manager: NetworkManager state is now CONNECTED_SITE", "nmcli general logging LEVEL DOMAINS INFO PLATFORM,RFKILL,ETHER,WIFI,BT,MB,DHCP4,DHCP6,PPP,WIFI_SCAN,IP4,IP6,A UTOIP4,DNS,VPN,SHARING,SUPPLICANT,AGENTS,SETTINGS,SUSPEND,CORE,DEVICE,OLPC, WIMAX,INFINIBAND,FIREWALL,ADSL,BOND,VLAN,BRIDGE,DBUS_PROPS,TEAM,CONCHECK,DC B,DISPATCH", "nmcli general logging level LEVEL domains ALL", "nmcli general logging level LEVEL domains DOMAINS", "nmcli general logging level KEEP domains DOMAIN:LEVEL , DOMAIN:LEVEL", "journalctl -u NetworkManager -b" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_networking/introduction-to-networkmanager-debugging_configuring-and-managing-networking
Chapter 17. Servers and Services
Chapter 17. Servers and Services rear rebased to version 2.4 The rear packages that provide the Relax-and-Recover tool (ReaR) have been upgraded to upstream version 2.4, which provides a number of bug fixes and enhancements over the version. Notably: The default behavior when resizing partitions in migration mode has been changed. Only the size of the last partition is now changed by default; the start positions of every partition are preserved. If the behavior is needed, set the AUTORESIZE_PARTITIONS configuration variable to yes . See the description of the configuration variables AUTORESIZE_PARTITIONS , AUTORESIZE_EXCLUDE_PARTITIONS , AUTOSHRINK_DISK_SIZE_LIMIT_PERCENTAGE , and AUTOINCREASE_DISK_SIZE_THRESHOLD_PERCENTAGE in the /usr/share/rear/conf/default.conf file for more information on how to control the partition resizing. The network setup now supports teaming (with the exception of Link Aggregation Control Protocol - LACP), bridges, bonding, and VLANs. Support for Tivoli Storage Manager (TSM) has been improved. In particular, support for the password store in the TSM client versions 8.1.2 and later has been added, fixing the bug where the generated ISO image did not support restoring the OS if those TSM versions were used for backup. Support for partition names containing blank and slash characters has been fixed. SSH secrets (private keys) are no longer copied to the recovery system, which prevents their leaking. As a consequence, SSH in the recovery system cannot use the secret keys from the original system. See the description of the SSH_FILES , SSH_ROOT_PASSWORD , and SSH_UNPROTECTED_PRIVATE_KEYS variables in the /usr/share/rear/conf/default.conf file for more information on controlling this behavior. Numerous improvements to support of the IBM POWER Systems architecture have been added, such as support for including the backup in the rescue ISO image and for multiple ISOs. Multipath support has been enhanced. For example, support for software RAID on multipath devices has been added. Support for secure boot has been added. The SECURE_BOOT_BOOTLOADER variable can be used for specifying any custom-signed boot loader. Support for restoring disk layout of software RAID devices with missing components has been added. The standard error and standard output channels of programs invoked by ReaR are redirected to the log file instead of appearing on the terminal. Programs prompting for user input on the standard output or standard error channel will not work correctly. Their standard output channel should be redirected to file descriptor 7 and standard input channel from file descriptor 6 . See the Coding Style documentation on the ReaR wiki for more details. Support for recovery of systems with LVM thin pool and thin volumes has been added. (BZ# 1496518 , BZ# 1484051 , BZ# 1534646 , BZ#1498828, BZ# 1571266 , BZ# 1539063 , BZ# 1464353 , BZ# 1536023 ) The rear package now includes a user guide This update adds the user guide into the rear package, which provides the Relax-and-Recover tool (ReaR). After installation of rear , you can find the user guide in the /usr/share/doc/rear-2.4/relax-and-recover-user-guide.html file. (BZ# 1418459 ) The pcsc-lite interface now supports up to 32 devices In Red Hat Enterprise Linux 7.6, the number of devices the pcsc-lite smart card interface supports has been increased from 16 to 32. 
(BZ#1516993) tuned rebased to version 2.10.0 The tuned packages have been rebased to upstream version 2.10.0, which provides a number of bug fixes and enhancements over the previously available version. Notable changes include: an added mssql profile (shipped in a separate tuned-profiles-mssql subpackage) the tuned-adm tool now displays a relevant log snippet in case of error fixed verification of a CPU mask on systems with more than 32 cores (BZ# 1546598 ) The STOU FTP command has an improved algorithm for generating unique file names The STOU FTP command allows transferring files to the server and storing them with unique file names. Previously, the STOU command created the names of the files by taking the file name, supplied as an argument to the command, and adding a numerical suffix and incrementing the suffix by one. In some cases, this led to a race condition. Subsequently, scripts that used STOU to upload files with the same file name could fail. This update modifies STOU to create unique file names in a way that helps to avoid the race condition and improves the functioning of scripts that use STOU . To enable the improved algorithm for generating unique file names using STOU , enable the better_stou option in the configuration file (usually /etc/vsftpd/vsftpd.conf ) by adding the following line: better_stou=YES (BZ#1479237) rsyslog imfile now supports symlinks With this update, the rsyslog imfile module delivers better performance and more configuration options. This makes it possible to use the module for more complicated file monitoring use cases. Users of rsyslog can now use file monitors with glob patterns anywhere along the configured path and rotate symlink targets with increased data throughput when compared to the previous version. (BZ# 1531295 ) New rsyslog module: omkafka To enable Kafka centralized data storage scenarios, you can now forward logs to the Kafka infrastructure using the new omkafka module. (BZ#1482819) New rsyslog module: mmkubernetes To enable scenarios that use rsyslog instead of other log collectors and that require Kubernetes container metadata, a new mmkubernetes module has been added to Red Hat Enterprise Linux. (BZ# 1539193 )
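The partition-resizing and SSH variables mentioned in the rear note above are plain bash-style assignments that you override in ReaR's local configuration rather than in /usr/share/rear/conf/default.conf itself. The following is a minimal sketch of such an /etc/rear/local.conf; the backup method, the NFS location, and the password value are illustrative assumptions, not values taken from the release note:
# /etc/rear/local.conf - local overrides for /usr/share/rear/conf/default.conf
OUTPUT=ISO
BACKUP=NETFS
BACKUP_URL=nfs://backup.example.com/export/rear    # hypothetical backup target
# Restore the pre-2.4 behavior of resizing all partitions in migration mode:
AUTORESIZE_PARTITIONS=yes
# SSH private keys are no longer copied into the rescue system; set a
# temporary root password if SSH access to the rescue system is required:
SSH_ROOT_PASSWORD="recovery-only"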
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/new_features_servers_and_services
Chapter 1. About OpenShift Service Mesh
Chapter 1. About OpenShift Service Mesh Red Hat OpenShift Service Mesh, which is based on the open source Istio project , addresses a variety of problems in a microservice architecture by creating a centralized point of control in an application. 1.1. Introduction to Red Hat OpenShift Service Mesh Red Hat OpenShift Service Mesh adds a transparent layer on existing distributed applications without requiring any changes to the application code. The mesh introduces an easy way to create a network of deployed services that provides discovery, load balancing, service-to-service authentication, failure recovery, metrics, and monitoring. A service mesh also provides more complex operational functionality, including A/B testing, canary releases, access control, and end-to-end authentication. Microservice architectures split the work of enterprise applications into modular services, which can make scaling and maintenance easier. However, as an enterprise application built on a microservice architecture grows in size and complexity, it becomes difficult to understand and manage. Service Mesh can address those architecture problems by capturing or intercepting traffic between services and can modify, redirect, or create new requests to other services. 1.2. Core features Red Hat OpenShift Service Mesh provides a number of key capabilities uniformly across a network of services: Traffic Management - Control the flow of traffic and API calls between services, make calls more reliable, and make the network more robust in the face of adverse conditions. Service Identity and Security - Provide services in the mesh with a verifiable identity and provide the ability to protect service traffic as it flows over networks of varying degrees of trustworthiness. Policy Enforcement - Apply organizational policy to the interaction between services, ensure access policies are enforced and resources are fairly distributed among consumers. Policy changes are made by configuring the mesh, not by changing application code. Telemetry - Gain understanding of the dependencies between services and the nature and flow of traffic between them, providing the ability to quickly identify issues.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_service_mesh/3.0.0tp1/html/about/ossm-about-openshift-service-mesh
Chapter 7. New Packages
Chapter 7. New Packages 7.1. RHEA-2014:1521 - new package: convmv A new convmv package is now available for Red Hat Enterprise Linux 6. The convmv package contains a tool for converting the character-set encoding of file names. It is particularly useful for converting file names encoded in a legacy charset encoding such as ISO-8859 to UTF-8, or EUC to UTF-8. This enhancement update adds the convmv package to Red Hat Enterprise Linux 6. (BZ# 1005068 ) All users who require convmv are advised to install this new package.
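As a hedged illustration of the conversion the package performs, the following commands convert file names under an assumed /srv/data directory from ISO-8859-1 to UTF-8; convmv only previews the renames until --notest is added:
# Preview the rename operations (test mode is the default):
$ convmv -f iso-8859-1 -t utf-8 -r /srv/data
# Apply the conversion after reviewing the preview:
$ convmv -f iso-8859-1 -t utf-8 -r --notest /srv/data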
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/ch07
Chapter 2. Preparing to install a cluster that uses SR-IOV or OVS-DPDK on OpenStack
Chapter 2. Preparing to install a cluster that uses SR-IOV or OVS-DPDK on OpenStack Before you install an OpenShift Container Platform cluster that uses single-root I/O virtualization (SR-IOV) or Open vSwitch with the Data Plane Development Kit (OVS-DPDK) on Red Hat OpenStack Platform (RHOSP), you must understand the requirements for each technology and then perform preparatory tasks. 2.1. Requirements for clusters on RHOSP that use either SR-IOV or OVS-DPDK If you use SR-IOV or OVS-DPDK with your deployment, you must meet the following requirements: RHOSP compute nodes must use a flavor that supports huge pages. 2.1.1. Requirements for clusters on RHOSP that use SR-IOV To use single-root I/O virtualization (SR-IOV) with your deployment, you must meet the following requirements: Plan your Red Hat OpenStack Platform (RHOSP) SR-IOV deployment . OpenShift Container Platform must support the NICs that you use. For a list of supported NICs, see "About Single Root I/O Virtualization (SR-IOV) hardware networks" in the "Hardware networks" subsection of the "Networking" documentation. For each node that will have an attached SR-IOV NIC, your RHOSP cluster must have: One instance from the RHOSP quota One port attached to the machines subnet One port for each SR-IOV Virtual Function A flavor with at least 16 GB memory, 4 vCPUs, and 25 GB storage space SR-IOV deployments often employ performance optimizations, such as dedicated or isolated CPUs. For maximum performance, configure your underlying RHOSP deployment to use these optimizations, and then run OpenShift Container Platform compute machines on the optimized infrastructure. For more information about configuring performant RHOSP compute nodes, see Configuring Compute nodes for performance . 2.1.2. Requirements for clusters on RHOSP that use OVS-DPDK To use Open vSwitch with the Data Plane Development Kit (OVS-DPDK) with your deployment, you must meet the following requirements: Plan your Red Hat OpenStack Platform (RHOSP) OVS-DPDK deployment by referring to Planning your OVS-DPDK deployment in the Network Functions Virtualization Planning and Configuration Guide. Configure your RHOSP OVS-DPDK deployment according to Configuring an OVS-DPDK deployment in the Network Functions Virtualization Planning and Configuration Guide. 2.2. Preparing to install a cluster that uses SR-IOV You must configure RHOSP before you install a cluster that uses SR-IOV on it. 2.2.1. Creating SR-IOV networks for compute machines If your Red Hat OpenStack Platform (RHOSP) deployment supports single root I/O virtualization (SR-IOV) , you can provision SR-IOV networks that compute machines run on. Note The following instructions entail creating an external flat network and an external, VLAN-based network that can be attached to a compute machine. Depending on your RHOSP deployment, other network types might be required. Prerequisites Your cluster supports SR-IOV. Note If you are unsure about what your cluster supports, review the OpenShift Container Platform SR-IOV hardware networks documentation. You created radio and uplink provider networks as part of your RHOSP deployment. The names radio and uplink are used in all example commands to represent these networks.
Procedure On a command line, create a radio RHOSP network: $ openstack network create radio --provider-physical-network radio --provider-network-type flat --external Create an uplink RHOSP network: $ openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external Create a subnet for the radio network: $ openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio Create a subnet for the uplink network: $ openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink 2.3. Preparing to install a cluster that uses OVS-DPDK You must configure RHOSP before you install a cluster that uses OVS-DPDK on it. Complete Creating a flavor and deploying an instance for OVS-DPDK before you install a cluster on RHOSP. After you perform preinstallation tasks, install your cluster by following the most relevant OpenShift Container Platform on RHOSP installation instructions. Then, perform the tasks under "Next steps" on this page. 2.4. Next steps For either type of deployment: Configure the Node Tuning Operator with huge pages support . To complete SR-IOV configuration after you deploy your cluster: Install the SR-IOV Operator . Configure your SR-IOV network device . Create SR-IOV compute machines . Consult the following references after you deploy your cluster to improve its performance: A test pod template for clusters that use OVS-DPDK on OpenStack . A test pod template for clusters that use SR-IOV on OpenStack . A performance profile template for clusters that use OVS-DPDK on OpenStack .
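The sizing and huge-page requirements listed in section 2.1 translate into a compute flavor along the lines of the following sketch; the flavor name and the hw:mem_page_size value are assumptions for illustration, not values mandated by this guide:
$ openstack flavor create ocp-sriov-worker \
    --ram 16384 --vcpus 4 --disk 25 \
    --property hw:mem_page_size=large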
[ "openstack network create radio --provider-physical-network radio --provider-network-type flat --external", "openstack network create uplink --provider-physical-network uplink --provider-network-type vlan --external", "openstack subnet create --network radio --subnet-range <radio_network_subnet_range> radio", "openstack subnet create --network uplink --subnet-range <uplink_network_subnet_range> uplink" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_openstack/installing-openstack-nfv-preparing
Appendix A. Revision History
Appendix A. Revision History Revision History Revision 1-0.48 Thu Jan 21 2016 Lenka Spackova Added information about the removed systemtap-grapher package to the Deprecated Functionality Chapter. Revision 1-0.47 Tue Jan 27 2015 Milan Navratil Updated the Red Hat Enterprise Linux 6.5 Technical Notes. Revision 1-0.42 Mon Oct 20 2014 Miroslav Svoboda Updated the Red Hat Enterprise Linux 6.5 Technical Notes with the latest kernel changes. Revision 1-0.32 Fri Jun 20 2014 Miroslav Svoboda Updated the Red Hat Enterprise Linux 6.5 Technical Notes with the latest kernel erratum, RHSA-2014-0771. Revision 1-0.30 Mon Jun 02 2014 Eliska Slobodova Clarified that iSCSI and FCoE boot are fully supported features in Red Hat Enterprise Linux 6.5. Revision 1-0.29 Thu May 08 2014 Miroslav Svoboda Updated the Red Hat Enterprise Linux 6.5 Technical Notes with the latest kernel erratum, RHSA-2014-0475. Revision 1-0.25 Tue Feb 18 2014 Eliska Slobodova Added a Mellanox SR-IOV Technology Preview. Revision 1-0.23 Wed Feb 12 2014 Miroslav Svoboda Updated the Red Hat Enterprise Linux 6.5 Technical Notes with the latest kernel erratum, RHSA-2014-0159. Revision 1-0.22 Wed Jan 22 2014 Eliska Slobodova Added the missing eCryptfs Technology Preview. Revision 1-0.21 Fri Jan 10 2014 Eliska Slobodova Fixed several typos. Revision 1-0.17 Fri Dec 13 2013 Miroslav Svoboda Updated the Red Hat Enterprise Linux 6.5 Technical Notes with the latest kernel erratum, RHSA-2013-1801. Revision 1-0.15 Thu Nov 21 2013 Eliska Slobodova Release of the Red Hat Enterprise Linux 6.5 Technical Notes. Revision 1-0.0 Thu Oct 03 2013 Eliska Slobodova Release of the Red Hat Enterprise Linux 6.5 Beta Technical Notes.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/appe-technical_notes-revision_history
Chapter 7. Windows node upgrades
Chapter 7. Windows node upgrades You can ensure your Windows nodes have the latest updates by upgrading the Windows Machine Config Operator (WMCO). 7.1. Windows Machine Config Operator upgrades When a new version of the Windows Machine Config Operator (WMCO) is released that is compatible with the current cluster version, the Operator is upgraded based on the upgrade channel and subscription approval strategy it was installed with when using the Operator Lifecycle Manager (OLM). The WMCO upgrade results in the Kubernetes components in the Windows machine being upgraded. Because WMCO 6.0.0 uses containerd as the default container runtime instead of Docker, note the following changes that are made during the upgrade: For nodes created using a machine set: All machine objects are deleted, which results in the draining and deletion of any Windows nodes. New Windows nodes are created. The upgraded WMCO configures the new Windows nodes with containerd as the default runtime. After the new Windows nodes join the OpenShift Container Platform cluster, you can deploy pods on those nodes. For Bring-Your-Own-Host (BYOH) nodes: The kubelet, kube-proxy, CNI, and the hybrid-overlay components, which were installed by the WMCO, are all uninstalled. Any Windows OS-specific configurations that were created as part of configuring the instance, such as HNS networks, are deleted or reverted. The WMCO installs containerd as the default runtime, and reinstalls the kubelet, kube-proxy, CNI, and hybrid-overlay components. The kubelet service starts. After the new Windows nodes join the OpenShift Container Platform cluster, you can deploy pods on those nodes. If any Docker service is present, it continues to run. Alternatively, you can manually uninstall Docker. Note If you are upgrading to a new version of the WMCO and want to use cluster monitoring, you must have the openshift.io/cluster-monitoring=true label present in the WMCO namespace. If you add the label to a pre-existing WMCO namespace, and there are already Windows nodes configured, restart the WMCO pod to allow monitoring graphs to display. For a non-disruptive upgrade, the WMCO terminates the Windows machines configured by the previous version of the WMCO and recreates them using the current version. This is done by deleting the Machine object, which results in the drain and deletion of the Windows node. To facilitate an upgrade, the WMCO adds a version annotation to all the configured nodes. During an upgrade, a mismatch in version annotation results in the deletion and recreation of a Windows machine. To have minimal service disruptions during an upgrade, the WMCO only updates one Windows machine at a time. Important The WMCO is only responsible for updating Kubernetes components, not for Windows operating system updates. You provide the Windows image when creating the VMs; therefore, you are responsible for providing an updated image. You can provide an updated Windows image by changing the image configuration in the MachineSet spec. For more information on Operator upgrades using the Operator Lifecycle Manager (OLM), see Updating installed Operators .
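A short sketch of the monitoring note above, assuming the WMCO runs in its default openshift-windows-machine-config-operator namespace; the pod name is a placeholder that you look up first:
# Add the cluster monitoring label to the WMCO namespace:
$ oc label namespace openshift-windows-machine-config-operator openshift.io/cluster-monitoring=true
# If Windows nodes are already configured, restart the WMCO pod so that
# the monitoring graphs display:
$ oc get pods -n openshift-windows-machine-config-operator
$ oc delete pod <wmco_pod_name> -n openshift-windows-machine-config-operator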
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/windows_container_support_for_openshift/windows-node-upgrades
Chapter 2. Configuring an IBM Cloud account
Chapter 2. Configuring an IBM Cloud account Before you can install OpenShift Container Platform, you must configure an IBM Cloud(R) account. 2.1. Prerequisites You have an IBM Cloud(R) account with a subscription. You cannot install OpenShift Container Platform on a free or on a trial IBM Cloud(R) account. 2.2. Quotas and limits on IBM Power Virtual Server The OpenShift Container Platform cluster uses several IBM Cloud(R) and IBM Power(R) Virtual Server components, and the default quotas and limits affect your ability to install OpenShift Container Platform clusters. If you use certain cluster configurations, deploy your cluster in certain regions, or run multiple clusters from your account, you might need to request additional resources for your IBM Cloud(R) account. For a comprehensive list of the default IBM Cloud(R) quotas and service limits, see the IBM Cloud(R) documentation for Quotas and service limits . Virtual Private Cloud Each OpenShift Container Platform cluster creates its own Virtual Private Cloud (VPC). The default quota of VPCs per region is 10. If you have 10 VPCs created, you will need to increase your quota before attempting an installation. Application load balancer By default, each cluster creates two application load balancers (ALBs): Internal load balancer for the control plane API server External load balancer for the control plane API server You can create additional LoadBalancer service objects to create additional ALBs. The default quota of VPC ALBs are 50 per region. To have more than 50 ALBs, you must increase this quota. VPC ALBs are supported. Classic ALBs are not supported for IBM Power(R) Virtual Server. Transit Gateways Each OpenShift Container Platform cluster creates its own Transit Gateway to enable communication with a VPC. The default quota of transit gateways per account is 10. If you have 10 transit gateways created, you will need to increase your quota before attempting an installation. Dynamic Host Configuration Protocol Service There is a limit of one Dynamic Host Configuration Protocol (DHCP) service per IBM Power(R) Virtual Server instance. Networking Due to networking limitations, there is a restriction of one OpenShift cluster installed through IPI per zone per account. This is not configurable. Virtual Server Instances By default, a cluster creates server instances with the following resources : 0.5 CPUs 32 GB RAM System Type: s922 Processor Type: uncapped , shared Storage Tier: Tier-3 The following nodes are created: One bootstrap machine, which is removed after the installation is complete Three control plane nodes Three compute nodes For more information, see Creating a Power Systems Virtual Server in the IBM Cloud(R) documentation. 2.3. Configuring DNS resolution How you configure DNS resolution depends on the type of OpenShift Container Platform cluster you are installing: If you are installing a public cluster, you use IBM Cloud(R) Internet Services (CIS). If you are installing a private cluster, you use IBM Cloud(R) DNS Services (DNS Services). 2.4. Using IBM Cloud Internet Services for DNS resolution The installation program uses IBM Cloud(R) Internet Services (CIS) to configure cluster DNS resolution and provide name lookup for a public cluster. Note This offering does not support IPv6, so dual stack or IPv6 environments are not possible. You must create a domain zone in CIS in the same account as your cluster. You must also ensure the zone is authoritative for the domain. You can do this using a root domain or subdomain. 
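Because the default quota is 10 VPCs per region, it can be useful to count the existing VPCs before you start an installation. The following is a hedged sketch using the IBM Cloud CLI VPC plugin; the target region is an assumption:
$ ibmcloud plugin install vpc-infrastructure
$ ibmcloud target -r eu-de      # hypothetical target region
$ ibmcloud is vpcs              # compare the number of VPCs against the quota of 10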
Prerequisites You have installed the IBM Cloud(R) CLI . You have an existing domain and registrar. For more information, see the IBM(R) documentation . Procedure Create a CIS instance to use with your cluster: Install the CIS plugin: $ ibmcloud plugin install cis Log in to IBM Cloud(R) by using the CLI: $ ibmcloud login Create the CIS instance: $ ibmcloud cis instance-create <instance_name> standard-next 1 1 At a minimum, you require a Standard plan for CIS to manage the cluster subdomain and its DNS records. Note After you have configured your registrar or DNS provider, it can take up to 24 hours for the changes to take effect. Connect an existing domain to your CIS instance: Set the context instance for CIS: $ ibmcloud cis instance-set <instance_CRN> 1 1 The instance CRN (Cloud Resource Name). For example: ibmcloud cis instance-set crn:v1:bluemix:public:power-iaas:osa21:a/65b64c1f1c29460d8c2e4bbfbd893c2c:c09233ac-48a5-4ccb-a051-d1cfb3fc7eb5:: Add the domain for CIS: $ ibmcloud cis domain-add <domain_name> 1 1 The fully qualified domain name. You can use either the root domain or subdomain value as the domain name, depending on which you plan to configure. Note A root domain uses the form openshiftcorp.com . A subdomain uses the form clusters.openshiftcorp.com . Open the CIS web console , navigate to the Overview page, and note your CIS name servers. These name servers will be used in the next step. Configure the name servers for your domains or subdomains at the domain's registrar or DNS provider. For more information, see the IBM Cloud(R) documentation . 2.5. IBM Cloud IAM Policies and API Key To install OpenShift Container Platform into your IBM Cloud(R) account, the installation program requires an IAM API key, which provides authentication and authorization to access IBM Cloud(R) service APIs. You can use an existing IAM API key that contains the required policies or create a new one. For an IBM Cloud(R) IAM overview, see the IBM Cloud(R) documentation . 2.5.1. Pre-requisite permissions Table 2.1. Pre-requisite permissions Role Access Viewer, Operator, Editor, Administrator, Reader, Writer, Manager Internet Services service in <resource_group> resource group Viewer, Operator, Editor, Administrator, User API key creator, Service ID creator IAM Identity Service service Viewer, Operator, Administrator, Editor, Reader, Writer, Manager, Console Administrator VPC Infrastructure Services service in <resource_group> resource group Viewer Resource Group: Access to view the resource group itself. The resource type should equal Resource group , with a value of <your_resource_group_name>. 2.5.2. Cluster-creation permissions Table 2.2. Cluster-creation permissions Role Access Viewer <resource_group> (Resource Group Created for Your Team) Viewer, Operator, Editor, Reader, Writer, Manager All Identity and IAM enabled services in Default resource group Viewer, Reader Internet Services service Viewer, Operator, Reader, Writer, Manager, Content Reader, Object Reader, Object Writer, Editor Cloud Object Storage service Viewer Default resource group: The resource type should equal Resource group , with a value of Default . If your account administrator changed your account's default resource group to something other than Default, use that value instead.
Viewer, Operator, Editor, Reader, Manager Workspace for IBM Power(R) Virtual Server service in <resource_group> resource group Viewer, Operator, Editor, Reader, Writer, Manager, Administrator Internet Services service in <resource_group> resource group: CIS functional scope string equals reliability Viewer, Operator, Editor Transit Gateway service Viewer, Operator, Editor, Administrator, Reader, Writer, Manager, Console Administrator VPC Infrastructure Services service <resource_group> resource group 2.5.3. Access policy assignment In IBM Cloud(R) IAM, access policies can be attached to different subjects: Access group (Recommended) Service ID User The recommended method is to define IAM access policies in an access group . This helps organize all the access required for OpenShift Container Platform and enables you to onboard users and service IDs to this group. You can also assign access to users and service IDs directly, if desired. 2.5.4. Creating an API key You must create a user API key or a service ID API key for your IBM Cloud(R) account. Prerequisites You have assigned the required access policies to your IBM Cloud(R) account. You have attached your IAM access policies to an access group, or other appropriate resource. Procedure Create an API key, depending on how you defined your IAM access policies. For example, if you assigned your access policies to a user, you must create a user API key . If you assigned your access policies to a service ID, you must create a service ID API key . If your access policies are assigned to an access group, you can use either API key type. For more information on IBM Cloud(R) API keys, see Understanding API keys . 2.6. Supported IBM Power Virtual Server regions and zones You can deploy an OpenShift Container Platform cluster to the following regions: dal (Dallas, USA) dal10 dal12 eu-de (Frankfurt, Germany) eu-de-1 eu-de-2 lon (London, UK) lon04 mad (Madrid, Spain) mad02 mad04 osa (Osaka, Japan) osa21 sao (Sao Paulo, Brazil) sao01 sao04 syd (Sydney, Australia) syd04 wdc (Washington DC, USA) wdc06 wdc07 You might optionally specify the IBM Cloud(R) region in which the installer will create any VPC components. Supported regions in IBM Cloud(R) are: us-south eu-de eu-es eu-gb jp-osa au-syd br-sao ca-tor jp-tok 2.7. Next steps Creating an IBM Power(R) Virtual Server workspace
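As a hedged sketch of the user API key path described above, the following creates a key with the IBM Cloud CLI; the key name, description, and output file are illustrative assumptions:
$ ibmcloud login
$ ibmcloud iam api-key-create ocp-powervs-key -d "API key for installing OpenShift Container Platform on Power Virtual Server" --file ocp-powervs-key.json
# Store the generated file securely; the key value cannot be retrieved again later.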
[ "ibmcloud plugin install cis", "ibmcloud login", "ibmcloud cis instance-create <instance_name> standard-next 1", "ibmcloud cis instance-set <instance_CRN> 1", "ibmcloud cis domain-add <domain_name> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_ibm_power_virtual_server/installing-ibm-cloud-account-power-vs
Chapter 13. Kernel Process Tapset
Chapter 13. Kernel Process Tapset This family of probe points is used to probe process-related activities. It contains the following probe points:
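As a hedged illustration of how probes from this family are used, the following SystemTap one-liner traces process creation and exec events; the kprocess.create and kprocess.exec probe points and their new_pid and filename variables are assumed to match the upstream tapset:
$ stap -e 'probe kprocess.create { printf("%s (%d) created pid %d\n", execname(), pid(), new_pid) }
           probe kprocess.exec { printf("%s (%d) is exec-ing %s\n", execname(), pid(), filename) }'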
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/kprocess.stp
Chapter 18. Route [route.openshift.io/v1]
Chapter 18. Route [route.openshift.io/v1] Description A route allows developers to expose services through an HTTP(S) aware load balancing and proxy layer via a public DNS entry. The route may further specify TLS options and a certificate, or specify a public CNAME that the router should also accept for HTTP and HTTPS traffic. An administrator typically configures their router to be visible outside the cluster firewall, and may also add additional security, caching, or traffic controls on the service content. Routers usually talk directly to the service endpoints. Once a route is created, the host field may not be changed. Generally, routers use the oldest route with a given host when resolving conflicts. Routers are subject to additional customization and may support additional controls via the annotations field. Because administrators may configure multiple routers, the route status field is used to return information to clients about the names and states of the route under each router. If a client chooses a duplicate name, for instance, the route status conditions are used to indicate the route cannot be chosen. To enable HTTP/2 ALPN on a route it requires a custom (non-wildcard) certificate. This prevents connection coalescing by clients, notably web browsers. We do not support HTTP/2 ALPN on routes that use the default certificate because of the risk of connection re-use/coalescing. Routes that do not have their own custom certificate will not be HTTP/2 ALPN-enabled on either the frontend or the backend. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 18.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object RouteSpec describes the hostname or path the route exposes, any security information, and one to four backends (services) the route points to. Requests are distributed among the backends depending on the weights assigned to each backend. When using roundrobin scheduling the portion of requests that go to each backend is the backend weight divided by the sum of all of the backend weights. When the backend has more than one endpoint the requests that end up on the backend are roundrobin distributed among the endpoints. Weights are between 0 and 256 with default 100. Weight 0 causes no requests to the backend. If all weights are zero the route will be considered to have no backends and return a standard 503 response. The tls field is optional and allows specific certificates or behavior for the route. 
Routers typically configure a default certificate on a wildcard domain to terminate routes without explicit certificates, but custom hostnames usually must choose passthrough (send traffic directly to the backend via the TLS Server-Name- Indication field) or provide a certificate. status object RouteStatus provides relevant info about the status of a route, including which routers acknowledge it. 18.1.1. .spec Description RouteSpec describes the hostname or path the route exposes, any security information, and one to four backends (services) the route points to. Requests are distributed among the backends depending on the weights assigned to each backend. When using roundrobin scheduling the portion of requests that go to each backend is the backend weight divided by the sum of all of the backend weights. When the backend has more than one endpoint the requests that end up on the backend are roundrobin distributed among the endpoints. Weights are between 0 and 256 with default 100. Weight 0 causes no requests to the backend. If all weights are zero the route will be considered to have no backends and return a standard 503 response. The tls field is optional and allows specific certificates or behavior for the route. Routers typically configure a default certificate on a wildcard domain to terminate routes without explicit certificates, but custom hostnames usually must choose passthrough (send traffic directly to the backend via the TLS Server-Name- Indication field) or provide a certificate. Type object Required to Property Type Description alternateBackends array alternateBackends allows up to 3 additional backends to be assigned to the route. Only the Service kind is allowed, and it will be defaulted to Service. Use the weight field in RouteTargetReference object to specify relative preference. alternateBackends[] object RouteTargetReference specifies the target that resolve into endpoints. Only the 'Service' kind is allowed. Use 'weight' field to emphasize one over others. host string host is an alias/DNS that points to the service. Optional. If not specified a route name will typically be automatically chosen. Must follow DNS952 subdomain conventions. httpHeaders object RouteHTTPHeaders defines policy for HTTP headers. path string path that the router watches for, to route traffic for to the service. Optional port object RoutePort defines a port mapping from a router to an endpoint in the service endpoints. subdomain string subdomain is a DNS subdomain that is requested within the ingress controller's domain (as a subdomain). If host is set this field is ignored. An ingress controller may choose to ignore this suggested name, in which case the controller will report the assigned name in the status.ingress array or refuse to admit the route. If this value is set and the server does not support this field host will be populated automatically. Otherwise host is left empty. The field may have multiple parts separated by a dot, but not all ingress controllers may honor the request. This field may not be changed after creation except by a user with the update routes/custom-host permission. Example: subdomain frontend automatically receives the router subdomain apps.mycluster.com to have a full hostname frontend.apps.mycluster.com . tls object TLSConfig defines config used to secure a route and provide termination to object RouteTargetReference specifies the target that resolve into endpoints. Only the 'Service' kind is allowed. Use 'weight' field to emphasize one over others. 
wildcardPolicy string Wildcard policy if any for the route. Currently only 'Subdomain' or 'None' is allowed. 18.1.2. .spec.alternateBackends Description alternateBackends allows up to 3 additional backends to be assigned to the route. Only the Service kind is allowed, and it will be defaulted to Service. Use the weight field in RouteTargetReference object to specify relative preference. Type array 18.1.3. .spec.alternateBackends[] Description RouteTargetReference specifies the target that resolve into endpoints. Only the 'Service' kind is allowed. Use 'weight' field to emphasize one over others. Type object Required kind name Property Type Description kind string The kind of target that the route is referring to. Currently, only 'Service' is allowed name string name of the service/target that is being referred to. e.g. name of the service weight integer weight as an integer between 0 and 256, default 100, that specifies the target's relative weight against other target reference objects. 0 suppresses requests to this backend. 18.1.4. .spec.httpHeaders Description RouteHTTPHeaders defines policy for HTTP headers. Type object Property Type Description actions object RouteHTTPHeaderActions defines configuration for actions on HTTP request and response headers. 18.1.5. .spec.httpHeaders.actions Description RouteHTTPHeaderActions defines configuration for actions on HTTP request and response headers. Type object Property Type Description request array request is a list of HTTP request headers to modify. Currently, actions may define to either Set or Delete headers values. Actions defined here will modify the request headers of all requests made through a route. These actions are applied to a specific Route defined within a cluster i.e. connections made through a route. Currently, actions may define to either Set or Delete headers values. Route actions will be executed after IngressController actions for request headers. Actions are applied in sequence as defined in this list. A maximum of 20 request header actions may be configured. You can use this field to specify HTTP request headers that should be set or deleted when forwarding connections from the client to your application. Sample fetchers allowed are "req.hdr" and "ssl_c_der". Converters allowed are "lower" and "base64". Example header values: "%[req.hdr(X-target),lower]", "%{+Q}[ssl_c_der,base64]". Any request header configuration applied directly via a Route resource using this API will override header configuration for a header of the same name applied via spec.httpHeaders.actions on the IngressController or route annotation. Note: This field cannot be used if your route uses TLS passthrough. request[] object RouteHTTPHeader specifies configuration for setting or deleting an HTTP header. response array response is a list of HTTP response headers to modify. Currently, actions may define to either Set or Delete headers values. Actions defined here will modify the response headers of all requests made through a route. These actions are applied to a specific Route defined within a cluster i.e. connections made through a route. Route actions will be executed before IngressController actions for response headers. Actions are applied in sequence as defined in this list. A maximum of 20 response header actions may be configured. You can use this field to specify HTTP response headers that should be set or deleted when forwarding responses from your application to the client. Sample fetchers allowed are "res.hdr" and "ssl_c_der". 
Converters allowed are "lower" and "base64". Example header values: "%[res.hdr(X-target),lower]", "%{+Q}[ssl_c_der,base64]". Note: This field cannot be used if your route uses TLS passthrough. response[] object RouteHTTPHeader specifies configuration for setting or deleting an HTTP header. 18.1.6. .spec.httpHeaders.actions.request Description request is a list of HTTP request headers to modify. Currently, actions may define to either Set or Delete headers values. Actions defined here will modify the request headers of all requests made through a route. These actions are applied to a specific Route defined within a cluster i.e. connections made through a route. Currently, actions may define to either Set or Delete headers values. Route actions will be executed after IngressController actions for request headers. Actions are applied in sequence as defined in this list. A maximum of 20 request header actions may be configured. You can use this field to specify HTTP request headers that should be set or deleted when forwarding connections from the client to your application. Sample fetchers allowed are "req.hdr" and "ssl_c_der". Converters allowed are "lower" and "base64". Example header values: "%[req.hdr(X-target),lower]", "%{+Q}[ssl_c_der,base64]". Any request header configuration applied directly via a Route resource using this API will override header configuration for a header of the same name applied via spec.httpHeaders.actions on the IngressController or route annotation. Note: This field cannot be used if your route uses TLS passthrough. Type array 18.1.7. .spec.httpHeaders.actions.request[] Description RouteHTTPHeader specifies configuration for setting or deleting an HTTP header. Type object Required name action Property Type Description action object RouteHTTPHeaderActionUnion specifies an action to take on an HTTP header. name string name specifies the name of a header on which to perform an action. Its value must be a valid HTTP header name as defined in RFC 2616 section 4.2. The name must consist only of alphanumeric and the following special characters, "-!#USD%&'*+.^_`". The following header names are reserved and may not be modified via this API: Strict-Transport-Security, Proxy, Cookie, Set-Cookie. It must be no more than 255 characters in length. Header name must be unique. 18.1.8. .spec.httpHeaders.actions.request[].action Description RouteHTTPHeaderActionUnion specifies an action to take on an HTTP header. Type object Required type Property Type Description set object RouteSetHTTPHeader specifies what value needs to be set on an HTTP header. type string type defines the type of the action to be applied on the header. Possible values are Set or Delete. Set allows you to set HTTP request and response headers. Delete allows you to delete HTTP request and response headers. 18.1.9. .spec.httpHeaders.actions.request[].action.set Description RouteSetHTTPHeader specifies what value needs to be set on an HTTP header. Type object Required value Property Type Description value string value specifies a header value. Dynamic values can be added. The value will be interpreted as an HAProxy format string as defined in http://cbonte.github.io/haproxy-dconv/2.6/configuration.html#8.2.6 and may use HAProxy's %[] syntax and otherwise must be a valid HTTP header value as defined in https://datatracker.ietf.org/doc/html/rfc7230#section-3.2 . The value of this field must be no more than 16384 characters in length. 
Note that the total size of all net added headers after interpolating dynamic values must not exceed the value of spec.tuningOptions.headerBufferMaxRewriteBytes on the IngressController. 18.1.10. .spec.httpHeaders.actions.response Description response is a list of HTTP response headers to modify. Currently, actions may define to either Set or Delete headers values. Actions defined here will modify the response headers of all requests made through a route. These actions are applied to a specific Route defined within a cluster i.e. connections made through a route. Route actions will be executed before IngressController actions for response headers. Actions are applied in sequence as defined in this list. A maximum of 20 response header actions may be configured. You can use this field to specify HTTP response headers that should be set or deleted when forwarding responses from your application to the client. Sample fetchers allowed are "res.hdr" and "ssl_c_der". Converters allowed are "lower" and "base64". Example header values: "%[res.hdr(X-target),lower]", "%{+Q}[ssl_c_der,base64]". Note: This field cannot be used if your route uses TLS passthrough. Type array 18.1.11. .spec.httpHeaders.actions.response[] Description RouteHTTPHeader specifies configuration for setting or deleting an HTTP header. Type object Required name action Property Type Description action object RouteHTTPHeaderActionUnion specifies an action to take on an HTTP header. name string name specifies the name of a header on which to perform an action. Its value must be a valid HTTP header name as defined in RFC 2616 section 4.2. The name must consist only of alphanumeric and the following special characters, "-!#USD%&'*+.^_`". The following header names are reserved and may not be modified via this API: Strict-Transport-Security, Proxy, Cookie, Set-Cookie. It must be no more than 255 characters in length. Header name must be unique. 18.1.12. .spec.httpHeaders.actions.response[].action Description RouteHTTPHeaderActionUnion specifies an action to take on an HTTP header. Type object Required type Property Type Description set object RouteSetHTTPHeader specifies what value needs to be set on an HTTP header. type string type defines the type of the action to be applied on the header. Possible values are Set or Delete. Set allows you to set HTTP request and response headers. Delete allows you to delete HTTP request and response headers. 18.1.13. .spec.httpHeaders.actions.response[].action.set Description RouteSetHTTPHeader specifies what value needs to be set on an HTTP header. Type object Required value Property Type Description value string value specifies a header value. Dynamic values can be added. The value will be interpreted as an HAProxy format string as defined in http://cbonte.github.io/haproxy-dconv/2.6/configuration.html#8.2.6 and may use HAProxy's %[] syntax and otherwise must be a valid HTTP header value as defined in https://datatracker.ietf.org/doc/html/rfc7230#section-3.2 . The value of this field must be no more than 16384 characters in length. Note that the total size of all net added headers after interpolating dynamic values must not exceed the value of spec.tuningOptions.headerBufferMaxRewriteBytes on the IngressController. 18.1.14. .spec.port Description RoutePort defines a port mapping from a router to an endpoint in the service endpoints. Type object Required targetPort Property Type Description targetPort IntOrString The target port on pods selected by the service this route points to. 
If this is a string, it will be looked up as a named port in the target endpoints port list. Required 18.1.15. .spec.tls Description TLSConfig defines config used to secure a route and provide termination Type object Required termination Property Type Description caCertificate string caCertificate provides the cert authority certificate contents certificate string certificate provides certificate contents. This should be a single serving certificate, not a certificate chain. Do not include a CA certificate. destinationCACertificate string destinationCACertificate provides the contents of the ca certificate of the final destination. When using reencrypt termination this file should be provided in order to have routers use it for health checks on the secure connection. If this field is not specified, the router may provide its own destination CA and perform hostname validation using the short service name (service.namespace.svc), which allows infrastructure generated certificates to automatically verify. externalCertificate object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. insecureEdgeTerminationPolicy string insecureEdgeTerminationPolicy indicates the desired behavior for insecure connections to a route. While each router may make its own decisions on which ports to expose, this is normally port 80. * Allow - traffic is sent to the server on the insecure port (edge/reencrypt terminations only) (default). * None - no traffic is allowed on the insecure port. * Redirect - clients are redirected to the secure port. key string key provides key file contents termination string termination indicates termination type. * edge - TLS termination is done by the router and http is used to communicate with the backend (default) * passthrough - Traffic is sent straight to the destination without the router providing TLS termination * reencrypt - TLS termination is done by the router and https is used to communicate with the backend Note: passthrough termination is incompatible with httpHeader actions 18.1.16. .spec.tls.externalCertificate Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 18.1.17. .spec.to Description RouteTargetReference specifies the target that resolve into endpoints. Only the 'Service' kind is allowed. Use 'weight' field to emphasize one over others. Type object Required kind name Property Type Description kind string The kind of target that the route is referring to. Currently, only 'Service' is allowed name string name of the service/target that is being referred to. e.g. name of the service weight integer weight as an integer between 0 and 256, default 100, that specifies the target's relative weight against other target reference objects. 0 suppresses requests to this backend. 18.1.18. .status Description RouteStatus provides relevant info about the status of a route, including which routers acknowledge it. Type object Property Type Description ingress array ingress describes the places where the route may be exposed. The list of ingress points may contain duplicate Host or RouterName values. Routes are considered live once they are Ready ingress[] object RouteIngress holds information about the places where a route is exposed. 18.1.19. 
.status.ingress Description ingress describes the places where the route may be exposed. The list of ingress points may contain duplicate Host or RouterName values. Routes are considered live once they are Ready Type array 18.1.20. .status.ingress[] Description RouteIngress holds information about the places where a route is exposed. Type object Property Type Description conditions array Conditions is the state of the route, may be empty. conditions[] object RouteIngressCondition contains details for the current condition of this route on a particular router. host string Host is the host string under which the route is exposed; this value is required routerCanonicalHostname string CanonicalHostname is the external host name for the router that can be used as a CNAME for the host requested for this route. This value is optional and may not be set in all cases. routerName string Name is a name chosen by the router to identify itself; this value is required wildcardPolicy string Wildcard policy is the wildcard policy that was allowed where this route is exposed. 18.1.21. .status.ingress[].conditions Description Conditions is the state of the route, may be empty. Type array 18.1.22. .status.ingress[].conditions[] Description RouteIngressCondition contains details for the current condition of this route on a particular router. Type object Required type status Property Type Description lastTransitionTime Time RFC 3339 date and time when this condition last transitioned message string Human readable message indicating details about last transition. reason string (brief) reason for the condition's last transition, and is usually a machine and human readable constant status string Status is the status of the condition. Can be True, False, Unknown. type string Type is the type of the condition. Currently only Admitted or UnservableInFutureVersions. 18.2. API endpoints The following API endpoints are available: /apis/route.openshift.io/v1/routes GET : list or watch objects of kind Route /apis/route.openshift.io/v1/watch/routes GET : watch individual changes to a list of Route. deprecated: use the 'watch' parameter with a list operation instead. /apis/route.openshift.io/v1/namespaces/{namespace}/routes DELETE : delete collection of Route GET : list or watch objects of kind Route POST : create a Route /apis/route.openshift.io/v1/watch/namespaces/{namespace}/routes GET : watch individual changes to a list of Route. deprecated: use the 'watch' parameter with a list operation instead. /apis/route.openshift.io/v1/namespaces/{namespace}/routes/{name} DELETE : delete a Route GET : read the specified Route PATCH : partially update the specified Route PUT : replace the specified Route /apis/route.openshift.io/v1/watch/namespaces/{namespace}/routes/{name} GET : watch changes to an object of kind Route. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/route.openshift.io/v1/namespaces/{namespace}/routes/{name}/status GET : read status of the specified Route PATCH : partially update status of the specified Route PUT : replace status of the specified Route 18.2.1. /apis/route.openshift.io/v1/routes HTTP method GET Description list or watch objects of kind Route Table 18.1. HTTP responses HTTP code Reponse body 200 - OK RouteList schema 401 - Unauthorized Empty 18.2.2. /apis/route.openshift.io/v1/watch/routes HTTP method GET Description watch individual changes to a list of Route. 
deprecated: use the 'watch' parameter with a list operation instead. Table 18.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 18.2.3. /apis/route.openshift.io/v1/namespaces/{namespace}/routes HTTP method DELETE Description delete collection of Route Table 18.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 18.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Route Table 18.5. HTTP responses HTTP code Reponse body 200 - OK RouteList schema 401 - Unauthorized Empty HTTP method POST Description create a Route Table 18.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.7. Body parameters Parameter Type Description body Route schema Table 18.8. HTTP responses HTTP code Reponse body 200 - OK Route schema 201 - Created Route schema 202 - Accepted Route schema 401 - Unauthorized Empty 18.2.4. /apis/route.openshift.io/v1/watch/namespaces/{namespace}/routes HTTP method GET Description watch individual changes to a list of Route. deprecated: use the 'watch' parameter with a list operation instead. Table 18.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 18.2.5. /apis/route.openshift.io/v1/namespaces/{namespace}/routes/{name} Table 18.10. Global path parameters Parameter Type Description name string name of the Route HTTP method DELETE Description delete a Route Table 18.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 18.12. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Route Table 18.13. 
HTTP responses HTTP code Reponse body 200 - OK Route schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Route Table 18.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.15. HTTP responses HTTP code Reponse body 200 - OK Route schema 201 - Created Route schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Route Table 18.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.17. Body parameters Parameter Type Description body Route schema Table 18.18. HTTP responses HTTP code Reponse body 200 - OK Route schema 201 - Created Route schema 401 - Unauthorized Empty 18.2.6. /apis/route.openshift.io/v1/watch/namespaces/{namespace}/routes/{name} Table 18.19. Global path parameters Parameter Type Description name string name of the Route HTTP method GET Description watch changes to an object of kind Route. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 18.20. 
HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 18.2.7. /apis/route.openshift.io/v1/namespaces/{namespace}/routes/{name}/status Table 18.21. Global path parameters Parameter Type Description name string name of the Route HTTP method GET Description read status of the specified Route Table 18.22. HTTP responses HTTP code Reponse body 200 - OK Route schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Route Table 18.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.24. HTTP responses HTTP code Reponse body 200 - OK Route schema 201 - Created Route schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Route Table 18.25. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 18.26. Body parameters Parameter Type Description body Route schema Table 18.27. HTTP responses HTTP code Reponse body 200 - OK Route schema 201 - Created Route schema 401 - Unauthorized Empty
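As an illustration of how the endpoints in this reference are typically called, the following is a minimal sketch of listing and reading Route objects with curl and the oc client; the API server address, token handling, namespace (my-project), and Route name (frontend) are placeholder assumptions, not values taken from this reference.

TOKEN=$(oc whoami -t)
API=https://api.example.com:6443        # placeholder API server URL

# List Routes in one namespace (GET /apis/route.openshift.io/v1/namespaces/{namespace}/routes)
curl -k -H "Authorization: Bearer ${TOKEN}" \
  "${API}/apis/route.openshift.io/v1/namespaces/my-project/routes"

# Read a single Route (GET /apis/route.openshift.io/v1/namespaces/{namespace}/routes/{name})
curl -k -H "Authorization: Bearer ${TOKEN}" \
  "${API}/apis/route.openshift.io/v1/namespaces/my-project/routes/frontend"

# Equivalent oc commands
oc get routes -n my-project
oc get route frontend -n my-project -o yaml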
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/network_apis/route-route-openshift-io-v1
4.5. Configuration File Devices
4.5. Configuration File Devices Table 4.3, "Device Attributes" shows the attributes that you can set for each individual storage device in the devices section of the multipath.conf configuration file. These attributes are used by DM-Multipath unless they are overwritten by the attributes specified in the multipaths section of the multipath.conf file for paths that contain the device. These attributes override the attributes set in the defaults section of the multipath.conf file. Many devices that support multipathing are included by default in a multipath configuration. The values for the devices that are supported by default are listed in the multipath.conf.defaults file. You probably will not need to modify the values for these devices, but if you do you can overwrite the default values by including an entry in the configuration file for the device that overwrites those values. You can copy the device configuration defaults from the multipath.conf.defaults file for the device and override the values that you want to change. To add a device that is not configured automatically by default to this section of the configuration file, you need to set the vendor and product parameters. You can find these values by looking at /sys/block/ device_name /device/vendor and /sys/block/ device_name /device/model where device_name is the device to be multipathed, as in the following example: The additional parameters to specify depend on your specific device. If the device is active/active, you will usually not need to set additional parameters. You may want to set path_grouping_policy to multibus . Other parameters you may need to set are no_path_retry and rr_min_io , as described in Table 4.3, "Device Attributes" . If the device is active/passive, but it automatically switches paths with I/O to the passive path, you need to change the checker function to one that does not send I/O to the path to test if it is working (otherwise, your device will keep failing over). This almost always means that you set the path_checker to tur ; this works for all SCSI devices that support the Test Unit Ready command, which most do. Table 4.3. Device Attributes Attribute Description vendor Specifies the vendor name of the storage device to which the device attributes apply, for example COMPAQ . product Specifies the product name of the storage device to which the device attributes apply, for example HSV110 (C)COMPAQ . revision Specifies the product revision identifier of the storage device. product_blacklist Specifies a regular expression used to blacklist devices by product. hardware_handler Specifies a module that will be used to perform hardware specific actions when switching path groups or handling I/O errors. Possible values include: 1 emc : hardware handler for EMC storage arrays. 1 alua : hardware handler for SCSI-3 ALUA arrays. 1 hp_sw : hardware handler for Compaq/HP controllers. 1 rdac : hardware handler for the LSI/Engenio RDAC controllers. path_grouping_policy Specifies the default path grouping policy to apply to unspecified multipaths. Possible values include: failover = 1 path per priority group multibus = all valid paths in 1 priority group group_by_serial = 1 priority group per detected serial number group_by_prio = 1 priority group per path priority value group_by_node_name = 1 priority group per target node name getuid_callout Specifies the default program and arguments to call out to obtain a unique path identifier. An absolute path is required. 
path_selector Specifies the default algorithm to use in determining what path to use for the I/O operation. Possible values include: round-robin 0 : Loop through every path in the path group, sending the same amount of I/O to each. queue-length 0 : Send the bunch of I/O down the path with the least number of outstanding I/O requests. service-time 0 : Send the bunch of I/O down the path with the shortest estimated service time, which is determined by dividing the total size of the outstanding I/O to each path by its relative throughput. path_checker Specifies the default method used to determine the state of the paths. Possible values include: readsector0 : Read the first sector of the device. tur : Issue a TEST UNIT READY to the device. emc_clariion : Query the EMC Clariion specific EVPD page 0xC0 to determine the path. hp_sw : Check the path state for HP storage arrays with Active/Standby firmware. rdac : Check the path stat for LSI/Engenio RDAC storage controller. directio : Read the first sector with direct I/O. features The default extra features of multipath devices, using the format: " number_of_features_plus_arguments feature1 ...". Possible values for features include: queue_if_no_path , which is the same as setting no_path_retry to queue . For information on issues that may arise when using this feature, see Section 5.6, "Issues with queue_if_no_path feature" . retain_attached_hw_handler : (Red Hat Enterprise Linux Release 6.4 and later) If this parameter is set to yes and the scsi layer has already attached a hardware handler to the path device, multipath will not force the device to use the hardware_handler specified by the multipath.conf file. If the scsi layer has not attached a hardware handler, multipath will continue to use its configured hardware handler as usual. pg_init_retries n : Retry path group initialization up to n times before failing where 1 <= n <= 50. pg_init_delay_msecs n : Wait n milliseconds between path group initialization retries where 0 <= n <= 60000. prio Specifies the default function to call to obtain a path priority value. For example, the ALUA bits in SPC-3 provide an exploitable prio value. Possible values include: const : Set a priority of 1 to all paths. emc : Generate the path priority for EMC arrays. alua : Generate the path priority based on the SCSI-3 ALUA settings. As of Red Hat Enterprise Linux 6.8, if you specify prio "alua exclusive_pref_bit" in your device configuration, multipath will create a path group that contains only the path with the pref bit set and will give that path group the highest priority. tpg_pref : Generate the path priority based on the SCSI-3 ALUA settings, using the preferred port bit. ontap : Generate the path priority for NetApp arrays. rdac : Generate the path priority for LSI/Engenio RDAC controller. hp_sw : Generate the path priority for Compaq/HP controller in active/standby mode. hds : Generate the path priority for Hitachi HDS Modular storage arrays. failback Manages path group failback. A value of immediate specifies immediate failback to the highest priority path group that contains active paths. A value of manual specifies that there should not be immediate failback but that failback can happen only with operator intervention. A value of followover specifies that automatic failback should be performed when the first path of a path group becomes active. This keeps a node from automatically failing back when another node requested the failover. 
A numeric value greater than zero specifies deferred failback, expressed in seconds. rr_weight If set to priorities , then instead of sending rr_min_io requests to a path before calling path_selector to choose the next path, the number of requests to send is determined by rr_min_io times the path's priority, as determined by the prio function. If set to uniform , all path weights are equal. no_path_retry A numeric value for this attribute specifies the number of times the system should attempt to use a failed path before disabling queuing. A value of fail indicates immediate failure, without queuing. A value of queue indicates that queuing should not stop until the path is fixed. rr_min_io Specifies the number of I/O requests to route to a path before switching to the next path in the current path group. This setting is only for systems running kernels older than 2.6.31. Newer systems should use rr_min_io_rq . The default value is 1000. rr_min_io_rq Specifies the number of I/O requests to route to a path before switching to the next path in the current path group, using request-based device-mapper-multipath. This setting should be used on systems running current kernels. On systems running kernels older than 2.6.31, use rr_min_io . The default value is 1. fast_io_fail_tmo The number of seconds the SCSI layer will wait after a problem has been detected on an FC remote port before failing I/O to devices on that remote port. This value should be smaller than the value of dev_loss_tmo . Setting this to off will disable the timeout. dev_loss_tmo The number of seconds the SCSI layer will wait after a problem has been detected on an FC remote port before removing it from the system. Setting this to infinity will set this to 2147483647 seconds, or 68 years. flush_on_last_del If set to yes , the multipathd daemon will disable queuing when the last path to a device has been deleted. user_friendly_names If set to yes , specifies that the system should use the /etc/multipath/bindings file to assign a persistent and unique alias to the multipath, in the form of mpath n . If set to no , specifies that the system should use the WWID as the alias for the multipath. In either case, what is specified here will be overridden by any device-specific aliases you specify in the multipaths section of the configuration file. The default value is no . retain_attached_hw_handler (Red Hat Enterprise Linux Release 6.4 and later) If this parameter is set to yes and the scsi layer has already attached a hardware handler to the path device, multipath will not force the device to use the hardware_handler specified by the multipath.conf file. If the scsi layer has not attached a hardware handler, multipath will continue to use its configured hardware handler as usual. detect_prio (Red Hat Enterprise Linux Release 6.4 and later) If this is set to yes , multipath will first check if the device supports ALUA, and if so it will automatically assign the device the alua prioritizer. If the device does not support ALUA, it will determine the prioritizer as it always does. delay_watch_checks (Red Hat Enterprise Linux Release 6.7 and later) If set to a value greater than 0, the multipathd daemon will watch paths that have recently become valid for the specified number of checks. If they fail again while they are being watched, the next time they become valid they will not be used until they have stayed up for the number of consecutive checks specified with delay_wait_checks . 
This allows you to keep paths that may be unreliable from immediately being put back into use as soon as they come back online. delay_wait_checks (Red Hat Enterprise Linux Release 6.7 and later) If set to a value greater than 0, when a device that has recently come back online fails again within the number of checks specified with delay_watch_checks , the next time it comes back online it will be marked and delayed, and it will not be used until it has passed the number of checks specified in delay_wait_checks . skip_kpartx (Red Hat Enterprise Linux Release 6.9 and later) If set to yes , kpartx will not automatically create partitions on the device. This allows users to create a multipath device without creating partitions, even if the device has a partition table. max_sectors_kb (Red Hat Enterprise Linux Release 6.9 and later) Sets the max_sectors_kb device queue parameter to the specified value on all underlying paths of a multipath device before the multipath device is first activated. When a multipath device is created, the device inherits the max_sectors_kb value from the path devices. Manually raising this value for the multipath device or lowering this value for the path devices can cause multipath to create I/O operations larger than the path devices allow. Using the max_sectors_kb parameter is an easy way to set these values before a multipath device is created on top of the path devices and prevent invalid-sized I/O operations from being passed. If this parameter is not set by the user, the path devices have it set by their device driver, and the multipath device inherits it from the path devices. all_devs When this parameter is set to yes , all of the options set in this device configuration will override the values for those options in all of the other device configurations, both the ones in the configuration file and the built-in defaults. The following example shows a device entry in the multipath configuration file. The following configuration sets no_path_retry to fail for all of the built-in device configurations.
[ "cat /sys/block/sda/device/vendor WINSYS cat /sys/block/sda/device/model SF2372", "# } # device { # vendor \"COMPAQ \" # product \"MSA1000 \" # path_grouping_policy multibus # path_checker tur # rr_weight priorities # } #}", "devices { device { all_devs yes no_path_retry fail } }" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/dm_multipath/config_file_devices
23.13. Events Configuration
23.13. Events Configuration Using the following sections of domain XML it is possible to override the default actions for various events: <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <on_lockfailure>poweroff</on_lockfailure> Figure 23.23. Events Configuration The following collections of elements allow the actions to be specified when a guest virtual machine operating system triggers a life cycle operation. A common use case is to force a reboot to be treated as a power off when doing the initial operating system installation. This allows the VM to be re-configured for the first post-install boot up. The components of this section of the domain XML are as follows: Table 23.9. Event configuration elements State Description <on_poweroff> Specifies the action that is to be executed when the guest virtual machine requests a power off. Four arguments are possible: destroy - This action terminates the domain completely and releases all resources. restart - This action terminates the domain completely and restarts it with the same configuration. preserve - This action terminates the domain completely, but its resources are preserved to allow for future analysis. rename-restart - This action terminates the domain completely and then restarts it with a new name. <on_reboot> Specifies the action to be executed when the guest virtual machine requests a reboot. Four arguments are possible: destroy - This action terminates the domain completely and releases all resources. restart - This action terminates the domain completely and restarts it with the same configuration. preserve - This action terminates the domain completely, but its resources are preserved to allow for future analysis. rename-restart - This action terminates the domain completely and then restarts it with a new name. <on_crash> Specifies the action that is to be executed when the guest virtual machine crashes. In addition to the four arguments listed below, it supports the following actions: coredump-destroy - The crashed domain's core is dumped, the domain is terminated completely, and all resources are released. coredump-restart - The crashed domain's core is dumped, and the domain is restarted with the same configuration settings. Four arguments are possible: destroy - This action terminates the domain completely and releases all resources. restart - This action terminates the domain completely and restarts it with the same configuration. preserve - This action terminates the domain completely, but its resources are preserved to allow for future analysis. rename-restart - This action terminates the domain completely and then restarts it with a new name. <on_lockfailure> Specifies the action to take when a lock manager loses resource locks. The following actions are recognized by libvirt, although not all of them need to be supported by individual lock managers. When no action is specified, each lock manager will take its default action. The following arguments are possible: poweroff - Forcefully powers off the domain. restart - Restarts the domain to reacquire its locks. pause - Pauses the domain so that it can be manually resumed when lock issues are solved. ignore - Keeps the domain running as if nothing happened.
[ "<on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <on_lockfailure>poweroff</on_lockfailure>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-Manipulating_the_domain_xml-Events_configuration
Chapter 10. Publishing on the catalog
Chapter 10. Publishing on the catalog After submitting your test results through the Red Hat certification portal, your application is scanned for vulnerabilities. When the scanning is complete, you can publish your product on the Red Hat Ecosystem Catalog . A RHOSP application certification is generated if you have performed the following: You ran the required tests successfully. Red Hat reviewed the testing configuration report and found it valid and appropriate for the certification. Perform the following steps to publish your product on the catalog: Procedure Navigate to your Product listing page. Click Publish . Your certified application is now published on the Red Hat Ecosystem Catalog.
null
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openstack_application_and_vnf_workflow_guide/assembly-publishing-certification-catalog_rhosp-vnf-wf-setting-up-test-env
Chapter 31. Using Ansible to manage DNS records in IdM
Chapter 31. Using Ansible to manage DNS records in IdM This chapter describes how to manage DNS records in Identity Management (IdM) using an Ansible playbook. As an IdM administrator, you can add, modify, and delete DNS records in IdM. The chapter contains the following sections: Ensuring the presence of A and AAAA DNS records in IdM using Ansible Ensuring the presence of A and PTR DNS records in IdM using Ansible Ensuring the presence of multiple DNS records in IdM using Ansible Ensuring the presence of multiple CNAME records in IdM using Ansible Ensuring the presence of an SRV record in IdM using Ansible 31.1. DNS records in IdM Identity Management (IdM) supports many different DNS record types. The following four are used most frequently: A This is a basic map for a host name and an IPv4 address. The record name of an A record is a host name, such as www . The IP Address value of an A record is an IPv4 address, such as 192.0.2.1 . For more information about A records, see RFC 1035 . AAAA This is a basic map for a host name and an IPv6 address. The record name of an AAAA record is a host name, such as www . The IP Address value is an IPv6 address, such as 2001:DB8::1111 . For more information about AAAA records, see RFC 3596 . SRV Service (SRV) resource records map service names to the DNS name of the server that is providing that particular service. For example, this record type can map a service like an LDAP directory to the server which manages it. The record name of an SRV record has the format _service . _protocol , such as _ldap._tcp . The configuration options for SRV records include priority, weight, port number, and host name for the target service. For more information about SRV records, see RFC 2782 . PTR A pointer record (PTR) adds a reverse DNS record, which maps an IP address to a domain name. Note All reverse DNS lookups for IPv4 addresses use reverse entries that are defined in the in-addr.arpa. domain. The reverse address, in human-readable form, is the exact reverse of the regular IP address, with the in-addr.arpa. domain appended to it. For example, for the network address 192.0.2.0/24 , the reverse zone is 2.0.192.in-addr.arpa . The record name of a PTR must be in the standard format specified in RFC 1035 , extended in RFC 2317 , and RFC 3596 . The host name value must be a canonical host name of the host for which you want to create the record. Note Reverse zones can also be configured for IPv6 addresses, with zones in the .ip6.arpa. domain. For more information about IPv6 reverse zones, see RFC 3596 . When adding DNS resource records, note that many of the records require different data. For example, a CNAME record requires a host name, while an A record requires an IP address. In the IdM Web UI, the fields in the form for adding a new record are updated automatically to reflect what data is required for the currently selected type of record. 31.2. Common ipa dnsrecord-* options You can use the following options when adding, modifying and deleting the most common DNS resource record types in Identity Management (IdM): A (IPv4) AAAA (IPv6) SRV PTR In Bash , you can define multiple entries by listing the values in a comma-separated list inside curly braces, such as --option={val1,val2,val3} . Table 31.1. General Record Options Option Description --ttl = number Sets the time to live for the record. --structured Parses the raw DNS records and returns them in a structured format. Table 31.2. 
"A" record options Option Description Examples --a-rec = ARECORD Passes a single A record or a list of A records. ipa dnsrecord-add idm.example.com host1 --a-rec=192.168.122.123 Can create a wildcard A record with a given IP address. ipa dnsrecord-add idm.example.com "*" --a-rec=192.168.122.123 [a] --a-ip-address = string Gives the IP address for the record. When creating a record, the option to specify the A record value is --a-rec . However, when modifying an A record, the --a-rec option is used to specify the current value for the A record. The new value is set with the --a-ip-address option. ipa dnsrecord-mod idm.example.com --a-rec 192.168.122.123 --a-ip-address 192.168.122.124 [a] The example creates a wildcard A record with the IP address of 192.0.2.123. Table 31.3. "AAAA" record options Option Description Example --aaaa-rec = AAAARECORD Passes a single AAAA (IPv6) record or a list of AAAA records. ipa dnsrecord-add idm.example.com www --aaaa-rec 2001:db8::1231:5675 --aaaa-ip-address = string Gives the IPv6 address for the record. When creating a record, the option to specify the A record value is --aaaa-rec . However, when modifying an A record, the --aaaa-rec option is used to specify the current value for the A record. The new value is set with the --a-ip-address option. ipa dnsrecord-mod idm.example.com --aaaa-rec 2001:db8::1231:5675 --aaaa-ip-address 2001:db8::1231:5676 Table 31.4. "PTR" record options Option Description Example --ptr-rec = PTRRECORD Passes a single PTR record or a list of PTR records. When adding the reverse DNS record, the zone name used with the ipa dnsrecord-add command is reversed, compared to the usage for adding other DNS records. Typically, the host IP address is the last octet of the IP address in a given network. The first example on the right adds a PTR record for server4.idm.example.com with IPv4 address 192.168.122.4. The second example adds a reverse DNS entry to the 0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa. IPv6 reverse zone for the host server2.example.com with the IP address 2001:DB8::1111 . ipa dnsrecord-add 122.168.192.in-addr.arpa 4 --ptr-rec server4.idm.example.com. USD ipa dnsrecord-add 0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa. 1.1.1.0.0.0.0.0.0.0.0.0.0.0.0 --ptr-rec server2.idm.example.com. --ptr-hostname = string Gives the host name for the record. Table 31.5. "SRV" Record Options Option Description Example --srv-rec = SRVRECORD Passes a single SRV record or a list of SRV records. In the examples on the right, _ldap._tcp defines the service type and the connection protocol for the SRV record. The --srv-rec option defines the priority, weight, port, and target values. The weight values of 51 and 49 in the examples add up to 100 and represent the probability, in percentages, that a particular record is used. # ipa dnsrecord-add idm.example.com _ldap._tcp --srv-rec="0 51 389 server1.idm.example.com." # ipa dnsrecord-add server.idm.example.com _ldap._tcp --srv-rec="1 49 389 server2.idm.example.com." --srv-priority = number Sets the priority of the record. There can be multiple SRV records for a service type. The priority (0 - 65535) sets the rank of the record; the lower the number, the higher the priority. A service has to use the record with the highest priority first. # ipa dnsrecord-mod server.idm.example.com _ldap._tcp --srv-rec="1 49 389 server2.idm.example.com." --srv-priority=0 --srv-weight = number Sets the weight of the record. This helps determine the order of SRV records with the same priority. 
The set weights should add up to 100, representing the probability (in percentages) that a particular record is used. # ipa dnsrecord-mod server.idm.example.com _ldap._tcp --srv-rec="0 49 389 server2.idm.example.com." --srv-weight=60 --srv-port = number Gives the port for the service on the target host. # ipa dnsrecord-mod server.idm.example.com _ldap._tcp --srv-rec="0 60 389 server2.idm.example.com." --srv-port=636 --srv-target = string Gives the domain name of the target host. This can be a single period (.) if the service is not available in the domain. Additional resources Run ipa dnsrecord-add --help . 31.3. Ensuring the presence of A and AAAA DNS records in IdM using Ansible Follow this procedure to use an Ansible playbook to ensure that A and AAAA records for a particular IdM host are present. In the example used in the procedure below, an IdM administrator ensures the presence of A and AAAA records for host1 in the idm.example.com DNS zone. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. The idm.example.com zone exists and is managed by IdM DNS. For more information about adding a primary DNS zone in IdM DNS, see Using Ansible playbooks to manage IdM DNS zones . Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory: Open your inventory file and ensure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the ensure-A-and-AAAA-records-are-present.yml Ansible playbook file. For example: Open the ensure-A-and-AAAA-records-are-present-copy.yml file for editing. Adapt the file by setting the following variables in the ipadnsrecord task section: Set the ipaadmin_password variable to your IdM administrator password. Set the zone_name variable to idm.example.com . In the records variable, set the name variable to host1 , and the a_ip_address variable to 192.168.122.123 . In the records variable, set the name variable to host1 , and the aaaa_ip_address variable to ::1 . This is the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources DNS records in IdM The README-dnsrecord.md file in the /usr/share/doc/ansible-freeipa/ directory Sample Ansible playbooks in the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory 31.4. Ensuring the presence of A and PTR DNS records in IdM using Ansible Follow this procedure to use an Ansible playbook to ensure that an A record for a particular IdM host is present, with a corresponding PTR record. In the example used in the procedure below, an IdM administrator ensures the presence of A and PTR records for host1 with an IP address of 192.168.122.45 in the idm.example.com zone. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. 
The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. The idm.example.com DNS zone exists and is managed by IdM DNS. For more information about adding a primary DNS zone in IdM DNS, see Using Ansible playbooks to manage IdM DNS zones . Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory: Open your inventory file and ensure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the ensure-dnsrecord-with-reverse-is-present.yml Ansible playbook file. For example: Open the ensure-dnsrecord-with-reverse-is-present-copy.yml file for editing. Adapt the file by setting the following variables in the ipadnsrecord task section: Set the ipaadmin_password variable to your IdM administrator password. Set the name variable to host1 . Set the zone_name variable to idm.example.com . Set the ip_address variable to 192.168.122.45 . Set the create_reverse variable to true . This is the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources DNS records in IdM The README-dnsrecord.md file in the /usr/share/doc/ansible-freeipa/ directory Sample Ansible playbooks in the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory 31.5. Ensuring the presence of multiple DNS records in IdM using Ansible Follow this procedure to use an Ansible playbook to ensure that multiple values are associated with a particular IdM DNS record. In the example used in the procedure below, an IdM administrator ensures the presence of multiple A records for host1 in the idm.example.com DNS zone. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. The idm.example.com zone exists and is managed by IdM DNS. For more information about adding a primary DNS zone in IdM DNS, see Using Ansible playbooks to manage IdM DNS zones . Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory: Open your inventory file and ensure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the ensure-presence-multiple-records.yml Ansible playbook file. For example: Open the ensure-presence-multiple-records-copy.yml file for editing. Adapt the file by setting the following variables in the ipadnsrecord task section: Set the ipaadmin_password variable to your IdM administrator password. In the records section, set the name variable to host1 . 
In the records section, set the zone_name variable to idm.example.com . In the records section, set the a_rec variable to 192.168.122.112 and to 192.168.122.122 . Define a second record in the records section: Set the name variable to host1 . Set the zone_name variable to idm.example.com . Set the aaaa_rec variable to ::1 . This is the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources DNS records in IdM The README-dnsrecord.md file in the /usr/share/doc/ansible-freeipa/ directory Sample Ansible playbooks in the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory 31.6. Ensuring the presence of multiple CNAME records in IdM using Ansible A Canonical Name record (CNAME record) is a type of resource record in the Domain Name System (DNS) that maps one domain name, an alias, to another name, the canonical name. You may find CNAME records useful when running multiple services from a single IP address: for example, an FTP service and a web service, each running on a different port. Follow this procedure to use an Ansible playbook to ensure that multiple CNAME records are present in IdM DNS. In the example used in the procedure below, host03 is both an HTTP server and an FTP server. The IdM administrator ensures the presence of the www and ftp CNAME records for the host03 A record in the idm.example.com zone. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. The idm.example.com zone exists and is managed by IdM DNS. For more information about adding a primary DNS zone in IdM DNS, see Using Ansible playbooks to manage IdM DNS zones . The host03 A record exists in the idm.example.com zone. Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory: Open your inventory file and ensure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the ensure-CNAME-record-is-present.yml Ansible playbook file. For example: Open the ensure-CNAME-record-is-present-copy.yml file for editing. Adapt the file by setting the following variables in the ipadnsrecord task section: Optional: Adapt the description provided by the name of the play. Set the ipaadmin_password variable to your IdM administrator password. Set the zone_name variable to idm.example.com . In the records variable section, set the following variables and values: Set the name variable to www . Set the cname_hostname variable to host03 . Set the name variable to ftp . Set the cname_hostname variable to host03 . This is the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources See the README-dnsrecord.md file in the /usr/share/doc/ansible-freeipa/ directory. See sample Ansible playbooks in the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory. 31.7. 
Ensuring the presence of an SRV record in IdM using Ansible A DNS service (SRV) record defines the hostname, port number, transport protocol, priority and weight of a service available in a domain. In Identity Management (IdM), you can use SRV records to locate IdM servers and replicas. Follow this procedure to use an Ansible playbook to ensure that an SRV record is present in IdM DNS. In the example used in the procedure below, an IdM administrator ensures the presence of the _kerberos._udp.idm.example.com SRV record with the value of 10 50 88 idm.example.com . This sets the following values: It sets the priority of the service to 10. It sets the weight of the service to 50. It sets the port to be used by the service to 88. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the IdM administrator password. The idm.example.com zone exists and is managed by IdM DNS. For more information about adding a primary DNS zone in IdM DNS, see Using Ansible playbooks to manage IdM DNS zones . Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory: Open your inventory file and ensure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the ensure-SRV-record-is-present.yml Ansible playbook file. For example: Open the ensure-SRV-record-is-present-copy.yml file for editing. Adapt the file by setting the following variables in the ipadnsrecord task section: Set the ipaadmin_password variable to your IdM administrator password. Set the name variable to _kerberos._udp.idm.example.com . Set the srv_rec variable to '10 50 88 idm.example.com' . Set the zone_name variable to idm.example.com . This the modified Ansible playbook file for the current example: Save the file. Run the playbook: Additional resources DNS records in IdM The README-dnsrecord.md file in the /usr/share/doc/ansible-freeipa/ directory Sample Ansible playbooks in the /usr/share/doc/ansible-freeipa/playbooks/dnsrecord directory
[ "cd /usr/share/doc/ansible-freeipa/playbooks/dnsrecord", "[ipaserver] server.idm.example.com", "cp ensure-A-and-AAAA-records-are-present.yml ensure-A-and-AAAA-records-are-present-copy.yml", "--- - name: Ensure A and AAAA records are present hosts: ipaserver become: true gather_facts: false tasks: # Ensure A and AAAA records are present - name: Ensure that 'host1' has A and AAAA records. ipadnsrecord: ipaadmin_password: \"{{ ipaadmin_password }}\" zone_name: idm.example.com records: - name: host1 a_ip_address: 192.168.122.123 - name: host1 aaaa_ip_address: ::1", "ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-A-and-AAAA-records-are-present-copy.yml", "cd /usr/share/doc/ansible-freeipa/playbooks/dnsrecord", "[ipaserver] server.idm.example.com", "cp ensure-dnsrecord-with-reverse-is-present.yml ensure-dnsrecord-with-reverse-is-present-copy.yml", "--- - name: Ensure DNS Record is present. hosts: ipaserver become: true gather_facts: false tasks: # Ensure that dns record is present - ipadnsrecord: ipaadmin_password: \"{{ ipaadmin_password }}\" name: host1 zone_name: idm.example.com ip_address: 192.168.122.45 create_reverse: true state: present", "ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-dnsrecord-with-reverse-is-present-copy.yml", "cd /usr/share/doc/ansible-freeipa/playbooks/dnsrecord", "[ipaserver] server.idm.example.com", "cp ensure-presence-multiple-records.yml ensure-presence-multiple-records-copy.yml", "--- - name: Test multiple DNS Records are present. hosts: ipaserver become: true gather_facts: false tasks: # Ensure that multiple dns records are present - ipadnsrecord: ipaadmin_password: \"{{ ipaadmin_password }}\" records: - name: host1 zone_name: idm.example.com a_rec: 192.168.122.112 a_rec: 192.168.122.122 - name: host1 zone_name: idm.example.com aaaa_rec: ::1", "ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-presence-multiple-records-copy.yml", "cd /usr/share/doc/ansible-freeipa/playbooks/dnsrecord", "[ipaserver] server.idm.example.com", "cp ensure-CNAME-record-is-present.yml ensure-CNAME-record-is-present-copy.yml", "--- - name: Ensure that 'www.idm.example.com' and 'ftp.idm.example.com' CNAME records point to 'host03.idm.example.com'. hosts: ipaserver become: true gather_facts: false tasks: - ipadnsrecord: ipaadmin_password: \"{{ ipaadmin_password }}\" zone_name: idm.example.com records: - name: www cname_hostname: host03 - name: ftp cname_hostname: host03", "ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-CNAME-record-is-present.yml", "cd /usr/share/doc/ansible-freeipa/playbooks/dnsrecord", "[ipaserver] server.idm.example.com", "cp ensure-SRV-record-is-present.yml ensure-SRV-record-is-present-copy.yml", "--- - name: Test multiple DNS Records are present. hosts: ipaserver become: true gather_facts: false tasks: # Ensure a SRV record is present - ipadnsrecord: ipaadmin_password: \"{{ ipaadmin_password }}\" name: _kerberos._udp.idm.example.com srv_rec: '10 50 88 idm.example.com' zone_name: idm.example.com state: present", "ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-SRV-record-is-present.yml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_ansible_to_install_and_manage_identity_management/using-ansible-to-manage-dns-records-in-idm_using-ansible-to-install-and-manage-identity-management
Chapter 2. Package Lists - Supplementary repository
Chapter 2. Package Lists - Supplementary repository This chapter lists packages available in the Supplementary repository. The Supplementary repository includes proprietary-licensed packages that are not included in the open source Red Hat Enterprise Linux repositories. Software packages in the Supplementary repository are not supported, nor are the ABIs guaranteed. For more information, see the Scope of Coverage Details document. 2.1. Supplementary - Server Variant The following table lists all the packages in the Supplementary repository, Server variant. For more information about support scope, see the Scope of Coverage Details document. Package License dptfxtract Intel java-1.7.1-ibm IBM Binary Code License java-1.7.1-ibm-demo IBM Binary Code License java-1.7.1-ibm-devel IBM Binary Code License java-1.7.1-ibm-jdbc IBM Binary Code License java-1.7.1-ibm-plugin IBM Binary Code License java-1.7.1-ibm-src IBM Binary Code License java-1.8.0-ibm IBM Binary Code License java-1.8.0-ibm-demo IBM Binary Code License java-1.8.0-ibm-devel IBM Binary Code License java-1.8.0-ibm-jdbc IBM Binary Code License java-1.8.0-ibm-plugin IBM Binary Code License java-1.8.0-ibm-src IBM Binary Code License libdfp LGPLv2.1 libdfp-devel LGPLv2.1 virtio-win Red Hat Proprietary and GPLv2 2.2. Supplementary - Workstation Variant The following table lists all the packages in the Supplementary repository, Workstation variant. For more information about support scope, see the Scope of Coverage Details document. Package License dptfxtract Intel java-1.7.1-ibm IBM Binary Code License java-1.7.1-ibm-demo IBM Binary Code License java-1.7.1-ibm-devel IBM Binary Code License java-1.7.1-ibm-jdbc IBM Binary Code License java-1.7.1-ibm-plugin IBM Binary Code License java-1.7.1-ibm-src IBM Binary Code License java-1.8.0-ibm IBM Binary Code License java-1.8.0-ibm-demo IBM Binary Code License java-1.8.0-ibm-devel IBM Binary Code License java-1.8.0-ibm-jdbc IBM Binary Code License java-1.8.0-ibm-plugin IBM Binary Code License java-1.8.0-ibm-src IBM Binary Code License virtio-win Red Hat Proprietary and GPLv2 2.3. Supplementary - Client Variant The following table lists all the packages in the Supplementary repository, Client variant. For more information about support scope, see the Scope of Coverage Details document. Package License dptfxtract Intel java-1.7.1-ibm IBM Binary Code License java-1.7.1-ibm-demo IBM Binary Code License java-1.7.1-ibm-devel IBM Binary Code License java-1.7.1-ibm-jdbc IBM Binary Code License java-1.7.1-ibm-plugin IBM Binary Code License java-1.7.1-ibm-src IBM Binary Code License java-1.8.0-ibm IBM Binary Code License java-1.8.0-ibm-demo IBM Binary Code License java-1.8.0-ibm-devel IBM Binary Code License java-1.8.0-ibm-jdbc IBM Binary Code License java-1.8.0-ibm-plugin IBM Binary Code License java-1.8.0-ibm-src IBM Binary Code License virtio-win Red Hat Proprietary and GPLv2 2.4. Supplementary - ComputeNode Variant The following table lists all the packages in the Supplementary repository, ComputeNode variant. For more information about support scope, see the Scope of Coverage Details document. Package License dptfxtract Intel java-1.7.1-ibm IBM Binary Code License java-1.7.1-ibm-demo IBM Binary Code License java-1.7.1-ibm-devel IBM Binary Code License java-1.7.1-ibm-src IBM Binary Code License java-1.8.0-ibm IBM Binary Code License java-1.8.0-ibm-demo IBM Binary Code License java-1.8.0-ibm-devel IBM Binary Code License java-1.8.0-ibm-src IBM Binary Code License
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/package_manifest/package_lists_supplementary_repository
Chapter 3. Installing and configuring the Datadog agent for Ceph
Chapter 3. Installing and configuring the Datadog agent for Ceph Install the Datadog agent for Ceph and configure it to report back the Ceph data to the Datadog App. Prerequisites Root-level access to the Ceph monitor node. Appropriate Ceph key providing access to the Red Hat Ceph Storage cluster. Internet access. Procedure Log in to the Datadog App . The user interface will present navigation on the left side of the screen. Click Integrations . To install the agent from the command line, click on the Agent tab at the top of the screen. Open a command line and enter the one-step command line agent installation. Example Note Copy the example from the Datadog user interface, as the key differs from the example above and with each user account.
[ "DD_API_KEY= KEY-STRING bash -c \"USD(curl -L https://raw.githubusercontent.com/DataDog/dd-agent/master/packaging/datadog-agent/source/install_agent.sh)\"" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/monitoring_ceph_with_datadog_guide/installing-and-configuring-the-datadog-agent-for-ceph_datadog
Chapter 2. Package Namespace Change for JBoss EAP 8.0
Chapter 2. Package Namespace Change for JBoss EAP 8.0 This section provides additional information for the package namespace changes in JBoss EAP 8.0. JBoss EAP 8.0 provides full support for Jakarta EE 10 and many other implementations of the Jakarta EE 10 APIs. An important change supported by Jakarta EE 10 for JBoss EAP 8.0 is the package namespace change. 2.1. javax to jakarta Namespace change A key difference between Jakarta EE 8 and EE 10 is the renaming of the EE API Java packages from javax.* to jakarta.* . This follows the move of Java EE to the Eclipse Foundation and the establishment of Jakarta EE. Adapting to this namespace change is the biggest task of migrating an application from JBoss EAP 7 to JBoss EAP 8. To migrate applications to Jakarta EE 10, you must complete the following steps: Update any import statements or other source code uses of EE API classes from the javax package to the jakarta package. Update the names of any EE-specified system properties or other configuration properties that begin with javax to begin with jakarta . For any application-provided implementations of EE interfaces or abstract classes that are bootstrapped using the java.util.ServiceLoader mechanism, change the name of the resource that identifies the implementation class from META-INF/services/javax.[rest_of_name] to META-INF/services/jakarta.[rest_of_name] . Note The Red Hat Migration Toolkit can assist in updating the namespaces in the application source code. For more information, see How to use Red Hat Migration Toolkit for Auto-Migration of an Application to the Jakarta EE 10 Namespace . In cases where source code migration is not an option, the Open Source Eclipse Transformer project provides bytecode transformation tooling to transform existing Java archives from the javax namespace to the jakarta namespace. Note This change does not affect javax packages that are part of Java SE. Additional resources For more information, see The javax to jakarta Package Namespace Change .
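As a concrete illustration of the source-level change, a servlet written for Jakarta EE 8 only needs its javax.* imports renamed to jakarta.* ; the class below is a made-up example, not code shipped with JBoss EAP, and Java SE packages such as java.io are unaffected.

// Before (JBoss EAP 7 / Jakarta EE 8):
//   import javax.servlet.annotation.WebServlet;
//   import javax.servlet.http.HttpServlet;

// After (JBoss EAP 8.0 / Jakarta EE 10): the same types under the jakarta.* namespace
import jakarta.servlet.ServletException;
import jakarta.servlet.annotation.WebServlet;
import jakarta.servlet.http.HttpServlet;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import java.io.IOException;

@WebServlet("/hello")
public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.getWriter().println("Hello from Jakarta EE 10");
    }
}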
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/using_jboss_eap_on_openshift_container_platform/package-namespace-change-for-jboss-eap-8-0_default
Chapter 16. Distributed tracing
Chapter 16. Distributed tracing Distributed tracing allows you to track the progress of transactions between applications in a distributed system. In a microservices architecture, tracing tracks the progress of transactions between services. Trace data is useful for monitoring application performance and investigating issues with target systems and end-user applications. In AMQ Streams on Red Hat Enterprise Linux, tracing facilitates the end-to-end tracking of messages: from source systems to Kafka, and then from Kafka to target systems and applications. Tracing complements the available JMX metrics . How AMQ Streams supports tracing Support for tracing is provided for the following clients and components. Kafka clients: Kafka producers and consumers Kafka Streams API applications Kafka components: Kafka Connect Kafka Bridge MirrorMaker MirrorMaker 2.0 To enable tracing, you perform four high-level tasks: Enable a Jaeger tracer. Enable the Interceptors: For Kafka clients, you instrument your application code using the OpenTracing Apache Kafka Client Instrumentation library (included with AMQ Streams). For Kafka components, you set configuration properties for each component. Set tracing environment variables . Deploy the client or component. When instrumented, clients generate trace data. For example, when producing messages or writing offsets to the log. Traces are sampled according to a sampling strategy and then visualized in the Jaeger user interface. Note Tracing is not supported for Kafka brokers. Setting up tracing for applications and systems beyond AMQ Streams is outside the scope of this chapter. To learn more about this subject, search for "inject and extract" in the OpenTracing documentation . Outline of procedures To set up tracing for AMQ Streams, follow these procedures in order: Set up tracing for clients: Initialize a Jaeger tracer for Kafka clients Instrument producers and consumers for tracing Instrument Kafka Streams applications for tracing Set up tracing for MirrorMaker, MirrorMaker 2.0, and Kafka Connect: Enable tracing for MirrorMaker Enable tracing for MirrorMaker 2.0 Enable tracing for Kafka Connect Enable tracing for the Kafka Bridge Prerequisites The Jaeger backend components are deployed to the host operating system. For deployment instructions, see the Jaeger deployment documentation . 16.1. Overview of OpenTracing and Jaeger AMQ Streams uses the OpenTracing and Jaeger projects. OpenTracing is an API specification that is independent from the tracing or monitoring system. The OpenTracing APIs are used to instrument application code Instrumented applications generate traces for individual transactions across the distributed system Traces are composed of spans that define specific units of work over time Jaeger is a tracing system for microservices-based distributed systems. Jaeger implements the OpenTracing APIs and provides client libraries for instrumentation The Jaeger user interface allows you to query, filter, and analyze trace data Additional resources OpenTracing Jaeger 16.2. Setting up tracing for Kafka clients Initialize a Jaeger tracer to instrument your client applications for distributed tracing. 16.2.1. Initializing a Jaeger tracer for Kafka clients Configure and initialize a Jaeger tracer using a set of tracing environment variables . 
Procedure In each client application: Add Maven dependencies for Jaeger to the pom.xml file for the client application: <dependency> <groupId>io.jaegertracing</groupId> <artifactId>jaeger-client</artifactId> <version>1.1.0.redhat-00002</version> </dependency> Define the configuration of the Jaeger tracer using the tracing environment variables . Create the Jaeger tracer from the environment variables that you defined in step two: Tracer tracer = Configuration.fromEnv().getTracer(); Note For alternative ways to initialize a Jaeger tracer, see the Java OpenTracing library documentation. Register the Jaeger tracer as a global tracer: GlobalTracer.register(tracer); A Jaeger tracer is now initialized for the client application to use. 16.2.2. Instrumenting producers and consumers for tracing Use a Decorator pattern or Interceptors to instrument your Java producer and consumer application code for tracing. Procedure In the application code of each producer and consumer application: Add a Maven dependency for OpenTracing to the producer or consumer's pom.xml file. <dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-kafka-client</artifactId> <version>0.1.15.redhat-00001</version> </dependency> Instrument your client application code using either a Decorator pattern or Interceptors. To use a Decorator pattern: // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Create an instance of the TracingKafkaProducer: TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer); // Send: tracingProducer.send(...); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Create an instance of the TracingKafkaConsumer: TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer); // Subscribe: tracingConsumer.subscribe(Collections.singletonList("messages")); // Get messages: ConsumerRecords<Integer, String> records = tracingConsumer.poll(1000); // Retrieve SpanContext from polled record (consumer side): ConsumerRecord<Integer, String> record = ... SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer); To use Interceptors: // Register the tracer with GlobalTracer: GlobalTracer.register(tracer); // Add the TracingProducerInterceptor to the sender properties: senderProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Send: producer.send(...); // Add the TracingConsumerInterceptor to the consumer properties: consumerProps.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Subscribe: consumer.subscribe(Collections.singletonList("messages")); // Get messages: ConsumerRecords<Integer, String> records = consumer.poll(1000); // Retrieve the SpanContext from a polled message (consumer side): ConsumerRecord<Integer, String> record = ... SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer); Custom span names in a Decorator pattern A span is a logical unit of work in Jaeger, with an operation name, start time, and duration. 
To use a Decorator pattern to instrument your producer and consumer applications, define custom span names by passing a BiFunction object as an additional argument when creating the TracingKafkaProducer and TracingKafkaConsumer objects. The OpenTracing Apache Kafka Client Instrumentation library includes several built-in span names. Example: Using custom span names to instrument client application code in a Decorator pattern // Create a BiFunction for the KafkaProducer that operates on (String operationName, ProducerRecord consumerRecord) and returns a String to be used as the name: BiFunction<String, ProducerRecord, String> producerSpanNameProvider = (operationName, producerRecord) -> "CUSTOM_PRODUCER_NAME"; // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Create an instance of the TracingKafkaProducer TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer, producerSpanNameProvider); // Spans created by the tracingProducer will now have "CUSTOM_PRODUCER_NAME" as the span name. // Create a BiFunction for the KafkaConsumer that operates on (String operationName, ConsumerRecord consumerRecord) and returns a String to be used as the name: BiFunction<String, ConsumerRecord, String> consumerSpanNameProvider = (operationName, consumerRecord) -> operationName.toUpperCase(); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Create an instance of the TracingKafkaConsumer, passing in the consumerSpanNameProvider BiFunction: TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer, consumerSpanNameProvider); // Spans created by the tracingConsumer will have the operation name as the span name, in upper-case. // "receive" -> "RECEIVE" Built-in span names When defining custom span names, you can use the following BiFunctions in the ClientSpanNameProvider class. If no spanNameProvider is specified, CONSUMER_OPERATION_NAME and PRODUCER_OPERATION_NAME are used. Table 16.1. BiFunctions to define custom span names BiFunction Description CONSUMER_OPERATION_NAME, PRODUCER_OPERATION_NAME Returns the operationName as the span name: "receive" for consumers and "send" for producers. CONSUMER_PREFIXED_OPERATION_NAME(String prefix), PRODUCER_PREFIXED_OPERATION_NAME(String prefix) Returns a String concatenation of prefix and operationName . CONSUMER_TOPIC, PRODUCER_TOPIC Returns the name of the topic that the message was sent to or retrieved from in the format (record.topic()) . PREFIXED_CONSUMER_TOPIC(String prefix), PREFIXED_PRODUCER_TOPIC(String prefix) Returns a String concatenation of prefix and the topic name in the format (record.topic()) . CONSUMER_OPERATION_NAME_TOPIC, PRODUCER_OPERATION_NAME_TOPIC Returns the operation name and the topic name: "operationName - record.topic()" . CONSUMER_PREFIXED_OPERATION_NAME_TOPIC(String prefix), PRODUCER_PREFIXED_OPERATION_NAME_TOPIC(String prefix) Returns a String concatenation of prefix and "operationName - record.topic()" . 16.2.3. Instrumenting Kafka Streams applications for tracing Instrument Kafka Streams applications for distributed tracing using a supplier interface. This enables the Interceptors in the application. Procedure In each Kafka Streams application: Add the opentracing-kafka-streams dependency to the Kafka Streams application's pom.xml file. 
<dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-kafka-streams</artifactId> <version>0.1.15.redhat-00001</version> </dependency> Create an instance of the TracingKafkaClientSupplier supplier interface: KafkaClientSupplier supplier = new TracingKafkaClientSupplier(tracer); Provide the supplier interface to KafkaStreams : KafkaStreams streams = new KafkaStreams(builder.build(), new StreamsConfig(config), supplier); streams.start(); 16.3. Setting up tracing for MirrorMaker and Kafka Connect This section describes how to configure MirrorMaker, MirrorMaker 2.0, and Kafka Connect for distributed tracing. You must enable a Jaeger tracer for each component. 16.3.1. Enabling tracing for MirrorMaker Enable distributed tracing for MirrorMaker by passing the Interceptor properties as consumer and producer configuration parameters. Messages are traced from the source cluster to the target cluster. The trace data records messages entering and leaving the MirrorMaker component. Procedure Configure and enable a Jaeger tracer. Edit the /opt/kafka/config/consumer.properties file. Add the following Interceptor property: consumer.interceptor.classes=io.opentracing.contrib.kafka.TracingConsumerInterceptor Edit the /opt/kafka/config/producer.properties file. Add the following Interceptor property: producer.interceptor.classes=io.opentracing.contrib.kafka.TracingProducerInterceptor Start MirrorMaker with the consumer and producer configuration files as parameters: su - kafka /opt/kafka/bin/kafka-mirror-maker.sh --consumer.config /opt/kafka/config/consumer.properties --producer.config /opt/kafka/config/producer.properties --num.streams=2 16.3.2. Enabling tracing for MirrorMaker 2.0 Enable distributed tracing for MirrorMaker 2.0 by defining the Interceptor properties in the MirrorMaker 2.0 properties file. Messages are traced between Kafka clusters. The trace data records messages entering and leaving the MirrorMaker 2.0 component. Procedure Configure and enable a Jaeger tracer. Edit the MirrorMaker 2.0 configuration properties file, ./config/connect-mirror-maker.properties , and add the following properties: header.converter=org.apache.kafka.connect.converters.ByteArrayConverter 1 consumer.interceptor.classes=io.opentracing.contrib.kafka.TracingConsumerInterceptor 2 producer.interceptor.classes=io.opentracing.contrib.kafka.TracingProducerInterceptor 1 Prevents Kafka Connect from converting message headers (containing trace IDs) to base64 encoding. This ensures that messages are the same in both the source and the target clusters. 2 Enables the Interceptors for MirrorMaker 2.0. Start MirrorMaker 2.0 using the instructions in Section 10.4, "Synchronizing data between Kafka clusters using MirrorMaker 2.0" . Additional resources Chapter 10, Using AMQ Streams with MirrorMaker 2.0 16.3.3. Enabling tracing for Kafka Connect Enable distributed tracing for Kafka Connect using configuration properties. Only messages produced and consumed by Kafka Connect itself are traced. To trace messages sent between Kafka Connect and external systems, you must configure tracing in the connectors for those systems. Procedure Configure and enable a Jaeger tracer. Edit the relevant Kafka Connect configuration file. If you are running Kafka Connect in standalone mode, edit the /opt/kafka/config/connect-standalone.properties file. If you are running Kafka Connect in distributed mode, edit the /opt/kafka/config/connect-distributed.properties file. 
Add the following properties to the configuration file: producer.interceptor.classes=io.opentracing.contrib.kafka.TracingProducerInterceptor consumer.interceptor.classes=io.opentracing.contrib.kafka.TracingConsumerInterceptor Save the configuration file. Set tracing environment variables and then run Kafka Connect in standalone or distributed mode. The Interceptors in Kafka Connect's internal consumers and producers are now enabled. Additional resources Section 16.5, "Environment variables for tracing" Section 9.1.3, "Running Kafka Connect in standalone mode" Section 9.2.3, "Running distributed Kafka Connect" 16.4. Enabling tracing for the Kafka Bridge Enable distributed tracing for the Kafka Bridge by editing the Kafka Bridge configuration file. You can then deploy a Kafka Bridge instance that is configured for distributed tracing to the host operating system. Traces are generated when: The Kafka Bridge sends messages to HTTP clients and consumes messages from HTTP clients HTTP clients send HTTP requests to send and receive messages through the Kafka Bridge To have end-to-end tracing, you must configure tracing in your HTTP clients. Procedure Edit the config/application.properties file in the Kafka Bridge installation directory. Remove the code comments from the following line: bridge.tracing=jaeger Save the configuration file. Run the bin/kafka_bridge_run.sh script using the configuration properties as a parameter: cd kafka-bridge-0.xy.x.redhat-0000x ./bin/kafka_bridge_run.sh --config-file=config/application.properties The Interceptors in the Kafka Bridge's internal consumers and producers are now enabled. Additional resources Section 13.1.6, "Configuring Kafka Bridge properties" 16.5. Environment variables for tracing Use these environment variables when configuring a Jaeger tracer for Kafka clients and components. Note The tracing environment variables are part of the Jaeger project and are subject to change. For the latest environment variables, see the Jaeger documentation . Table 16.2. Jaeger tracer environment variables Property Required Description JAEGER_SERVICE_NAME Yes The name of the Jaeger tracer service. JAEGER_AGENT_HOST No The hostname for communicating with the jaeger-agent through the User Datagram Protocol (UDP). JAEGER_AGENT_PORT No The port used for communicating with the jaeger-agent through UDP. JAEGER_ENDPOINT No The traces endpoint. Only define this variable if the client application will bypass the jaeger-agent and connect directly to the jaeger-collector . JAEGER_AUTH_TOKEN No The authentication token to send to the endpoint as a bearer token. JAEGER_USER No The username to send to the endpoint if using basic authentication. JAEGER_PASSWORD No The password to send to the endpoint if using basic authentication. JAEGER_PROPAGATION No A comma-separated list of formats to use for propagating the trace context. Defaults to the standard Jaeger format. Valid values are jaeger and b3 . JAEGER_REPORTER_LOG_SPANS No Indicates whether the reporter should also log the spans. JAEGER_REPORTER_MAX_QUEUE_SIZE No The reporter's maximum queue size. JAEGER_REPORTER_FLUSH_INTERVAL No The reporter's flush interval, in ms. Defines how frequently the Jaeger reporter flushes span batches. JAEGER_SAMPLER_TYPE No The sampling strategy to use for client traces: Constant Probabilistic Rate Limiting Remote (the default) To sample all traces, use the Constant sampling strategy with a parameter of 1. For more information, see the Jaeger documentation . 
JAEGER_SAMPLER_PARAM No The sampler parameter (number). JAEGER_SAMPLER_MANAGER_HOST_PORT No The hostname and port to use if a Remote sampling strategy is selected. JAEGER_TAGS No A comma-separated list of tracer-level tags that are added to all reported spans. The value can also refer to an environment variable using the format ${envVarName:default}. :default is optional and identifies a value to use if the environment variable cannot be found.
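As an illustrative sketch of how these variables are applied in practice, a Kafka Connect worker in standalone mode might be started with the tracer configuration set in its environment. The service name and agent host are example values, and my-connector.properties is a placeholder for your own connector configuration file:
su - kafka
# Example tracer configuration; adjust the values for your environment
export JAEGER_SERVICE_NAME=kafka-connect
export JAEGER_AGENT_HOST=localhost
# Start the standalone worker with the properties file edited in Section 16.3.3
/opt/kafka/bin/connect-standalone.sh /opt/kafka/config/connect-standalone.properties my-connector.properties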
[ "<dependency> <groupId>io.jaegertracing</groupId> <artifactId>jaeger-client</artifactId> <version>1.1.0.redhat-00002</version> </dependency>", "Tracer tracer = Configuration.fromEnv().getTracer();", "GlobalTracer.register(tracer);", "<dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-kafka-client</artifactId> <version>0.1.15.redhat-00001</version> </dependency>", "// Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Create an instance of the TracingKafkaProducer: TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer); // Send: tracingProducer.send(...); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Create an instance of the TracingKafkaConsumer: TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer); // Subscribe: tracingConsumer.subscribe(Collections.singletonList(\"messages\")); // Get messages: ConsumerRecords<Integer, String> records = tracingConsumer.poll(1000); // Retrieve SpanContext from polled record (consumer side): ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);", "// Register the tracer with GlobalTracer: GlobalTracer.register(tracer); // Add the TracingProducerInterceptor to the sender properties: senderProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Send: producer.send(...); // Add the TracingConsumerInterceptor to the consumer properties: consumerProps.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Subscribe: consumer.subscribe(Collections.singletonList(\"messages\")); // Get messages: ConsumerRecords<Integer, String> records = consumer.poll(1000); // Retrieve the SpanContext from a polled message (consumer side): ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);", "// Create a BiFunction for the KafkaProducer that operates on (String operationName, ProducerRecord consumerRecord) and returns a String to be used as the name: BiFunction<String, ProducerRecord, String> producerSpanNameProvider = (operationName, producerRecord) -> \"CUSTOM_PRODUCER_NAME\"; // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Create an instance of the TracingKafkaProducer TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer, producerSpanNameProvider); // Spans created by the tracingProducer will now have \"CUSTOM_PRODUCER_NAME\" as the span name. 
// Create a BiFunction for the KafkaConsumer that operates on (String operationName, ConsumerRecord consumerRecord) and returns a String to be used as the name: BiFunction<String, ConsumerRecord, String> consumerSpanNameProvider = (operationName, consumerRecord) -> operationName.toUpperCase(); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Create an instance of the TracingKafkaConsumer, passing in the consumerSpanNameProvider BiFunction: TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer, consumerSpanNameProvider); // Spans created by the tracingConsumer will have the operation name as the span name, in upper-case. // \"receive\" -> \"RECEIVE\"", "<dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-kafka-streams</artifactId> <version>0.1.15.redhat-00001</version> </dependency>", "KafkaClientSupplier supplier = new TracingKafkaClientSupplier(tracer);", "KafkaStreams streams = new KafkaStreams(builder.build(), new StreamsConfig(config), supplier); streams.start();", "consumer.interceptor.classes=io.opentracing.contrib.kafka.TracingConsumerInterceptor", "producer.interceptor.classes=io.opentracing.contrib.kafka.TracingProducerInterceptor", "su - kafka /opt/kafka/bin/kafka-mirror-maker.sh --consumer.config /opt/kafka/config/consumer.properties --producer.config /opt/kafka/config/producer.properties --num.streams=2", "header.converter=org.apache.kafka.connect.converters.ByteArrayConverter 1 consumer.interceptor.classes=io.opentracing.contrib.kafka.TracingConsumerInterceptor 2 producer.interceptor.classes=io.opentracing.contrib.kafka.TracingProducerInterceptor", "producer.interceptor.classes=io.opentracing.contrib.kafka.TracingProducerInterceptor consumer.interceptor.classes=io.opentracing.contrib.kafka.TracingConsumerInterceptor", "bridge.tracing=jaeger", "cd kafka-bridge-0.xy.x.redhat-0000x ./bin/kafka_bridge_run.sh --config-file=config/application.properties" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/using_amq_streams_on_rhel/assembly-distributed-tracing-str
Chapter 2. APIServer [config.openshift.io/v1]
Chapter 2. APIServer [config.openshift.io/v1] Description APIServer holds configuration (like serving certificates, client CA and CORS domains) shared by all API servers in the system, among them especially kube-apiserver and openshift-apiserver. The canonical name of an instance is 'cluster'. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 2.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description additionalCORSAllowedOrigins array (string) additionalCORSAllowedOrigins lists additional, user-defined regular expressions describing hosts for which the API server allows access using the CORS headers. This may be needed to access the API and the integrated OAuth server from JavaScript applications. The values are regular expressions that correspond to the Golang regular expression language. audit object audit specifies the settings for audit configuration to be applied to all OpenShift-provided API servers in the cluster. clientCA object clientCA references a ConfigMap containing a certificate bundle for the signers that will be recognized for incoming client certificates in addition to the operator managed signers. If this is empty, then only operator managed signers are valid. You usually only have to set this if you have your own PKI you wish to honor client certificates from. The ConfigMap must exist in the openshift-config namespace and contain the following required fields: - ConfigMap.Data["ca-bundle.crt"] - CA bundle. encryption object encryption allows the configuration of encryption of resources at the datastore layer. servingCerts object servingCert is the TLS cert info for serving secure traffic. If not specified, operator managed certificates will be used for serving secure traffic. tlsSecurityProfile object tlsSecurityProfile specifies settings for TLS connections for externally exposed servers. If unset, a default (which may change between releases) is chosen. Note that only Old, Intermediate and Custom profiles are currently supported, and the maximum available MinTLSVersions is VersionTLS12. 2.1.2. .spec.audit Description audit specifies the settings for audit configuration to be applied to all OpenShift-provided API servers in the cluster. Type object Property Type Description customRules array customRules specify profiles per group. These profile take precedence over the top-level profile field if they apply. 
They are evaluation from top to bottom and the first one that matches, applies. customRules[] object AuditCustomRule describes a custom rule for an audit profile that takes precedence over the top-level profile. profile string profile specifies the name of the desired top-level audit profile to be applied to all requests sent to any of the OpenShift-provided API servers in the cluster (kube-apiserver, openshift-apiserver and oauth-apiserver), with the exception of those requests that match one or more of the customRules. The following profiles are provided: - Default: default policy which means MetaData level logging with the exception of events (not logged at all), oauthaccesstokens and oauthauthorizetokens (both logged at RequestBody level). - WriteRequestBodies: like 'Default', but logs request and response HTTP payloads for write requests (create, update, patch). - AllRequestBodies: like 'WriteRequestBodies', but also logs request and response HTTP payloads for read requests (get, list). - None: no requests are logged at all, not even oauthaccesstokens and oauthauthorizetokens. Warning: It is not recommended to disable audit logging by using the None profile unless you are fully aware of the risks of not logging data that can be beneficial when troubleshooting issues. If you disable audit logging and a support situation arises, you might need to enable audit logging and reproduce the issue in order to troubleshoot properly. If unset, the 'Default' profile is used as the default. 2.1.3. .spec.audit.customRules Description customRules specify profiles per group. These profile take precedence over the top-level profile field if they apply. They are evaluation from top to bottom and the first one that matches, applies. Type array 2.1.4. .spec.audit.customRules[] Description AuditCustomRule describes a custom rule for an audit profile that takes precedence over the top-level profile. Type object Required group profile Property Type Description group string group is a name of group a request user must be member of in order to this profile to apply. profile string profile specifies the name of the desired audit policy configuration to be deployed to all OpenShift-provided API servers in the cluster. The following profiles are provided: - Default: the existing default policy. - WriteRequestBodies: like 'Default', but logs request and response HTTP payloads for write requests (create, update, patch). - AllRequestBodies: like 'WriteRequestBodies', but also logs request and response HTTP payloads for read requests (get, list). - None: no requests are logged at all, not even oauthaccesstokens and oauthauthorizetokens. If unset, the 'Default' profile is used as the default. 2.1.5. .spec.clientCA Description clientCA references a ConfigMap containing a certificate bundle for the signers that will be recognized for incoming client certificates in addition to the operator managed signers. If this is empty, then only operator managed signers are valid. You usually only have to set this if you have your own PKI you wish to honor client certificates from. The ConfigMap must exist in the openshift-config namespace and contain the following required fields: - ConfigMap.Data["ca-bundle.crt"] - CA bundle. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 2.1.6. .spec.encryption Description encryption allows the configuration of encryption of resources at the datastore layer. 
Type object Property Type Description type string type defines what encryption type should be used to encrypt resources at the datastore layer. When this field is unset (i.e. when it is set to the empty string), identity is implied. The behavior of unset can and will change over time. Even if encryption is enabled by default, the meaning of unset may change to a different encryption type based on changes in best practices. When encryption is enabled, all sensitive resources shipped with the platform are encrypted. This list of sensitive resources can and will change over time. The current authoritative list is: 1. secrets 2. configmaps 3. routes.route.openshift.io 4. oauthaccesstokens.oauth.openshift.io 5. oauthauthorizetokens.oauth.openshift.io 2.1.7. .spec.servingCerts Description servingCert is the TLS cert info for serving secure traffic. If not specified, operator managed certificates will be used for serving secure traffic. Type object Property Type Description namedCertificates array namedCertificates references secrets containing the TLS cert info for serving secure traffic to specific hostnames. If no named certificates are provided, or no named certificates match the server name as understood by a client, the defaultServingCertificate will be used. namedCertificates[] object APIServerNamedServingCert maps a server DNS name, as understood by a client, to a certificate. 2.1.8. .spec.servingCerts.namedCertificates Description namedCertificates references secrets containing the TLS cert info for serving secure traffic to specific hostnames. If no named certificates are provided, or no named certificates match the server name as understood by a client, the defaultServingCertificate will be used. Type array 2.1.9. .spec.servingCerts.namedCertificates[] Description APIServerNamedServingCert maps a server DNS name, as understood by a client, to a certificate. Type object Property Type Description names array (string) names is a optional list of explicit DNS names (leading wildcards allowed) that should use this certificate to serve secure traffic. If no names are provided, the implicit names will be extracted from the certificates. Exact names trump over wildcard names. Explicit names defined here trump over extracted implicit names. servingCertificate object servingCertificate references a kubernetes.io/tls type secret containing the TLS cert info for serving secure traffic. The secret must exist in the openshift-config namespace and contain the following required fields: - Secret.Data["tls.key"] - TLS private key. - Secret.Data["tls.crt"] - TLS certificate. 2.1.10. .spec.servingCerts.namedCertificates[].servingCertificate Description servingCertificate references a kubernetes.io/tls type secret containing the TLS cert info for serving secure traffic. The secret must exist in the openshift-config namespace and contain the following required fields: - Secret.Data["tls.key"] - TLS private key. - Secret.Data["tls.crt"] - TLS certificate. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 2.1.11. .spec.tlsSecurityProfile Description tlsSecurityProfile specifies settings for TLS connections for externally exposed servers. If unset, a default (which may change between releases) is chosen. Note that only Old, Intermediate and Custom profiles are currently supported, and the maximum available MinTLSVersions is VersionTLS12. Type object Property Type Description custom `` custom is a user-defined TLS security profile. 
Be extremely careful using a custom profile as invalid configurations can be catastrophic. An example custom profile looks like this: ciphers: - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: TLSv1.1 intermediate `` intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 minTLSVersion: TLSv1.2 modern `` modern is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 minTLSVersion: TLSv1.3 NOTE: Currently unsupported. old `` old is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Old_backward_compatibility and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 - DHE-RSA-CHACHA20-POLY1305 - ECDHE-ECDSA-AES128-SHA256 - ECDHE-RSA-AES128-SHA256 - ECDHE-ECDSA-AES128-SHA - ECDHE-RSA-AES128-SHA - ECDHE-ECDSA-AES256-SHA384 - ECDHE-RSA-AES256-SHA384 - ECDHE-ECDSA-AES256-SHA - ECDHE-RSA-AES256-SHA - DHE-RSA-AES128-SHA256 - DHE-RSA-AES256-SHA256 - AES128-GCM-SHA256 - AES256-GCM-SHA384 - AES128-SHA256 - AES256-SHA256 - AES128-SHA - AES256-SHA - DES-CBC3-SHA minTLSVersion: TLSv1.0 type string type is one of Old, Intermediate, Modern or Custom. Custom provides the ability to specify individual TLS security profile parameters. Old, Intermediate and Modern are TLS security profiles based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Recommended_configurations The profiles are intent based, so they may change over time as new ciphers are developed and existing ciphers are found to be insecure. Depending on precisely which ciphers are available to a process, the list may be reduced. Note that the Modern profile is currently not supported because it is not yet well adopted by common software libraries. 2.1.12. .status Description status holds observed values from the cluster. They may not be overridden. Type object 2.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/apiservers DELETE : delete collection of APIServer GET : list objects of kind APIServer POST : create an APIServer /apis/config.openshift.io/v1/apiservers/{name} DELETE : delete an APIServer GET : read the specified APIServer PATCH : partially update the specified APIServer PUT : replace the specified APIServer /apis/config.openshift.io/v1/apiservers/{name}/status GET : read status of the specified APIServer PATCH : partially update status of the specified APIServer PUT : replace status of the specified APIServer 2.2.1. /apis/config.openshift.io/v1/apiservers Table 2.1. 
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of APIServer Table 2.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind APIServer Table 2.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.5. HTTP responses HTTP code Reponse body 200 - OK APIServerList schema 401 - Unauthorized Empty HTTP method POST Description create an APIServer Table 2.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.7. 
Body parameters Parameter Type Description body APIServer schema Table 2.8. HTTP responses HTTP code Reponse body 200 - OK APIServer schema 201 - Created APIServer schema 202 - Accepted APIServer schema 401 - Unauthorized Empty 2.2.2. /apis/config.openshift.io/v1/apiservers/{name} Table 2.9. Global path parameters Parameter Type Description name string name of the APIServer Table 2.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an APIServer Table 2.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.12. Body parameters Parameter Type Description body DeleteOptions schema Table 2.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified APIServer Table 2.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.15. HTTP responses HTTP code Reponse body 200 - OK APIServer schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified APIServer Table 2.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.17. Body parameters Parameter Type Description body Patch schema Table 2.18. HTTP responses HTTP code Reponse body 200 - OK APIServer schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified APIServer Table 2.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.20. Body parameters Parameter Type Description body APIServer schema Table 2.21. HTTP responses HTTP code Reponse body 200 - OK APIServer schema 201 - Created APIServer schema 401 - Unauthorized Empty 2.2.3. /apis/config.openshift.io/v1/apiservers/{name}/status Table 2.22. Global path parameters Parameter Type Description name string name of the APIServer Table 2.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified APIServer Table 2.24. 
Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.25. HTTP responses HTTP code Reponse body 200 - OK APIServer schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified APIServer Table 2.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.27. Body parameters Parameter Type Description body Patch schema Table 2.28. HTTP responses HTTP code Reponse body 200 - OK APIServer schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified APIServer Table 2.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. 
This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.30. Body parameters Parameter Type Description body APIServer schema Table 2.31. HTTP responses HTTP code Reponse body 200 - OK APIServer schema 201 - Created APIServer schema 401 - Unauthorized Empty
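As a practical sketch, the single cluster-scoped instance named cluster can be inspected and updated with standard oc commands. The audit profile in the patch below is one of the documented profile values; the exact command is an example to adapt rather than a prescribed configuration:
# Read the current APIServer configuration
oc get apiserver cluster -o yaml
# Example: switch the top-level audit profile to WriteRequestBodies
oc patch apiserver cluster --type=merge -p '{"spec":{"audit":{"profile":"WriteRequestBodies"}}}'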
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/config_apis/apiserver-config-openshift-io-v1
Chapter 17. Using Cruise Control to reassign partitions on JBOD disks
Chapter 17. Using Cruise Control to reassign partitions on JBOD disks If you are using JBOD storage and have Cruise Control installed with Strimzi, you can reassign partitions and move data between the JBOD disks used for storage on the same broker. This capability also allows you to remove JBOD disks without data loss. Use the Kafka kafka-log-dirs.sh tool to check information about Kafka topic partitions and their location on brokers before and after moving them. Make requests to the remove_disks endpoint of the Cruise Control REST API to move data off a disk in the cluster and reassign its partitions to other disk volumes. Prerequisites You are logged in to Red Hat Enterprise Linux as the Kafka user. You have configured Cruise Control. You have deployed the Cruise Control Metrics Reporter. Kafka brokers use JBOD storage. More than one JBOD disk must be configured on the broker. In this procedure, we use a broker configured with three JBOD volumes and a topic replication factor of three. Example broker configuration with JBOD storage node.id=1 process.roles=broker default.replication.factor = 3 log.dirs = /var/lib/kafka/data-0,/var/lib/kafka/data-1,/var/lib/kafka/data-2 # ... In the procedure, we reassign partitions for broker 1 from volume 0 to volumes 1 and 2. Procedure Start the Cruise Control server. The server starts on port 9090 by default; optionally, specify a different port. cd ./cruise-control/ ./kafka-cruise-control-start.sh config/cruisecontrol.properties <port_number> To verify that Cruise Control is running, send a GET request to the /state endpoint of the Cruise Control server: curl -X GET 'http://<cc_host>:<cc_port>/kafkacruisecontrol/state' (Optional) Check the partition replica data on the broker by using the Kafka kafka-log-dirs.sh tool: kafka-log-dirs.sh --describe --bootstrap-server my-cluster-kafka-bootstrap:9092 --broker-list 1 --topic-list my-topic The tool returns topic information for each log directory. In this example, we are restricting the information to my-topic to show the steps against a single topic. The JBOD volumes used for log directories are mounted at /var/lib/kafka/<volume_id>. Example output data for each log directory { "brokers": [ { "broker": 1, 1 "logDirs": [ { "partitions": [ 2 { "partition": "my-topic-0", "size": 0, "offsetLag": 0, "isFuture": false } ], "error": null, 3 "logDir": "/var/lib/kafka/data-0" 4 }, { "partitions": [ { "partition": "my-topic-1", "size": 0, "offsetLag": 0, "isFuture": false } ], "error": null, "logDir": "/var/lib/kafka/data-1" }, { "partitions": [ { "partition": "my-topic-2", "size": 0, "offsetLag": 0, "isFuture": false } ], "error": null, "logDir": "/var/lib/kafka/data-2" } ] } ] } 1 The broker ID. 2 Partition details: name, size, and offset lag. The isFuture property indicates that the partition is moving between log directories when it shows as true. 3 If error is not null, there is an issue with the disk hosting the log directory. 4 The path and name of the log directory. Remove the volume from the node: curl -v -X POST 'http://<cc_host>:<cc_port>/kafkacruisecontrol/remove_disks?dryrun=false&brokerid_and_logdirs=1-/var/lib/kafka/data-0' The command specifies the broker ID and log directory for the volume being removed. If successful, partitions are reassigned from volume 0 on broker 1. Note If required, you can perform a dry run of this operation before applying the changes. Use the Kafka kafka-log-dirs.sh tool again to verify volume removal and data movement.
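For example, rerun the describe command from the earlier step:
kafka-log-dirs.sh --describe --bootstrap-server my-cluster-kafka-bootstrap:9092 --broker-list 1 --topic-list my-topic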
In this example, volume 0 has been removed and my-topic-0 partition reassigned to /var/lib/kafka/data-1 . Example output data following reassignment of partitions { "brokers": [ { "broker": 1, "logDirs": [ { "partitions": [ { "partition": "my-topic-0", "size": 0, "offsetLag": 0, "isFuture": false }, { "partition": "my-topic-1", "size": 0, "offsetLag": 0, "isFuture": false } ], "error": null, "logDir": "/var/lib/kafka/data-1" }, { "partitions": [ { "partition": "my-topic-2", "size": 0, "offsetLag": 0, "isFuture": false } ], "error": null, "logDir": "/var/lib/kafka/data-2" } ] } ] }
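To preview a removal before committing to it, you can send the same request as a dry run with dryrun=true; Cruise Control should then report the proposed movements without executing them. This sketch reuses the host, port, broker ID, and log directory placeholders from the earlier command:
curl -v -X POST 'http://<cc_host>:<cc_port>/kafkacruisecontrol/remove_disks?dryrun=true&brokerid_and_logdirs=1-/var/lib/kafka/data-0'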
[ "node.id=1 process.roles=broker default.replication.factor = 3 log.dirs = /var/lib/kafka/data-0,/var/lib/kafka/data-1,/var/lib/kafka/data-2", "cd ./cruise-control/ ./kafka-cruise-control-start.sh config/cruisecontrol.properties <port_number>", "curl -X GET 'http://<cc_host>:<cc_port>/kafkacruisecontrol/state'", "kafka-log-dirs.sh --describe --bootstrap-server my-cluster-kafka-bootstrap:9092 --broker-list 1 --topic-list my-topic", "{ \"brokers\": [ { \"broker\": 1, 1 \"logDirs\": [ { \"partitions\": [ 2 { \"partition\": \"my-topic-0\", \"size\": 0, \"offsetLag\": 0, \"isFuture\": false } ], \"error\": null, 3 \"logDir\": \"/var/lib/kafka/data-0\" 4 }, { \"partitions\": [ { \"partition\": \"my-topic-1\", \"size\": 0, \"offsetLag\": 0, \"isFuture\": false } ], \"error\": null, \"logDir\": \"/var/lib/kafka/data-1\" }, { \"partitions\": [ { \"partition\": \"my-topic-2\", \"size\": 0, \"offsetLag\": 0, \"isFuture\": false } ], \"error\": null, \"logDir\": \"/var/lib/kafka/data-2\" } ] } ] }", "curl -v -X POST 'http://<cc_host>:<cc_port>/kafkacruisecontrol/remove_disks?dryrun=false&brokerid_and_logdirs=1-/var/lib/kafka/data-0'", "{ \"brokers\": [ { \"broker\": 1, \"logDirs\": [ { \"partitions\": [ { \"partition\": \"my-topic-0\", \"size\": 0, \"offsetLag\": 0, \"isFuture\": false }, { \"partition\": \"my-topic-1\", \"size\": 0, \"offsetLag\": 0, \"isFuture\": false } ], \"error\": null, \"logDir\": \"/var/lib/kafka/data-1\" }, { \"partitions\": [ { \"partition\": \"my-topic-2\", \"size\": 0, \"offsetLag\": 0, \"isFuture\": false } ], \"error\": null, \"logDir\": \"/var/lib/kafka/data-2\" } ] } ] }" ]
Chapter 55. Predicate Filter Action
Chapter 55. Predicate Filter Action Filter based on a JsonPath Expression 55.1. Configuration Options The following table summarizes the configuration options available for the predicate-filter-action Kamelet: Property Name Description Type Default Example expression * Expression The JsonPath Expression to evaluate, without the external parenthesis. Since this is a filter, the expression will be a negation: this means that if the foo field of the example equals John, the message will go ahead; otherwise, it will be filtered out. string "@.foo =~ /.*John/" Note Fields marked with an asterisk (*) are mandatory. 55.2. Dependencies At runtime, the predicate-filter-action Kamelet relies upon the presence of the following dependencies: camel:core camel:kamelet camel:jsonpath 55.3. Usage This section describes how you can use the predicate-filter-action . 55.3.1. Knative Action You can use the predicate-filter-action Kamelet as an intermediate step in a Knative binding. predicate-filter-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: predicate-filter-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: predicate-filter-action properties: expression: "@.foo =~ /.*John/" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 55.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed in the OpenShift cluster you're connected to. 55.3.1.2. Procedure for using the cluster CLI Save the predicate-filter-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f predicate-filter-action-binding.yaml 55.3.1.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step predicate-filter-action -p "step-0.expression=@.foo =~ /.*John/" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 55.3.2. Kafka Action You can use the predicate-filter-action Kamelet as an intermediate step in a Kafka binding. predicate-filter-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: predicate-filter-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: predicate-filter-action properties: expression: "@.foo =~ /.*John/" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 55.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed in the OpenShift cluster you're connected to. 55.3.2.2. Procedure for using the cluster CLI Save the predicate-filter-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f predicate-filter-action-binding.yaml 55.3.2.3.
Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step predicate-filter-action -p "step-0.expression=@.foo =~ /.*John/" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 55.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/predicate-filter-action.kamelet.yaml
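To make the filter behavior concrete, consider two hypothetical message bodies evaluated against the example expression @.foo =~ /.*John/ (sample payloads for illustration only, not taken from the Kamelet catalog):

{ "foo": "John Doe" }   - the expression matches, so the message continues to the sink
{ "foo": "Jane Doe" }   - the expression does not match, so the message is filtered out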
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: predicate-filter-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: predicate-filter-action properties: expression: \"@.foo =~ /.*John/\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel", "apply -f predicate-filter-action-binding.yaml", "kamel bind timer-source?message=Hello --step predicate-filter-action -p \"[email protected] =~ /.*John/\" channel:mychannel", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: predicate-filter-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: predicate-filter-action properties: expression: \"@.foo =~ /.*John/\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic", "apply -f predicate-filter-action-binding.yaml", "kamel bind timer-source?message=Hello --step predicate-filter-action -p \"[email protected] =~ /.*John/\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic" ]
Chapter 9. Installing a private cluster on AWS
Chapter 9. Installing a private cluster on AWS In OpenShift Container Platform version 4.12, you can install a private cluster into an existing VPC on Amazon Web Services (AWS). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 9.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . 9.2. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 9.2.1. Private clusters in AWS To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for access from only the private network. The cluster still requires access to internet to access the AWS APIs. 
The following items are not required or created when you install a private cluster: Public subnets Public load balancers, which support public ingress A public Route 53 zone that matches the baseDomain for the cluster The installation program does use the baseDomain that you specify to create a private Route 53 zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify. 9.2.1.1. Limitations The ability to add public functionality to a private cluster is limited. You cannot make the Kubernetes API endpoints public after installation without taking additional actions, including creating public subnets in the VPC for each availability zone in use, creating a public load balancer, and configuring the control plane security groups to allow traffic from the internet on 6443 (Kubernetes API port). If you use a public Service type load balancer, you must tag a public subnet in each availability zone with kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use them to create public load balancers. 9.3. About using a custom VPC In OpenShift Container Platform 4.12, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself. 9.3.1. Requirements for using your VPC The installation program no longer creates the following components: Internet gateways NAT gateways Subnets Route tables VPCs VPC DHCP options VPC endpoints Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Amazon VPC console wizard configurations and Work with VPCs and subnets in the AWS documentation for more information on creating and managing an AWS VPC. The installation program cannot: Subdivide network ranges for the cluster to use. Set route tables for the subnets. Set VPC options like DHCP. You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC. Your VPC must meet the following characteristics: The VPC must not use the kubernetes.io/cluster/.*: owned , Name , and openshift.io/cluster tags. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails. 
You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone field in the install-config.yaml file. If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. 
The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. 9.3.2. VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide private subnets. The subnet CIDRs belong to the machine CIDR that you specified. You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone. You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for. If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used. 9.3.3. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resource in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes. 9.3.4. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed from the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 9.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. 
With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 9.5. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 9.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation.
Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 9.7. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. 9.7.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line.
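For orientation, the following is a minimal sketch showing only the required top-level fields from the table that follows. All values are placeholders, and a real file for this private-cluster installation also needs the networking, subnets, and publish: Internal settings shown in the later sample install-config.yaml file:

apiVersion: v1
baseDomain: example.com
metadata:
  name: mycluster
platform:
  aws:
    region: us-west-2
pullSecret: '{"auths": ...}'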
If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 9.7.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 9.1. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 9.7.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 9.2. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. 
The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 9.7.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 9.3. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . 
Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. 
For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings platform.aws.lbType Required to set the NLB load balancer type in AWS. Valid values are Classic or NLB . If no value is specified, the installation program defaults to Classic . The installation program sets the value provided here in the ingress cluster configuration object. If you do not specify a load balancer type for other Ingress Controllers, they use the type set in this parameter. Classic or NLB . The default value is Classic . publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 9.7.1.4. Optional AWS configuration parameters Optional AWS configuration parameters are described in the following table: Table 9.4. Optional AWS parameters Parameter Description Values compute.platform.aws.amiID The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. compute.platform.aws.iamRole A pre-existing AWS IAM role applied to the compute machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. The name of a valid AWS IAM role. compute.platform.aws.rootVolume.iops The Input/Output Operations Per Second (IOPS) that is reserved for the root volume. Integer, for example 4000 . compute.platform.aws.rootVolume.size The size in GiB of the root volume. Integer, for example 500 . compute.platform.aws.rootVolume.type The type of the root volume. Valid AWS EBS volume type , such as io1 . compute.platform.aws.rootVolume.kmsKeyARN The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt operating system volumes of worker nodes with a specific KMS key. Valid key ID or the key ARN . compute.platform.aws.type The EC2 instance type for the compute machines. Valid AWS instance type, such as m4.2xlarge . See the Supported AWS machine types table that follows. compute.platform.aws.zones The availability zones where the installation program creates machines for the compute machine pool. 
If you provide your own VPC, you must provide a subnet in that availability zone. A list of valid AWS availability zones, such as us-east-1c , in a YAML sequence . compute.aws.region The AWS region that the installation program creates compute resources in. Any valid AWS region , such as us-east-1 . You can use the AWS CLI to access the regions available based on your selected instance type. For example: aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge Important When running on ARM based AWS instances, ensure that you enter a region where AWS Graviton processors are available. See Global availability map in the AWS documentation. Currently, AWS Graviton3 processors are only available in some regions. controlPlane.platform.aws.amiID The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. controlPlane.platform.aws.iamRole A pre-existing AWS IAM role applied to the control plane machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. The name of a valid AWS IAM role. controlPlane.platform.aws.rootVolume.kmsKeyARN The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt operating system volumes of control plane nodes with a specific KMS key. Valid key ID and the key ARN . controlPlane.platform.aws.type The EC2 instance type for the control plane machines. Valid AWS instance type, such as m6i.xlarge . See the Supported AWS machine types table that follows. controlPlane.platform.aws.zones The availability zones where the installation program creates machines for the control plane machine pool. A list of valid AWS availability zones, such as us-east-1c , in a YAML sequence . controlPlane.aws.region The AWS region that the installation program creates control plane resources in. Valid AWS region , such as us-east-1 . platform.aws.amiID The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. platform.aws.hostedZone An existing Route 53 private hosted zone for the cluster. You can only use a pre-existing hosted zone when also supplying your own VPC. The hosted zone must already be associated with the user-provided VPC before installation. Also, the domain of the hosted zone must be the cluster domain or a parent of the cluster domain. If undefined, the installation program creates a new hosted zone. String, for example Z3URY6TWQ91KVV . platform.aws.serviceEndpoints.name The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services. Valid AWS service endpoint name. platform.aws.serviceEndpoints.url The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate. Valid AWS service endpoint URL. 
platform.aws.userTags A map of keys and values that the installation program adds as tags to all resources that it creates. Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation. Note You can add up to 25 user defined tags during installation. The remaining 25 tags are reserved for OpenShift Container Platform. platform.aws.propagateUserTags A flag that directs in-cluster Operators to include the specified user tags in the tags of the AWS resources that the Operators create. Boolean values, for example true or false . platform.aws.subnets If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone. Valid subnet IDs. 9.7.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 9.5. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 9.7.3. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 9.1. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 9.7.4. Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) ARM64 instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. 
If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 9.2. Machine types based on 64-bit ARM architecture c6g.* c7g.* m6g.* m7g.* r8g.* 9.7.5. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 publish: Internal 22 pullSecret: '{"auths": ...}' 23 1 12 14 23 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content. 3 8 15 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . 
To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets. 13 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 16 If you provide your own VPC, specify subnets for each availability zone that your cluster uses. 17 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 18 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 19 The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone. 20 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. 21 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 22 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . 9.7.6. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. 
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 9.8. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. 
Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 9.9. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. 
Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 9.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 9.11. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. 
List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 9.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 9.13. Next steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials .
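As an optional post-login check that is not part of the documented procedures above, you can confirm that the cluster rollout has finished by listing the cluster Operators; all of them should report as available before you continue with day-2 configuration. This is a minimal sketch using standard oc commands.

```
# All cluster Operators should show AVAILABLE=True and PROGRESSING=False
oc get clusteroperators

# All nodes should report a Ready status
oc get nodes
```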
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 publish: Internal 22 pullSecret: '{\"auths\": ...}' 23", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_aws/installing-aws-private
Chapter 6. Creating Access to Volumes
Chapter 6. Creating Access to Volumes Warning Do not enable the storage.fips-mode-rchecksum volume option on volumes with clients that use Red Hat Gluster Storage 3.4 or earlier. Red Hat Gluster Storage volumes can be accessed using a number of technologies: Native Client (see Section 6.2, "Native Client" ) Network File System (NFS) v3 (see Section 6.3, "NFS" ) Server Message Block (SMB) (see Section 6.4, "SMB" ) 6.1. Client Support Information 6.1.1. Cross Protocol Data Access Because of differences in locking semantics, a single Red Hat Gluster Storage volume cannot be concurrently accessed by multiple protocols. Current support for concurrent access is defined in the following table. Table 6.1. Cross Protocol Data Access Matrix SMB Gluster NFS NFS-Ganesha Native FUSE Object SMB Yes No No No No Gluster NFS (Deprecated) No Yes No No No NFS-Ganesha No No Yes No No Native FUSE No No No Yes Yes [a] 6.1.2. Client Operating System Protocol Support The following table describes the support level for each file access protocol in a supported client operating system. Table 6.2. Client OS Protocol Support Client OS FUSE Gluster NFS NFS-Ganesha SMB RHEL 5 Unsupported Unsupported Unsupported Unsupported RHEL 6 Supported Deprecated Unsupported Supported RHEL 7 Supported Deprecated Supported Supported RHEL 8 Supported Unsupported Supported Supported Windows Server 2008, 2012, 2016 Unsupported Unsupported Unsupported Supported Windows 7, 8, 10 Unsupported Unsupported Unsupported Supported Mac OS 10.15 Unsupported Unsupported Unsupported Supported 6.1.3. Transport Protocol Support The following table provides the support matrix for the supported access protocols with TCP/RDMA. Table 6.3. Transport Protocol Support Access Protocols TCP RDMA (Deprecated) FUSE Yes Yes SMB Yes No NFS Yes Yes Warning Using RDMA as a transport protocol is considered deprecated in Red Hat Gluster Storage 3.5. Red Hat no longer recommends its use, and does not support it on new deployments and existing deployments that upgrade to Red Hat Gluster Storage 3.5.3. Important Red Hat Gluster Storage requires certain ports to be open. You must ensure that the firewall settings allow access to the ports listed at Chapter 3, Considerations for Red Hat Gluster Storage . Gluster user is created as a part of gluster installation. The purpose of gluster user is to provide privileged access to libgfapi based application (for example, nfs-ganesha and glusterfs-coreutils ). For a normal user of an application, write access to statedump directory is restricted. As a result, attempting to write a state dump to this directory fails. Privileged access is needed by these applications in order to be able to write to the statedump directory. In order to write to this location, the user that runs the application should ensure that the application is added to the gluster user group. After the application is added, restart gluster processes to apply the new group.
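The group change described above can be illustrated with standard RHEL commands. This is a minimal sketch, assuming a hypothetical application user named appuser and a deployment where restarting the glusterd management daemon is sufficient; the exact set of Gluster processes to restart depends on your installation and on which libgfapi-based application you run.

```
# Add the user that runs the libgfapi-based application to the gluster group
usermod -aG gluster appuser

# Verify the group membership
id appuser

# Restart the Gluster management daemon so the new group membership takes effect
systemctl restart glusterd
```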
null
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/chap-Accessing_Data_-_Setting_Up_Clients
Chapter 5. Advanced logical volume management
Chapter 5. Advanced logical volume management LVM includes advanced features such as: Snapshots, which are point-in-time copies of logical volumes (LVs) Caching, with which you can use faster storage as a cache for slower storage Creating custom thin pools Creating custom VDO LVs 5.1. Managing logical volume snapshots A snapshot is a logical volume (LV) that mirrors the content of another LV at a specific point in time. 5.1.1. Understanding logical volume snapshots When you create a snapshot, you are creating a new LV that serves as a point-in-time copy of another LV. Initially, the snapshot LV contains no actual data. Instead, it references the data blocks of the original LV at the moment of snapshot creation. Warning It is important to regularly monitor the snapshot's storage usage. If a snapshot reaches 100% of its allocated space, it will become invalid. It is essential to extend the snapshot before it gets completely filled. This can be done manually by using the lvextend command or automatically via the /etc/lvm/lvm.conf file. Thick LV snapshots When data on the original LV changes, the copy-on-write (CoW) system copies the original, unchanged data to the snapshot before the change is made. This way, the snapshot grows in size only as changes occur, storing the state of the original volume at the time of the snapshot's creation. Thick snapshots are a type of LV that requires you to allocate some amount of storage space upfront. This amount can later be extended or reduced, however, you should consider what type of changes you intend to make to the original LV. This helps you to avoid either wasting resources by allocating too much space or needing to frequently increase the snapshot size if you allocate too little. Thin LV snapshots Thin snapshots are a type of LV created from an existing thin provisioned LV. Thin snapshots do not require allocating extra space upfront. Initially, both the original LV and its snapshot share the same data blocks. When changes are made to the original LV, it writes new data to different blocks, while the snapshot continues to reference the original blocks, preserving a point-in-time view of the LV's data at the snapshot creation. Thin provisioning is a method of optimizing and managing storage efficiently by allocating disk space on an as-needed basis. This means that you can create multiple LVs without needing to allocate a large amount of storage upfront for each LV. The storage is shared among all LVs in a thin pool, making it a more efficient use of resources. A thin pool allocates space on-demand to its LVs. Choosing between thick and thin LV snapshots The choice between thick or thin LV snapshots is directly determined by the type of LV you are taking a snapshot of. If your original LV is a thick LV, your snapshots will be thick. If your original LV is thin, your snapshots will be thin. 5.1.2. Managing thick logical volume snapshots When you create a thick LV snapshot, it is important to consider the storage requirements and the intended lifespan of your snapshot. You need to allocate enough storage for it based on the expected changes to the original volume. The snapshot must have a sufficient size to capture changes during its intended lifespan, but it cannot exceed the size of the original LV. If you expect a low rate of change, a smaller snapshot size of 10%-15% might be sufficient. For LVs with a high rate of change, you might need to allocate 30% or more. Important It is essential to extend the snapshot before it gets completely filled. 
If a snapshot reaches 100% of its allocated space, it becomes invalid. You can monitor the snapshot capacity with the lvs -o lv_name,data_percent,origin command. 5.1.2.1. Creating thick logical volume snapshots You can create a thick LV snapshot with the lvcreate command. Prerequisites Administrative access. You have created a physical volume. For more information, see Creating LVM physical volume . You have created a volume group. For more information, see Creating LVM volume group . You have created a logical volume. For more information, see Creating logical volumes . Procedure Identify the LV of which you want to create a snapshot: The size of the snapshot cannot exceed the size of the LV. Create a thick LV snapshot: Replace SnapshotSize with the size you want to allocate for the snapshot (e.g. 10G). Replace SnapshotName with the name you want to give to the snapshot logical volume. Replace VolumeGroupName with the name of the volume group that contains the original logical volume. Replace LogicalVolumeName with the name of the logical volume that you want to create a snapshot of. Verification Verify that the snapshot is created: Additional resources lvcreate(8) and lvs(8) man pages 5.1.2.2. Manually extending logical volume snapshots If a snapshot reaches 100% of its allocated space, it becomes invalid. It is essential to extend the snapshot before it gets completely filled. This can be done manually by using the lvextend command. Prerequisites Administrative access. Procedure List the names of volume groups, logical volumes, source volumes for snapshots, their usage percentages, and sizes: Extend the thick-provisioned snapshot: Replace AdditionalSize with how much space to add to the snapshot (for example, +1G). Replace VolumeGroupName with the name of the volume group. Replace SnapshotName with the name of the snapshot. Verification Verify that the LV is extended: 5.1.2.3. Automatically extending thick logical volume snapshots If a snapshot reaches 100% of its allocated space, it becomes invalid. It is essential to extend the snapshot before it gets completely filled. This can be done automatically. Prerequisites Administrative access. Procedure As the root user, open the /etc/lvm/lvm.conf file in an editor of your choice. Uncomment the snapshot_autoextend_threshold and snapshot_autoextend_percent lines and set each parameter to a required value: snapshot_autoextend_threshold determines the percentage at which LVM starts to auto-extend the snapshot. For example, setting the parameter to 70 means that LVM will try to extend the snapshot when it reaches 70% capacity. snapshot_autoextend_percent specifies by what percentage the snapshot should be extended when it reaches the threshold. For example, setting the parameter to 20 means the snapshot will be increased by 20% of its current size. Save the changes and exit the editor. Restart the lvm2-monitor : 5.1.2.4. Merging thick logical volume snapshots You can merge thick LV snapshot into the original logical volume from which the snapshot was created. The process of merging means that the original LV is reverted to the state it was in when the snapshot was created. Once the merge is complete, the snapshot is removed. Note The merge between the original and snapshot LV is postponed if either is active. It only proceeds once the LVs are reactivated and not in use. Prerequisites Administrative access. 
Procedure List the LVs, their volume groups, and their paths: Check where the LVs are mounted: Replace /dev/VolumeGroupName/LogicalVolumeName with the path to your logical volume. Replace /dev/VolumeGroupName/SnapshotName with the path to your snapshot. Unmount the LVs: Replace /LogicalVolume/MountPoint with the mounting point for your logical volume. Replace /Snapshot/MountPoint with the mounting point for your snapshot. Deactivate the LVs: Replace VolumeGroupName with the name of the volume group. Replace LogicalVolumeName with the name of the logical volume. Replace SnapshotName with the name of your snapshot. Merge the thick LV snapshot into the origin: Replace SnapshotName with the name of the snapshot. Activate the LV: Replace VolumeGroupName with the name of the volume group. Replace LogicalVolumeName with the name of the logical volume. Mount the LV: Replace /LogicalVolume/MountPoint with the mounting point for your logical volume. Verification Verify that the snapshot is removed: Additional resources The lvconvert(8) , lvs(8) man page 5.1.3. Managing thin logical volume snapshots Thin provisioning is appropriate where storage efficiency is a priority. Storage space dynamic allocation reduces initial storage costs and maximizes the use of available storage resources. In environments with dynamic workloads or where storage grows over time, thin provisioning allows for flexibility. It enables the storage system to adapt to changing needs without requiring large upfront allocations of the storage space. With dynamic allocation, over-provisioning is possible, where the total size of all LVs can exceed the physical size of the thin pool, under the assumption that not all space will be utilized at the same time. 5.1.3.1. Creating thin logical volume snapshots You can create a thin LV snapshot with the lvcreate command. When creating a thin LV snapshot, avoid specifying the snapshot size. Including a size parameter results in the creation of a thick snapshot instead. Prerequisites Administrative access. You have created a physical volume. For more information, see Creating LVM physical volume . You have created a volume group. For more information, see Creating LVM volume group . You have created a logical volume. For more information, see Creating logical volumes . Procedure Identify the LV of which you want to create a snapshot: Create a thin LV snapshot: Replace SnapshotName with the name you want to give to the snapshot logical volume. Replace VolumeGroupName with the name of the volume group that contains the original logical volume. Replace ThinVolumeName with the name of the thin logical volume that you want to create a snapshot of. Verification Verify that the snapshot is created: Additional resources lvcreate(8) and lvs(8) man pages 5.1.3.2. Merging thin logical volume snapshots You can merge thin LV snapshot into the original logical volume from which the snapshot was created. The process of merging means that the original LV is reverted to the state it was in when the snapshot was created. Once the merge is complete, the snapshot is removed. Prerequisites Administrative access. Procedure List the LVs, their volume groups, and their paths: Check where the original LV is mounted: Replace VolumeGroupName/ThinVolumeName with the path to your logical volume. Unmount the LV: Replace /ThinLogicalVolume/MountPoint with the mounting point for your logical volume. Replace /ThinSnapshot/MountPoint with the mounting point for your snapshot. 
Deactivate the LV: Replace VolumeGroupName with the name of the volume group. Replace ThinLogicalVolumeName with the name of the logical volume. Merge the thin LV snapshot into the origin: Replace VolumeGroupName with the name of the volume group. Replace ThinSnapshotName with the name of the snapshot. Mount the LV: Replace /ThinLogicalVolume/MountPoint with the mounting point for your logical volume. Verification Verify that the original LV is merged: Additional resources The lvremove(8) , lvs(8) man page 5.2. Caching logical volumes You can cache logical volumes by using the dm-cache or dm-writecache targets. dm-cache utilizes faster storage device (SSD) as cache for a slower storage device (HDD). It caches read and write data, optimizing access times for frequently used data. It is beneficial in mixed workload environments where enhancing read and write operations can lead to significant performance improvements. dm-writecache optimizes write operations by using a faster storage medium (SSD) to temporarily hold write data before it is committed to the primary storage device (HDD). It is beneficial for write-intensive applications where write performance can slow down the data transfer process. 5.2.1. Caching logical volumes with dm-cache When caching LV with dm-cache , a cache pool is created. A cache pool is a LV that combines both the cache data, which stores the actual cached content, and cache metadata, which tracks what content is stored in the cache. This pool is then associated with a specific LV to cache its data. dm-cache targets two types of blocks: frequently accessed (hot) blocks are moved to the cache, while less frequently accessed (cold) blocks remain on the slower device. Prerequisites Administrative access. Procedure Display the LV you want to cache and its volume group: Create the cache pool: Replace CachePoolName with the name of the cache pool. Replace Size with the size for your cache pool. Replace VolumeGroupName with the name of the volume group. Replace /FastDevicePath with the path to your fast device, for example SSD or NVME. Attach the cache pool to the LV: Verification Verify that the LV is now cached: Additional resources lvcreate(8) , lvconvert(8) , lvs(8) man pages 5.2.2. Caching logical volumes with dm-writecache When caching LVs with dm-writecache , a caching layer between the logical volume and the physical storage device is created. dm-writecache operates by temporarily storing write operations in a faster storage medium, such as an SSD, before eventually writing them back to the primary storage device, optimizing write-intensive workloads. Prerequisites Administrative access. Procedure Display the logical volume you want to cache and its volume group: Create a cache volume: Replace CacheVolumeName with the name of the cache volume. Replace Size with the size for your cache pool. Replace VolumeGroupName with the name of the volume group. Replace /FastDevicePath with the path to your fast device, for example SSD or NVME. Attach the cache volume to the LV: Replace CacheVolumeName with the name of the cache volume. Replace VolumeGroupName with the name of the volume group. Replace LogicalVolumeName with the name of the logical volume. Verification Verify that the LV is now cached: Additional resources lvcreate(8) , lvconvert(8) , lvs(8) man pages 5.2.3. Uncaching a logical volume Use two main ways to remove caching from a LV. Splitting You can detach the cache from the LV but preserve the cache volume itself. 
In this case, the LV will no longer benefit from the caching mechanism but the cache volume and its data will remain intact. While the cache volume is preserved, the data within the cache cannot be reused and will be erased the next time it is used in a caching setup. Uncaching You can detach the cache from the LV and remove the cache volume entirely. This action effectively destroys the cache, freeing up the space. Prerequisites Administrative access. Procedure Display the cached LV: Detach or remove the cached volume: To detach the cached volume, use: To detach and remove the cached volume, use: Replace VolumeGroupName with the name of the volume group. Replace LogicalVolumeName with the name of the logical volume. Verification Verify that the LV is not cached: Additional resources lvconvert(8) , lvs(8) man pages 5.3. Creating a custom thin pool You can create custom thin pools to have better control over your storage. Prerequisites Administrative access. Procedure Display available volume groups: List available devices: Create an LV to hold the thin pool data: Replace ThinPoolDataName with the name for your thin pool data LV. Replace Size with the size for your LV. Replace VolumeGroupName with the name of your volume group. Create an LV to hold the thin pool metadata: Combine the LVs into a thin pool: Verification Verify that the custom thin pool is created: Additional resources The vgs(8) , lvs(8) , lvcreate(8) man pages 5.4. Creating a custom VDO logical volume With Logical Volume Manager (LVM), you can create a custom LV that uses a Virtual Data Optimizer (VDO) pool for data storage. Prerequisites Administrative access. Procedure Display the VGs: Create an LV to be converted to a VDO pool: Replace VDOPoolName with the name for your VDO pool. Replace Size with the size for your VDO pool. Replace VolumeGroupName with the name of the VG. Convert this LV to a VDO pool. In this conversion, you are creating a new VDO LV that uses the VDO pool. Because lvconvert is creating a new VDO LV, you must specify parameters for the new VDO LV. Use --name|-n to specify the name of the new VDO LV, and --virtualsize|-V to specify the size of the new VDO LV. Replace VDOVolumeName with the name for your VDO volume. Replace VDOVolumeSize with the size for your VDO volume. Replace VolumeGroupName/VDOPoolName with the names for your VG and your VDO pool. Verification Verify that the LV is converted to the VDO pool: Additional resources The vgs(8) , lvs(8) , lvconvert(8) man pages
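To tie the thick snapshot procedures in this chapter together, the following is a minimal end-to-end sketch based on the commands shown above. The volume group myvg, the logical volume data_lv, the snapshot name data_snap, and the mount point /mnt/data are placeholders for illustration only.

```
# Create a 5 GiB thick snapshot of an existing LV
lvcreate --snapshot --size 5G --name data_snap myvg/data_lv

# Monitor how full the snapshot is; extend it before it reaches 100%
lvs -o lv_name,origin,data_percent,lv_size myvg
lvextend --size +1G myvg/data_snap

# Merge the snapshot back into the origin
# (revert the origin to the snapshot state; the snapshot is removed afterwards)
umount /mnt/data            # unmount the snapshot too if it is mounted
lvchange --activate n myvg/data_lv
lvchange --activate n myvg/data_snap
lvconvert --merge myvg/data_snap
lvchange --activate y myvg/data_lv
mount /dev/myvg/data_lv /mnt/data
```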
[ "lvs -o vg_name,lv_name,lv_size VG LV LSize VolumeGroupName LogicalVolumeName 10.00g", "lvcreate --snapshot --size SnapshotSize --name SnapshotName VolumeGroupName / LogicalVolumeName", "lvs -o lv_name,origin LV Origin LogicalVolumeName SnapshotName LogicalVolumeName", "lvs -o vg_name,lv_name,origin,data_percent,lv_size VG LV Origin Data% LSize VolumeGroupName LogicalVolumeName 10.00g VolumeGroupName SnapshotName LogicalVolumeName 82.00 5.00g", "lvextend --size + AdditionalSize VolumeGroupName / SnapshotName", "lvs -o vg_name,lv_name,origin,data_percent,lv_size VG LV Origin Data% LSize VolumeGroupName LogicalVolumeName 10.00g VolumeGroupName SnapshotName LogicalVolumeName 68.33 6.00g", "snapshot_autoextend_threshold = 70 snapshot_autoextend_percent = 20", "systemctl restart lvm2-monitor", "lvs -o lv_name,vg_name,lv_path LV VG Path LogicalVolumeName VolumeGroupName /dev/VolumeGroupName/LogicalVolumeName SnapshotName VolumeGroupName /dev/VolumeGroupName/SnapshotName", "findmnt -o SOURCE,TARGET /dev/ VolumeGroupName/LogicalVolumeName findmnt -o SOURCE,TARGET /dev/ VolumeGroupName/SnapshotName", "umount /LogicalVolume/MountPoint umount /Snapshot/MountPoint", "lvchange --activate n VolumeGroupName / LogicalVolumeName lvchange --activate n VolumeGroupName / SnapshotName", "lvconvert --merge SnapshotName", "lvchange --activate y VolumeGroupName / LogicalVolumeName", "umount /LogicalVolume/MountPoint", "lvs -o lv_name", "lvs -o lv_name,vg_name,pool_lv,lv_size LV VG Pool LSize PoolName VolumeGroupName 152.00m ThinVolumeName VolumeGroupName PoolName 100.00m", "lvcreate --snapshot --name SnapshotName VolumeGroupName / ThinVolumeName", "lvs -o lv_name,origin LV Origin PoolName SnapshotName ThinVolumeName ThinVolumeName", "lvs -o lv_name,vg_name,lv_path LV VG Path ThinPoolName VolumeGroupName ThinSnapshotName VolumeGroupName /dev/VolumeGroupName/ThinSnapshotName ThinVolumeName VolumeGroupName /dev/VolumeGroupName/ThinVolumeName", "findmnt -o SOURCE,TARGET /dev/ VolumeGroupName/ThinVolumeName", "umount /ThinLogicalVolume/MountPoint", "lvchange --activate n VolumeGroupName / ThinLogicalVolumeName", "lvconvert --mergethin VolumeGroupName/ThinSnapshotName", "umount /ThinLogicalVolume/MountPoint", "lvs -o lv_name", "lvs -o lv_name,vg_name LV VG LogicalVolumeName VolumeGroupName", "lvcreate --type cache-pool --name CachePoolName --size Size VolumeGroupName /FastDevicePath", "lvconvert --type cache --cachepool VolumeGroupName / CachePoolName VolumeGroupName / LogicalVolumeName", "lvs -o lv_name,pool_lv LV Pool LogicalVolumeName [CachePoolName_cpool]", "lvs -o lv_name,vg_name LV VG LogicalVolumeName VolumeGroupName", "lvcreate --name CacheVolumeName --size Size VolumeGroupName /FastDevicePath", "lvconvert --type writecache --cachevol CacheVolumeName VolumeGroupName/LogicalVolumeName", "lvs -o lv_name,pool_lv LV Pool LogicalVolumeName [CacheVolumeName_cvol]", "lvs -o lv_name,pool_lv,vg_name LV Pool VG LogicalVolumeName [CacheVolumeName_cvol] VolumeGroupName", "lvconvert --splitcache VolumeGroupName/LogicalVolumeName", "lvconvert --uncache VolumeGroupName/LogicalVolumeName", "lvs -o lv_name,pool_lv", "vgs -o vg_name VG VolumeGroupName", "lsblk", "lvcreate --name ThinPoolDataName --size Size VolumeGroupName /DevicePath", "lvcreate --name ThinPoolMetadataName --size Size VolumeGroupName /DevicePath", "lvconvert --type thin-pool --poolmetadata ThinPoolMetadataName VolumeGroupName/ThinPoolDataName", "lvs -o lv_name,seg_type LV Type ThinPoolDataName thin-pool", "vgs VG #PV #LV #SN Attr VSize VFree VolumeGroupName 1 
0 0 wz--n- 28.87g 28.87g", "lvcreate --name VDOPoolName --size Size VolumeGroupName", "lvconvert --type vdo-pool --name VDOVolumeName --virtualsize VDOVolumeSize VolumeGroupName/VDOPoolName", "*# lvs -o lv_name,vg_name,seg_type* LV VG Type VDOPoolName VolumeGroupName vdo-pool VDOVolumeName VolumeGroupName vdo" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_logical_volumes/advanced-logical-volume-management_configuring-and-managing-logical-volumes
B.106. xorg-x11-drv-wacom and wacomcpl
B.106. xorg-x11-drv-wacom and wacomcpl B.106.1. RHBA-2011:0341 - xorg-x11-drv-wacom and wacomcpl bug fix update Updated xorg-x11-drv-wacom and wacomcpl packages that resolve several issues are now available for Red Hat Enterprise Linux 6. The xorg-x11-drv-wacom package provides an X Window System input device driver that allows the X server to handle Wacom tablets with extended functionality. The wacomcpl package provides a graphical user interface (GUI) for the xorg-x11-drv-wacom X input device driver. These updated xorg-x11-drv-wacom and wacomcpl packages provide fixes for the following bugs: BZ# 675908 Changing the screen mapping caused the wacomcpl GUI to become unresponsive. With this update, changing the screen mapping works as expected. BZ# 642915 Attempting to calibrate a device could have failed with an error message. With this update, calibration now succeeds. All users of xorg-x11-drv-wacom and wacomcpl are advised to upgrade to these updated packages, which resolve these issues.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/xorg-x11-drv-wacom
Chapter 4. Managing users on dashboard
Chapter 4. Managing users on dashboard As a storage administrator, you can create, edit, and delete users on the dashboard. 4.1. Creating users on dashboard The dashboard allows you to create users. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Admin level of access to the Dashboard. Note The Red Hat Ceph Storage Dashboard does not support any email verification when changing a user's password. This behavior is intentional, because the Dashboard supports Single Sign-On (SSO) and this feature can be delegated to the SSO provider. Procedure Log in to the Dashboard. On the upper right side of the Dashboard, click the gear icon and select User management : On the Users tab, click the Create button: In the CreateUser window, set the Username and other parameters, including the roles, and then click the CreateUser button: A notification towards the top right corner of the page indicates the user was created successfully. Additional Resources See the Creating roles on dashboard section in the Red Hat Ceph Storage Dashboard Guide for more details. 4.2. Editing users on dashboard The dashboard allows you to edit users. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Admin level of access to the Dashboard. User created on the dashboard. Procedure Log in to the Dashboard. On the upper right side of the Dashboard, click the gear icon and select User management : To edit the user, click the row: On the Users tab, select Edit from the Edit dropdown menu: In the EditUser window, edit the required parameters, and then click the EditUser button: A notification towards the top right corner of the page indicates the user was updated successfully. Additional Resources See the Creating users on the dashboard section in the Red Hat Ceph Storage Dashboard Guide for more details. 4.3. Deleting users on dashboard The dashboard allows you to delete users. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Admin level of access to the Dashboard. User created on the dashboard. Procedure Log in to the Dashboard. On the upper right side of the Dashboard, click the gear icon and select User management : To delete the user, click the row: On the Users tab, select Delete from the Edit dropdown menu: In the Delete User dialog window, select the Yes, I am sure box, and then click Delete user to save the settings: Additional Resources See the Creating users on the dashboard section in the Red Hat Ceph Storage Dashboard Guide for more details.
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/dashboard_guide/managing-users-on-dashboard
Chapter 4. Profile [tuned.openshift.io/v1]
Chapter 4. Profile [tuned.openshift.io/v1] Description Profile is a specification for a Profile resource. Type object 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object status object ProfileStatus is the status for a Profile resource; the status is for internal use only and its fields may be changed/removed in the future. 4.1.1. .spec Description Type object Required config Property Type Description config object profile array Tuned profiles. profile[] object A Tuned profile. 4.1.2. .spec.config Description Type object Required tunedProfile Property Type Description debug boolean option to debug TuneD daemon execution providerName string Name of the cloud provider as taken from the Node providerID: <ProviderName>://<ProviderSpecificNodeID> tunedConfig object Global configuration for the TuneD daemon as defined in tuned-main.conf tunedProfile string TuneD profile to apply 4.1.3. .spec.config.tunedConfig Description Global configuration for the TuneD daemon as defined in tuned-main.conf Type object Property Type Description reapply_sysctl boolean turn reapply_sysctl functionality on/off for the TuneD daemon: true/false 4.1.4. .spec.profile Description Tuned profiles. Type array 4.1.5. .spec.profile[] Description A Tuned profile. Type object Required data name Property Type Description data string Specification of the Tuned profile to be consumed by the Tuned daemon. name string Name of the Tuned profile to be used in the recommend section. 4.1.6. .status Description ProfileStatus is the status for a Profile resource; the status is for internal use only and its fields may be changed/removed in the future. Type object Required tunedProfile Property Type Description conditions array conditions represents the state of the per-node Profile application conditions[] object ProfileStatusCondition represents a partial state of the per-node Profile application. tunedProfile string the current profile in use by the Tuned daemon 4.1.7. .status.conditions Description conditions represents the state of the per-node Profile application Type array 4.1.8. .status.conditions[] Description ProfileStatusCondition represents a partial state of the per-node Profile application. Type object Required lastTransitionTime status type Property Type Description lastTransitionTime string lastTransitionTime is the time of the last update to the current status property. message string message provides additional information about the current condition. This is only to be consumed by humans. reason string reason is the CamelCase reason for the condition's current status. status string status of the condition, one of True, False, Unknown. type string type specifies the aspect reported by this condition. 4.2. 
API endpoints The following API endpoints are available: /apis/tuned.openshift.io/v1/profiles GET : list objects of kind Profile /apis/tuned.openshift.io/v1/namespaces/{namespace}/profiles DELETE : delete collection of Profile GET : list objects of kind Profile POST : create a Profile /apis/tuned.openshift.io/v1/namespaces/{namespace}/profiles/{name} DELETE : delete a Profile GET : read the specified Profile PATCH : partially update the specified Profile PUT : replace the specified Profile /apis/tuned.openshift.io/v1/namespaces/{namespace}/profiles/{name}/status GET : read status of the specified Profile PATCH : partially update status of the specified Profile PUT : replace status of the specified Profile 4.2.1. /apis/tuned.openshift.io/v1/profiles HTTP method GET Description list objects of kind Profile Table 4.1. HTTP responses HTTP code Reponse body 200 - OK ProfileList schema 401 - Unauthorized Empty 4.2.2. /apis/tuned.openshift.io/v1/namespaces/{namespace}/profiles HTTP method DELETE Description delete collection of Profile Table 4.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Profile Table 4.3. HTTP responses HTTP code Reponse body 200 - OK ProfileList schema 401 - Unauthorized Empty HTTP method POST Description create a Profile Table 4.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.5. Body parameters Parameter Type Description body Profile schema Table 4.6. HTTP responses HTTP code Reponse body 200 - OK Profile schema 201 - Created Profile schema 202 - Accepted Profile schema 401 - Unauthorized Empty 4.2.3. /apis/tuned.openshift.io/v1/namespaces/{namespace}/profiles/{name} Table 4.7. Global path parameters Parameter Type Description name string name of the Profile HTTP method DELETE Description delete a Profile Table 4.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.9. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Profile Table 4.10. HTTP responses HTTP code Reponse body 200 - OK Profile schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Profile Table 4.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.12. HTTP responses HTTP code Reponse body 200 - OK Profile schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Profile Table 4.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.14. Body parameters Parameter Type Description body Profile schema Table 4.15. HTTP responses HTTP code Reponse body 200 - OK Profile schema 201 - Created Profile schema 401 - Unauthorized Empty 4.2.4. /apis/tuned.openshift.io/v1/namespaces/{namespace}/profiles/{name}/status Table 4.16. Global path parameters Parameter Type Description name string name of the Profile HTTP method GET Description read status of the specified Profile Table 4.17. 
HTTP responses HTTP code Reponse body 200 - OK Profile schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Profile Table 4.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.19. HTTP responses HTTP code Reponse body 200 - OK Profile schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Profile Table 4.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.21. Body parameters Parameter Type Description body Profile schema Table 4.22. HTTP responses HTTP code Reponse body 200 - OK Profile schema 201 - Created Profile schema 401 - Unauthorized Empty
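The endpoints above are usually exercised through oc rather than with raw REST calls. The following is a minimal sketch; it assumes that Profile objects are created by the Cluster Node Tuning Operator in the openshift-cluster-node-tuning-operator namespace and are named after the node they apply to, so adjust the namespace and object name to your cluster. Remember that the status field is for internal use only.

```
# List Profile objects in all namespaces
oc get profiles.tuned.openshift.io -A

# Read one Profile, including the status.tunedProfile reported for the node
oc get profiles.tuned.openshift.io <node-name> \
    -n openshift-cluster-node-tuning-operator -o yaml
```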
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/node_apis/profile-tuned-openshift-io-v1
11.4. Enabling and Disabling User Accounts
11.4. Enabling and Disabling User Accounts The administrator can disable and enable active user accounts. Disabling a user account deactivates the account. Disabled user accounts cannot be used to authenticate. A user whose account has been disabled cannot log into IdM and cannot use IdM services, such as Kerberos, or perform any tasks. Disabled user accounts still exist within IdM and all of the associated information remains unchanged. Unlike preserved user accounts, disabled user accounts remain in the active state. Therefore, they are displayed in the output of the ipa user-find command. For example: Any disabled user account can be enabled again. Note After disabling a user account, existing connections remain valid until the user's Kerberos TGT and other tickets expire. After the ticket expires, the user will not be able to renew it. Enabling and Disabling User Accounts in the Web UI Select the Identity → Users tab. From the Active users list, select the required user or users, and then click Disable or Enable . Figure 11.12. Disabling or Enabling a User Account Disabling and Enabling User Accounts from the Command Line To disable a user account, use the ipa user-disable command. To enable a user account, use the ipa user-enable command.
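If you want to confirm the result from the command line, the following is a minimal sketch using the placeholder login user_login; the Account disabled attribute shown in the ipa user-find example above reflects the current state of the account.

```
# Disable the account, then confirm its state
ipa user-disable user_login
ipa user-find user_login | grep "Account disabled"

# Re-enable the account when needed
ipa user-enable user_login
```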
[ "ipa user-find User login: user First name: User Last name: User Home directory: /home/user Login shell: /bin/sh UID: 1453200009 GID: 1453200009 Account disabled: True Password: False Kerberos keys available: False", "ipa user-disable user_login ---------------------------- Disabled user account \"user_login\" ----------------------------", "ipa user-enable user_login ---------------------------- Enabled user account \"user_login\" ----------------------------" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/activating_and_deactivating_user_accounts
Chapter 11. Using bound service account tokens
Chapter 11. Using bound service account tokens You can use bound service account tokens, which improves the ability to integrate with cloud provider identity access management (IAM) services, such as Red Hat OpenShift Service on AWS on AWS IAM or Google Cloud Platform IAM. 11.1. About bound service account tokens You can use bound service account tokens to limit the scope of permissions for a given service account token. These tokens are audience and time-bound. This facilitates the authentication of a service account to an IAM role and the generation of temporary credentials mounted to a pod. You can request bound service account tokens by using volume projection and the TokenRequest API. 11.2. Configuring bound service account tokens using volume projection You can configure pods to request bound service account tokens by using volume projection. Prerequisites You have access to the cluster as a user with the dedicated-admin role. You have created a service account. This procedure assumes that the service account is named build-robot . Procedure Configure a pod to use a bound service account token by using volume projection. Create a file called pod-projected-svc-token.yaml with the following contents: apiVersion: v1 kind: Pod metadata: name: nginx spec: securityContext: runAsNonRoot: true 1 seccompProfile: type: RuntimeDefault 2 containers: - image: nginx name: nginx volumeMounts: - mountPath: /var/run/secrets/tokens name: vault-token securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] serviceAccountName: build-robot 3 volumes: - name: vault-token projected: sources: - serviceAccountToken: path: vault-token 4 expirationSeconds: 7200 5 audience: vault 6 1 Prevents containers from running as root to minimize compromise risks. 2 Sets the default seccomp profile, limiting to essential system calls, to reduce risks. 3 A reference to an existing service account. 4 The path relative to the mount point of the file to project the token into. 5 Optionally set the expiration of the service account token, in seconds. The default value is 3600 seconds (1 hour), and this value must be at least 600 seconds (10 minutes). The kubelet starts trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours. 6 Optionally set the intended audience of the token. The recipient of a token should verify that the recipient identity matches the audience claim of the token, and should otherwise reject the token. The audience defaults to the identifier of the API server. Note In order to prevent unexpected failure, Red Hat OpenShift Service on AWS overrides the expirationSeconds value to be one year from the initial token generation with the --service-account-extend-token-expiration default of true . You cannot change this setting. Create the pod: USD oc create -f pod-projected-svc-token.yaml The kubelet requests and stores the token on behalf of the pod, makes the token available to the pod at a configurable file path, and refreshes the token as it approaches expiration. The application that uses the bound token must handle reloading the token when it rotates. The kubelet rotates the token if it is older than 80 percent of its time to live, or if the token is older than 24 hours. 11.3. Creating bound service account tokens outside the pod Prerequisites You have created a service account. This procedure assumes that the service account is named build-robot . 
Procedure Create the bound service account token outside the pod by running the following command: USD oc create token build-robot Example output eyJhbGciOiJSUzI1NiIsImtpZCI6IkY2M1N4MHRvc2xFNnFSQlA4eG9GYzVPdnN3NkhIV0tRWmFrUDRNcWx4S0kifQ.eyJhdWQiOlsiaHR0cHM6Ly9pc3N1ZXIyLnRlc3QuY29tIiwiaHR0cHM6Ly9pc3N1ZXIxLnRlc3QuY29tIiwiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjIl0sImV4cCI6MTY3OTU0MzgzMCwiaWF0IjoxNjc5NTQwMjMwLCJpc3MiOiJodHRwczovL2lzc3VlcjIudGVzdC5jb20iLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImRlZmF1bHQiLCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoidGVzdC1zYSIsInVpZCI6ImM3ZjA4MjkwLWIzOTUtNGM4NC04NjI4LTMzMTM1NTVhNWY1OSJ9fSwibmJmIjoxNjc5NTQwMjMwLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDp0ZXN0LXNhIn0.WyAOPvh1BFMUl3LNhBCrQeaB5wSynbnCfojWuNNPSilT4YvFnKibxwREwmzHpV4LO1xOFZHSi6bXBOmG_o-m0XNDYL3FrGHd65mymiFyluztxa2lgHVxjw5reIV5ZLgNSol3Y8bJqQqmNg3rtQQWRML2kpJBXdDHNww0E5XOypmffYkfkadli8lN5QQD-MhsCbiAF8waCYs8bj6V6Y7uUKTcxee8sCjiRMVtXKjQtooERKm-CH_p57wxCljIBeM89VdaR51NJGued4hVV5lxvVrYZFu89lBEAq4oyQN_d6N1vBWGXQMyoihnt_fQjn-NfnlJWk-3NSZDIluDJAv7e-MTEk3geDrHVQKNEzDei2-Un64hSzb-n1g1M0Vn0885wQBQAePC9UlZm8YZlMNk1tq6wIUKQTMv3HPfi5HtBRqVc2eVs0EfMX4-x-PHhPCasJ6qLJWyj6DvyQ08dP4DW_TWZVGvKlmId0hzwpg59TTcLR0iCklSEJgAVEEd13Aa_M0-faD11L3MhUGxw0qxgOsPczdXUsolSISbefs7OKymzFSIkTAn9sDQ8PHMOsuyxsK8vzfrR-E0z7MAeguZ2kaIY7cZqbN6WFy0caWgx46hrKem9vCKALefElRYbCg3hcBmowBcRTOqaFHLNnHghhU1LaRpoFzH7OUarqX9SGQ Additional resources Creating service accounts
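The token written by the kubelet (and the output of oc create token) is a JSON Web Token, so you can decode its payload to confirm the audience and expiry before wiring it into an application. The following is a minimal sketch, not part of the official procedure: it assumes the nginx pod and vault-token projection from the earlier example, a bash shell, and that the jq utility is available for readable output.
# Read the projected token from inside the pod; the path matches the volumeMount above.
TOKEN=$(oc exec nginx -- cat /var/run/secrets/tokens/vault-token)
# A JWT has three base64url segments; decode the middle segment to inspect the claims.
PAYLOAD=$(echo "$TOKEN" | cut -d. -f2 | tr '_-' '/+')
while [ $(( ${#PAYLOAD} % 4 )) -ne 0 ]; do PAYLOAD="${PAYLOAD}="; done
echo "$PAYLOAD" | base64 -d | jq '{aud, iss, exp}'
# The kubelet rewrites the projected file before it expires, so applications
# should re-read the file rather than caching the token once at startup.
The same decoding works on a token minted with oc create token; recent clients also accept --audience and --duration flags for that command, but confirm their availability against your oc version before relying on them.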
[ "apiVersion: v1 kind: Pod metadata: name: nginx spec: securityContext: runAsNonRoot: true 1 seccompProfile: type: RuntimeDefault 2 containers: - image: nginx name: nginx volumeMounts: - mountPath: /var/run/secrets/tokens name: vault-token securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] serviceAccountName: build-robot 3 volumes: - name: vault-token projected: sources: - serviceAccountToken: path: vault-token 4 expirationSeconds: 7200 5 audience: vault 6", "oc create -f pod-projected-svc-token.yaml", "oc create token build-robot", "eyJhbGciOiJSUzI1NiIsImtpZCI6IkY2M1N4MHRvc2xFNnFSQlA4eG9GYzVPdnN3NkhIV0tRWmFrUDRNcWx4S0kifQ.eyJhdWQiOlsiaHR0cHM6Ly9pc3N1ZXIyLnRlc3QuY29tIiwiaHR0cHM6Ly9pc3N1ZXIxLnRlc3QuY29tIiwiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjIl0sImV4cCI6MTY3OTU0MzgzMCwiaWF0IjoxNjc5NTQwMjMwLCJpc3MiOiJodHRwczovL2lzc3VlcjIudGVzdC5jb20iLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImRlZmF1bHQiLCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoidGVzdC1zYSIsInVpZCI6ImM3ZjA4MjkwLWIzOTUtNGM4NC04NjI4LTMzMTM1NTVhNWY1OSJ9fSwibmJmIjoxNjc5NTQwMjMwLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDp0ZXN0LXNhIn0.WyAOPvh1BFMUl3LNhBCrQeaB5wSynbnCfojWuNNPSilT4YvFnKibxwREwmzHpV4LO1xOFZHSi6bXBOmG_o-m0XNDYL3FrGHd65mymiFyluztxa2lgHVxjw5reIV5ZLgNSol3Y8bJqQqmNg3rtQQWRML2kpJBXdDHNww0E5XOypmffYkfkadli8lN5QQD-MhsCbiAF8waCYs8bj6V6Y7uUKTcxee8sCjiRMVtXKjQtooERKm-CH_p57wxCljIBeM89VdaR51NJGued4hVV5lxvVrYZFu89lBEAq4oyQN_d6N1vBWGXQMyoihnt_fQjn-NfnlJWk-3NSZDIluDJAv7e-MTEk3geDrHVQKNEzDei2-Un64hSzb-n1g1M0Vn0885wQBQAePC9UlZm8YZlMNk1tq6wIUKQTMv3HPfi5HtBRqVc2eVs0EfMX4-x-PHhPCasJ6qLJWyj6DvyQ08dP4DW_TWZVGvKlmId0hzwpg59TTcLR0iCklSEJgAVEEd13Aa_M0-faD11L3MhUGxw0qxgOsPczdXUsolSISbefs7OKymzFSIkTAn9sDQ8PHMOsuyxsK8vzfrR-E0z7MAeguZ2kaIY7cZqbN6WFy0caWgx46hrKem9vCKALefElRYbCg3hcBmowBcRTOqaFHLNnHghhU1LaRpoFzH7OUarqX9SGQ" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/authentication_and_authorization/bound-service-account-tokens
Chapter 12. The NVIDIA GPU administration dashboard
Chapter 12. The NVIDIA GPU administration dashboard 12.1. Introduction The OpenShift Console NVIDIA GPU plugin is a dedicated administration dashboard for NVIDIA GPU usage visualization in the OpenShift Container Platform (OCP) Console. The visualizations in the administration dashboard provide guidance on how to best optimize GPU resources in clusters, such as when a GPU is under- or over-utilized. The OpenShift Console NVIDIA GPU plugin works as a remote bundle for the OCP console. To run the plugin the OCP console must be running. 12.2. Installing the NVIDIA GPU administration dashboard Install the NVIDIA GPU plugin by using Helm on the OpenShift Container Platform (OCP) Console to add GPU capabilities. The OpenShift Console NVIDIA GPU plugin works as a remote bundle for the OCP console. To run the OpenShift Console NVIDIA GPU plugin an instance of the OCP console must be running. Prerequisites Red Hat OpenShift 4.11+ NVIDIA GPU operator Helm Procedure Use the following procedure to install the OpenShift Console NVIDIA GPU plugin. Add the Helm repository: USD helm repo add rh-ecosystem-edge https://rh-ecosystem-edge.github.io/console-plugin-nvidia-gpu USD helm repo update Install the Helm chart in the default NVIDIA GPU operator namespace: USD helm install -n nvidia-gpu-operator console-plugin-nvidia-gpu rh-ecosystem-edge/console-plugin-nvidia-gpu Example output NAME: console-plugin-nvidia-gpu LAST DEPLOYED: Tue Aug 23 15:37:35 2022 NAMESPACE: nvidia-gpu-operator STATUS: deployed REVISION: 1 NOTES: View the Console Plugin NVIDIA GPU deployed resources by running the following command: USD oc -n {{ .Release.Namespace }} get all -l app.kubernetes.io/name=console-plugin-nvidia-gpu Enable the plugin by running the following command: # Check if a plugins field is specified USD oc get consoles.operator.openshift.io cluster --output=jsonpath="{.spec.plugins}" # if not, then run the following command to enable the plugin USD oc patch consoles.operator.openshift.io cluster --patch '{ "spec": { "plugins": ["console-plugin-nvidia-gpu"] } }' --type=merge # if yes, then run the following command to enable the plugin USD oc patch consoles.operator.openshift.io cluster --patch '[{"op": "add", "path": "/spec/plugins/-", "value": "console-plugin-nvidia-gpu" }]' --type=json # add the required DCGM Exporter metrics ConfigMap to the existing NVIDIA operator ClusterPolicy CR: oc patch clusterpolicies.nvidia.com gpu-cluster-policy --patch '{ "spec": { "dcgmExporter": { "config": { "name": "console-plugin-nvidia-gpu" } } } }' --type=merge The dashboard relies mostly on Prometheus metrics exposed by the NVIDIA DCGM Exporter, but the default exposed metrics are not enough for the dashboard to render the required gauges. Therefore, the DGCM exporter is configured to expose a custom set of metrics, as shown here. apiVersion: v1 data: dcgm-metrics.csv: | DCGM_FI_PROF_GR_ENGINE_ACTIVE, gauge, gpu utilization. DCGM_FI_DEV_MEM_COPY_UTIL, gauge, mem utilization. DCGM_FI_DEV_ENC_UTIL, gauge, enc utilization. DCGM_FI_DEV_DEC_UTIL, gauge, dec utilization. DCGM_FI_DEV_POWER_USAGE, gauge, power usage. DCGM_FI_DEV_POWER_MGMT_LIMIT_MAX, gauge, power mgmt limit. DCGM_FI_DEV_GPU_TEMP, gauge, gpu temp. DCGM_FI_DEV_SM_CLOCK, gauge, sm clock. DCGM_FI_DEV_MAX_SM_CLOCK, gauge, max sm clock. DCGM_FI_DEV_MEM_CLOCK, gauge, mem clock. DCGM_FI_DEV_MAX_MEM_CLOCK, gauge, max mem clock. 
kind: ConfigMap metadata: annotations: meta.helm.sh/release-name: console-plugin-nvidia-gpu meta.helm.sh/release-namespace: nvidia-gpu-operator creationTimestamp: "2022-10-26T19:46:41Z" labels: app.kubernetes.io/component: console-plugin-nvidia-gpu app.kubernetes.io/instance: console-plugin-nvidia-gpu app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: console-plugin-nvidia-gpu app.kubernetes.io/part-of: console-plugin-nvidia-gpu app.kubernetes.io/version: latest helm.sh/chart: console-plugin-nvidia-gpu-0.2.3 name: console-plugin-nvidia-gpu namespace: nvidia-gpu-operator resourceVersion: "19096623" uid: 96cdf700-dd27-437b-897d-5cbb1c255068 Install the ConfigMap and edit the NVIDIA Operator ClusterPolicy CR to add that ConfigMap in the DCGM exporter configuration. The installation of the ConfigMap is done by the new version of the Console Plugin NVIDIA GPU Helm Chart, but the ClusterPolicy CR editing is done by the user. View the deployed resources: USD oc -n nvidia-gpu-operator get all -l app.kubernetes.io/name=console-plugin-nvidia-gpu Example output NAME READY STATUS RESTARTS AGE pod/console-plugin-nvidia-gpu-7dc9cfb5df-ztksx 1/1 Running 0 2m6s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/console-plugin-nvidia-gpu ClusterIP 172.30.240.138 <none> 9443/TCP 2m6s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/console-plugin-nvidia-gpu 1/1 1 1 2m6s NAME DESIRED CURRENT READY AGE replicaset.apps/console-plugin-nvidia-gpu-7dc9cfb5df 1 1 1 2m6s 12.3. Using the NVIDIA GPU administration dashboard After deploying the OpenShift Console NVIDIA GPU plugin, log in to the OpenShift Container Platform web console using your login credentials to access the Administrator perspective. To view the changes, you need to refresh the console to see the GPUs tab under Compute . 12.3.1. Viewing the cluster GPU overview You can view the status of your cluster GPUs in the Overview page by selecting Overview in the Home section. The Overview page provides information about the cluster GPUs, including: Details about the GPU providers Status of the GPUs Cluster utilization of the GPUs 12.3.2. Viewing the GPUs dashboard You can view the NVIDIA GPU administration dashboard by selecting GPUs in the Compute section of the OpenShift Console. Charts on the GPUs dashboard include: GPU utilization : Shows the ratio of time the graphics engine is active and is based on the DCGM_FI_PROF_GR_ENGINE_ACTIVE metric. Memory utilization : Shows the memory being used by the GPU and is based on the DCGM_FI_DEV_MEM_COPY_UTIL metric. Encoder utilization : Shows the video encoder rate of utilization and is based on the DCGM_FI_DEV_ENC_UTIL metric. Decoder utilization : Shows the video decoder rate of utilization and is based on the DCGM_FI_DEV_DEC_UTIL metric. Power consumption : Shows the average power usage of the GPU in Watts and is based on the DCGM_FI_DEV_POWER_USAGE metric. GPU temperature : Shows the current GPU temperature and is based on the DCGM_FI_DEV_GPU_TEMP metric. The maximum is set to 110 , which is an empirical number, as the actual number is not exposed via a metric. GPU clock speed : Shows the average clock speed utilized by the GPU and is based on the DCGM_FI_DEV_SM_CLOCK metric. Memory clock speed : Shows the average clock speed utilized by memory and is based on the DCGM_FI_DEV_MEM_CLOCK metric. 12.3.3. Viewing the GPU Metrics You can view the metrics for the GPUs by selecting the metric at the bottom of each GPU to view the Metrics page.
On the Metrics page, you can: Specify a refresh rate for the metrics Add, run, disable, and delete queries Insert Metrics Reset the zoom view
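Before relying on the dashboard gauges, it can help to confirm that the console plugin is enabled and that the ClusterPolicy CR references the custom DCGM Exporter metrics ConfigMap described in the installation section. The following read-only checks are a sketch based on the resource names used above (console-plugin-nvidia-gpu, gpu-cluster-policy, nvidia-gpu-operator); adjust the names if your deployment differs.
# The plugin name should appear in the console operator's plugin list.
oc get consoles.operator.openshift.io cluster -o jsonpath='{.spec.plugins}{"\n"}'
# The ClusterPolicy should point its DCGM Exporter configuration at the custom ConfigMap.
oc get clusterpolicies.nvidia.com gpu-cluster-policy -o jsonpath='{.spec.dcgmExporter.config.name}{"\n"}'
# Review the custom metrics list that the dashboard charts are built on.
oc get configmap console-plugin-nvidia-gpu -n nvidia-gpu-operator -o yaml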
[ "helm repo add rh-ecosystem-edge https://rh-ecosystem-edge.github.io/console-plugin-nvidia-gpu", "helm repo update", "helm install -n nvidia-gpu-operator console-plugin-nvidia-gpu rh-ecosystem-edge/console-plugin-nvidia-gpu", "NAME: console-plugin-nvidia-gpu LAST DEPLOYED: Tue Aug 23 15:37:35 2022 NAMESPACE: nvidia-gpu-operator STATUS: deployed REVISION: 1 NOTES: View the Console Plugin NVIDIA GPU deployed resources by running the following command: oc -n {{ .Release.Namespace }} get all -l app.kubernetes.io/name=console-plugin-nvidia-gpu Enable the plugin by running the following command: Check if a plugins field is specified oc get consoles.operator.openshift.io cluster --output=jsonpath=\"{.spec.plugins}\" if not, then run the following command to enable the plugin oc patch consoles.operator.openshift.io cluster --patch '{ \"spec\": { \"plugins\": [\"console-plugin-nvidia-gpu\"] } }' --type=merge if yes, then run the following command to enable the plugin oc patch consoles.operator.openshift.io cluster --patch '[{\"op\": \"add\", \"path\": \"/spec/plugins/-\", \"value\": \"console-plugin-nvidia-gpu\" }]' --type=json add the required DCGM Exporter metrics ConfigMap to the existing NVIDIA operator ClusterPolicy CR: patch clusterpolicies.nvidia.com gpu-cluster-policy --patch '{ \"spec\": { \"dcgmExporter\": { \"config\": { \"name\": \"console-plugin-nvidia-gpu\" } } } }' --type=merge", "apiVersion: v1 data: dcgm-metrics.csv: | DCGM_FI_PROF_GR_ENGINE_ACTIVE, gauge, gpu utilization. DCGM_FI_DEV_MEM_COPY_UTIL, gauge, mem utilization. DCGM_FI_DEV_ENC_UTIL, gauge, enc utilization. DCGM_FI_DEV_DEC_UTIL, gauge, dec utilization. DCGM_FI_DEV_POWER_USAGE, gauge, power usage. DCGM_FI_DEV_POWER_MGMT_LIMIT_MAX, gauge, power mgmt limit. DCGM_FI_DEV_GPU_TEMP, gauge, gpu temp. DCGM_FI_DEV_SM_CLOCK, gauge, sm clock. DCGM_FI_DEV_MAX_SM_CLOCK, gauge, max sm clock. DCGM_FI_DEV_MEM_CLOCK, gauge, mem clock. DCGM_FI_DEV_MAX_MEM_CLOCK, gauge, max mem clock. kind: ConfigMap metadata: annotations: meta.helm.sh/release-name: console-plugin-nvidia-gpu meta.helm.sh/release-namespace: nvidia-gpu-operator creationTimestamp: \"2022-10-26T19:46:41Z\" labels: app.kubernetes.io/component: console-plugin-nvidia-gpu app.kubernetes.io/instance: console-plugin-nvidia-gpu app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: console-plugin-nvidia-gpu app.kubernetes.io/part-of: console-plugin-nvidia-gpu app.kubernetes.io/version: latest helm.sh/chart: console-plugin-nvidia-gpu-0.2.3 name: console-plugin-nvidia-gpu namespace: nvidia-gpu-operator resourceVersion: \"19096623\" uid: 96cdf700-dd27-437b-897d-5cbb1c255068", "oc -n nvidia-gpu-operator get all -l app.kubernetes.io/name=console-plugin-nvidia-gpu", "NAME READY STATUS RESTARTS AGE pod/console-plugin-nvidia-gpu-7dc9cfb5df-ztksx 1/1 Running 0 2m6s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/console-plugin-nvidia-gpu ClusterIP 172.30.240.138 <none> 9443/TCP 2m6s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/console-plugin-nvidia-gpu 1/1 1 1 2m6s NAME DESIRED CURRENT READY AGE replicaset.apps/console-plugin-nvidia-gpu-7dc9cfb5df 1 1 1 2m6s" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/monitoring/nvidia-gpu-admin-dashboard
Chapter 8. Migrating upstream Keycloak to Red Hat build of Keycloak 22.0
Chapter 8. Migrating upstream Keycloak to Red Hat build of Keycloak 22.0 Starting with version 22, minimal differences exist between Red Hat build of Keycloak and upstream Keycloak. The following differences exist: For upstream Keycloak, the distribution artifacts are on keycloak.org ; for Red Hat build of Keycloak, the distribution artifacts are on the Red Hat customer portal . Oracle and MSSQL database drivers are bundled with upstream Keycloak, but not bundled with Red Hat build of Keycloak. See Configuring the database for detailed steps on how to install those drivers. The GELF log handler is not available in Red Hat build of Keycloak. The migration process depends on the version of Keycloak to be migrated and the type of Keycloak installation. See the following sections for details. 8.1. Matching Keycloak version The migration process depends on the version of Keycloak to be migrated. If your Keycloak project version matches the Red Hat build of Keycloak version, migrate Keycloak by using the Red Hat build of Keycloak artifacts on the Red Hat customer portal . If your Keycloak project version is an older version, use the Keycloak Upgrading Guide to upgrade Keycloak to match the Red Hat build of Keycloak version. Then, migrate Keycloak using the artifacts on the Red Hat customer portal . If your Keycloak project version is greater than the Red Hat build of Keycloak version, you cannot migrate to Red Hat build of Keycloak. Instead, create a new deployment of Red Hat build of Keycloak or wait for a future Red Hat build of Keycloak release. 8.2. Migration based on type of Keycloak installation Once you have a matching version of Keycloak, migrate Keycloak based on the type of installation. If you installed Keycloak from a ZIP distribution, migrate Keycloak by using the artifacts on the Red Hat customer portal . If you deployed the Keycloak Operator, uninstall it and install the Red Hat build of Keycloak Operator by using the Operator guide . The CRs are compatible between upstream Keycloak and Red Hat build of Keycloak. If you created a custom server container image, rebuild it by using the Red Hat build of Keycloak image. See Running Keycloak in a Container .
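Because the migration path depends on whether your Keycloak version matches the Red Hat build of Keycloak version, it helps to confirm the installed version before you start. The commands below are an illustrative sketch only: the /opt/keycloak path, the kc.sh --version option, and the image naming are assumptions to verify against your own installation and the product documentation.
# ZIP distribution: print the server version from the installation directory.
/opt/keycloak/bin/kc.sh --version
# Container image: the tag on the image in use usually carries the version.
podman images | grep -i keycloak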
null
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/migration_guide/migrating-keycloak
Performing security operations
Performing security operations Red Hat OpenStack Services on OpenShift 18.0 Operating security services in a Red Hat OpenStack Services on OpenShift environment OpenStack Documentation Team [email protected]
[ "openstack role add --user user1 --user-domain Default --project demo --project-domain Default <role>", "openstack role add --user user1 --user-domain Default --system all <role>", "openstack flavor list --os-cloud <cloud_name>", "`export OS_CLOUD=<cloud_name>`", "openstack network create internal-network", "openstack network create internal-network --project testing", "openstack role list", "openstack role show admin", "openstack role add --user user1 --user-domain Default --project demo --project-domain Default <role>", "openstack role add --user user1 --user-domain Default --system all <role>", "openstack quota show [PROJECT-ID]", "openstack quota show f0ba064c24ca4176ac55a45635ca561f +-----------------------+-------+ | Resource | Limit | +-----------------------+-------+ | cores | 20 | | instances | 10 | | ram | 51200 | | volumes | 10 | | snapshots | 10 | | gigabytes | 1000 | | backups | 10 | | volumes___DEFAULT__ | -1 | | gigabytes___DEFAULT__ | -1 | | snapshots___DEFAULT__ | -1 | | groups | 10 | | trunk | -1 | | networks | 100 | | ports | 500 | | rbac_policies | 10 | | routers | 10 | | subnets | 100 | | subnet_pools | -1 | | fixed-ips | -1 | | injected-file-size | 10240 | | injected-path-size | 255 | | injected-files | 5 | | key-pairs | 100 | | properties | 128 | | server-groups | 10 | | server-group-members | 10 | | floating-ips | 50 | | secgroup-rules | 100 | | secgroups | 10 | | backup-gigabytes | 1000 | | per-volume-gigabytes | -1 | +-----------------------+-------+", "openstack domain create corp", "+-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | enabled | True | | id | 6059b0b93b8d4ddb9d31ea47de93f0ab | | name | corp | | options | {} | | tags | [] | +-------------+----------------------------------+", "openstack project create private-cloud --domain corp", "+-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | 6059b0b93b8d4ddb9d31ea47de93f0ab | | enabled | True | | id | e86f182e16e24441b71c7296585c2e21 | | is_domain | False | | name | private-cloud | | options | {} | | parent_id | 6059b0b93b8d4ddb9d31ea47de93f0ab | | tags | [] | +-------------+----------------------------------+", "openstack project create dev --parent private-cloud --domain corp", "+-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | 6059b0b93b8d4ddb9d31ea47de93f0ab | | enabled | True | | id | 71f05cdd5b8a45d4b1a928bfe7c2e20d | | is_domain | False | | name | dev | | options | {} | | parent_id | e86f182e16e24441b71c7296585c2e21 | | tags | [] | +-------------+----------------------------------+", "openstack project create qa --parent private-cloud --domain corp", "+-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | 6059b0b93b8d4ddb9d31ea47de93f0ab | | enabled | True | | id | 03801ad929b84c8fb091bf3248e6db68 | | is_domain | False | | name | qa | | options | {} | | parent_id | e86f182e16e24441b71c7296585c2e21 | | tags | [] | +-------------+----------------------------------+", "openstack role assignment list --project private-cloud", "openstack role list", "+----------------------------------+-----------------+ | ID | Name | +----------------------------------+-----------------+ | 01d92614cd224a589bdf3b171afc5488 | admin | | 
034e4620ed3d45969dfe8992af001514 | member | | 0aa377a807df4149b0a8c69b9560b106 | ResellerAdmin | | cfea5760d9c948e7b362abc1d06e557f | reader | | d5cb454559e44b47aaa8821df4e11af1 | swiftoperator | | ef3d3f510a474d6c860b4098ad658a29 | service | +----------------------------------+-----------------+", "openstack role add --user user1 --user-domain corp --project private-cloud member", "openstack role add --user user1 --user-domain corp --project private-cloud member --inherited", "openstack role assignment list --effective --user user1 --user-domain corp", "+----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+ | Role | User | Group | Project | Domain | Inherited | +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+ | 034e4620ed3d45969dfe8992af001514 | 10b5b34df21d485ca044433818d134be | | c50d5cf4fe2e4929b98af5abdec3fd64 | | False | | 034e4620ed3d45969dfe8992af001514 | 10b5b34df21d485ca044433818d134be | | 11fccd8369824baa9fc87cf01023fd87 | | True | | 034e4620ed3d45969dfe8992af001514 | 10b5b34df21d485ca044433818d134be | | b4f1d6f59ddf413fa040f062a0234871 | | True | +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+", "openstack role remove --user user1 --project private-cloud member", "openstack role assignment list --effective --user user1 --user-domain corp", "+----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+ | Role | User | Group | Project | Domain | Inherited | +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+ | 034e4620ed3d45969dfe8992af001514 | 10b5b34df21d485ca044433818d134be | | 11fccd8369824baa9fc87cf01023fd87 | | True | | 034e4620ed3d45969dfe8992af001514 | 10b5b34df21d485ca044433818d134be | | b4f1d6f59ddf413fa040f062a0234871 | | True | +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+", "openstack role remove --user user1 --project private-cloud member --inherited", "openstack role assignment list --effective --user user1 --user-domain corp", "openstack domain list", "+----------------------------------+------------------+---------+--------------------+ | ID | Name | Enabled | Description | +----------------------------------+------------------+---------+--------------------+ | 3abefa6f32c14db9a9703bf5ce6863e1 | TestDomain | True | | | 69436408fdcb44ab9e111691f8e9216d | corp | True | | | a4f61a8feb8d4253b260054c6aa41adb | federated_domain | True | | | default | Default | True | The default domain | +----------------------------------+------------------+---------+--------------------+", "openstack domain create TestDomain", "+-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | enabled | True | | id | 3abefa6f32c14db9a9703bf5ce6863e1 | | name | TestDomain | +-------------+----------------------------------+", "openstack domain show TestDomain", "+-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | enabled | True | | id | 3abefa6f32c14db9a9703bf5ce6863e1 | | name | TestDomain | 
+-------------+----------------------------------+", "openstack domain set TestDomain --disable", "openstack domain show TestDomain", "+-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | enabled | False | | id | 3abefa6f32c14db9a9703bf5ce6863e1 | | name | TestDomain | +-------------+----------------------------------+", "openstack domain set TestDomain --enable", "openstack project create AppCreds", "openstack user create --project AppCreds --password-prompt AppCredsUser", "openstack role add --user AppCredsUser --project AppCreds member", "This is a clouds.yaml file, which can be used by OpenStack tools as a source of configuration on how to connect to a cloud. If this is your only cloud, just put this file in ~/.config/openstack/clouds.yaml and tools like python-openstackclient will just work with no further config. (You will need to add your password to the auth section) If you have more than one cloud account, add the cloud entry to the clouds section of your existing file and you can refer to them by name with OS_CLOUD=openstack or --os-cloud=openstack clouds: openstack: auth: auth_url: http://10.0.0.10:5000/v3 application_credential_id: \"6d141f23732b498e99db8186136c611b\" application_credential_secret: \"<example secret value>\" region_name: \"regionOne\" interface: \"public\" identity_api_version: 3 auth_type: \"v3applicationcredential\"", "openstack --os-cloud=openstack token issue +------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | expires | 2018-08-29T05:37:29+0000 | | id | gAAAAABbhiMJ4TxxFlTMdsYJpfStsGotPrns0lnpvJq9ILtdi-NKqisWBeNiJlUXwmnoGQDh2CMyK9OeTsuEXnJNmFfKjxiHWmcQVYzAhMKo6_QMUtu_Qm6mtpzYYHBrUGboa_Ay0LBuFDtsjtgtvJ-r8G3TsJMowbKF-yo--O_XLhERU_QQVl3hl8zmMRdmLh_P9Cbhuolt | | project_id | 1a74eabbf05c41baadd716179bb9e1da | | user_id | ef679eeddfd14f8b86becfd7e1dc84f2 | +------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+", "[keystone_authtoken] auth_url = http://10.0.0.10:5000/v3 auth_type = v3applicationcredential application_credential_id = \"6cb5fa6a13184e6fab65ba2108adf50c\" application_credential_secret = \"<example password>\"", "openstack application credential create --description \"App Creds - All roles\" AppCredsUser +--------------+----------------------------------------------------------------------------------------+ | Field | Value | +--------------+----------------------------------------------------------------------------------------+ | description | App Creds - All roles | | expires_at | None | | id | fc17651c2c114fd6813f86fdbb430053 | | name | AppCredsUser | | project_id | 507663d0cfe244f8bc0694e6ed54d886 | | roles | member reader admin | | secret | fVnqa6I_XeRDDkmQnB5lx361W1jHtOtw3ci_mf_tOID-09MrPAzkU7mv-by8ykEhEa1QLPFJLNV4cS2Roo9lOg | | unrestricted | False | +--------------+----------------------------------------------------------------------------------------+", "openstack application credential 
create --description \"App Creds - Member\" --role member AppCredsUser +--------------+----------------------------------------------------------------------------------------+ | Field | Value | +--------------+----------------------------------------------------------------------------------------+ | description | App Creds - Member | | expires_at | None | | id | e21e7f4b578240f79814085a169c9a44 | | name | AppCredsUser | | project_id | 507663d0cfe244f8bc0694e6ed54d886 | | roles | member | | secret | XCLVUTYIreFhpMqLVB5XXovs_z9JdoZWpdwrkaG1qi5GQcmBMUFG7cN2htzMlFe5T5mdPsnf5JMNbu0Ih-4aCg | | unrestricted | False | +--------------+----------------------------------------------------------------------------------------+", "openstack application credential delete AppCredsUser", "openstack application credential create --description \"App Creds 2 - Member\" --role member AppCred2", "openstack --os-cloud=openstack token issue", "+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | expires | 2018-08-29T05:37:29+0000 | | id | gAAAAABbhiMJ4TxxFlTMdsYJpfStsGotPrns0lnpvJq9ILtdi-NKqisWBeNiJlUXwmnoGQDh2CMyK9OeTsuEXnJNmFfKjxiHWmcQVYzAhMKo6_QMUtu_Qm6mtpzYYHBrUGboa_Ay0LBuFDtsjtgtvJ-r8G3TsJMowbKF-yo--O_XLhERU_QQVl3hl8zmMRdmLh_P9Cbhuolt | | project_id | 1a74eabbf05c41baadd716179bb9e1da | | user_id | ef679eeddfd14f8b86becfd7e1dc84f2 | +------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+", "openstack secret list", "+------------------------------------------------------------------------------------+------+---------------------------+--------+-------------------------------------------+-----------+------------+-------------+------+------------+ | Secret href | Name | Created | Status | Content types | Algorithm | Bit length | Secret type | Mode | Expiration | +------------------------------------------------------------------------------------+------+---------------------------+--------+-------------------------------------------+-----------+------------+-------------+------+------------+ | https://192.168.123.169:9311/v1/secrets/24845e6d-64a5-4071-ba99-0fdd1046172e | None | 2018-01-22T02:23:15+00:00 | ACTIVE | {u'default': u'application/octet-stream'} | aes | 256 | symmetric | None | None | +------------------------------------------------------------------------------------+------+---------------------------+--------+-------------------------------------------+-----------+------------+-------------+------+------------+", "openstack secret store --name testSecret --payload 'TestPayload'", "+---------------+------------------------------------------------------------------------------------+ | Field | Value | +---------------+------------------------------------------------------------------------------------+ | Secret href | https://192.168.123.163:9311/v1/secrets/ecc7b2a4-f0b0-47ba-b451-0f7d42bc1746 | | Name | testSecret | | Created | None | | Status | None | | Content types | None | | Algorithm | aes | | Bit length | 256 | | Secret type | 
opaque | | Mode | cbc | | Expiration | None | +---------------+------------------------------------------------------------------------------------+", "openstack secret update https://192.168.123.163:9311/v1/secrets/ca34a264-fd09-44a1-8856-c6e7116c3b16 'TestPayload-updated'", "openstack secret delete https://192.168.123.163:9311/v1/secrets/ecc7b2a4-f0b0-47ba-b451-0f7d42bc1746", "openstack secret order create --name swift_key --algorithm aes --mode ctr --bit-length 256 --payload-content-type=application/octet-stream key", "+----------------+-----------------------------------------------------------------------------------+ | Field | Value | +----------------+-----------------------------------------------------------------------------------+ | Order href | https://192.168.123.173:9311/v1/orders/043383fe-d504-42cf-a9b1-bc328d0b4832 | | Type | Key | | Container href | N/A | | Secret href | None | | Created | None | | Status | None | | Error code | None | | Error message | None | +----------------+-----------------------------------------------------------------------------------+", "openstack secret order get https://192.168.123.173:9311/v1/orders/043383fe-d504-42cf-a9b1-bc328d0b4832", "+----------------+------------------------------------------------------------------------------------+ | Field | Value | +----------------+------------------------------------------------------------------------------------+ | Order href | https://192.168.123.173:9311/v1/orders/043383fe-d504-42cf-a9b1-bc328d0b4832 | | Type | Key | | Container href | N/A | | Secret href | https://192.168.123.173:9311/v1/secrets/efcfec49-b9a3-4425-a9b6-5ba69cb18719 | | Created | 2018-01-24T04:24:33+00:00 | | Status | ACTIVE | | Error code | None | | Error message | None | +----------------+------------------------------------------------------------------------------------+", "openstack secret get https://192.168.123.173:9311/v1/secrets/efcfec49-b9a3-4425-a9b6-5ba69cb18719", "+---------------+------------------------------------------------------------------------------------+ | Field | Value | +---------------+------------------------------------------------------------------------------------+ | Secret href | https://192.168.123.173:9311/v1/secrets/efcfec49-b9a3-4425-a9b6-5ba69cb18719 | | Name | swift_key | | Created | 2018-01-24T04:24:33+00:00 | | Status | ACTIVE | | Content types | {u'default': u'application/octet-stream'} | | Algorithm | aes | | Bit length | 256 | | Secret type | symmetric | | Mode | ctr | | Expiration | None | +---------------+------------------------------------------------------------------------------------+", "openstack volume type create --encryption-provider nova.volume.encryptors.luks.LuksEncryptor --encryption-cipher aes-xts-plain64 --encryption-key-size 256 --encryption-control-location front-end LuksEncryptor-Template-256", "+-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | description | None | | encryption | cipher='aes-xts-plain64', control_location='front-end', encryption_id='9df604d0-8584-4ce8-b450-e13e6316c4d3', key_size='256', provider='nova.volume.encryptors.luks.LuksEncryptor' | | id | 78898a82-8f4c-44b2-a460-40a5da9e4d59 | | 
is_public | True | | name | LuksEncryptor-Template-256 | +-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+", "openstack volume create --size 1 --type LuksEncryptor-Template-256 'Encrypted-Test-Volume'", "+---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | attachments | [] | | availability_zone | nova | | bootable | false | | consistencygroup_id | None | | created_at | 2018-01-22T00:19:06.000000 | | description | None | | encrypted | True | | id | a361fd0b-882a-46cc-a669-c633630b5c93 | | migration_status | None | | multiattach | False | | name | Encrypted-Test-Volume | | properties | | | replication_status | None | | size | 1 | | snapshot_id | None | | source_volid | None | | status | creating | | type | LuksEncryptor-Template-256 | | updated_at | None | | user_id | 0e73cb3111614365a144e7f8f1a972af | +---------------------+--------------------------------------+", "cinder --os-volume-api-version 3.64 volume show Encrypted-Test-Volume", "+------------------------------+-------------------------------------+ |Property |Value | +------------------------------+-------------------------------------+ |attached_servers |[] | |attachment_ids |[] | |availability_zone |nova | |bootable |false | |cluster_name |None | |consistencygroup_id |None | |created_at |2022-07-28T17:35:26.000000 | |description |None | |encrypted |True | |encryption_key_id |0944b8a8-de09-4413-b2ed-38f6c4591dd4 | |group_id |None | |id |a0b51b97-0392-460a-abfa-093022a120f3 | |metadata | | |migration_status |None | |multiattach |False | |name |vol | |os-vol-host-attr:host |hostgroup@tripleo_iscsi#tripleo_iscsi| |os-vol-mig-status-attr:migstat|None | |os-vol-mig-status-attr:name_id|None | |os-vol-tenant-attr:tenant_id |a2071ece39b3440aa82395ff7707996f | |provider_id |None | |replication_status |None | |service_uuid |471f0805-072e-4256-b447-c7dd10ceb807 | |shared_targets |False | |size |1 | |snapshot_id |None | |source_volid |None | |status |available | |updated_at |2022-07-28T17:35:26.000000 | |user_id |ba311b5c2b8e438c951d1137333669d4 | |volume_type |LUKS | |volume_type_id |cc188ace-f73d-4af5-bf5a-d70ccc5a401c | +------------------------------+-------------------------------------+", "openstack secret list", "+------------------------------------------------------------------------------------+------+---------------------------+--------+-------------------------------------------+-----------+------------+-------------+------+------------+ | Secret href | Name | Created | Status | Content types | Algorithm | Bit length | Secret type | Mode | Expiration | +------------------------------------------------------------------------------------+------+---------------------------+--------+-------------------------------------------+-----------+------------+-------------+------+------------+ | https://192.168.123.169:9311/v1/secrets/0944b8a8-de09-4413-b2ed-38f6c4591dd4 | None | 2018-01-22T02:23:15+00:00 | ACTIVE | {u'default': u'application/octet-stream'} | aes | 256 | symmetric | None | None | +------------------------------------------------------------------------------------+------+---------------------------+--------+-------------------------------------------+-----------+------------+-------------+------+------------+", "openstack server add volume testInstance Encrypted-Test-Volume", "openssl 
genrsa -out private_key.pem 1024", "openssl rsa -pubout -in private_key.pem -out public_key.pem", "openssl req -new -key private_key.pem -out cert_request.csr", "openssl x509 -req -days 14 -in cert_request.csr -signkey private_key.pem -out new_cert.crt", "openstack secret store --name test --algorithm RSA --secret-type certificate --payload-content-type \"application/octet-stream\" --payload-content-encoding base64 --payload \"USD(base64 new_cert.crt)\"", "+---------------+-----------------------------------------------------------------------+ | Field | Value | +---------------+-----------------------------------------------------------------------+ | Secret href | http://127.0.0.1:9311/v1/secrets/cd7cc675-e573-419c-8fff-33a72734a243 | +---------------+-----------------------------------------------------------------------+", "echo <This is my image> > <myimage>", "openssl dgst -sha512 -sign private_key.pem -sigopt rsa_padding_mode:pss -out myimage.signature myimage", "base64 -w 0 myimage.signature > myimage.signature.b64", "image_signature=USD(cat myimage.signature.b64)", "openstack image-create --name <my_signed_image> --container-format bare --disk-format qcow2 --property img_signature=\"USDimage_signature\" --property img_signature_certificate_uuid=\"USDcert_uuid\" --property img_signature_hash_method='SHA-512' --property img_signature_key_type='RSA-PSS' < myimage", "openstack image save --file <local_file_name> <snapshot_image_name>", "openstack image set --property img_signature=\"USDimage_signature\" --property img_signature_certificate_uuid=\"<cd7cc675-e573-419c-8fff-33a72734a243>\" --property img_signature_hash_method=\"SHA-512\" --property img_signature_key_type=\"RSA-PSS\" <snapshot_image_id>", "rm <local_file_name>", "get secret osp-secret -o yaml | grep BarbicanSimpleCryptoKEK | awk '{print USD2}' | base64 -d > kek.txt", "DB_PASS=USD(oc get secret osp-secret -o yaml | grep DbRootPassword | awk '{print USD2}' | base64 -d)", "oc exec -it openstack-galera-0 -- mysqldump -u root -p\"USD{DB_PASS}\" barbican > barbican_db_backup.sql", "ll total 36 -rw-rw-r--. 1 tripleo-admin tripleo-admin 36715 Jun 19 18:31 barbican_db_backup.sql", "echo <barbican_key> | base64 YmFyYmljYW5fc2ltcGxlX2N5cHRvX2tlawo=", "oc edit secret osp-secret", ". 
BarbicanDatabasePassword: cGFzc3dvcmQK BarbicanPassword: cGFzc3dvcmQK BarbicanSimpleCryptoKEK: YmFyYmljYW5fc2ltcGxlX2N5cHRvX2tlawo= CeilometerPassword: cGFzc3dvcmQK CinderDatabasePassword: cGFzc3dvcmQK CinderPassword: cGFzc3dvcmQK DatabasePassword: cGFzc3dvcmQK .", "DB_PASS=USD(oc get secret osp-secret -o yaml | grep DbRootPassword | awk '{print USD2}' | base64 -d)", "oc exec openstack-galera-0 -- mysql -u root -p\"USD{DB_PASS}\" barbican < <sql_backup>", "Defaulted container \"galera\" out of: galera, mysql-bootstrap (init)", "oc exec -n openstack -t openstackclient -- openstack secret list +------------------------------------------------------------------------+------------+---------------------------+--------+-------------------------------------------+-----------+------------+-------------+------+------------+ | Secret href | Name | Created | Status | Content types | Algorithm | Bit length | Secret type | Mode | Expiration | +------------------------------------------------------------------------+------------+---------------------------+--------+-------------------------------------------+-----------+------------+-------------+------+------------+ | http://10.0.0.104:9311/v1/secrets/93f62cfd-e008-401f-be74-bf057c88b04a | testSecret | 2018-06-19T18:25:25+00:00 | ACTIVE | {u'default': u'text/plain'} | aes | 256 | opaque | cbc | None | | http://10.0.0.104:9311/v1/secrets/f664b5cf-5221-47e5-9887-608972a5fefb | swift_key | 2018-06-19T18:24:40+00:00 | ACTIVE | {u'default': u'application/octet-stream'} | aes | 256 | symmetric | ctr | None | +------------------------------------------------------------------------+------------+---------------------------+--------+-------------------------------------------+-----------+------------+-------------+------+------------+" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html-single/performing_security_operations/index
35.4. Result
35.4. Result The Identity Management server is configured to require TLS 1.2. Identity Management clients that support only TLS versions earlier than 1.2 are no longer able to communicate with the Identity Management server.
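You can confirm the new behavior from any client host by attempting handshakes with specific protocol versions against the Identity Management web interface. This is an illustrative check rather than part of the procedure; ipa.example.com is a placeholder for your server, and the openssl s_client options shown require OpenSSL 1.0.2 or later.
# Expected to fail: the server no longer negotiates TLS 1.1.
openssl s_client -connect ipa.example.com:443 -tls1_1 < /dev/null
# Expected to succeed: TLS 1.2 remains enabled.
openssl s_client -connect ipa.example.com:443 -tls1_2 < /dev/null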
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/configure-tls-result
9.6.3. Related Books
9.6.3. Related Books Managing NFS and NIS by Hal Stern, Mike Eisler, and Ricardo Labiaga; O'Reilly & Associates - Makes an excellent reference guide for the many different NFS export and mount options available. NFS Illustrated by Brent Callaghan; Addison-Wesley Publishing Company - Provides comparisons of NFS to other network file systems and shows, in detail, how NFS communication occurs. System Administrators Guide ; Red Hat, Inc - The Network File System (NFS) chapter explains concisely how to set up NFS clients and servers. Security Guide ; Red Hat, Inc - The Server Security chapter explains ways to secure NFS and other services.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-nfs-related-books
Providing feedback on JBoss EAP documentation
Providing feedback on JBoss EAP documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, you will be prompted to create one. Procedure Click the following link to create a ticket . Include the document URL, the section number, and a description of the issue. Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/deploying_jboss_eap_on_amazon_web_services/proc_providing-feedback-on-red-hat-documentation_default
Chapter 22. Red Hat Software Collections
Chapter 22. Red Hat Software Collections Red Hat Software Collections is a Red Hat content set that provides a set of dynamic programming languages, database servers, and related packages that you can install and use on all supported releases of Red Hat Enterprise Linux 7 on AMD64 and Intel 64 architectures, the 64-bit ARM architecture, IBM Z, and IBM POWER, little endian. Certain components are available also for all supported releases of Red Hat Enterprise Linux 6 on AMD64 and Intel 64 architectures. Red Hat Developer Toolset is designed for developers working on the Red Hat Enterprise Linux platform. It provides current versions of the GNU Compiler Collection, GNU Debugger, and other development, debugging, and performance monitoring tools. Red Hat Developer Toolset is included as a separate Software Collection. Dynamic languages, database servers, and other tools distributed with Red Hat Software Collections do not replace the default system tools provided with Red Hat Enterprise Linux, nor are they used in preference to these tools. Red Hat Software Collections uses an alternative packaging mechanism based on the scl utility to provide a parallel set of packages. This set enables optional use of alternative package versions on Red Hat Enterprise Linux. By using the scl utility, users can choose which package version they want to run at any time. Important Red Hat Software Collections has a shorter life cycle and support term than Red Hat Enterprise Linux. For more information, see the Red Hat Software Collections Product Life Cycle . See the Red Hat Software Collections documentation for the components included in the set, system requirements, known problems, usage, and specifics of individual Software Collections. See the Red Hat Developer Toolset documentation for more information about the components included in this Software Collection, installation, usage, known problems, and more.
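As a brief illustration of the scl mechanism described above, the commands below install one collection and run its interpreter only inside an scl session, leaving the default system tools untouched. rh-python38 is just an example collection name, and the commands assume the Red Hat Software Collections repository is already enabled on the system.
# Install an example collection from the Red Hat Software Collections repository.
yum install rh-python38
# Run a command with the collection's environment on the PATH; outside this
# invocation, the system interpreter and tools remain the defaults.
scl enable rh-python38 'python3 --version'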
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/chap-red_hat_enterprise_linux-7.6_release_notes-red_hat_software_collections
Chapter 11. Using Kerberos
Chapter 11. Using Kerberos Maintaining system security and integrity within a network is critical, and it encompasses every user, application, service, and server within the network infrastructure. It requires an understanding of everything that is running on the network and the manner in which these services are used. At the core of maintaining this security is maintaining access to these applications and services and enforcing that access. Kerberos is an authentication protocol significantly safer than normal password-based authentication. With Kerberos, passwords are never sent over the network, even when services are accessed on other machines. Kerberos provides a mechanism that allows both users and machines to identify themselves to network and receive defined, limited access to the areas and services that the administrator configured. Kerberos authenticates entities by verifying their identity, and Kerberos also secures this authenticating data so that it cannot be accessed and used or tampered with by an outsider. 11.1. About Kerberos Kerberos uses symmetric-key cryptography [3] to authenticate users to network services, which means passwords are never actually sent over the network. Consequently, when users authenticate to network services using Kerberos, unauthorized users attempting to gather passwords by monitoring network traffic are effectively thwarted. 11.1.1. The Basics of How Kerberos Works Most conventional network services use password-based authentication schemes, where a user supplies a password to access a given network server. However, the transmission of authentication information for many services is unencrypted. For such a scheme to be secure, the network has to be inaccessible to outsiders, and all computers and users on the network must be trusted and trustworthy. With simple, password-based authentication, a network that is connected to the Internet cannot be assumed to be secure. Any attacker who gains access to the network can use a simple packet analyzer, or packet sniffer , to intercept user names and passwords, compromising user accounts and, therefore, the integrity of the entire security infrastructure. Kerberos eliminates the transmission of unencrypted passwords across the network and removes the potential threat of an attacker sniffing the network. Rather than authenticating each user to each network service separately as with simple password authentication, Kerberos uses symmetric encryption and a trusted third party (a key distribution center or KDC) to authenticate users to a suite of network services. The computers managed by that KDC and any secondary KDCs constitute a realm . When a user authenticates to the KDC, the KDC sends a set of credentials (a ticket ) specific to that session back to the user's machine, and any Kerberos-aware services look for the ticket on the user's machine rather than requiring the user to authenticate using a password. As shown in Figure 11.1, "Kerberos Authentication" , each user is identified to the KDC with a unique identity, called a principal . When a user on a Kerberos-aware network logs into his workstation, his principal is sent to the KDC as part of a request for a ticket-granting ticket (or TGT) from the authentication server. This request can be sent by the login program so that it is transparent to the user or can be sent manually by a user through the kinit program after the user logs in. The KDC then checks for the principal in its database. 
If the principal is found, the KDC creates a TGT, encrypts it using the user's key, and sends the TGT to that user. Figure 11.1. Kerberos Authentication The login or kinit program on the client then decrypts the TGT using the user's key, which it computes from the user's password. The user's key is used only on the client machine and is not transmitted over the network. The ticket (or credentials) sent by the KDC are stored in a local store, the credential cache (ccache) , which can be checked by Kerberos-aware services. Red Hat Enterprise Linux 7 supports the following types of credential caches: The persistent KEYRING ccache type, the default cache in Red Hat Enterprise Linux 7 The System Security Services Daemon (SSSD) Kerberos Credential Manager (KCM), an alternative option since Red Hat Enterprise Linux 7.4 FILE DIR MEMORY With SSSD KCM, the Kerberos caches are not stored in a passive store, but managed by a daemon. In this setup, the Kerberos library, which is typically used by applications such as kinit , is a KCM client and the daemon is referred to as a KCM server. Having the Kerberos credential caches managed by the SSSD KCM daemon has several advantages: The daemon is stateful and can perform tasks such as Kerberos credential cache renewals or reaping old ccaches. Renewals and tracking are possible not only for tickets that SSSD itself acquired, typically via a login through pam_sss.so , but also for tickets acquired, for example, though kinit . Since the process runs in user space, it is subject to UID namespacing, unlike the Kernel KEYRING. Unlike the Kernel KEYRING-based cache, which is entirely dependent on the UID of the caller and which, in a containerized environment, is shared among all containers, the KCM server's entry point is a UNIX socket that can be bind-mounted only to selected containers. After authentication, servers can check an unencrypted list of recognized principals and their keys rather than checking kinit ; this is kept in a keytab . The TGT is set to expire after a certain period of time (usually 10 to 24 hours) and is stored in the client machine's credential cache. An expiration time is set so that a compromised TGT is of use to an attacker for only a short period of time. After the TGT has been issued, the user does not have to enter their password again until the TGT expires or until they log out and log in again. Whenever the user needs access to a network service, the client software uses the TGT to request a new ticket for that specific service from the ticket-granting server (TGS). The service ticket is then used to authenticate the user to that service transparently. 11.1.2. About Kerberos Principal Names The principal identifies not only the user or service, but also the realm that the entity belongs to. A principal name has two parts, the identifier and the realm: For a user, the identifier is only the Kerberos user name. For a service, the identifier is a combination of the service name and the host name of the machine it runs on: The service name is a case-sensitive string that is specific to the service type, like host , ldap , http , and DNS . Not all services have obvious principal identifiers; the sshd daemon, for example, uses the host service principal. The host principal is usually stored in /etc/krb5.keytab . When Kerberos requests a ticket, it always resolves the domain name aliases (DNS CNAME records) to the corresponding DNS address (A or AAAA records). 
The host name from the address record is then used when service or host principals are created. For example: A service attempts to connect to the host using its CNAME alias: The Kerberos server requests a ticket for the resolved host name, [email protected] , so the host principal must be host/[email protected] . 11.1.3. About the Domain-to-Realm Mapping When a client attempts to access a service running on a particular server, it knows the name of the service ( host ) and the name of the server ( foo.example.com ), but because more than one realm can be deployed on the network, it must guess at the name of the Kerberos realm in which the service resides. By default, the name of the realm is taken to be the DNS domain name of the server in all capital letters. In some configurations, this will be sufficient, but in others, the realm name which is derived will be the name of a non-existent realm. In these cases, the mapping from the server's DNS domain name to the name of its realm must be specified in the domain_realm section of the client system's /etc/krb5.conf file. For example: The configuration specifies two mappings. The first mapping specifies that any system in the example.com DNS domain belongs to the EXAMPLE.COM realm. The second specifies that a system with the exact name example.com is also in the realm. The distinction between a domain and a specific host is marked by the presence or lack of an initial period character. The mapping can also be stored directly in DNS using the "_kerberos TXT" records, for example: 11.1.4. Environmental Requirements Kerberos relies on being able to resolve machine names. Thus, it requires a working domain name service (DNS). Both DNS entries and hosts on the network must be properly configured, which is covered in the Kerberos documentation in /usr/share/doc/krb5-server- version-number . Applications that accept Kerberos authentication require time synchronization. You can set up approximate clock synchronization between the machines on the network using a service such as ntpd . For information on the ntpd service, see the documentation in /usr/share/doc/ntp- version-number /html/index.html or the ntpd (8) man page. Note Kerberos clients running Red Hat Enterprise Linux 7 support automatic time adjustment with the KDC and have no strict timing requirements. This enables better tolerance to clocking differences when deploying IdM clients with Red Hat Enterprise Linux 7. 11.1.5. Considerations for Deploying Kerberos Although Kerberos removes a common and severe security threat, it is difficult to implement for a variety of reasons: Kerberos assumes that each user is trusted but is using an untrusted host on an untrusted network. Its primary goal is to prevent unencrypted passwords from being transmitted across that network. However, if anyone other than the proper user has access to the one host that issues tickets used for authentication - the KDC - the entire Kerberos authentication system are at risk. For an application to use Kerberos, its source must be modified to make the appropriate calls into the Kerberos libraries. Applications modified in this way are considered to be Kerberos-aware . For some applications, this can be quite problematic due to the size of the application or its design. For other incompatible applications, changes must be made to the way in which the server and client communicate. Again, this can require extensive programming. 
Closed source applications that do not have Kerberos support by default are often the most problematic. To secure a network with Kerberos, one must either use Kerberos-aware versions of all client and server applications that transmit passwords unencrypted, or not use that client and server application at all. Migrating user passwords from a standard UNIX password database, such as /etc/passwd or /etc/shadow , to a Kerberos password database can be tedious. There is no automated mechanism to perform this task. Migration methods can vary substantially depending on the particular way Kerberos is deployed. That is why it is recommended that you use the Identity Management feature; it has specialized tools and methods for migration. Warning The Kerberos system can be compromised if a user on the network authenticates against a non-Kerberos aware service by transmitting a password in plain text. The use of non-Kerberos aware services (including telnet and FTP) is highly discouraged. Other encrypted protocols, such as SSH or SSL-secured services, are preferred to unencrypted services, but this is still not ideal. 11.1.6. Additional Resources for Kerberos Kerberos can be a complex service to implement, with a lot of flexibility in how it is deployed. Table 11.1, "External Kerberos Documentation" and Table 11.2, "Important Kerberos Man Pages" list of a few of the most important or most useful sources for more information on using Kerberos. Table 11.1. External Kerberos Documentation Documentation Location Kerberos V5 Installation Guide (in both PostScript and HTML) /usr/share/doc/krb5-server- version-number Kerberos V5 System Administrator's Guide (in both PostScript and HTML) /usr/share/doc/krb5-server- version-number Kerberos V5 UNIX User's Guide (in both PostScript and HTML) /usr/share/doc/krb5-workstation- version-number "Kerberos: The Network Authentication Protocol" web page from MIT http://web.mit.edu/kerberos/www/ Designing an Authentication System: a Dialogue in Four Scenes , originally by Bill Bryant in 1988, modified by Theodore Ts'o in 1997. This document is a conversation between two developers who are thinking through the creation of a Kerberos-style authentication system. The conversational style of the discussion makes this a good starting place for people who are completely unfamiliar with Kerberos. http://web.mit.edu/kerberos/www/dialogue.html An article for making a network Kerberos-aware. http://www.ornl.gov/~jar/HowToKerb.html Any of the manpage files can be opened by running man command_name . Table 11.2. Important Kerberos Man Pages Manpage Description Client Applications kerberos An introduction to the Kerberos system which describes how credentials work and provides recommendations for obtaining and destroying Kerberos tickets. The bottom of the man page references a number of related man pages. kinit Describes how to use this command to obtain and cache a ticket-granting ticket. kdestroy Describes how to use this command to destroy Kerberos credentials. klist Describes how to use this command to list cached Kerberos credentials. Administrative Applications kadmin Describes how to use this command to administer the Kerberos V5 database. kdb5_util Describes how to use this command to create and perform low-level administrative functions on the Kerberos V5 database. Server Applications krb5kdc Describes available command line options for the Kerberos V5 KDC. kadmind Describes available command line options for the Kerberos V5 administration server. 
Configuration Files krb5.conf Describes the format and options available within the configuration file for the Kerberos V5 library. kdc.conf Describes the format and options available within the configuration file for the Kerberos V5 AS and KDC. [3] A system where both the client and the server share a common key that is used to encrypt and decrypt network communication.
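As a quick cross-check of the mappings and requirements described above, the following sketch uses only standard client utilities; the realm and domain names are placeholders rather than values taken from this guide:
# dig +short TXT _kerberos.example.com    (prints "EXAMPLE.COM" if the optional TXT record is published)
# kinit user@EXAMPLE.COM                  (obtain a ticket-granting ticket)
# klist                                   (list the cached credentials)
# kdestroy                                (destroy the credential cache when finished)
If kinit cannot locate a KDC, recheck the DNS configuration and the domain_realm mapping in /etc/krb5.conf before suspecting the KDC itself.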
[ "identifier @ REALM", "service/FQDN @ REALM", "www.example.com CNAME web-01.example.com web-01.example.com A 192.0.2.145", "ssh www.example.com", "foo.example.org EXAMPLE.ORG foo.example.com EXAMPLE.COM foo.hq.example.com HQ.EXAMPLE.COM", "[domain_realm] .example.com = EXAMPLE.COM example.com = EXAMPLE.COM", "USDORIGIN example.com _kerberos TXT \"EXAMPLE.COM\"" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system-level_authentication_guide/using_kerberos
Chapter 3. Configuring log files in Directory Server
Chapter 3. Configuring log files in Directory Server Directory Server records events to log files. Use these files to troubleshoot, monitor, and analyze the directory activity. In Directory Server, you can enable or disable logging, configure the log level, define logging policies, compress log files, and perform other operations. 3.1. Types of log files in Directory Server Directory Server has the following log file types that are stored the /var/log/dirsrv/slapd-instance_name/ directory: Access log (access). Enabled by default Contains information on client connections and connection attempts to the Directory Server instance. Note that because the access log is buffered, you can notice a discrepancy between when the event occurs on the server and the time the event is recorded in the log. Error log (error). Enabled by default Contains detailed messages of errors and events that the directory experiences during normal operations. Warning If Directory Server fails to write messages to the error log file, the server sends an error message to the syslog service and exits. Audit log (audit). Disabled by default Records changes made to each database and to the server configuration. If you enable audit logging, Directory Server records only successful operations to the audit log file. Audit fail log (audit-failure). Disabled by default Records failed change operations. With the default settings, Directory Server writes failed operations to the same file as the audit log. To write failed operations to a separate file, set a path to this file in the nsslapd-auditfaillog configuration attribute. For details, see nsslapd-auditfaillog section. Security log (security). Enabled by default Records authentication events, authorization issues, DoS/TCP attacks, and other security events. For more detailed information about Directory Server log files, see Log files reference . 3.2. Displaying log files You can display Directory Server log files using the command line or the web console. 3.2.1. Displaying log files using the command line Use the utilities included in Red Hat Enterprise Linux, such as less , more , and cat , to view the log files. Prerequisites You enabled logging as described in Enabling or disabling logging using the command line . Procedure To display the log files, use the following commands: Note that by default the audit log and the audit fail log write operations to the same file. To display the locations of log files, use the command: Note If you have not enabled logging for a specified log type, Directory Server does not create the corresponding log file. 3.2.2. Displaying log files using the web console To view Directory Server log files use the Monitoring tab of the web console. Prerequisites You are logged in to the web console. Procedure Select the instance. Navigate to Monitoring Logging . In the list of log types, select the log you want to display: Optional: Apply the following settings to the log viewer: Set the number of records to display. Enable automatic display of new log entries by selecting Continuously Refresh . Click the Refresh button to apply the changes. 3.3. Enabling or disabling logging By default, Directory Server enables access, error, security logging, and disables audit and audit fail logging. IMPORTANT Every 2000 accesses to the directory increases the access log file by approximately 1 MB. However, before disabling the access logging, consider that this information can help to troubleshoot problems. 3.3.1. 
Enabling or disabling logging using the command line Use the dsconf config replace command to modify the following attributes in the cn=config DN entry that manage the Directory Server logging feature: nsslapd-accesslog-logging-enabled (access log) nsslapd-errorlog-logging-enabled (error log) nsslapd-auditlog-logging-enabled (audit log) nsslapd-auditfaillog-logging-enabled (audit fail log) nsslapd-securitylog-logging-enabled (security log) Procedure To enable logging, set the corresponding attribute value to on . For example, use the following command to enable the audit logging: Note Make sure that the nsslapd-auditlog attribute contains a valid path and a filename of the log file. Otherwise, you cannot enable the logging. To disable logging, set the corresponding attribute to off . For example, use the following command to disable the error logging: When you disable logging, Directory Server stops to record new events to a log file. However, the log file remains in the /var/log/dirsrv/slapd- instance_name / directory. Verification Check if the log directory now contains the log files: Additional resources For more details on the log enabling attributes, see corresponding sections in Core server configuration attributes description . 3.3.2. Enabling or disabling logging using the web console To enable or disable logging for an instance use the Server tab in the web console. Prerequisites You are logged in to the web console. Procedure Select the instance. Navigate to Server Logging . Select the log type you want to configure, for example, Access Log . Enable or disable the logging for the selected log type. Optional: Configure additional settings, such as the log level, the log rotation policy, and the log buffering. Click the Save Log Settings button to apply the changes. Verification Navigate to Monitoring Logging and see if Directory Server now logs the events. 3.4. Defining a log rotation policy Directory Server periodically rotates the current log file and creates a new one. However, you can change the default behavior by setting a rotation policy using the command line or the web console. You can manage the following rotation settings: Maximum number of logs Sets the maximum number of log files to keep. When the number of files is reached, Directory Server deletes the oldest log file before creating the new one. By default, it is 10 for the access log, and 1 for other logs. Maximum log size (in MB) Sets the maximum size of a log file in megabytes before it is rotated. By default, it is 100 MB for all logs. Create new log every Sets the maximum age of a log file. By default, Directory Server rotates all logs every week. Time of day Set the time when the log file is rotated. This setting is not enabled by default for all logs. Access mode The access mode sets the file permissions on newly created log files. By default, it is 600 for all logs. 3.4.1. 
Configuring a log rotation policy using the command line You can use the dsconf config replace command to modify the following attributes in the cn=config DN entry that manage rotation policies: access log error log audit log audit fail log security log Maximum number of logs nsslapd-accesslog-maxlogsperdir nsslapd-errorlog-maxlogsperdir nsslapd-auditlog-maxlogsperdir nsslapd-auditfaillog-maxlogsperdir nsslapd-securitylog-maxlogsperdir Maximum log size (in MB) nsslapd-accesslog-maxlogsize nsslapd-errorlog-maxlogsize nsslapd-auditlog-maxlogsize nsslapd-auditfaillog-maxlogsize nsslapd-securitylog-maxlogsize Create new log every nsslapd-accesslog-logrotationtime, nsslapd-accesslog-logrotationtimeunit nsslapd-errorlog-logrotationtime, nsslapd-errorlog-logrotationtimeunit nsslapd-auditlog-logrotationtime, nsslapd-auditlog-logrotationtimeunit nsslapd-auditfaillog-logrotationtime, nsslapd-auditfaillog-logrotationtimeunit nsslapd-securitylog-logrotationtime, nsslapd-securitylog-logrotationtimeunit Time of day nsslapd-accesslog-logrotationsynchour, nsslapd-accesslog-logrotationsyncmin nsslapd-errorlog-logrotationsynchour, nsslapd-errorlog-logrotationsyncmin nsslapd-auditlog-logrotationsynchour, nsslapd-auditlog-logrotationsyncmin nsslapd-auditfaillog-logrotationsynchour, nsslapd-auditfaillog-logrotationsyncmin nsslapd-securitylog-logrotationsynchour, nsslapd-securitylog-logrotationsyncmin Access mode nsslapd-accesslog-mode nsslapd-errorlog-mode nsslapd-auditlog-mode nsslapd-auditfaillog-mode nsslapd-securitylog-mode Procedure To configure the error log to use access mode 600, to keep maximum 2 logs, and to rotate log files with a 100 MB size or every 5 days, enter: For more details about rotation policy attributes, see corresponding sections in Core server configuration attributes description . 3.4.2. Configuring a log rotation policy using the web console To periodically archive the current log file and create a new one, set a log file rotation policy by using the web console. Prerequisites You are logged in to the web console. Procedure Select the instance. Navigate to Server Logging and select the log type, for example, Error Log . The Error Log Settings page opens. Click the Rotation Policy tab. Configure rotation policy parameters. For example, set maximum 3 log files, the log size maximum 110 MB, and creation of a new log file every 3 days. Click the Save Rotation Setting button to apply changes. Additional resources Configuring log deletion policy 3.5. Defining a log deletion policy Directory Server automatically deletes old archived log files if you set a deletion policy. Note You can only set a log file deletion policy if you have a log file rotation policy set. Directory Server applies the deletion policy at the time of log rotation. You can set the following configuration attributes to manage the log file deletion policy: Log archive exceeds (in MB) If the size of a log file of one type exceeds the configured value, the oldest log file of this type is automatically deleted. Free disk space (in MB) When the free disk space reaches this value, the oldest archived log file is automatically deleted. Log file is older than When a log file is older than the configured time, it is automatically deleted. 3.5.1. 
Configuring a log deletion policy using the command line You can use the dsconf config replace command to modify the following attributes in the cn=config DN entry that manage deletion policies: access log error log audit log audit fail log security log Log archive exceeds (in MB) nsslapd-accesslog-logmaxdiskspace nsslapd-errorlog-logmaxdiskspace nsslapd-auditlog-logmaxdiskspace nsslapd-auditfaillog-logmaxdiskspace nsslapd-securitylog-logmaxdiskspace Free disk space (in MB) nsslapd-accesslog-logminfreediskspace nsslapd-errorlog-logminfreediskspace nsslapd-auditlog-logminfreediskspace nsslapd-auditfaillog-logminfreediskspace nsslapd-securitylog-logminfreediskspace Log file is older than nsslapd-accesslog-logexpirationtime, nsslapd-accesslog-logexpirationtimeunit nsslapd-errorlog-logexpirationtime, nsslapd-errorlog-logexpirationtimeunit nsslapd-auditlog-logexpirationtime, nsslapd-auditlog-logexpirationtimeunit nsslapd-auditfaillog-logexpirationtime, nsslapd-auditfaillog-logexpirationtimeunit nsslapd-securitylog-logexpirationtime, nsslapd-securitylog-logexpirationtimeunit Procedure For example, to auto-delete the oldest access log file if the total size of all access log files exceeds 500 MB, enter: For more details about deletion policy attributes, see corresponding sections in Core server configuration attributes description . 3.5.2. Configuring a log deletion policy using the web console To automatically delete old archived log files, set a log deletion policy by using the web console. Prerequisites You are logged in to the web console. Procedure Select the instance. Navigate to Server Logging and select the log type, for example, Access Log . The Access Log Settings page opens. Click the Deletion Policy tab. Configure deletion policy parameters. For example, set maximum archive size to 600 MB and the log file age to 3 weeks. Click the Save Deletion Setting button to apply changes. Additional resources Configuring a log rotation policy 3.6. Manual log file rotation You can rotate log files manually only if you have not configured automatic log file rotation or deletion policies. Procedure Stop the instance: # dsctl instance_name stop Go to the log files directory. By default, Directory Server stores access, error, audit, audit fail log, and security files in the /var/log/dirsrv/slapd-instance/ directory. Move or rename the log file you want to rotate to make it available for future reference. Start the instance: Additional resources Configuring log rotation policy Configuring log deletion policy 3.7. Configuring log levels To manage how detailed logs are, and therefore the amount of information that is logged, you can specify log levels for access logging and error logging. Note Changing the default log level can lead to very large log files. Red Hat recommends that you do not change the default logging values without being asked to do so by Red Hat technical support. 3.7.1. Configuring log levels using the command line You can adjust log levels by setting the following configuration attributes: nsslapd-accesslog-level for the access log nsslapd-errorlog-level for the error log Use the dsconf config replace command to modify the log level attributes. The attribute values are additive: for example, if you set a log level value of 12, it includes levels 8 and 4. Prerequisites You enabled access and error logging.
Procedure To enable Logging internal access operations (4) and Logging for connections, operations, and results (256) for the access log, set the nsslapd-accesslog-level attribute to 260 (4 + 256) with the following command: To enable Search filter logging (32) and Config file processing (64) log levels for the error log, set the nsslapd-errorlog-level attribute to 96 (32 + 64) with the following command: Verification When you set the access log level to Logging internal access operations (4) , do the following to see if Directory Server started to log internal access events: Restart the instance to trigger internal events by command: View the access log file and find internal operation records: Additional resources Enabling or disabling logging Access log levels attribute description Error log levels attribute description 3.7.2. Configuring log levels using the web console To manage how detailed logs are, specify log levels for access logging and error logging. Prerequisites You are logged in to the web console. You enabled access and error logging. Procedure Select the instance. Navigate to Server Logging . Select the log type, for example, Access Log . Click the Show Logging Levels button to see all available log levels for the log type. Select log levels, for example, Default Logging and Internal Operations levels. Click the Save Log Setting button to apply changes. Verification To see if Directory Server started to log internal access events, do the following: Restart the instance by clicking Action button and then selecting Restart Instance . Directory Server restarts the instance and generates internal events. Navigate to Monitoring Logging Access Log . Refresh access log and view recorded internal events: Additional resources Enabling or disabling logging Access log levels attribute description Error log levels attribute description 3.8. Configuring logging for plug-ins By default, Directory Server does not log internal events which plug-ins initiate. To debug plug-in operations, you can enable access and audit logging for all plug-ins, or for specific plug-ins. 3.8.1. Configuring logging for all plug-ins Use nsslapd-plugin-logging attribute to configure logging for all plug-ins. Procedure To enable access and audit logging for all plug-ins, use the following command: # dsconf -D "cn=Directory Manager" instance_name config replace nsslapd-plugin-logging=on Additional resources For more details on the nsslapd-plugin-logging attribute, see the description sections: nsslapd-plugin-logging 3.8.2. Configuring logging for a specific plugin Use nsslapd-logAccess and nsslapd-logAudit attributes to configure logging for a plug-in. Prerequisites The nsslapd-accesslog attribute contains valid path and the filename for the access log file. The nsslapd-auditlog attribute contains valid path and the filename for the audit log file. Procedure To enable access and audit logging for a specific plug-in, modify nsslapd-logAccess and nsslapd-logAudit attributes using the LDAP interface: # ldapmodify -D "cn=Directory Manager" -W -x -H ldap://server.example.com:389 dn: cn=MemberOf Plugin,cn=plugins,cn=config changetype: modify replace: nsslapd-logAccess nsslapd-logAccess: on dn: cn=MemberOf Plugin,cn=plugins,cn=config changetype: modify replace: nsslapd-logAudit nsslapd-logAudit: on Additional resources For more details on the attributes, see the description sections: nsslapd-logAccess nsslapd-logAudit 3.9. 
Logging statistics per search operation During some search operations, especially with filters such as (cn=user*) , the time the server spends for receiving the tasks and then sending the result back ( etime ) can be very long. Expanding the access log with information related to indexes used during search operation helps to diagnose why etime value is resource expensive. Use the nsslapd-statlog-level attribute to enable collecting statistics, such as a number of index lookups (database read operations) and overall duration of index lookups for each search operation, with minimal impact on the server. Prerequisites You enabled access logging. Procedure Enable search operation metrics: Restart the instance: Verification Perform a search operation: View the access log file and find the search statistics records: Additional resources nsslapd-statlog-level 3.10. Compressing log files To save disc space, you can enable log file compression that compresses archived logs into .gzip files. Use the dsconf config replace command to modify the following attributes that manage log file compression: nsslapd-accesslog-compress (access log) nsslapd-errorlog-compress (error log) nsslapd-auditlog-compress (audit log) nsslapd-auditfaillog-compress (audit fail log) nsslapd-securitylog-compress (security log) By default, Directory Server compresses only archived security log files. Procedure To enable log file compression, run: The command enables compression for access and error logs. To disable log file compression, run: The command disables compression for the access log. Verification Check that the log file directory contains compressed logs files: # ls /var/log/dirsrv/ slapd-instance_name / Additional resources Description of the nsslapd-accesslog-compress attribute Description of the nsslapd-errorlog-compress attribute Description of the nsslapd-auditlog-compress attribute Description of the nsslapd-auditfaillog-compress attribute Description of the nsslapd-securitylog-compress attribute 3.11. Disabling access log buffering for debugging purposes For debugging purposes, you can disable access log buffering, which is enabled by default. With access log buffering disabled, Directory Server writes log entries directly to the disk. Warning Do not disable access logging in a normal operating environment. When you disable the buffering, Directory Server performance decreases, especially under heavy load. 3.11.1. Disabling access log buffering using the command line If you disable access log buffering, Directory Server writes log entries directly to disk. Procedure To disable access log buffering, enter: # dsconf -D "cn=Directory Manager" instance_name config replace nsslapd-accesslog-logbuffering= off Verification Display the access log in continuous mode: Perform actions in the directory, such as searches. Monitor the access log. Log entries appear without delay at the moment when users perform actions in the directory. 3.11.2. Disabling access log buffering using the web console If you disable access log buffering, Directory Server writes log entries directly to disk. Procedure Navigate to Server Logging Access Log Settings . Deselect Access Log Buffering Enabled . Click Save Log Settings . Verification Navigate to Monitoring Logging Access Log . Select Continuously Refresh . Perform actions in the directory, such as searches. Monitor the access log. Log entries appear without delay at the moment when users perform actions in the directory. 3.12. 
Disabling high-resolution log time stamps By default, Directory Server logs entries with nanosecond precision: [29/Jun/2022:09:10:04.300970708 -0400] conn=81 op=13 SRCH base="cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config" scope=0 filter="(objectClass=*)" attrs="cn" [29/Jun/2022:09:10:04.301010337 -0400] conn=81 op=13 RESULT err=0 tag=101 nentries=1 wtime=0.000038066 optime=0.000040347 etime=0.000077742 Use the dsconf config replace command to modify the attribute that is responsible for the log time stamps. Note Red Hat has deprecated the option to disable high-resolution log time stamps, and will remove it in future releases. Procedure To disable high-resolution log time stamps in the command line, enter the following command: Verification Verify that new log records have second precision. For example, open the access log file with the command:
[ "less /var/log/dirsrv/slapd- instance_name /access less /var/log/dirsrv/slapd- instance_name /errors less /var/log/dirsrv/slapd- instance_name /audit less /var/log/dirsrv/slapd- instance_name /access less /var/log/dirsrv/slapd- instance_name /security", "dsconf -D \"cn=Directory Manager\" instance_name config get nsslapd-accesslog nsslapd-errorlog nsslapd-auditlog nsslapd-auditfaillog nsslapd-securitylog nsslapd-accesslog: /var/log/dirsrv/slapd- instance_name /access nsslapd-errorlog: /var/log/dirsrv/slapd- instance_name /errors nsslapd-auditlog: /var/log/dirsrv/slapd- instance_name /audit nsslapd-auditfaillog: /var/log/dirsrv/slapd- instance_name /auditfail nsslapd-securitylog: /var/log/dirsrv/slapd- instance_name /security", "dsconf -D \"cn=Directory Manager\" instance_name config replace nsslapd-auditlog-logging-enabled=on", "dsconf -D \"cn=Directory Manager\" instance_name config replace nsslapd-errorlog-logging-enabled=off", "ls -la /var/log/dirsrv/slapd- instance_name / -rw-------. 1 dirsrv dirsrv 14388 Nov 29 05:23 access -rw-------. 1 dirsrv dirsrv 121554 Nov 12 05:57 audit -rw-------. 1 dirsrv dirsrv 880 Nov 20 11:53 errors -rw-------. 1 dirsrv dirsrv 3509 Nov 29 05:23 security", "dsconf -D \"cn=Directory Manager\" instance_name config replace nsslapd-errorlog-mode=600 nsslapd-errorlog-maxlogsperdir=2 nsslapd-errorlog-maxlogsize=100 nsslapd-errorlog-logrotationtime=5 nsslapd-errorlog-logrotationtimeunit=day", "dsconf -D \"cn=Directory Manager\" instance_name config replace nsslapd-accesslog-logmaxdiskspace=500", "dsctl instance_name stop", "dsctl instance_name restart", "dsconf -D \"cn=Directory Manager\" instance_name config replace nsslapd-accesslog-level=260", "dsconf -D \"cn=Directory Manager\" instance_name config replace nsslapd-errorlog-level=96", "dsctl instance_name restart Instance \" instance_name \" has been restarted", "cat /var/log/dirsrv/ slapd-instance_name /access [08/Nov/2022:16:29:05.556977401 -0500] conn=2 (Internal) op=1(1)(1) SRCH base=\"cn=config,cn=WritersData,cn=ldbm database,cn=plugins,cn=config\" scope=1 filter=\"objectclass=vlvsearch\" attrs=ALL [08/Nov/2022:16:29:05.557250374 -0500] conn=2 (Internal) op=1(1)(1) RESULT err=0 tag=48 nentries=0 wtime=0.000016828 optime=0.000274854 etime=0.000288952", "[08/Nov/2022:17:04:17.035502206 -0500] conn=6 (Internal) op=1(2)(1) SRCH base=\"cn=config,cn=Example database,cn=ldbm database,cn=plugins,cn=config\" scope=1 filter=\"objectclass=vlvsearch\" attrs=ALL [08/Nov/2022:17:04:17.035579829 -0500] conn=6 (Internal) op=1(2)(1) RESULT err=0 tag=48 nentries=0 wtime=0.000004563 optime=0.000078000 etime=0.000081911", "dsconf -D \"cn=Directory Manager\" instance_name config replace nsslapd-plugin-logging=on", "ldapmodify -D \"cn=Directory Manager\" -W -x -H ldap://server.example.com:389 dn: cn=MemberOf Plugin,cn=plugins,cn=config changetype: modify replace: nsslapd-logAccess nsslapd-logAccess: on dn: cn=MemberOf Plugin,cn=plugins,cn=config changetype: modify replace: nsslapd-logAudit nsslapd-logAudit: on", "dsconf -D \"cn=Directory Manager\" instance_name config replace nsslapd-statlog-level=1", "dsctl instance_name restart", "ldapsearch -D \"cn=Directory Manager\" -H ldap:// server.example.com -b \"dc=example,dc=com\" -s sub -x \"cn=user*\"", "cat /var/log/dirsrv/slapd- instance_name /access [16/Nov/2022:11:34:11.834135997 +0100] conn=1 op=73 SRCH base=\"dc=example,dc=com\" scope=2 filter=\"(cn=user )\"* attrs=ALL [16/Nov/2022:11:34:11.835750508 +0100] conn=1 op=73 STAT read index: attribute=objectclass key(eq)= 
referral --> count 0 [16/Nov/2022:11:34:11.836648697 +0100] conn=1 op=73 STAT read index: attribute=cn key(sub)= er_ --> count 25 [16/Nov/2022:11:34:11.837538489 +0100] conn=1 op=73 STAT read index: attribute=cn key(sub)= ser --> count 25 [16/Nov/2022:11:34:11.838814948 +0100] conn=1 op=73 STAT read index: attribute=cn key(sub)= use --> count 25 [16/Nov/2022:11:34:11.841241531 +0100] conn=1 op=73 STAT read index: attribute=cn key(sub)= ^us --> count 25 [16/Nov/2022:11:34:11.842230318 +0100] conn=1 op=73 STAT read index: duration 0.000010276 [16/Nov/2022:11:34:11.843185322 +0100] conn=1 op=73 RESULT err=0 tag=101 nentries=24 wtime=0.000078414 optime=0.001614101 etime=0.001690742", "dsconf -D \"cn=Directory Manager\" instance_name config replace nsslapd-accesslog-compress=on nsslapd-errorlog-compress=on", "dsconf -D \"cn=Directory Manager\" instance_name config replace nsslapd-accesslog-compress=off", "dsconf -D \"cn=Directory Manager\" instance_name config replace nsslapd-accesslog-logbuffering= off", "tail -f /var/log/dirsrv/slapd- instance_name /access", "[29/Jun/2022:09:10:04.300970708 -0400] conn=81 op=13 SRCH base=\"cn=dc\\3Dexample\\2Cdc\\3Dcom,cn=mapping tree,cn=config\" scope=0 filter=\"(objectClass=*)\" attrs=\"cn\" [29/Jun/2022:09:10:04.301010337 -0400] conn=81 op=13 RESULT err=0 tag=101 nentries=1 wtime=0.000038066 optime=0.000040347 etime=0.000077742", "dsconf -D \"cn=Directory Manager\" instance_name config replace nsslapd-logging-hr-timestamps-enabled=off", "less /var/log/dirsrv/slapd-instance_name/access" ]
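As an optional cross-check of the settings covered in this chapter, the dsconf config get subcommand shown above can read back the rotation, deletion, compression, and buffering attributes in a single call; this is a sketch and the instance name is a placeholder:
dsconf -D "cn=Directory Manager" instance_name config get nsslapd-accesslog-maxlogsperdir nsslapd-accesslog-maxlogsize nsslapd-accesslog-logmaxdiskspace nsslapd-accesslog-compress nsslapd-accesslog-logbuffering
Comparing the returned values with the policies you configured helps confirm that the dsconf config replace commands took effect before the next rotation.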
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/monitoring_server_and_database_activity/assembly_configuring-log-files_monitoring-server-and-database-activities
6.3. Remote Authentication Using GSSAPI
6.3. Remote Authentication Using GSSAPI In the context of Red Hat Virtualization, remote authentication refers to authentication that is handled by a remote service, not the Red Hat Virtualization Manager. Remote authentication is used for user or API connections coming to the Manager from within an AD, IdM, or RHDS domain. The Red Hat Virtualization Manager must be configured by an administrator using the engine-manage-domains tool to be a part of an RHDS, AD, or IdM domain. This requires that the Manager be provided with credentials for an account from the RHDS, AD, or IdM directory server for the domain with sufficient privileges to join a system to the domain. After domains have been added, domain users can be authenticated by the Red Hat Virtualization Manager against the directory server using a password. The Manager uses a framework called the Simple Authentication and Security Layer (SASL) which in turn uses the Generic Security Services Application Program Interface (GSSAPI) to securely verify the identity of a user, and ascertain the authorization level available to the user. Figure 6.1. GSSAPI Authentication
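As an illustration only, an API client inside the domain can exercise the same mechanism from the command line. The user, realm, and Manager host name below are placeholders, and the sketch assumes the client host has Kerberos tooling configured for the domain and that the Manager accepts Negotiate (GSSAPI) authentication for API connections:
$ kinit jsmith@EXAMPLE.COM
$ curl --negotiate -u : https://manager.example.com/ovirt-engine/api
The empty -u : argument tells curl to take the identity from the Kerberos credential cache instead of prompting for a password.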
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/technical_reference/remote_authentication_using_gssapi
probe::socket.sendmsg
probe::socket.sendmsg Name probe::socket.sendmsg - Message is currently being sent on a socket. Synopsis socket.sendmsg Values family Protocol family value name Name of this probe protocol Protocol value state Socket state value flags Socket flags value type Socket type value size Message size in bytes Context The message sender Description Fires at the beginning of sending a message on a socket via the sock_sendmsg function
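A minimal SystemTap one-liner, shown here purely as an illustration of the values listed above, prints a line each time the probe fires:
# stap -e 'probe socket.sendmsg { printf("%s: family=%d type=%d size=%d\n", name, family, type, size) }'
Stop the session with Ctrl+C when you have collected enough output.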
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-socket-sendmsg
Chapter 140. KafkaMirrorMaker2ClusterSpec schema reference
Chapter 140. KafkaMirrorMaker2ClusterSpec schema reference Used in: KafkaMirrorMaker2Spec Full list of KafkaMirrorMaker2ClusterSpec schema properties Configures Kafka clusters for mirroring. 140.1. config Use the config properties to configure Kafka options. Standard Apache Kafka configuration may be provided, restricted to those properties not managed directly by Streams for Apache Kafka. For client connection using a specific cipher suite for a TLS version, you can configure allowed ssl properties . You can also configure the ssl.endpoint.identification.algorithm property to enable or disable hostname verification. 140.2. KafkaMirrorMaker2ClusterSpec schema properties Property Property type Description alias string Alias used to reference the Kafka cluster. bootstrapServers string A comma-separated list of host:port pairs for establishing the connection to the Kafka cluster. tls ClientTls TLS configuration for connecting MirrorMaker 2 connectors to a cluster. authentication KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha256 , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth Authentication configuration for connecting to the cluster. config map The MirrorMaker 2 cluster config. Properties with the following prefixes cannot be set: ssl., sasl., security., listeners, plugin.path, rest., bootstrap.servers, consumer.interceptor.classes, producer.interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols).
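The following fragment sketches how these properties might appear in a KafkaMirrorMaker2 resource; the alias, bootstrap address, and Secret names are placeholders, and only a subset of the properties listed above is shown:
spec:
  clusters:
    - alias: my-source-cluster
      bootstrapServers: my-source-cluster-kafka-bootstrap:9092
      tls:
        trustedCertificates:
          - secretName: my-source-cluster-ca-cert
            certificate: ca.crt
      config:
        # An empty value disables hostname verification, as described above
        ssl.endpoint.identification.algorithm: ""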
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaMirrorMaker2ClusterSpec-reference
Chapter 1. Ansible development tools
Chapter 1. Ansible development tools Ansible development tools ( ansible-dev-tools ) is a suite of tools provided with Ansible Automation Platform to help automation creators to create, test, and deploy playbook projects, execution environments, and collections. The Ansible VS Code extension by Red Hat integrates most of the Ansible development tools: you can use these tools from the VS Code user interface. Use Ansible development tools during local development of playbooks, local testing, and in a CI pipeline (linting and testing). This document describes how to use Ansible development tools to create a playbook project that contains playbooks and roles that you can reuse within the project. It also describes how to test the playbooks and deploy the project on your Ansible Automation Platform instance so that you can use the playbooks in automation jobs. 1.1. Ansible development tools components You can operate some Ansible development tools from the VS Code UI when you have installed the Ansible extension, and the remainder from the command line. VS Code is a free open-source code editor available on Linux, Mac, and Windows. Ansible VS Code extension This is not packaged with the Ansible Automation Platform RPM package, but it is an integral part of the automation creation workflow. From the VS Code UI, you can use the Ansible development tools for the following tasks: Scaffold directories for a playbook project or a collection. Write playbooks with the help of syntax highlighting and auto-completion. Debug your playbooks with a linter. Execute playbooks with Ansible Core using ansible-playbook . Execute playbooks in an execution environment with ansible-navigator . From the VS Code extension, you can also connect to Red Hat Ansible Lightspeed with IBM watsonx Code Assistant. Command-line Ansible development tools You can perform the following tasks with Ansible development tools from the command line, including the terminal in VS Code: Create an execution environment. Test your playbooks, roles, modules, plugins and collections.
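As a sketch of the command-line side of this workflow, a local edit-test loop for a playbook project might look like the following; the playbook, inventory, and execution environment image names are placeholders:
$ ansible-lint site.yml
$ ansible-playbook -i inventory.yml --check site.yml
$ ansible-navigator run site.yml --execution-environment-image <ee_image>
Running the linter and a check-mode run locally catches most problems before the same playbook is executed in an execution environment or a CI pipeline.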
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/developing_automation_content/devtools-intro
probe::netdev.get_stats
probe::netdev.get_stats Name probe::netdev.get_stats - Called when the device statistics are requested Synopsis Values dev_name The device that is going to provide the statistics
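For illustration only, a short SystemTap session can count how often statistics are requested for each device; stop it with Ctrl+C to print the totals:
# stap -e 'global hits; probe netdev.get_stats { hits[dev_name]++ } probe end { foreach (d in hits) { printf("%s: %d\n", d, hits[d]) } }'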
[ "netdev.get_stats" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-netdev-get-stats
31.6. Setting Module Parameters
31.6. Setting Module Parameters Like the kernel itself, modules can also take parameters that change their behavior. Most of the time, the default ones work well, but occasionally it is necessary or desirable to set custom parameters for a module. Because parameters cannot be dynamically set for a module that is already loaded into a running kernel, there are two different methods for setting them. Load a kernel module by running the modprobe command along with a list of customized parameters on the command line. If the module is already loaded, you need to first unload all its dependencies and the module itself using the modprobe -r command. This method allows you to run a kernel module with specific settings without making the changes persistent. See Section 31.6.1, "Loading a Customized Module - Temporary Changes" for more information. Alternatively, specify a list of the customized parameters in an existing or newly-created file in the /etc/modprobe.d/ directory. This method ensures that the module customization is persistent by setting the specified parameters accordingly each time the module is loaded, such as after every reboot or modprobe command. See Section 31.6.2, "Loading a Customized Module - Persistent Changes" for more information. 31.6.1. Loading a Customized Module - Temporary Changes Sometimes it is useful or necessary to run a kernel module temporarily with specific settings. To load a kernel module with customized parameters for the current system session, or until the module is reloaded with different parameters, run modprobe in the following format as root: ~]# modprobe <module_name> [ parameter = value \ufeff ] where [ parameter = value \ufeff ] represents a list of customized parameters available to that module. When loading a module with custom parameters on the command line, be aware of the following: You can enter multiple parameters and values by separating them with spaces. Some module parameters expect a list of comma-separated values as their argument. When entering the list of values, do not insert a space after each comma, or modprobe will incorrectly interpret the values following spaces as additional parameters. The modprobe command silently succeeds with an exit status of 0 if it successfully loads the module, or the module is already loaded into the kernel. Thus, you must ensure that the module is not already loaded before attempting to load it with custom parameters. The modprobe command does not automatically reload the module, or alert you that it is already loaded. The following procedure illustrates the recommended steps to load a kernel module with custom parameters on the e1000e module, which is the network driver for Intel PRO/1000 network adapters, as an example: Procedure 31.1. Loading a Kernel Module with Custom Parameters Verify whether the module is not already loaded into the kernel by running the following command: Note that the output of the command in this example indicates that the e1000e module is already loaded into the kernel. It also shows that this module has one dependency, the ptp module. If the module is already loaded into the kernel, you must unload the module and all its dependencies before proceeding with the step. See Section 31.4, "Unloading a Module" for instructions on how to safely unload it. Load the module and list all custom parameters after the module name. 
For example, if you wanted to load the Intel PRO/1000 network driver with the interrupt throttle rate set to 3000 interrupts per second for the first, second and third instances of the driver, and Energy Efficient Ethernet (EEE) turned on [5] , you would run, as root: This example illustrates passing multiple values to a single parameter by separating them with commas and omitting any spaces between them. [5] Despite what the example might imply, Energy Efficient Ethernet is turned on by default in the e1000e driver.
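To carry the same settings over to the persistent method described in Section 31.6.2, the parameters would typically be placed in a file under the /etc/modprobe.d/ directory. The following sketch reuses the e1000e parameters from the example above; confirm the parameter names your driver accepts with modinfo before relying on them:
~]# echo "options e1000e InterruptThrottleRate=3000,3000,3000 EEE=1" > /etc/modprobe.d/e1000e.conf
~]# modinfo -p e1000e
~]# cat /sys/module/e1000e/parameters/InterruptThrottleRate
The last command reads the values currently in use from sysfs, provided the module is loaded and exposes the parameter there.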
[ "~]# lsmod|grep e1000e e1000e 236338 0 ptp 9614 1 e1000e", "~]# modprobe e1000e InterruptThrottleRate=3000,3000,3000 EEE=1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-setting_module_parameters
Apache Camel Development Guide
Apache Camel Development Guide Red Hat Fuse 7.13 Develop applications with Apache Camel Red Hat Fuse Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/index
Chapter 11. Setting up PKI ACME Responder
Chapter 11. Setting up PKI ACME Responder This chapter describes the installation and initial configuration of an ACME responder on a PKI server that already has a CA subsystem. Note The following assumes you installed the CA with the default instance name (i.e. pki-tomcat ). For information on how to manage PKI ACME Responder, see the Managing PKI ACME Responder chapter in the Red Hat Certificate System Administration Guide . 11.1. Installing PKI ACME Responder To install PKI ACME Responder on your PKI server, first download and install the pki-acme RPM package: Then, create an ACME responder in a PKI server instance using the following command: This creates the initial configuration files in the /etc/pki/pki-tomcat/acme directory. For more information, see the pki-server-acme manpage.
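You can inspect the generated configuration before continuing. The following sketch only lists the created files and opens the manpage referenced above, because the remaining database, issuer, realm, and deployment steps depend on your version and are covered by the pki-server acme-* subcommands described there:
# ls /etc/pki/pki-tomcat/acme
# man pki-server-acme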
[ "dnf install pki-acme", "pki-server acme-create" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/setting_up_acme_responder
Chapter 1. Accessing Red Hat Satellite
Chapter 1. Accessing Red Hat Satellite After Red Hat Satellite has been installed and configured, use the Satellite web UI interface to log in to Satellite for further configuration. 1.1. Satellite web UI overview You can manage and monitor your Satellite infrastructure from a browser with the Satellite web UI. For example, you can use the following navigation features in the Satellite web UI: Navigation feature Description Organization dropdown Choose the organization you want to manage. Location dropdown Choose the location you want to manage. Monitor Provides summary dashboards and reports. Content Provides content management tools. This includes content views, activation keys, and lifecycle environments. Hosts Provides host inventory and provisioning configuration tools. Configure Provides general configuration tools and data, including host groups and Ansible content. Infrastructure Provides tools on configuring how Satellite interacts with the environment. Provides event notifications to keep administrators informed of important environment changes. Administer Provides advanced configuration for settings such as users, role-based access control (RBAC), and general settings. 1.2. Importing the Katello root CA certificate The first time you log in to Satellite, you might see a warning informing you that you are using the default self-signed certificate and you might not be able to connect this browser to Satellite until the root CA certificate is imported in the browser. Use the following procedure to locate the root CA certificate on Satellite and to import it into your browser. To use the CLI instead of the Satellite web UI, see CLI Procedure . Prerequisites Your Red Hat Satellite is installed and configured. Procedure Identify the fully qualified domain name of your Satellite Server: Access the pub directory on your Satellite Server using a web browser pointed to the fully qualified domain name: When you access Satellite for the first time, an untrusted connection warning displays in your web browser. Accept the self-signed certificate and add the Satellite URL as a security exception to override the settings. This procedure might differ depending on the browser being used. Ensure that the Satellite URL is valid before you accept the security exception. Select katello-server-ca.crt . Import the certificate into your browser as a certificate authority and trust it to identify websites. CLI procedure From the Satellite CLI, copy the katello-server-ca.crt file to the machine you use to access the Satellite web UI: In the browser, import the katello-server-ca.crt certificate as a certificate authority and trust it to identify websites. 1.3. Logging in to Satellite Use the web user interface to log in to Satellite for further configuration. Prerequisites Ensure that the Katello root CA certificate is installed in your browser. For more information, see Section 1.2, "Importing the Katello root CA certificate" . Procedure Access Satellite Server using a web browser pointed to the fully qualified domain name: Enter the user name and password created during the configuration process. If a user was not created during the configuration process, the default user name is admin . If you have problems logging in, you can reset the password. For more information, see Section 1.8, "Resetting the administrative user password" . 1.4. 
Using Red Hat Identity Management credentials to log in to the Satellite Hammer CLI This section describes how to log in to your Satellite Hammer CLI with your Red Hat Identity Management (IdM) login and password. Prerequisites You have enrolled your Satellite Server into Red Hat Identity Management and configured it to use Red Hat Identity Management for authentication. More specifically, you have enabled access both to the Satellite web UI and the Satellite API. For more information, see Using Red Hat Identity Management in Installing Satellite Server in a connected network environment . The host on which you run this procedure is configured to use Red Hat Identity Management credentials to log users in to your Satellite Hammer CLI. For more information, see Configuring the Hammer CLI to Use Red Hat Identity Management User Authentication in Installing Satellite Server in a connected network environment . The host is an Red Hat Identity Management client. An Red Hat Identity Management server is running and reachable by the host. Procedure Obtain a Kerberos ticket-granting ticket (TGT) on behalf of a Satellite user: Warning If, when you were setting Red Hat Identity Management to be the authentication provider, you enabled access to both the Satellite API and the Satellite web UI, an attacker can now obtain an API session after the user receives the Kerberos TGT. The attack is possible even if the user did not previously enter the Satellite login credentials anywhere, for example in the browser. If automatic negotiate authentication is not enabled, use the TGT to authenticate to Hammer manually: Optional: Destroy all cached Kerberos tickets in the collection: You are still logged in, even after destroying the Kerberos ticket. Verification Use any hammer command to ensure that the system does not ask you to authenticate again: Note To log out of Hammer, enter: hammer auth logout . 1.5. Using Red Hat Identity Management credentials to log in to the Satellite web UI with a Firefox browser This section describes how to use the Firefox browser to log in to your Satellite web UI with your Red Hat Identity Management (IdM) login and password. Prerequisites You have enrolled your Satellite Server into Red Hat Identity Management and configured the server to use Red Hat Identity Management for authentication. For more information, see Using Red Hat Identity Management in Installing Satellite Server in a connected network environment . The host on which you are using a Firefox browser to log in to the Satellite web UI is an Red Hat Identity Management client. You have a valid Red Hat Identity Management login and password. Red Hat recommends using the latest stable Firefox browser. Your Firefox browser is configured for Single Sign-On (SSO). For more information, see Configuring Firefox to use Kerberos for single sign-on in Configuring authentication and authorization in Red Hat Enterprise Linux . An Red Hat Identity Management server is running and reachable by the host. Procedure Obtain the Kerberos ticket granting ticket (TGT) for yourself using your Red Hat Identity Management credentials: In your browser address bar, enter the URL of your Satellite Server. You are logged in automatically. Note Alternatively, you can skip the first two steps and enter your login and password in the fields displayed on the Satellite web UI. This is also the only option if the host from which you are accessing the Satellite web UI is not an Red Hat Identity Management client. 1.6. 
Using Red Hat Identity Management credentials to log in to the Satellite web UI with a Chrome browser This section describes how to use a Chrome browser to log in to your Satellite web UI with your Red Hat Identity Management login and password. Prerequisites You have enrolled your Satellite Server into Red Hat Identity Management and configured the server to use Red Hat Identity Management for authentication. For more information, see Using Red Hat Identity Management in Installing Satellite Server in a connected network environment . The host on which you are using the Chrome browser to log in to the Satellite web UI is an Red Hat Identity Management client. You have a valid Red Hat Identity Management login and password. Red Hat recommends using the latest stable Chrome browser. An Red Hat Identity Management server is running and reachable by the host. Procedure Enable the Chrome browser to use Kerberos authentication: Note Instead of allowlisting the whole domain, you can also allowlist a specific Satellite Server. Obtain the Kerberos ticket-granting ticket (TGT) for yourself using your Red Hat Identity Management credentials: In your browser address bar, enter the URL of your Satellite Server. You are logged in automatically. Note Alternatively, you can skip the first three steps and enter your login and password in the fields displayed on the Satellite web UI. This is also the only option if the host from which you are accessing the Satellite web UI is not an Red Hat Identity Management client. 1.7. Changing the password These steps show how to change your password. Procedure In the Satellite web UI, click your user name at the top right corner. Select My Account from the menu. In the Current Password field, enter the current password. In the Password field, enter a new password. In the Verify field, enter the new password again. Click Submit to save your new password. 1.8. Resetting the administrative user password Use the following procedures to reset the administrative password to randomly generated characters or to set a new administrative password. To reset the administrative user password Log in to the base operating system where Satellite Server is installed. Enter the following command to reset the password: Use this password to reset the password in the Satellite web UI. Edit the ~/.hammer/cli.modules.d/foreman.yml file on Satellite Server to add the new password: Unless you update the ~/.hammer/cli.modules.d/foreman.yml file, you cannot use the new password with Hammer CLI. To set a new administrative user password Log in to the base operating system where Satellite Server is installed. To set the password, enter the following command: Edit the ~/.hammer/cli.modules.d/foreman.yml file on Satellite Server to add the new password: Unless you update the ~/.hammer/cli.modules.d/foreman.yml file, you cannot use the new password with Hammer CLI. 1.9. Setting a custom message on the Login page Procedure In the Satellite web UI, navigate to Administer > Settings , and click the General tab. Click the edit button to Login page footer text , and enter the desired text to be displayed on the login page. For example, this text may be a warning message required by your company. Click Save . Log out of the Satellite web UI and verify that the custom text is now displayed on the login page below the Satellite version number.
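When the password reset procedures above ask you to add the new password to the ~/.hammer/cli.modules.d/foreman.yml file, the relevant entries typically look like the following; the values are placeholders and the rest of the file should be left unchanged:
:foreman:
  :username: 'admin'
  :password: 'new_password'
After saving the file, run a simple command such as hammer host list to confirm that Hammer CLI authenticates with the new password.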
[ "hostname -f", "https:// satellite.example.com /pub", "scp /var/www/html/pub/katello-server-ca.crt username@hostname:remotefile", "https:// satellite.example.com /", "kinit idm_user", "hammer auth login negotiate", "kdestroy -A", "hammer host list", "kinit idm_user Password for idm_user@ EXAMPLE.COM :", "google-chrome --auth-server-whitelist=\"*. example.com \" --auth-negotiate-delegate-whitelist=\"*. example.com \"", "kinit idm_user Password for idm_user@_EXAMPLE.COM :", "foreman-rake permissions:reset Reset to user: admin, password: qwJxBptxb7Gfcjj5", "vi ~/.hammer/cli.modules.d/foreman.yml", "foreman-rake permissions:reset password= new_password", "vi ~/.hammer/cli.modules.d/foreman.yml" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/administering_red_hat_satellite/accessing_server_admin
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_a_red_hat_high_availability_cluster_on_red_hat_openstack_platform/proc_providing-feedback-on-red-hat-documentation_configurng-a-red-hat-high-availability-cluster-on-red-hat-openstack-platform
Chapter 5. Managing hosted control planes
Chapter 5. Managing hosted control planes 5.1. Managing hosted control planes on AWS When you use hosted control planes for OpenShift Container Platform on Amazon Web Services (AWS), the infrastructure requirements vary based on your setup. 5.1.1. Prerequisites to manage AWS infrastructure and IAM permissions To configure hosted control planes for OpenShift Container Platform on Amazon Web Services (AWS), you must meet the following the infrastructure requirements: You configured hosted control planes before you can create hosted clusters. You created an AWS Identity and Access Management (IAM) role and AWS Security Token Service (STS) credentials. 5.1.1.1. Infrastructure requirements for AWS When you use hosted control planes on Amazon Web Services (AWS), the infrastructure requirements fit in the following categories: Prerequired and unmanaged infrastructure for the HyperShift Operator in an arbitrary AWS account Prerequired and unmanaged infrastructure in a hosted cluster AWS account Hosted control planes-managed infrastructure in a management AWS account Hosted control planes-managed infrastructure in a hosted cluster AWS account Kubernetes-managed infrastructure in a hosted cluster AWS account Prerequired means that hosted control planes requires AWS infrastructure to properly work. Unmanaged means that no Operator or controller creates the infrastructure for you. 5.1.1.2. Unmanaged infrastructure for the HyperShift Operator in an AWS account An arbitrary Amazon Web Services (AWS) account depends on the provider of the hosted control planes service. In self-managed hosted control planes, the cluster service provider controls the AWS account. The cluster service provider is the administrator who hosts cluster control planes and is responsible for uptime. In managed hosted control planes, the AWS account belongs to Red Hat. In a prerequired and unmanaged infrastructure for the HyperShift Operator, the following infrastructure requirements apply for a management cluster AWS account: One S3 Bucket OpenID Connect (OIDC) Route 53 hosted zones A domain to host private and public entries for hosted clusters 5.1.1.3. Unmanaged infrastructure requirements for a management AWS account When your infrastructure is prerequired and unmanaged in a hosted cluster Amazon Web Services (AWS) account, the infrastructure requirements for all access modes are as follows: One VPC One DHCP Option Two subnets A private subnet that is an internal data plane subnet A public subnet that enables access to the internet from the data plane One internet gateway One elastic IP One NAT gateway One security group (worker nodes) Two route tables (one private and one public) Two Route 53 hosted zones Enough quota for the following items: One Ingress service load balancer for public hosted clusters One private link endpoint for private hosted clusters Note For private link networking to work, the endpoint zone in the hosted cluster AWS account must match the zone of the instance that is resolved by the service endpoint in the management cluster AWS account. In AWS, the zone names are aliases, such as us-east-2b, which do not necessarily map to the same zone in different accounts. As a result, for private link to work, the management cluster must have subnets or workers in all zones of its region. 5.1.1.4. 
Infrastructure requirements for a management AWS account When your infrastructure is managed by hosted control planes in a management AWS account, the infrastructure requirements differ depending on whether your clusters are public, private, or a combination. For accounts with public clusters, the infrastructure requirements are as follows: Network load balancer: a load balancer Kube API server Kubernetes creates a security group Volumes For etcd (one or three depending on high availability) For OVN-Kube For accounts with private clusters, the infrastructure requirements are as follows: Network load balancer: a load balancer private router Endpoint service (private link) For accounts with public and private clusters, the infrastructure requirements are as follows: Network load balancer: a load balancer public router Network load balancer: a load balancer private router Endpoint service (private link) Volumes For etcd (one or three depending on high availability) For OVN-Kube 5.1.1.5. Infrastructure requirements for an AWS account in a hosted cluster When your infrastructure is managed by hosted control planes in a hosted cluster Amazon Web Services (AWS) account, the infrastructure requirements differ depending on whether your clusters are public, private, or a combination. For accounts with public clusters, the infrastructure requirements are as follows: Node pools must have EC2 instances that have Role and RolePolicy defined. For accounts with private clusters, the infrastructure requirements are as follows: One private link endpoint for each availability zone EC2 instances for node pools For accounts with public and private clusters, the infrastructure requirements are as follows: One private link endpoint for each availability zone EC2 instances for node pools 5.1.1.6. Kubernetes-managed infrastructure in a hosted cluster AWS account When Kubernetes manages your infrastructure in a hosted cluster Amazon Web Services (AWS) account, the infrastructure requirements are as follows: A network load balancer for default Ingress An S3 bucket for registry 5.1.2. Identity and Access Management (IAM) permissions In the context of hosted control planes, the consumer is responsible to create the Amazon Resource Name (ARN) roles. The consumer is an automated process to generate the permissions files. The consumer might be the CLI or OpenShift Cluster Manager. Hosted control planes can enable granularity to honor the principle of least-privilege components, which means that every component uses its own role to operate or create Amazon Web Services (AWS) objects, and the roles are limited to what is required for the product to function normally. The hosted cluster receives the ARN roles as input and the consumer creates an AWS permission configuration for each component. As a result, the component can authenticate through STS and preconfigured OIDC IDP. The following roles are consumed by some of the components from hosted control planes that run on the control plane and operate on the data plane: controlPlaneOperatorARN imageRegistryARN ingressARN kubeCloudControllerARN nodePoolManagementARN storageARN networkARN The following example shows a reference to the IAM roles from the hosted cluster: ... 
endpointAccess: Public region: us-east-2 resourceTags: - key: kubernetes.io/cluster/example-cluster-bz4j5 value: owned rolesRef: controlPlaneOperatorARN: arn:aws:iam::820196288204:role/example-cluster-bz4j5-control-plane-operator imageRegistryARN: arn:aws:iam::820196288204:role/example-cluster-bz4j5-openshift-image-registry ingressARN: arn:aws:iam::820196288204:role/example-cluster-bz4j5-openshift-ingress kubeCloudControllerARN: arn:aws:iam::820196288204:role/example-cluster-bz4j5-cloud-controller networkARN: arn:aws:iam::820196288204:role/example-cluster-bz4j5-cloud-network-config-controller nodePoolManagementARN: arn:aws:iam::820196288204:role/example-cluster-bz4j5-node-pool storageARN: arn:aws:iam::820196288204:role/example-cluster-bz4j5-aws-ebs-csi-driver-controller type: AWS ... The roles that hosted control planes uses are shown in the following examples: ingressARN { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "elasticloadbalancing:DescribeLoadBalancers", "tag:GetResources", "route53:ListHostedZones" ], "Resource": "\*" }, { "Effect": "Allow", "Action": [ "route53:ChangeResourceRecordSets" ], "Resource": [ "arn:aws:route53:::PUBLIC_ZONE_ID", "arn:aws:route53:::PRIVATE_ZONE_ID" ] } ] } imageRegistryARN { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:CreateBucket", "s3:DeleteBucket", "s3:PutBucketTagging", "s3:GetBucketTagging", "s3:PutBucketPublicAccessBlock", "s3:GetBucketPublicAccessBlock", "s3:PutEncryptionConfiguration", "s3:GetEncryptionConfiguration", "s3:PutLifecycleConfiguration", "s3:GetLifecycleConfiguration", "s3:GetBucketLocation", "s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucketMultipartUploads", "s3:AbortMultipartUpload", "s3:ListMultipartUploadParts" ], "Resource": "\*" } ] } storageARN { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:AttachVolume", "ec2:CreateSnapshot", "ec2:CreateTags", "ec2:CreateVolume", "ec2:DeleteSnapshot", "ec2:DeleteTags", "ec2:DeleteVolume", "ec2:DescribeInstances", "ec2:DescribeSnapshots", "ec2:DescribeTags", "ec2:DescribeVolumes", "ec2:DescribeVolumesModifications", "ec2:DetachVolume", "ec2:ModifyVolume" ], "Resource": "\*" } ] } networkARN { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:DescribeInstances", "ec2:DescribeInstanceStatus", "ec2:DescribeInstanceTypes", "ec2:UnassignPrivateIpAddresses", "ec2:AssignPrivateIpAddresses", "ec2:UnassignIpv6Addresses", "ec2:AssignIpv6Addresses", "ec2:DescribeSubnets", "ec2:DescribeNetworkInterfaces" ], "Resource": "\*" } ] } kubeCloudControllerARN nodePoolManagementARN { "Version": "2012-10-17", "Statement": [ { "Action": [ "ec2:AllocateAddress", "ec2:AssociateRouteTable", "ec2:AttachInternetGateway", "ec2:AuthorizeSecurityGroupIngress", "ec2:CreateInternetGateway", "ec2:CreateNatGateway", "ec2:CreateRoute", "ec2:CreateRouteTable", "ec2:CreateSecurityGroup", "ec2:CreateSubnet", "ec2:CreateTags", "ec2:DeleteInternetGateway", "ec2:DeleteNatGateway", "ec2:DeleteRouteTable", "ec2:DeleteSecurityGroup", "ec2:DeleteSubnet", "ec2:DeleteTags", "ec2:DescribeAccountAttributes", "ec2:DescribeAddresses", "ec2:DescribeAvailabilityZones", "ec2:DescribeImages", "ec2:DescribeInstances", "ec2:DescribeInternetGateways", "ec2:DescribeNatGateways", "ec2:DescribeNetworkInterfaces", "ec2:DescribeNetworkInterfaceAttribute", "ec2:DescribeRouteTables", "ec2:DescribeSecurityGroups", "ec2:DescribeSubnets", "ec2:DescribeVpcs", "ec2:DescribeVpcAttribute", "ec2:DescribeVolumes", 
"ec2:DetachInternetGateway", "ec2:DisassociateRouteTable", "ec2:DisassociateAddress", "ec2:ModifyInstanceAttribute", "ec2:ModifyNetworkInterfaceAttribute", "ec2:ModifySubnetAttribute", "ec2:ReleaseAddress", "ec2:RevokeSecurityGroupIngress", "ec2:RunInstances", "ec2:TerminateInstances", "tag:GetResources", "ec2:CreateLaunchTemplate", "ec2:CreateLaunchTemplateVersion", "ec2:DescribeLaunchTemplates", "ec2:DescribeLaunchTemplateVersions", "ec2:DeleteLaunchTemplate", "ec2:DeleteLaunchTemplateVersions" ], "Resource": [ "\*" ], "Effect": "Allow" }, { "Condition": { "StringLike": { "iam:AWSServiceName": "elasticloadbalancing.amazonaws.com" } }, "Action": [ "iam:CreateServiceLinkedRole" ], "Resource": [ "arn:*:iam::*:role/aws-service-role/elasticloadbalancing.amazonaws.com/AWSServiceRoleForElasticLoadBalancing" ], "Effect": "Allow" }, { "Action": [ "iam:PassRole" ], "Resource": [ "arn:*:iam::*:role/*-worker-role" ], "Effect": "Allow" } ] } controlPlaneOperatorARN { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:CreateVpcEndpoint", "ec2:DescribeVpcEndpoints", "ec2:ModifyVpcEndpoint", "ec2:DeleteVpcEndpoints", "ec2:CreateTags", "route53:ListHostedZones" ], "Resource": "\*" }, { "Effect": "Allow", "Action": [ "route53:ChangeResourceRecordSets", "route53:ListResourceRecordSets" ], "Resource": "arn:aws:route53:::%s" } ] } 5.1.3. Creating AWS infrastructure and IAM resources separate By default, the hcp create cluster aws command creates cloud infrastructure with the hosted cluster and applies it. You can create the cloud infrastructure portion separately so that you can use the hcp create cluster aws command only to create the cluster, or render it to modify it before you apply it. To create the cloud infrastructure portion separately, you need to create the Amazon Web Services (AWS) infrastructure, create the AWS Identity and Access (IAM) resources, and create the cluster. 5.1.3.1. Creating the AWS infrastructure separately To create the Amazon Web Services (AWS) infrastructure, you need to create a Virtual Private Cloud (VPC) and other resources for your cluster. You can use the AWS console or an infrastructure automation and provisioning tool. For instructions to use the AWS console, see Create a VPC plus other VPC resources in the AWS Documentation. The VPC must include private and public subnets and resources for external access, such as a network address translation (NAT) gateway and an internet gateway. In addition to the VPC, you need a private hosted zone for the ingress of your cluster. If you are creating clusters that use PrivateLink ( Private or PublicAndPrivate access modes), you need an additional hosted zone for PrivateLink. 
Create the AWS infrastructure for your hosted cluster by using the following example configuration: --- apiVersion: v1 kind: Namespace metadata: creationTimestamp: null name: clusters spec: {} status: {} --- apiVersion: v1 data: .dockerconfigjson: xxxxxxxxxxx kind: Secret metadata: creationTimestamp: null labels: hypershift.openshift.io/safe-to-delete-with-cluster: "true" name: <pull_secret_name> 1 namespace: clusters --- apiVersion: v1 data: key: xxxxxxxxxxxxxxxxx kind: Secret metadata: creationTimestamp: null labels: hypershift.openshift.io/safe-to-delete-with-cluster: "true" name: <etcd_encryption_key_name> 2 namespace: clusters type: Opaque --- apiVersion: v1 data: id_rsa: xxxxxxxxx id_rsa.pub: xxxxxxxxx kind: Secret metadata: creationTimestamp: null labels: hypershift.openshift.io/safe-to-delete-with-cluster: "true" name: <ssh-key-name> 3 namespace: clusters --- apiVersion: hypershift.openshift.io/v1beta1 kind: HostedCluster metadata: creationTimestamp: null name: <hosted_cluster_name> 4 namespace: clusters spec: autoscaling: {} configuration: {} controllerAvailabilityPolicy: SingleReplica dns: baseDomain: <dns_domain> 5 privateZoneID: xxxxxxxx publicZoneID: xxxxxxxx etcd: managed: storage: persistentVolume: size: 8Gi storageClassName: gp3-csi type: PersistentVolume managementType: Managed fips: false infraID: <infra_id> 6 issuerURL: <issuer_url> 7 networking: clusterNetwork: - cidr: 10.132.0.0/14 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes serviceNetwork: - cidr: 172.31.0.0/16 olmCatalogPlacement: management platform: aws: cloudProviderConfig: subnet: id: <subnet_xxx> 8 vpc: <vpc_xxx> 9 zone: us-west-1b endpointAccess: Public multiArch: false region: us-west-1 rolesRef: controlPlaneOperatorARN: arn:aws:iam::820196288204:role/<infra_id>-control-plane-operator imageRegistryARN: arn:aws:iam::820196288204:role/<infra_id>-openshift-image-registry ingressARN: arn:aws:iam::820196288204:role/<infra_id>-openshift-ingress kubeCloudControllerARN: arn:aws:iam::820196288204:role/<infra_id>-cloud-controller networkARN: arn:aws:iam::820196288204:role/<infra_id>-cloud-network-config-controller nodePoolManagementARN: arn:aws:iam::820196288204:role/<infra_id>-node-pool storageARN: arn:aws:iam::820196288204:role/<infra_id>-aws-ebs-csi-driver-controller type: AWS pullSecret: name: <pull_secret_name> release: image: quay.io/openshift-release-dev/ocp-release:4.16-x86_64 secretEncryption: aescbc: activeKey: name: <etcd_encryption_key_name> type: aescbc services: - service: APIServer servicePublishingStrategy: type: LoadBalancer - service: OAuthServer servicePublishingStrategy: type: Route - service: Konnectivity servicePublishingStrategy: type: Route - service: Ignition servicePublishingStrategy: type: Route - service: OVNSbDb servicePublishingStrategy: type: Route sshKey: name: <ssh_key_name> status: controlPlaneEndpoint: host: "" port: 0 --- apiVersion: hypershift.openshift.io/v1beta1 kind: NodePool metadata: creationTimestamp: null name: <node_pool_name> 10 namespace: clusters spec: arch: amd64 clusterName: <hosted_cluster_name> management: autoRepair: true upgradeType: Replace nodeDrainTimeout: 0s platform: aws: instanceProfile: <instance_profile_name> 11 instanceType: m6i.xlarge rootVolume: size: 120 type: gp3 subnet: id: <subnet_xxx> type: AWS release: image: quay.io/openshift-release-dev/ocp-release:4.16-x86_64 replicas: 2 status: replicas: 0 1 Replace <pull_secret_name> with the name of your pull secret. 
2 Replace <etcd_encryption_key_name> with the name of your etcd encryption key.
3 Replace <ssh_key_name> with the name of your SSH key.
4 Replace <hosted_cluster_name> with the name of your hosted cluster.
5 Replace <dns_domain> with your base DNS domain, such as example.com.
6 Replace <infra_id> with the value that identifies the IAM resources that are associated with the hosted cluster.
7 Replace <issuer_url> with your issuer URL, which ends with your infra_id value. For example, https://example-hosted-us-west-1.s3.us-west-1.amazonaws.com/example-hosted-infra-id.
8 Replace <subnet_xxx> with your subnet ID. Both private and public subnets need to be tagged. For public subnets, use kubernetes.io/role/elb=1. For private subnets, use kubernetes.io/role/internal-elb=1.
9 Replace <vpc_xxx> with your VPC ID.
10 Replace <node_pool_name> with the name of your NodePool resource.
11 Replace <instance_profile_name> with the name of your AWS instance profile.
5.1.3.2. Creating the AWS IAM resources
In Amazon Web Services (AWS), you must create the following IAM resources:
An OpenID Connect (OIDC) identity provider in IAM, which is required to enable STS authentication.
Seven roles, which are separate for every component that interacts with the provider, such as the Kubernetes controller manager, cluster API provider, and registry.
The instance profile, which is the profile that is assigned to all worker instances of the cluster.
5.1.3.3. Creating a hosted cluster separately
You can create a hosted cluster separately on Amazon Web Services (AWS). To create a hosted cluster separately, enter the following command:
USD hcp create cluster aws \
  --infra-id <infra_id> \ 1
  --name <hosted_cluster_name> \ 2
  --sts-creds <path_to_sts_credential_file> \ 3
  --pull-secret <path_to_pull_secret> \ 4
  --generate-ssh \ 5
  --node-pool-replicas 3 \
  --role-arn <role_name> 6
1 Replace <infra_id> with the same ID that you specified in the create infra aws command. This value identifies the IAM resources that are associated with the hosted cluster.
2 Replace <hosted_cluster_name> with the name of your hosted cluster.
3 Replace <path_to_sts_credential_file> with the same name that you specified in the create infra aws command.
4 Replace <path_to_pull_secret> with the name of the file that contains a valid OpenShift Container Platform pull secret.
5 The --generate-ssh flag is optional, but is good to include in case you need to SSH to your workers. An SSH key is generated for you and is stored as a secret in the same namespace as the hosted cluster.
6 Replace <role_name> with the Amazon Resource Name (ARN), for example, arn:aws:iam::820196288204:role/myrole. For more information about ARN roles, see "Identity and Access Management (IAM) permissions".
You can also add the --render flag to the command and redirect output to a file where you can edit the resources before you apply them to the cluster. After you run the command, the following resources are applied to your cluster:
A namespace
A secret with your pull secret
A HostedCluster
A NodePool
Three AWS STS secrets for control plane components
One SSH key secret if you specified the --generate-ssh flag.
5.1.4. Transitioning a hosted cluster from single-architecture to multi-architecture
You can transition your single-architecture 64-bit AMD hosted cluster to a multi-architecture hosted cluster on Amazon Web Services (AWS), to reduce the cost of running workloads on your cluster.
For example, you can run existing workloads on 64-bit AMD while transitioning to 64-bit ARM and you can manage these workloads from a central Kubernetes cluster. A single-architecture hosted cluster can manage node pools of only one particular CPU architecture. However, a multi-architecture hosted cluster can manage node pools with different CPU architectures. On AWS, a multi-architecture hosted cluster can manage both 64-bit AMD and 64-bit ARM node pools. Prerequisites You have installed an OpenShift Container Platform management cluster for AWS on Red Hat Advanced Cluster Management (RHACM) with the multicluster engine for Kubernetes Operator. You have an existing single-architecture hosted cluster that uses 64-bit AMD variant of the OpenShift Container Platform release payload. An existing node pool that uses the same 64-bit AMD variant of the OpenShift Container Platform release payload and is managed by an existing hosted cluster. Ensure that you installed the following command-line tools: oc kubectl hcp skopeo Procedure Review an existing OpenShift Container Platform release image of the single-architecture hosted cluster by running the following command: USD oc get hostedcluster/<hosted_cluster_name> \ 1 -o jsonpath='{.spec.release.image}' 1 Replace <hosted_cluster_name> with your hosted cluster name. Example output quay.io/openshift-release-dev/ocp-release:<4.y.z>-x86_64 1 1 Replace <4.y.z> with the supported OpenShift Container Platform version that you use. In your OpenShift Container Platform release image, if you use the digest instead of a tag, find the multi-architecture tag version of your release image: Set the OCP_VERSION environment variable for the OpenShift Container Platform version by running the following command: USD OCP_VERSION=USD(oc image info quay.io/openshift-release-dev/ocp-release@sha256:ac78ebf77f95ab8ff52847ecd22592b545415e1ff6c7ff7f66bf81f158ae4f5e \ -o jsonpath='{.config.config.Labels["io.openshift.release"]}') Set the MULTI_ARCH_TAG environment variable for the multi-architecture tag version of your release image by running the following command: USD MULTI_ARCH_TAG=USD(skopeo inspect docker://quay.io/openshift-release-dev/ocp-release@sha256:ac78ebf77f95ab8ff52847ecd22592b545415e1ff6c7ff7f66bf81f158ae4f5e \ | jq -r '.RepoTags' | sed 's/"//g' | sed 's/,//g' \ | grep -w "USDOCP_VERSION-multiUSD" | xargs) Set the IMAGE environment variable for the multi-architecture release image name by running the following command: USD IMAGE=quay.io/openshift-release-dev/ocp-release:USDMULTI_ARCH_TAG To see the list of multi-architecture image digests, run the following command: USD oc image info USDIMAGE Example output OS DIGEST linux/amd64 sha256:b4c7a91802c09a5a748fe19ddd99a8ffab52d8a31db3a081a956a87f22a22ff8 linux/ppc64le sha256:66fda2ff6bd7704f1ba72be8bfe3e399c323de92262f594f8e482d110ec37388 linux/s390x sha256:b1c1072dc639aaa2b50ec99b530012e3ceac19ddc28adcbcdc9643f2dfd14f34 linux/arm64 sha256:7b046404572ac96202d82b6cb029b421dddd40e88c73bbf35f602ffc13017f21 Transition the hosted cluster from single-architecture to multi-architecture: Set the multi-architecture OpenShift Container Platform release image for the hosted cluster by ensuring that you use the same OpenShift Container Platform version as the hosted cluster. 
Run the following command: USD oc patch -n clusters hostedclusters/<hosted_cluster_name> -p \ '{"spec":{"release":{"image":"quay.io/openshift-release-dev/ocp-release:<4.x.y>-multi"}}}' \ 1 --type=merge 1 Replace <4.y.z> with the supported OpenShift Container Platform version that you use. Confirm that the multi-architecture image is set in your hosted cluster by running the following command: USD oc get hostedcluster/<hosted_cluster_name> \ -o jsonpath='{.spec.release.image}' Check that the status of the HostedControlPlane resource is Progressing by running the following command: USD oc get hostedcontrolplane -n <hosted_control_plane_namespace> -oyaml Example output #... - lastTransitionTime: "2024-07-28T13:07:18Z" message: HostedCluster is deploying, upgrading, or reconfiguring observedGeneration: 5 reason: Progressing status: "True" type: Progressing #... Check that the status of the HostedCluster resource is Progressing by running the following command: USD oc get hostedcluster <hosted_cluster_name> \ -n <hosted_cluster_namespace> -oyaml Verification Verify that a node pool is using the multi-architecture release image in your HostedControlPlane resource by running the following command: USD oc get hostedcontrolplane -n clusters-example -oyaml Example output #... version: availableUpdates: null desired: image: quay.io/openshift-release-dev/ocp-release:<4.x.y>-multi 1 url: https://access.redhat.com/errata/RHBA-2024:4855 version: 4.16.5 history: - completionTime: "2024-07-28T13:10:58Z" image: quay.io/openshift-release-dev/ocp-release:<4.x.y>-multi startedTime: "2024-07-28T13:10:27Z" state: Completed verified: false version: <4.x.y> 1 Replace <4.y.z> with the supported OpenShift Container Platform version that you use. Note The multi-architecture OpenShift Container Platform release image is updated in your HostedCluster , HostedControlPlane resources, and hosted control plane pods. However, your existing node pools do not transition with the multi-architecture image automatically, because the release image transition is decoupled between the hosted cluster and node pools. You must create new node pools on your new multi-architecture hosted cluster. steps Creating node pools on the multi-architecture hosted cluster 5.1.5. Creating node pools on the multi-architecture hosted cluster After transitioning your hosted cluster from single-architecture to multi-architecture, create node pools on compute machines based on 64-bit AMD and 64-bit ARM architectures. Procedure Create node pools based on 64-bit ARM architecture by entering the following command: USD hcp create nodepool aws \ --cluster-name <hosted_cluster_name> \ 1 --name <nodepool_name> \ 2 --node-count=<node_count> \ 3 --arch arm64 1 Replace <hosted_cluster_name> with your hosted cluster name. 2 Replace <nodepool_name> with your node pool name. 3 Replace <node_count> with integer for your node count, for example, 2 . Create node pools based on 64-bit AMD architecture by entering the following command: USD hcp create nodepool aws \ --cluster-name <hosted_cluster_name> \ 1 --name <nodepool_name> \ 2 --node-count=<node_count> \ 3 --arch amd64 1 Replace <hosted_cluster_name> with your hosted cluster name. 2 Replace <nodepool_name> with your node pool name. 3 Replace <node_count> with integer for your node count, for example, 2 . Verification Verify that a node pool is using the multi-architecture release image by entering the following command: USD oc get nodepool/<nodepool_name> -oyaml Example output for 64-bit AMD node pools #... 
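Before you add new node pools, you can check which CPU architecture each existing node pool uses. The following sketch assumes that your node pools are in the clusters namespace, as in the previous examples:
$ oc get nodepools -n clusters \
    -o custom-columns=NAME:.metadata.name,ARCH:.spec.arch
The ARCH column reports the value of the spec.arch field, such as amd64 or arm64, so you can confirm which architectures are already present before you create additional pools.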
spec: arch: amd64 #... release: image: quay.io/openshift-release-dev/ocp-release:<4.x.y>-multi 1 1 Replace <4.y.z> with the supported OpenShift Container Platform version that you use. Example output for 64-bit ARM node pools #... spec: arch: arm64 #... release: image: quay.io/openshift-release-dev/ocp-release:<4.x.y>-multi 5.2. Managing hosted control planes on bare metal After you deploy hosted control planes on bare metal, you can manage a hosted cluster by completing the following tasks. 5.2.1. Accessing the hosted cluster You can access the hosted cluster by either getting the kubeconfig file and kubeadmin credential directly from resources, or by using the hcp command line interface to generate a kubeconfig file. Prerequisites To access the hosted cluster by getting the kubeconfig file and credentials directly from resources, you must be familiar with the access secrets for hosted clusters. The hosted cluster (hosting) namespace contains hosted cluster resources and the access secrets. The hosted control plane namespace is where the hosted control plane runs. The secret name formats are as follows: kubeconfig secret: <hosted_cluster_namespace>-<name>-admin-kubeconfig . For example, clusters-hypershift-demo-admin-kubeconfig . kubeadmin password secret: <hosted_cluster_namespace>-<name>-kubeadmin-password . For example, clusters-hypershift-demo-kubeadmin-password . The kubeconfig secret contains a Base64-encoded kubeconfig field, which you can decode and save into a file to use with the following command: USD oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes The kubeadmin password secret is also Base64-encoded. You can decode it and use the password to log in to the API server or console of the hosted cluster. Procedure To access the hosted cluster by using the hcp CLI to generate the kubeconfig file, take the following steps: Generate the kubeconfig file by entering the following command: USD hcp create kubeconfig --namespace <hosted_cluster_namespace> \ --name <hosted_cluster_name> > <hosted_cluster_name>.kubeconfig After you save the kubeconfig file, you can access the hosted cluster by entering the following example command: USD oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes 5.2.2. Scaling the NodePool object for a hosted cluster You can scale up the NodePool object by adding nodes to your hosted cluster. When you scale a node pool, consider the following information: When you scale a replica by the node pool, a machine is created. For every machine, the Cluster API provider finds and installs an Agent that meets the requirements that are specified in the node pool specification. You can monitor the installation of an Agent by checking its status and conditions. When you scale down a node pool, Agents are unbound from the corresponding cluster. Before you can reuse the Agents, you must restart them by using the Discovery image. Procedure Scale the NodePool object to two nodes: USD oc -n <hosted_cluster_namespace> scale nodepool <nodepool_name> --replicas 2 The Cluster API agent provider randomly picks two agents that are then assigned to the hosted cluster. Those agents go through different states and finally join the hosted cluster as OpenShift Container Platform nodes. 
The agents pass through states in the following order: binding discovering insufficient installing installing-in-progress added-to-existing-cluster Enter the following command: USD oc -n <hosted_control_plane_namespace> get agent Example output NAME CLUSTER APPROVED ROLE STAGE 4dac1ab2-7dd5-4894-a220-6a3473b67ee6 hypercluster1 true auto-assign d9198891-39f4-4930-a679-65fb142b108b true auto-assign da503cf1-a347-44f2-875c-4960ddb04091 hypercluster1 true auto-assign Enter the following command: USD oc -n <hosted_control_plane_namespace> get agent \ -o jsonpath='{range .items[*]}BMH: {@.metadata.labels.agent-install\.openshift\.io/bmh} Agent: {@.metadata.name} State: {@.status.debugInfo.state}{"\n"}{end}' Example output BMH: ocp-worker-2 Agent: 4dac1ab2-7dd5-4894-a220-6a3473b67ee6 State: binding BMH: ocp-worker-0 Agent: d9198891-39f4-4930-a679-65fb142b108b State: known-unbound BMH: ocp-worker-1 Agent: da503cf1-a347-44f2-875c-4960ddb04091 State: insufficient Obtain the kubeconfig for your new hosted cluster by entering the extract command: USD oc extract -n <hosted_cluster_namespace> \ secret/<hosted_cluster_name>-admin-kubeconfig --to=- \ > kubeconfig-<hosted_cluster_name> After the agents reach the added-to-existing-cluster state, verify that you can see the OpenShift Container Platform nodes in the hosted cluster by entering the following command: USD oc --kubeconfig kubeconfig-<hosted_cluster_name> get nodes Example output NAME STATUS ROLES AGE VERSION ocp-worker-1 Ready worker 5m41s v1.24.0+3882f8f ocp-worker-2 Ready worker 6m3s v1.24.0+3882f8f Cluster Operators start to reconcile by adding workloads to the nodes. Enter the following command to verify that two machines were created when you scaled up the NodePool object: USD oc -n <hosted_control_plane_namespace> get machines Example output NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION hypercluster1-c96b6f675-m5vch hypercluster1-b2qhl ocp-worker-1 agent://da503cf1-a347-44f2-875c-4960ddb04091 Running 15m 4.x.z hypercluster1-c96b6f675-tl42p hypercluster1-b2qhl ocp-worker-2 agent://4dac1ab2-7dd5-4894-a220-6a3473b67ee6 Running 15m 4.x.z The clusterversion reconcile process eventually reaches a point where only Ingress and Console cluster operators are missing. Enter the following command: USD oc --kubeconfig kubeconfig-<hosted_cluster_name> get clusterversion,co Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS clusterversion.config.openshift.io/version False True 40m Unable to apply 4.x.z: the cluster operator console has not yet successfully rolled out NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE clusteroperator.config.openshift.io/console 4.12z False False False 11m RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.hypercluster1.domain.com): Get "https://console-openshift-console.apps.hypercluster1.domain.com": dial tcp 10.19.3.29:443: connect: connection refused clusteroperator.config.openshift.io/csi-snapshot-controller 4.12z True False False 10m clusteroperator.config.openshift.io/dns 4.12z True False False 9m16s 5.2.2.1. Adding node pools You can create node pools for a hosted cluster by specifying a name, number of replicas, and any additional information, such as an agent label selector. 
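The agent label selector in the following procedure matches labels that are set on Agent resources in the hosted control plane namespace. The following is a minimal sketch of labeling an agent so that the size=medium selector used in the procedure can match it; the agent name and namespace values are placeholders:
$ oc -n <hosted_control_plane_namespace> label agent <agent_name> size=medium
Only agents whose labels match the selector are eligible for that node pool.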
Procedure To create a node pool, enter the following information: USD hcp create nodepool agent \ --cluster-name <hosted_cluster_name> \ 1 --name <nodepool_name> \ 2 --node-count <worker_node_count> \ 3 --agentLabelSelector size=medium 4 1 Replace <hosted_cluster_name> with your hosted cluster name. 2 Replace <nodepool_name> with the name of your node pool, for example, <hosted_cluster_name>-extra-cpu . 3 Replace <worker_node_count> with the worker node count, for example, 2 . 4 The --agentLabelSelector flag is optional. The node pool uses agents with the size=medium label. Check the status of the node pool by listing nodepool resources in the clusters namespace: USD oc get nodepools --namespace clusters Extract the admin-kubeconfig secret by entering the following command: USD oc extract -n <hosted_control_plane_namespace> secret/admin-kubeconfig --to=./hostedcluster-secrets --confirm Example output hostedcluster-secrets/kubeconfig After some time, you can check the status of the node pool by entering the following command: USD oc --kubeconfig ./hostedcluster-secrets get nodes Verification Verify that the number of available node pools match the number of expected node pools by entering this command: USD oc get nodepools --namespace clusters 5.2.2.2. Enabling node auto-scaling for the hosted cluster When you need more capacity in your hosted cluster and spare agents are available, you can enable auto-scaling to install new worker nodes. Procedure To enable auto-scaling, enter the following command: USD oc -n <hosted_cluster_namespace> patch nodepool <hosted_cluster_name> \ --type=json \ -p '[{"op": "remove", "path": "/spec/replicas"},{"op":"add", "path": "/spec/autoScaling", "value": { "max": 5, "min": 2 }}]' Note In the example, the minimum number of nodes is 2, and the maximum is 5. The maximum number of nodes that you can add might be bound by your platform. For example, if you use the Agent platform, the maximum number of nodes is bound by the number of available agents. Create a workload that requires a new node. Create a YAML file that contains the workload configuration, by using the following example: apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: app: reversewords name: reversewords namespace: default spec: replicas: 40 selector: matchLabels: app: reversewords strategy: {} template: metadata: creationTimestamp: null labels: app: reversewords spec: containers: - image: quay.io/mavazque/reversewords:latest name: reversewords resources: requests: memory: 2Gi status: {} Save the file as workload-config.yaml . Apply the YAML by entering the following command: USD oc apply -f workload-config.yaml Extract the admin-kubeconfig secret by entering the following command: USD oc extract -n <hosted_cluster_namespace> \ secret/<hosted_cluster_name>-admin-kubeconfig \ --to=./hostedcluster-secrets --confirm Example output You can check if new nodes are in the Ready status by entering the following command: USD oc --kubeconfig ./hostedcluster-secrets get nodes To remove the node, delete the workload by entering the following command: USD oc --kubeconfig ./hostedcluster-secrets -n <namespace> \ delete deployment <deployment_name> Wait for several minutes to pass without requiring the additional capacity. On the Agent platform, the agent is decommissioned and can be reused. 
You can confirm that the node was removed by entering the following command:
USD oc --kubeconfig ./hostedcluster-secrets get nodes
Note
For IBM Z agents, compute nodes are detached from the cluster only for IBM Z with KVM agents. For z/VM and LPAR, you must delete the compute nodes manually. Agents can be reused only for IBM Z with KVM. For z/VM and LPAR, re-create the agents to use them as compute nodes.
5.2.2.3. Disabling node auto-scaling for the hosted cluster
To disable node auto-scaling, complete the following procedure.
Procedure
Enter the following command to disable node auto-scaling for the hosted cluster:
USD oc -n <hosted_cluster_namespace> patch nodepool <hosted_cluster_name> \
  --type=json \
  -p '[{"op": "remove", "path": "/spec/autoScaling"},{"op": "add", "path": "/spec/replicas", "value": <specify_value_to_scale_replicas>}]'
The command removes "spec.autoScaling" from the YAML file, adds "spec.replicas", and sets "spec.replicas" to the integer value that you specify.
Additional resources
Scaling down the data plane to zero
5.2.3. Handling ingress in a hosted cluster on bare metal
Every OpenShift Container Platform cluster has a default application Ingress Controller that typically has an external DNS record associated with it. For example, if you create a hosted cluster named example with the base domain krnl.es, you can expect the wildcard domain *.apps.example.krnl.es to be routable.
Procedure
To set up a load balancer and wildcard DNS record for the *.apps domain, perform the following actions on your guest cluster:
Deploy MetalLB by creating a YAML file that contains the configuration for the MetalLB Operator:
apiVersion: v1
kind: Namespace
metadata:
  name: metallb
  labels:
    openshift.io/cluster-monitoring: "true"
  annotations:
    workload.openshift.io/allowed: management
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: metallb-operator-operatorgroup
  namespace: metallb
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: metallb-operator
  namespace: metallb
spec:
  channel: "stable"
  name: metallb-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
Save the file as metallb-operator-config.yaml. Enter the following command to apply the configuration:
USD oc apply -f metallb-operator-config.yaml
After the Operator is running, create the MetalLB instance:
Create a YAML file that contains the configuration for the MetalLB instance:
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb
Save the file as metallb-instance-config.yaml. Create the MetalLB instance by entering this command:
USD oc apply -f metallb-instance-config.yaml
Create an IPAddressPool resource with a single IP address. This IP address must be on the same subnet as the network that the cluster nodes use. Create a file, such as ipaddresspool.yaml, with content like the following example:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  namespace: metallb
  name: <ip_address_pool_name> 1
spec:
  addresses:
  - <ingress_ip>-<ingress_ip> 2
  autoAssign: false
1 Specify the IPAddressPool resource name.
2 Specify the IP address for your environment. For example, 192.168.122.23.
Apply the configuration for the IP address pool by entering the following command:
USD oc apply -f ipaddresspool.yaml
Create an L2 advertisement.
Create a file, such as l2advertisement.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: <l2_advertisement_name> 1 namespace: metallb spec: ipAddressPools: - <ip_address_pool_name> 2 1 Specify the L2Advertisement resource name. 2 Specify the IPAddressPool resource name. Apply the configuration by entering the following command: USD oc apply -f l2advertisement.yaml After creating a service of the LoadBalancer type, MetalLB adds an external IP address for the service. Configure a new load balancer service that routes ingress traffic to the ingress deployment by creating a YAML file named metallb-loadbalancer-service.yaml : kind: Service apiVersion: v1 metadata: annotations: metallb.io/address-pool: ingress-public-ip name: metallb-ingress namespace: openshift-ingress spec: ports: - name: http protocol: TCP port: 80 targetPort: 80 - name: https protocol: TCP port: 443 targetPort: 443 selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default type: LoadBalancer Save the metallb-loadbalancer-service.yaml file. Enter the following command to apply the YAML configuration: USD oc apply -f metallb-loadbalancer-service.yaml Enter the following command to reach the OpenShift Container Platform console: USD curl -kI https://console-openshift-console.apps.example.krnl.es Example output HTTP/1.1 200 OK Check the clusterversion and clusteroperator values to verify that everything is running. Enter the following command: USD oc --kubeconfig <hosted_cluster_name>.kubeconfig get clusterversion,co Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS clusterversion.config.openshift.io/version 4.x.y True False 3m32s Cluster version is 4.x.y NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE clusteroperator.config.openshift.io/console 4.x.y True False False 3m50s clusteroperator.config.openshift.io/ingress 4.x.y True False False 53m Replace <4.x.y> with the supported OpenShift Container Platform version that you want to use, for example, 4.18.0-multi . Additional resources About MetalLB and the MetalLB Operator 5.2.4. Enabling machine health checks on bare metal You can enable machine health checks on bare metal to repair and replace unhealthy managed cluster nodes automatically. You must have additional agent machines that are ready to install in the managed cluster. Consider the following limitations before enabling machine health checks: You cannot modify the MachineHealthCheck object. Machine health checks replace nodes only when at least two nodes stay in the False or Unknown status for more than 8 minutes. After you enable machine health checks for the managed cluster nodes, the MachineHealthCheck object is created in your hosted cluster. Procedure To enable machine health checks in your hosted cluster, modify the NodePool resource. Complete the following steps: Verify that the spec.nodeDrainTimeout value in your NodePool resource is greater than 0s . Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace and <nodepool_name> with the node pool name. 
Run the following command: USD oc get nodepool -n <hosted_cluster_namespace> <nodepool_name> -o yaml | grep nodeDrainTimeout Example output nodeDrainTimeout: 30s If the spec.nodeDrainTimeout value is not greater than 0s , modify the value by running the following command: USD oc patch nodepool -n <hosted_cluster_namespace> <nodepool_name> -p '{"spec":{"nodeDrainTimeout": "30m"}}' --type=merge Enable machine health checks by setting the spec.management.autoRepair field to true in the NodePool resource. Run the following command: USD oc patch nodepool -n <hosted_cluster_namespace> <nodepool_name> -p '{"spec": {"management": {"autoRepair":true}}}' --type=merge Verify that the NodePool resource is updated with the autoRepair: true value by running the following command: USD oc get nodepool -n <hosted_cluster_namespace> <nodepool_name> -o yaml | grep autoRepair 5.2.5. Disabling machine health checks on bare metal To disable machine health checks for the managed cluster nodes, modify the NodePool resource. Procedure Disable machine health checks by setting the spec.management.autoRepair field to false in the NodePool resource. Run the following command: USD oc patch nodepool -n <hosted_cluster_namespace> <nodepool_name> -p '{"spec": {"management": {"autoRepair":false}}}' --type=merge Verify that the NodePool resource is updated with the autoRepair: false value by running the following command: USD oc get nodepool -n <hosted_cluster_namespace> <nodepool_name> -o yaml | grep autoRepair Additional resources Deploying machine health checks 5.3. Managing hosted control planes on OpenShift Virtualization After you deploy a hosted cluster on OpenShift Virtualization, you can manage the cluster by completing the following procedures. 5.3.1. Accessing the hosted cluster You can access the hosted cluster by either getting the kubeconfig file and kubeadmin credential directly from resources, or by using the hcp command line interface to generate a kubeconfig file. Prerequisites To access the hosted cluster by getting the kubeconfig file and credentials directly from resources, you must be familiar with the access secrets for hosted clusters. The hosted cluster (hosting) namespace contains hosted cluster resources and the access secrets. The hosted control plane namespace is where the hosted control plane runs. The secret name formats are as follows: kubeconfig secret: <hosted_cluster_namespace>-<name>-admin-kubeconfig (clusters-hypershift-demo-admin-kubeconfig) kubeadmin password secret: <hosted_cluster_namespace>-<name>-kubeadmin-password (clusters-hypershift-demo-kubeadmin-password) The kubeconfig secret contains a Base64-encoded kubeconfig field, which you can decode and save into a file to use with the following command: USD oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes The kubeadmin password secret is also Base64-encoded. You can decode it and use the password to log in to the API server or console of the hosted cluster. Procedure To access the hosted cluster by using the hcp CLI to generate the kubeconfig file, take the following steps: Generate the kubeconfig file by entering the following command: USD hcp create kubeconfig --namespace <hosted_cluster_namespace> \ --name <hosted_cluster_name> > <hosted_cluster_name>.kubeconfig After you save the kubeconfig file, you can access the hosted cluster by entering the following example command: USD oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes 5.3.2. 
Enabling node auto-scaling for the hosted cluster When you need more capacity in your hosted cluster and spare agents are available, you can enable auto-scaling to install new worker nodes. Procedure To enable auto-scaling, enter the following command: USD oc -n <hosted_cluster_namespace> patch nodepool <hosted_cluster_name> \ --type=json \ -p '[{"op": "remove", "path": "/spec/replicas"},{"op":"add", "path": "/spec/autoScaling", "value": { "max": 5, "min": 2 }}]' Note In the example, the minimum number of nodes is 2, and the maximum is 5. The maximum number of nodes that you can add might be bound by your platform. For example, if you use the Agent platform, the maximum number of nodes is bound by the number of available agents. Create a workload that requires a new node. Create a YAML file that contains the workload configuration, by using the following example: apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: app: reversewords name: reversewords namespace: default spec: replicas: 40 selector: matchLabels: app: reversewords strategy: {} template: metadata: creationTimestamp: null labels: app: reversewords spec: containers: - image: quay.io/mavazque/reversewords:latest name: reversewords resources: requests: memory: 2Gi status: {} Save the file as workload-config.yaml . Apply the YAML by entering the following command: USD oc apply -f workload-config.yaml Extract the admin-kubeconfig secret by entering the following command: USD oc extract -n <hosted_cluster_namespace> \ secret/<hosted_cluster_name>-admin-kubeconfig \ --to=./hostedcluster-secrets --confirm Example output You can check if new nodes are in the Ready status by entering the following command: USD oc --kubeconfig ./hostedcluster-secrets get nodes To remove the node, delete the workload by entering the following command: USD oc --kubeconfig ./hostedcluster-secrets -n <namespace> \ delete deployment <deployment_name> Wait for several minutes to pass without requiring the additional capacity. On the Agent platform, the agent is decommissioned and can be reused. You can confirm that the node was removed by entering the following command: USD oc --kubeconfig ./hostedcluster-secrets get nodes Note For IBM Z agents, compute nodes are detached from the cluster only for IBM Z with KVM agents. For z/VM and LPAR, you must delete the compute nodes manually. Agents can be reused only for IBM Z with KVM. For z/VM and LPAR, re-create the agents to use them as compute nodes. 5.3.3. Configuring storage for hosted control planes on OpenShift Virtualization If you do not provide any advanced storage configuration, the default storage class is used for the KubeVirt virtual machine (VM) images, the KubeVirt Container Storage Interface (CSI) mapping, and the etcd volumes. The following table lists the capabilities that the infrastructure must provide to support persistent storage in a hosted cluster: Table 5.1. Persistent storage modes in a hosted cluster Infrastructure CSI provider Hosted cluster CSI provider Hosted cluster capabilities Any RWX Block CSI provider kubevirt-csi Basic: RWO Block and File , RWX Block and Snapshot Any RWX Block CSI provider Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation feature set. External mode has a smaller footprint and uses a standalone Red Hat Ceph Storage. Internal mode has a larger footprint, but is self-contained and suitable for use cases that require expanded capabilities such as RWX File . 
Note OpenShift Virtualization handles storage on hosted clusters, which especially helps customers whose requirements are limited to block storage. 5.3.3.1. Mapping KubeVirt CSI storage classes KubeVirt CSI supports mapping a infrastructure storage class that is capable of ReadWriteMany (RWX) access. You can map the infrastructure storage class to hosted storage class during cluster creation. Procedure To map the infrastructure storage class to the hosted storage class, use the --infra-storage-class-mapping argument by running the following command: USD hcp create cluster kubevirt \ --name <hosted_cluster_name> \ 1 --node-pool-replicas <worker_node_count> \ 2 --pull-secret <path_to_pull_secret> \ 3 --memory <memory> \ 4 --cores <cpu> \ 5 --infra-storage-class-mapping=<infrastructure_storage_class>/<hosted_storage_class> \ 6 1 Specify the name of your hosted cluster, for instance, example . 2 Specify the worker count, for example, 2 . 3 Specify the path to your pull secret, for example, /user/name/pullsecret . 4 Specify a value for memory, for example, 8Gi . 5 Specify a value for CPU, for example, 2 . 6 Replace <infrastructure_storage_class> with the infrastructure storage class name and <hosted_storage_class> with the hosted cluster storage class name. You can use the --infra-storage-class-mapping argument multiple times within the hcp create cluster command. After you create the hosted cluster, the infrastructure storage class is visible within the hosted cluster. When you create a Persistent Volume Claim (PVC) within the hosted cluster that uses one of those storage classes, KubeVirt CSI provisions that volume by using the infrastructure storage class mapping that you configured during cluster creation. Note KubeVirt CSI supports mapping only an infrastructure storage class that is capable of RWX access. The following table shows how volume and access mode capabilities map to KubeVirt CSI storage classes: Table 5.2. Mapping KubeVirt CSI storage classes to access and volume modes Infrastructure CSI capability Hosted cluster CSI capability VM live migration support Notes RWX: Block or Filesystem ReadWriteOnce (RWO) Block or Filesystem RWX Block only Supported Use Block mode because Filesystem volume mode results in degraded hosted Block mode performance. RWX Block volume mode is supported only when the hosted cluster is OpenShift Container Platform 4.16 or later. RWO Block storage RWO Block storage or Filesystem Not supported Lack of live migration support affects the ability to update the underlying infrastructure cluster that hosts the KubeVirt VMs. RWO FileSystem RWO Block or Filesystem Not supported Lack of live migration support affects the ability to update the underlying infrastructure cluster that hosts the KubeVirt VMs. Use of the infrastructure Filesystem volume mode results in degraded hosted Block mode performance. 5.3.3.2. Mapping a single KubeVirt CSI volume snapshot class You can expose your infrastructure volume snapshot class to the hosted cluster by using KubeVirt CSI. Procedure To map your volume snapshot class to the hosted cluster, use the --infra-volumesnapshot-class-mapping argument when creating a hosted cluster. 
Run the following command: USD hcp create cluster kubevirt \ --name <hosted_cluster_name> \ 1 --node-pool-replicas <worker_node_count> \ 2 --pull-secret <path_to_pull_secret> \ 3 --memory <memory> \ 4 --cores <cpu> \ 5 --infra-storage-class-mapping=<infrastructure_storage_class>/<hosted_storage_class> \ 6 --infra-volumesnapshot-class-mapping=<infrastructure_volume_snapshot_class>/<hosted_volume_snapshot_class> 7 1 Specify the name of your hosted cluster, for instance, example . 2 Specify the worker count, for example, 2 . 3 Specify the path to your pull secret, for example, /user/name/pullsecret . 4 Specify a value for memory, for example, 8Gi . 5 Specify a value for CPU, for example, 2 . 6 Replace <infrastructure_storage_class> with the storage class present in the infrastructure cluster. Replace <hosted_storage_class> with the storage class present in the hosted cluster. 7 Replace <infrastructure_volume_snapshot_class> with the volume snapshot class present in the infrastructure cluster. Replace <hosted_volume_snapshot_class> with the volume snapshot class present in the hosted cluster. Note If you do not use the --infra-storage-class-mapping and --infra-volumesnapshot-class-mapping arguments, a hosted cluster is created with the default storage class and the volume snapshot class. Therefore, you must set the default storage class and the volume snapshot class in the infrastructure cluster. 5.3.3.3. Mapping multiple KubeVirt CSI volume snapshot classes You can map multiple volume snapshot classes to the hosted cluster by assigning them to a specific group. The infrastructure storage class and the volume snapshot class are compatible with each other only if they belong to a same group. Procedure To map multiple volume snapshot classes to the hosted cluster, use the group option when creating a hosted cluster. Run the following command: USD hcp create cluster kubevirt \ --name <hosted_cluster_name> \ 1 --node-pool-replicas <worker_node_count> \ 2 --pull-secret <path_to_pull_secret> \ 3 --memory <memory> \ 4 --cores <cpu> \ 5 --infra-storage-class-mapping=<infrastructure_storage_class>/<hosted_storage_class>,group=<group_name> \ 6 --infra-storage-class-mapping=<infrastructure_storage_class>/<hosted_storage_class>,group=<group_name> \ --infra-storage-class-mapping=<infrastructure_storage_class>/<hosted_storage_class>,group=<group_name> \ --infra-volumesnapshot-class-mapping=<infrastructure_volume_snapshot_class>/<hosted_volume_snapshot_class>,group=<group_name> \ 7 --infra-volumesnapshot-class-mapping=<infrastructure_volume_snapshot_class>/<hosted_volume_snapshot_class>,group=<group_name> 1 Specify the name of your hosted cluster, for instance, example . 2 Specify the worker count, for example, 2 . 3 Specify the path to your pull secret, for example, /user/name/pullsecret . 4 Specify a value for memory, for example, 8Gi . 5 Specify a value for CPU, for example, 2 . 6 Replace <infrastructure_storage_class> with the storage class present in the infrastructure cluster. Replace <hosted_storage_class> with the storage class present in the hosted cluster. Replace <group_name> with the group name. For example, infra-storage-class-mygroup/hosted-storage-class-mygroup,group=mygroup and infra-storage-class-mymap/hosted-storage-class-mymap,group=mymap . 7 Replace <infrastructure_volume_snapshot_class> with the volume snapshot class present in the infrastructure cluster. Replace <hosted_volume_snapshot_class> with the volume snapshot class present in the hosted cluster. 
For example, infra-vol-snap-mygroup/hosted-vol-snap-mygroup,group=mygroup and infra-vol-snap-mymap/hosted-vol-snap-mymap,group=mymap . 5.3.3.4. Configuring KubeVirt VM root volume At cluster creation time, you can configure the storage class that is used to host the KubeVirt VM root volumes by using the --root-volume-storage-class argument. Procedure To set a custom storage class and volume size for KubeVirt VMs, run the following command: USD hcp create cluster kubevirt \ --name <hosted_cluster_name> \ 1 --node-pool-replicas <worker_node_count> \ 2 --pull-secret <path_to_pull_secret> \ 3 --memory <memory> \ 4 --cores <cpu> \ 5 --root-volume-storage-class <root_volume_storage_class> \ 6 --root-volume-size <volume_size> 7 1 Specify the name of your hosted cluster, for instance, example . 2 Specify the worker count, for example, 2 . 3 Specify the path to your pull secret, for example, /user/name/pullsecret . 4 Specify a value for memory, for example, 8Gi . 5 Specify a value for CPU, for example, 2 . 6 Specify a name of the storage class to host the KubeVirt VM root volumes, for example, ocs-storagecluster-ceph-rbd . 7 Specify the volume size, for example, 64 . As a result, you get a hosted cluster created with VMs hosted on PVCs. 5.3.3.5. Enabling KubeVirt VM image caching You can use KubeVirt VM image caching to optimize both cluster startup time and storage usage. KubeVirt VM image caching supports the use of a storage class that is capable of smart cloning and the ReadWriteMany access mode. For more information about smart cloning, see Cloning a data volume using smart-cloning . Image caching works as follows: The VM image is imported to a PVC that is associated with the hosted cluster. A unique clone of that PVC is created for every KubeVirt VM that is added as a worker node to the cluster. Image caching reduces VM startup time by requiring only a single image import. It can further reduce overall cluster storage usage when the storage class supports copy-on-write cloning. Procedure To enable image caching, during cluster creation, use the --root-volume-cache-strategy=PVC argument by running the following command: USD hcp create cluster kubevirt \ --name <hosted_cluster_name> \ 1 --node-pool-replicas <worker_node_count> \ 2 --pull-secret <path_to_pull_secret> \ 3 --memory <memory> \ 4 --cores <cpu> \ 5 --root-volume-cache-strategy=PVC 6 1 Specify the name of your hosted cluster, for instance, example . 2 Specify the worker count, for example, 2 . 3 Specify the path to your pull secret, for example, /user/name/pullsecret . 4 Specify a value for memory, for example, 8Gi . 5 Specify a value for CPU, for example, 2 . 6 Specify a strategy for image caching, for example, PVC . 5.3.3.6. KubeVirt CSI storage security and isolation KubeVirt Container Storage Interface (CSI) extends the storage capabilities of the underlying infrastructure cluster to hosted clusters. The CSI driver ensures secure and isolated access to the infrastructure storage classes and hosted clusters by using the following security constraints: The storage of a hosted cluster is isolated from the other hosted clusters. Worker nodes in a hosted cluster do not have a direct API access to the infrastructure cluster. The hosted cluster can provision storage on the infrastructure cluster only through the controlled KubeVirt CSI interface. The hosted cluster does not have access to the KubeVirt CSI cluster controller. 
As a result, the hosted cluster cannot access arbitrary storage volumes on the infrastructure cluster that are not associated with the hosted cluster. The KubeVirt CSI cluster controller runs in a pod in the hosted control plane namespace. Role-based access control (RBAC) of the KubeVirt CSI cluster controller limits the persistent volume claim (PVC) access to only the hosted control plane namespace. Therefore, KubeVirt CSI components cannot access storage from the other namespaces. Additional resources Cloning a data volume using smart-cloning 5.3.3.7. Configuring etcd storage At cluster creation time, you can configure the storage class that is used to host etcd data by using the --etcd-storage-class argument. Procedure To configure a storage class for etcd, run the following command: USD hcp create cluster kubevirt \ --name <hosted_cluster_name> \ 1 --node-pool-replicas <worker_node_count> \ 2 --pull-secret <path_to_pull_secret> \ 3 --memory <memory> \ 4 --cores <cpu> \ 5 --etcd-storage-class=<etcd_storage_class_name> 6 1 Specify the name of your hosted cluster, for instance, example . 2 Specify the worker count, for example, 2 . 3 Specify the path to your pull secret, for example, /user/name/pullsecret . 4 Specify a value for memory, for example, 8Gi . 5 Specify a value for CPU, for example, 2 . 6 Specify the etcd storage class name, for example, lvm-storageclass . If you do not provide an --etcd-storage-class argument, the default storage class is used. 5.3.4. Attaching NVIDIA GPU devices by using the hcp CLI You can attach one or more NVIDIA graphics processing unit (GPU) devices to node pools by using the hcp command-line interface (CLI) in a hosted cluster on OpenShift Virtualization. Important Attaching NVIDIA GPU devices to node pools is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You have exposed the NVIDIA GPU device as a resource on the node where the GPU device resides. For more information, see NVIDIA GPU Operator with OpenShift Virtualization . You have exposed the NVIDIA GPU device as an extended resource on the node to assign it to node pools. Procedure You can attach the GPU device to node pools during cluster creation by running the following command: USD hcp create cluster kubevirt \ --name <hosted_cluster_name> \ 1 --node-pool-replicas <worker_node_count> \ 2 --pull-secret <path_to_pull_secret> \ 3 --memory <memory> \ 4 --cores <cpu> \ 5 --host-device-name="<gpu_device_name>,count:<value>" 6 1 Specify the name of your hosted cluster, for instance, example . 2 Specify the worker count, for example, 3 . 3 Specify the path to your pull secret, for example, /user/name/pullsecret . 4 Specify a value for memory, for example, 16Gi . 5 Specify a value for CPU, for example, 2 . 6 Specify the GPU device name and the count, for example, --host-device-name="nvidia-a100,count:2" . The --host-device-name argument takes the name of the GPU device from the infrastructure node and an optional count that represents the number of GPU devices you want to attach to each virtual machine (VM) in node pools. 
The default count is 1 . For example, if you attach 2 GPU devices to 3 node pool replicas, all 3 VMs in the node pool are attached to the 2 GPU devices. Tip You can use the --host-device-name argument multiple times to attach multiple devices of different types. 5.3.5. Attaching NVIDIA GPU devices by using the NodePool resource You can attach one or more NVIDIA graphics processing unit (GPU) devices to node pools by configuring the nodepool.spec.platform.kubevirt.hostDevices field in the NodePool resource. Important Attaching NVIDIA GPU devices to node pools is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Procedure Attach one or more GPU devices to node pools: To attach a single GPU device, configure the NodePool resource by using the following example configuration: apiVersion: hypershift.openshift.io/v1beta1 kind: NodePool metadata: name: <hosted_cluster_name> 1 namespace: <hosted_cluster_namespace> 2 spec: arch: amd64 clusterName: <hosted_cluster_name> management: autoRepair: false upgradeType: Replace nodeDrainTimeout: 0s nodeVolumeDetachTimeout: 0s platform: kubevirt: attachDefaultNetwork: true compute: cores: <cpu> 3 memory: <memory> 4 hostDevices: 5 - count: <count> 6 deviceName: <gpu_device_name> 7 networkInterfaceMultiqueue: Enable rootVolume: persistent: size: 32Gi type: Persistent type: KubeVirt replicas: <worker_node_count> 8 1 Specify the name of your hosted cluster, for instance, example . 2 Specify the name of the hosted cluster namespace, for example, clusters . 3 Specify a value for CPU, for example, 2 . 4 Specify a value for memory, for example, 16Gi . 5 The hostDevices field defines a list of different types of GPU devices that you can attach to node pools. 6 Specify the number of GPU devices you want to attach to each virtual machine (VM) in node pools. For example, if you attach 2 GPU devices to 3 node pool replicas, all 3 VMs in the node pool are attached to the 2 GPU devices. The default count is 1 . 7 Specify the GPU device name, for example, nvidia-a100 . 8 Specify the worker count, for example, 3 . To attach multiple GPU devices, configure the NodePool resource by using the following example configuration: apiVersion: hypershift.openshift.io/v1beta1 kind: NodePool metadata: name: <hosted_cluster_name> namespace: <hosted_cluster_namespace> spec: arch: amd64 clusterName: <hosted_cluster_name> management: autoRepair: false upgradeType: Replace nodeDrainTimeout: 0s nodeVolumeDetachTimeout: 0s platform: kubevirt: attachDefaultNetwork: true compute: cores: <cpu> memory: <memory> hostDevices: - count: <count> deviceName: <gpu_device_name> - count: <count> deviceName: <gpu_device_name> - count: <count> deviceName: <gpu_device_name> - count: <count> deviceName: <gpu_device_name> networkInterfaceMultiqueue: Enable rootVolume: persistent: size: 32Gi type: Persistent type: KubeVirt replicas: <worker_node_count> 5.4. 
Managing hosted control planes on non-bare-metal agent machines After you deploy hosted control planes on non-bare-metal agent machines, you can manage a hosted cluster by completing the following tasks. 5.4.1. Accessing the hosted cluster You can access the hosted cluster by either getting the kubeconfig file and kubeadmin credential directly from resources, or by using the hcp command-line interface (CLI) to generate a kubeconfig file. Prerequisites To access the hosted cluster by getting the kubeconfig file and credentials directly from resources, you must be familiar with the access secrets for hosted clusters. The hosted cluster (hosting) namespace contains hosted cluster resources and the access secrets. The hosted control plane namespace is where the hosted control plane runs. The secret name formats are as follows: kubeconfig secret: <hosted_cluster_namespace>-<name>-admin-kubeconfig . For example, clusters-hypershift-demo-admin-kubeconfig . kubeadmin password secret: <hosted_cluster_namespace>-<name>-kubeadmin-password . For example, clusters-hypershift-demo-kubeadmin-password . The kubeconfig secret contains a Base64-encoded kubeconfig field, which you can decode and save into a file to use with the following command: USD oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes The kubeadmin password secret is also Base64-encoded. You can decode it and use the password to log in to the API server or console of the hosted cluster. Procedure To access the hosted cluster by using the hcp CLI to generate the kubeconfig file, take the following steps: Generate the kubeconfig file by entering the following command: USD hcp create kubeconfig --namespace <hosted_cluster_namespace> \ --name <hosted_cluster_name> > <hosted_cluster_name>.kubeconfig After you save the kubeconfig file, you can access the hosted cluster by entering the following example command: USD oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes 5.4.2. Scaling the NodePool object for a hosted cluster You can scale up the NodePool object by adding nodes to your hosted cluster. When you scale a node pool, consider the following information: For each replica that you add to the node pool, a machine is created. For every machine, the Cluster API provider finds and installs an Agent that meets the requirements that are specified in the node pool specification. You can monitor the installation of an Agent by checking its status and conditions. When you scale down a node pool, Agents are unbound from the corresponding cluster. Before you can reuse the Agents, you must restart them by using the Discovery image. Procedure Scale the NodePool object to two nodes: USD oc -n <hosted_cluster_namespace> scale nodepool <nodepool_name> --replicas 2 The Cluster API agent provider randomly picks two agents that are then assigned to the hosted cluster. Those agents go through different states and finally join the hosted cluster as OpenShift Container Platform nodes.
The agents pass through states in the following order: binding discovering insufficient installing installing-in-progress added-to-existing-cluster Enter the following command: USD oc -n <hosted_control_plane_namespace> get agent Example output NAME CLUSTER APPROVED ROLE STAGE 4dac1ab2-7dd5-4894-a220-6a3473b67ee6 hypercluster1 true auto-assign d9198891-39f4-4930-a679-65fb142b108b true auto-assign da503cf1-a347-44f2-875c-4960ddb04091 hypercluster1 true auto-assign Enter the following command: USD oc -n <hosted_control_plane_namespace> get agent \ -o jsonpath='{range .items[*]}BMH: {@.metadata.labels.agent-install\.openshift\.io/bmh} Agent: {@.metadata.name} State: {@.status.debugInfo.state}{"\n"}{end}' Example output BMH: ocp-worker-2 Agent: 4dac1ab2-7dd5-4894-a220-6a3473b67ee6 State: binding BMH: ocp-worker-0 Agent: d9198891-39f4-4930-a679-65fb142b108b State: known-unbound BMH: ocp-worker-1 Agent: da503cf1-a347-44f2-875c-4960ddb04091 State: insufficient Obtain the kubeconfig for your new hosted cluster by entering the extract command: USD oc extract -n <hosted_cluster_namespace> \ secret/<hosted_cluster_name>-admin-kubeconfig --to=- \ > kubeconfig-<hosted_cluster_name> After the agents reach the added-to-existing-cluster state, verify that you can see the OpenShift Container Platform nodes in the hosted cluster by entering the following command: USD oc --kubeconfig kubeconfig-<hosted_cluster_name> get nodes Example output NAME STATUS ROLES AGE VERSION ocp-worker-1 Ready worker 5m41s v1.24.0+3882f8f ocp-worker-2 Ready worker 6m3s v1.24.0+3882f8f Cluster Operators start to reconcile by adding workloads to the nodes. Enter the following command to verify that two machines were created when you scaled up the NodePool object: USD oc -n <hosted_control_plane_namespace> get machines Example output NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION hypercluster1-c96b6f675-m5vch hypercluster1-b2qhl ocp-worker-1 agent://da503cf1-a347-44f2-875c-4960ddb04091 Running 15m 4.x.z hypercluster1-c96b6f675-tl42p hypercluster1-b2qhl ocp-worker-2 agent://4dac1ab2-7dd5-4894-a220-6a3473b67ee6 Running 15m 4.x.z The clusterversion reconcile process eventually reaches a point where only Ingress and Console cluster operators are missing. Enter the following command: USD oc --kubeconfig kubeconfig-<hosted_cluster_name> get clusterversion,co Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS clusterversion.config.openshift.io/version False True 40m Unable to apply 4.x.z: the cluster operator console has not yet successfully rolled out NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE clusteroperator.config.openshift.io/console 4.12z False False False 11m RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.hypercluster1.domain.com): Get "https://console-openshift-console.apps.hypercluster1.domain.com": dial tcp 10.19.3.29:443: connect: connection refused clusteroperator.config.openshift.io/csi-snapshot-controller 4.12z True False False 10m clusteroperator.config.openshift.io/dns 4.12z True False False 9m16s 5.4.2.1. Adding node pools You can create node pools for a hosted cluster by specifying a name, number of replicas, and any additional information, such as an agent label selector. 
Procedure To create a node pool, enter the following information: USD hcp create nodepool agent \ --cluster-name <hosted_cluster_name> \ 1 --name <nodepool_name> \ 2 --node-count <worker_node_count> \ 3 --agentLabelSelector size=medium 4 1 Replace <hosted_cluster_name> with your hosted cluster name. 2 Replace <nodepool_name> with the name of your node pool, for example, <hosted_cluster_name>-extra-cpu . 3 Replace <worker_node_count> with the worker node count, for example, 2 . 4 The --agentLabelSelector flag is optional. The node pool uses agents with the size=medium label. Check the status of the node pool by listing nodepool resources in the clusters namespace: USD oc get nodepools --namespace clusters Extract the admin-kubeconfig secret by entering the following command: USD oc extract -n <hosted_control_plane_namespace> secret/admin-kubeconfig --to=./hostedcluster-secrets --confirm Example output hostedcluster-secrets/kubeconfig After some time, you can check the status of the node pool by entering the following command: USD oc --kubeconfig ./hostedcluster-secrets get nodes Verification Verify that the number of available node pools matches the number of expected node pools by entering this command: USD oc get nodepools --namespace clusters 5.4.2.2. Enabling node auto-scaling for the hosted cluster When you need more capacity in your hosted cluster and spare agents are available, you can enable auto-scaling to install new worker nodes. Procedure To enable auto-scaling, enter the following command: USD oc -n <hosted_cluster_namespace> patch nodepool <hosted_cluster_name> \ --type=json \ -p '[{"op": "remove", "path": "/spec/replicas"},{"op":"add", "path": "/spec/autoScaling", "value": { "max": 5, "min": 2 }}]' Note In the example, the minimum number of nodes is 2, and the maximum is 5. The maximum number of nodes that you can add might be bound by your platform. For example, if you use the Agent platform, the maximum number of nodes is bound by the number of available agents. Create a workload that requires a new node. Create a YAML file that contains the workload configuration, by using the following example: apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: app: reversewords name: reversewords namespace: default spec: replicas: 40 selector: matchLabels: app: reversewords strategy: {} template: metadata: creationTimestamp: null labels: app: reversewords spec: containers: - image: quay.io/mavazque/reversewords:latest name: reversewords resources: requests: memory: 2Gi status: {} Save the file as workload-config.yaml . Apply the YAML by entering the following command: USD oc apply -f workload-config.yaml Extract the admin-kubeconfig secret by entering the following command: USD oc extract -n <hosted_cluster_namespace> \ secret/<hosted_cluster_name>-admin-kubeconfig \ --to=./hostedcluster-secrets --confirm Example output hostedcluster-secrets/kubeconfig You can check if new nodes are in the Ready status by entering the following command: USD oc --kubeconfig ./hostedcluster-secrets get nodes To remove the node, delete the workload by entering the following command: USD oc --kubeconfig ./hostedcluster-secrets -n <namespace> \ delete deployment <deployment_name> Wait for several minutes to pass without requiring the additional capacity. On the Agent platform, the agent is decommissioned and can be reused.
You can confirm that the node was removed by entering the following command: USD oc --kubeconfig ./hostedcluster-secrets get nodes Note For IBM Z agents, compute nodes are detached from the cluster only for IBM Z with KVM agents. For z/VM and LPAR, you must delete the compute nodes manually. Agents can be reused only for IBM Z with KVM. For z/VM and LPAR, re-create the agents to use them as compute nodes. 5.4.2.3. Disabling node auto-scaling for the hosted cluster To disable node auto-scaling, complete the following procedure. Procedure Enter the following command to disable node auto-scaling for the hosted cluster: USD oc -n <hosted_cluster_namespace> patch nodepool <hosted_cluster_name> \ --type=json \ -p '[{"op":"remove", "path": "/spec/autoScaling"}, {"op": "add", "path": "/spec/replicas", "value": <specify_value_to_scale_replicas>}]' The command removes "spec.autoScaling" from the YAML file, adds "spec.replicas" , and sets "spec.replicas" to the integer value that you specify. Additional resources Scaling down the data plane to zero 5.4.3. Handling ingress in a hosted cluster on non-bare-metal agent machines Every OpenShift Container Platform cluster has a default application Ingress Controller that typically has an external DNS record associated with it. For example, if you create a hosted cluster named example with the base domain krnl.es , you can expect the wildcard domain *.apps.example.krnl.es to be routable. Procedure To set up a load balancer and wildcard DNS record for the *.apps domain, perform the following actions on your guest cluster: Deploy MetalLB by creating a YAML file that contains the configuration for the MetalLB Operator: apiVersion: v1 kind: Namespace metadata: name: metallb labels: openshift.io/cluster-monitoring: "true" annotations: workload.openshift.io/allowed: management --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: metallb-operator-operatorgroup namespace: metallb --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: metallb-operator namespace: metallb spec: channel: "stable" name: metallb-operator source: redhat-operators sourceNamespace: openshift-marketplace Save the file as metallb-operator-config.yaml . Enter the following command to apply the configuration: USD oc apply -f metallb-operator-config.yaml After the Operator is running, create the MetalLB instance: Create a YAML file that contains the configuration for the MetalLB instance: apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb Save the file as metallb-instance-config.yaml . Create the MetalLB instance by entering this command: USD oc apply -f metallb-instance-config.yaml Create an IPAddressPool resource with a single IP address. This IP address must be on the same subnet as the network that the cluster nodes use. Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb name: <ip_address_pool_name> 1 spec: addresses: - <ingress_ip>-<ingress_ip> 2 autoAssign: false 1 Specify the IPAddressPool resource name. 2 Specify the IP address for your environment. For example, 192.168.122.23 . Apply the configuration for the IP address pool by entering the following command: USD oc apply -f ipaddresspool.yaml Create an L2 advertisement.
Create a file, such as l2advertisement.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: <l2_advertisement_name> 1 namespace: metallb spec: ipAddressPools: - <ip_address_pool_name> 2 1 Specify the L2Advertisement resource name. 2 Specify the IPAddressPool resource name. Apply the configuration by entering the following command: USD oc apply -f l2advertisement.yaml After creating a service of the LoadBalancer type, MetalLB adds an external IP address for the service. Configure a new load balancer service that routes ingress traffic to the ingress deployment by creating a YAML file named metallb-loadbalancer-service.yaml : kind: Service apiVersion: v1 metadata: annotations: metallb.io/address-pool: ingress-public-ip name: metallb-ingress namespace: openshift-ingress spec: ports: - name: http protocol: TCP port: 80 targetPort: 80 - name: https protocol: TCP port: 443 targetPort: 443 selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default type: LoadBalancer Save the metallb-loadbalancer-service.yaml file. Enter the following command to apply the YAML configuration: USD oc apply -f metallb-loadbalancer-service.yaml Enter the following command to reach the OpenShift Container Platform console: USD curl -kI https://console-openshift-console.apps.example.krnl.es Example output HTTP/1.1 200 OK Check the clusterversion and clusteroperator values to verify that everything is running. Enter the following command: USD oc --kubeconfig <hosted_cluster_name>.kubeconfig get clusterversion,co Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS clusterversion.config.openshift.io/version 4.x.y True False 3m32s Cluster version is 4.x.y NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE clusteroperator.config.openshift.io/console 4.x.y True False False 3m50s clusteroperator.config.openshift.io/ingress 4.x.y True False False 53m Replace <4.x.y> with the supported OpenShift Container Platform version that you want to use, for example, 4.18.0-multi . Additional resources About MetalLB and the MetalLB Operator 5.4.4. Enabling machine health checks on non-bare-metal agent machines You can enable machine health checks on non-bare-metal agent machines to repair and replace unhealthy managed cluster nodes automatically. You must have additional agent machines that are ready to install in the managed cluster. Consider the following limitations before enabling machine health checks: You cannot modify the MachineHealthCheck object. Machine health checks replace nodes only when at least two nodes stay in the False or Unknown status for more than 8 minutes. After you enable machine health checks for the managed cluster nodes, the MachineHealthCheck object is created in your hosted cluster. Procedure To enable machine health checks in your hosted cluster, modify the NodePool resource. Complete the following steps: Verify that the spec.nodeDrainTimeout value in your NodePool resource is greater than 0s . Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace and <nodepool_name> with the node pool name.
Run the following command: USD oc get nodepool -n <hosted_cluster_namespace> <nodepool_name> -o yaml | grep nodeDrainTimeout Example output nodeDrainTimeout: 30s If the spec.nodeDrainTimeout value is not greater than 0s , modify the value by running the following command: USD oc patch nodepool -n <hosted_cluster_namespace> <nodepool_name> -p '{"spec":{"nodeDrainTimeout": "30m"}}' --type=merge Enable machine health checks by setting the spec.management.autoRepair field to true in the NodePool resource. Run the following command: USD oc patch nodepool -n <hosted_cluster_namespace> <nodepool_name> -p '{"spec": {"management": {"autoRepair":true}}}' --type=merge Verify that the NodePool resource is updated with the autoRepair: true value by running the following command: USD oc get nodepool -n <hosted_cluster_namespace> <nodepool_name> -o yaml | grep autoRepair 5.4.5. Disabling machine health checks on non-bare-metal agent machines To disable machine health checks for the managed cluster nodes, modify the NodePool resource. Procedure Disable machine health checks by setting the spec.management.autoRepair field to false in the NodePool resource. Run the following command: USD oc patch nodepool -n <hosted_cluster_namespace> <nodepool_name> -p '{"spec": {"management": {"autoRepair":false}}}' --type=merge Verify that the NodePool resource is updated with the autoRepair: false value by running the following command: USD oc get nodepool -n <hosted_cluster_namespace> <nodepool_name> -o yaml | grep autoRepair Additional resources Deploying machine health checks 5.5. Managing hosted control planes on IBM Power After you deploy hosted control planes on IBM Power, you can manage a hosted cluster by completing the following tasks. 5.5.1. Creating an InfraEnv resource for hosted control planes on IBM Power An InfraEnv is an environment where hosts that are starting the live ISO can join as agents. In this case, the agents are created in the same namespace as your hosted control plane. You can create an InfraEnv resource for hosted control planes on 64-bit x86 bare metal for IBM Power compute nodes. Procedure Create a YAML file to configure an InfraEnv resource. See the following example: apiVersion: agent-install.openshift.io/v1beta1 kind: InfraEnv metadata: name: <hosted_cluster_name> \ 1 namespace: <hosted_control_plane_namespace> \ 2 spec: cpuArchitecture: ppc64le pullSecretRef: name: pull-secret sshAuthorizedKey: <path_to_ssh_public_key> 3 1 Replace <hosted_cluster_name> with the name of your hosted cluster. 2 Replace <hosted_control_plane_namespace> with the name of the hosted control plane namespace, for example, clusters-hosted . 3 Replace <path_to_ssh_public_key> with the path to your SSH public key. The default file path is ~/.ssh/id_rsa.pub . Save the file as infraenv-config.yaml . Apply the configuration by entering the following command: USD oc apply -f infraenv-config.yaml To fetch the URL to download the live ISO, which allows IBM Power machines to join as agents, enter the following command: USD oc -n <hosted_control_plane_namespace> get InfraEnv <hosted_cluster_name> \ -o json 5.5.2. Adding IBM Power agents to the InfraEnv resource You can add agents by manually configuring the machine to start with the live ISO. Procedure Download the live ISO and use it to start a bare metal or a virtual machine (VM) host. You can find the URL for the live ISO in the status.isoDownloadURL field of the InfraEnv resource.
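As a convenience, the following sketch shows one way to extract that URL with a jsonpath query and download the ISO with curl; the ISO_URL variable and the discovery.iso output file name are assumptions for this example, and you can use any download tool you prefer: ISO_URL=$(oc -n <hosted_control_plane_namespace> get InfraEnv <hosted_cluster_name> -o jsonpath='{.status.isoDownloadURL}') && curl -L "$ISO_URL" -o discovery.iso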
At startup, the host communicates with the Assisted Service and registers as an agent in the same namespace as the InfraEnv resource. To list the agents and some of their properties, enter the following command: USD oc -n <hosted_control_plane_namespace> get agents Example output NAME CLUSTER APPROVED ROLE STAGE 86f7ac75-4fc4-4b36-8130-40fa12602218 auto-assign e57a637f-745b-496e-971d-1abbf03341ba auto-assign After each agent is created, you can optionally set the installation_disk_id and hostname for an agent: To set the installation_disk_id field for an agent, enter the following command: USD oc -n <hosted_control_plane_namespace> patch agent <agent_name> -p '{"spec":{"installation_disk_id":"<installation_disk_id>","approved":true}}' --type merge To set the hostname field for an agent, enter the following command: USD oc -n <hosted_control_plane_namespace> patch agent <agent_name> -p '{"spec":{"hostname":"<hostname>","approved":true}}' --type merge Verification To verify that the agents are approved for use, enter the following command: USD oc -n <hosted_control_plane_namespace> get agents Example output NAME CLUSTER APPROVED ROLE STAGE 86f7ac75-4fc4-4b36-8130-40fa12602218 true auto-assign e57a637f-745b-496e-971d-1abbf03341ba true auto-assign 5.5.3. Scaling the NodePool object for a hosted cluster on IBM Power The NodePool object is created when you create a hosted cluster. By scaling the NodePool object, you can add more compute nodes to hosted control planes. Procedure Run the following command to scale the NodePool object to two nodes: USD oc -n <hosted_cluster_namespace> scale nodepool <nodepool_name> --replicas 2 The Cluster API agent provider randomly picks two agents that are then assigned to the hosted cluster. Those agents go through different states and finally join the hosted cluster as OpenShift Container Platform nodes. 
The agents pass through the transition phases in the following order: binding discovering insufficient installing installing-in-progress added-to-existing-cluster Run the following command to see the status of a specific scaled agent: USD oc -n <hosted_control_plane_namespace> get agent \ -o jsonpath='{range .items[*]}BMH: {@.metadata.labels.agent-install\.openshift\.io/bmh} Agent: {@.metadata.name} State: {@.status.debugInfo.state}{"\n"}{end}' Example output BMH: Agent: 50c23cda-cedc-9bbd-bcf1-9b3a5c75804d State: known-unbound BMH: Agent: 5e498cd3-542c-e54f-0c58-ed43e28b568a State: insufficient Run the following command to see the transition phases: USD oc -n <hosted_control_plane_namespace> get agent Example output NAME CLUSTER APPROVED ROLE STAGE 50c23cda-cedc-9bbd-bcf1-9b3a5c75804d hosted-forwarder true auto-assign 5e498cd3-542c-e54f-0c58-ed43e28b568a true auto-assign da503cf1-a347-44f2-875c-4960ddb04091 hosted-forwarder true auto-assign Run the following command to generate the kubeconfig file to access the hosted cluster: USD hcp create kubeconfig --namespace <hosted_cluster_namespace> \ --name <hosted_cluster_name> > <hosted_cluster_name>.kubeconfig After the agents reach the added-to-existing-cluster state, verify that you can see the OpenShift Container Platform nodes by entering the following command: USD oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes Example output NAME STATUS ROLES AGE VERSION worker-zvm-0.hostedn.example.com Ready worker 5m41s v1.24.0+3882f8f worker-zvm-1.hostedn.example.com Ready worker 6m3s v1.24.0+3882f8f Enter the following command to verify that two machines were created when you scaled up the NodePool object: USD oc -n <hosted_control_plane_namespace> get machine.cluster.x-k8s.io Example output NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION hosted-forwarder-79558597ff-5tbqp hosted-forwarder-crqq5 worker-zvm-0.hostedn.example.com agent://50c23cda-cedc-9bbd-bcf1-9b3a5c75804d Running 41h 4.15.0 hosted-forwarder-79558597ff-lfjfk hosted-forwarder-crqq5 worker-zvm-1.hostedn.example.com agent://5e498cd3-542c-e54f-0c58-ed43e28b568a Running 41h 4.15.0 Run the following command to check the cluster version: USD oc --kubeconfig <hosted_cluster_name>.kubeconfig get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS clusterversion.config.openshift.io/version 4.15.0 True False 40h Cluster version is 4.15.0 Run the following command to check the Cluster Operator status: USD oc --kubeconfig <hosted_cluster_name>.kubeconfig get clusteroperators For each component of your cluster, the output shows the following Cluster Operator statuses: NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE Additional resources Initial Operator configuration Scaling down the data plane to zero
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/hosted_control_planes/managing-hosted-control-planes
Chapter 185. JSon Jackson DataFormat
Chapter 185. JSon Jackson DataFormat Available as of Camel version 2.0 Jackson is a Data Format which uses the Jackson Library from("activemq:My.Queue"). marshal().json(JsonLibrary.Jackson). to("mqseries:Another.Queue"); 185.1. Jackson Options The JSon Jackson dataformat supports 19 options, which are listed below. Name Default Java Type Description objectMapper String Lookup and use the existing ObjectMapper with the given id when using Jackson. useDefaultObjectMapper true Boolean Whether to lookup and use default Jackson ObjectMapper from the registry. prettyPrint false Boolean To enable pretty printing output nicely formatted. Is by default false. library XStream JsonLibrary Which json library to use. unmarshalTypeName String Class name of the java type to use when unmarshalling jsonView Class When marshalling a POJO to JSON you might want to exclude certain fields from the JSON output. With Jackson you can use JSON views to accomplish this. This option is to refer to the class which has JsonView annotations include String If you want to marshal a pojo to JSON, and the pojo has some fields with null values that you want to skip, you can set this option to NON_NULL allowJmsType false Boolean Used for JMS users to allow the JMSType header from the JMS spec to specify a FQN classname to use to unmarshal to. collectionTypeName String Refers to a custom collection type to lookup in the registry to use. This option should rarely be used, but allows you to use different collection types than java.util.Collection based as default. useList false Boolean To unmarshal to a List of Map or a List of Pojo. enableJaxbAnnotationModule false Boolean Whether to enable the JAXB annotations module when using jackson. When enabled then JAXB annotations can be used by Jackson. moduleClassNames String To use custom Jackson modules com.fasterxml.jackson.databind.Module specified as a String with FQN class names. Multiple classes can be separated by comma. moduleRefs String To use custom Jackson modules referred from the Camel registry. Multiple modules can be separated by comma. enableFeatures String Set of features to enable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches an enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature. Multiple features can be separated by comma disableFeatures String Set of features to disable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches an enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature. Multiple features can be separated by comma permissions String Adds permissions that control which Java packages and classes XStream is allowed to use during unmarshal from xml/json to Java beans. A permission must be configured either here or globally using a JVM system property. The permission can be specified in a syntax where a plus sign is allow, and minus sign is deny. Wildcards are supported by using . as prefix. For example to allow com.foo and all subpackages then specify com.foo.. Multiple permissions can be configured separated by comma, such as com.foo.,-com.foo.bar.MySecretBean. The following default permission is always included: -,java.lang.,java.util.
unless it is overridden by specifying a JVM system property with the key org.apache.camel.xstream.permissions. allowUnmarshallType false Boolean If enabled then Jackson is allowed to attempt to use the CamelJacksonUnmarshalType header during the unmarshalling. This should only be enabled when desired to be used. timezone String If set then Jackson will use the Timezone when marshalling/unmarshalling. This option will have no effect on the other JSON data formats, like gson, fastjson and xstream. contentTypeHeader true Boolean Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSon etc. 185.2. Spring Boot Auto-Configuration The component supports 20 options, which are listed below. Name Description Default Type camel.dataformat.json-jackson.allow-jms-type Used for JMS users to allow the JMSType header from the JMS spec to specify a FQN classname to use to unmarshal to. false Boolean camel.dataformat.json-jackson.allow-unmarshall-type If enabled then Jackson is allowed to attempt to use the CamelJacksonUnmarshalType header during the unmarshalling. This should only be enabled when desired to be used. false Boolean camel.dataformat.json-jackson.collection-type-name Refers to a custom collection type to lookup in the registry to use. This option should rarely be used, but allows you to use different collection types than java.util.Collection based as default. String camel.dataformat.json-jackson.content-type-header Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSon etc. false Boolean camel.dataformat.json-jackson.disable-features Set of features to disable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches an enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature. Multiple features can be separated by comma String camel.dataformat.json-jackson.enable-features Set of features to enable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches an enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature. Multiple features can be separated by comma String camel.dataformat.json-jackson.enable-jaxb-annotation-module Whether to enable the JAXB annotations module when using jackson. When enabled then JAXB annotations can be used by Jackson. false Boolean camel.dataformat.json-jackson.enabled Enable json-jackson dataformat true Boolean camel.dataformat.json-jackson.include If you want to marshal a pojo to JSON, and the pojo has some fields with null values that you want to skip, you can set this option to NON_NULL String camel.dataformat.json-jackson.json-view When marshalling a POJO to JSON you might want to exclude certain fields from the JSON output. With Jackson you can use JSON views to accomplish this. This option is to refer to the class which has JsonView annotations Class camel.dataformat.json-jackson.library Which json library to use.
JsonLibrary camel.dataformat.json-jackson.module-class-names To use custom Jackson modules com.fasterxml.jackson.databind.Module specified as a String with FQN class names. Multiple classes can be separated by comma. String camel.dataformat.json-jackson.module-refs To use custom Jackson modules referred from the Camel registry. Multiple modules can be separated by comma. String camel.dataformat.json-jackson.object-mapper Lookup and use the existing ObjectMapper with the given id when using Jackson. String camel.dataformat.json-jackson.permissions Adds permissions that control which Java packages and classes XStream is allowed to use during unmarshal from xml/json to Java beans. A permission must be configured either here or globally using a JVM system property. The permission can be specified in a syntax where a plus sign is allow, and minus sign is deny. Wildcards are supported by using . as prefix. For example to allow com.foo and all subpackages then specify com.foo.. Multiple permissions can be configured separated by comma, such as com.foo.,-com.foo.bar.MySecretBean. The following default permission is always included: -,java.lang.,java.util. unless it is overridden by specifying a JVM system property with the key org.apache.camel.xstream.permissions. String camel.dataformat.json-jackson.pretty-print To enable pretty printing output nicely formatted. Is by default false. false Boolean camel.dataformat.json-jackson.timezone If set then Jackson will use the Timezone when marshalling/unmarshalling. This option will have no effect on the other JSON data formats, like gson, fastjson and xstream. String camel.dataformat.json-jackson.unmarshal-type-name Class name of the java type to use when unmarshalling String camel.dataformat.json-jackson.use-default-object-mapper Whether to lookup and use default Jackson ObjectMapper from the registry. true Boolean camel.dataformat.json-jackson.use-list To unmarshal to a List of Map or a List of Pojo. false Boolean 185.3. Using custom ObjectMapper You can configure JacksonDataFormat to use a custom ObjectMapper in case you need more control of the mapping configuration. If you set up a single ObjectMapper in the registry, then Camel will automatically look up and use this ObjectMapper . For example if you use Spring Boot, then Spring Boot can provide a default ObjectMapper for you if you have Spring MVC enabled. And this would allow Camel to detect that there is one bean of ObjectMapper class type in the Spring Boot bean registry and then use it. When this happens, Camel logs an INFO message. A minimal configuration sketch that passes a custom ObjectMapper directly to the data format is provided at the end of this chapter. 185.4. Dependencies To use Jackson in your camel routes you need to add the dependency on camel-jackson which implements this data format. If you use maven you could just add the following to your pom.xml, substituting the version number for the latest & greatest release (see the download page for the latest versions). <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jackson</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 185.5. Jackson ObjectMapper 185.5.1. What is object mapping? Jackson provides a mechanism for serializing Java objects, using the com.fasterxml.jackson.databind.ObjectMapper class. For example, you can serialize a MyClass java object using ObjectMapper , as follows: The object, myobject , gets serialized to JSON format and written to the file, myobject.json (Jackson also supports conversion to XML and YAML formats).
To deserialize the JSON contents of the file, myobject.json , you can invoke the ObjectMapper as follows: Note that the receiver needs to know the type of the class in advance and must specify the type, MyClass.class , as the second argument to readValue() . 185.5.2. What is polymorphic object mapping? In some cases, it is impossible for the receiver of a serialized object to know the object's type in advance. For example, this applies to the case of a polymorphic object array. Consider the abstract type, Shape , and its subtypes, Triangle , Square (and so on): You can instantiate and serialize an array list of shapes ( ListOfShape ) as follows: But there is now a problem on the receiver side. You can tell the receiver to expect a ListOfShape object, by specifying this type as the second argument to readValue() : However, there is no way that the receiver can know that the first element of the list is Triangle and the second element is Square . To get around this problem, you need to enable polymorphic object mapping as described in the section. 185.5.3. How to enable polymorphic object mapping Polymorphic object mapping is a mechanism that makes it possible to serialize and deserialize arrays of abstract classes, by providing additional metadata in the serialized array, which identifies the type of the objects in the array. Important Polymorphic object mapping poses an inherent security risk, because the mechanism allows the sender to choose which class to instantiate, which can form the basis of an attack by the sender. Red Hat's distribution of the FasterXML Jackson library features a whitelist mechanism, which provides an extra level of protection against this threat. You must ensure that you are using Red Hat's distribution of the jackson-databind library (provided with Fuse versions 7.7 and later) in order to get this additional layer of protection. For more details, see Section 185.5.5, "Security risk from polymorphic deserialization" . To make it possible for the receiver to deserialize the objects in an array, it is necessary to provide type metadata in the serialized data. By default, Jackson does not encode any type metadata for serialized objects, so you need to write some additional code to enable this feature. To enable polymorphic object mapping, perform the following steps (using ListOfShape as an example): For each of the classes that can be elements of the list (subclasses of Shape ), annotate the class with @JsonTypeInfo , as follows: When the Triangle class is serialized to JSON format, it has the following format: The receiver must be configured to allow deserialization of the Triangle , Square , and other shape classes, by adding these classes to the deserialization whitelist. To configure the whitelist, set the jackson.deserialization.whitelist.packages system property to a comma-separated list of classes and packages. For example, to allow deserialization of the Triangle , Square classes, set the system property as follows: Alternatively, you could set the system property to allow the entire com.example package: Note This whitelist mechanism is available only for Red Hat's distribution of the jackson-databind library. The standard jackson-databind library uses a blacklist mechanism instead, which needs to be updated every time a potentially dangerous new gadget class is discovered. 185.5.4. 
Default mapping for polymorphic deserialization If a given Java class, com.example.MyClass , is not whitelisted, it is still possible to serialize instances of the class, but on the receiving side, the instances will be deserialized using a generic, default mapping. When polymorphic object mapping is enabled in Jackson, there are a few alternative ways of encoding an object: With @JsonTypeInfo(use=JsonTypeInfo.Id.CLASS, include=JsonTypeInfo.As.PROPERTY) : In this case, the instance will be deserialized to an Object with properties. With @JsonTypeInfo(use=JsonTypeInfo.Id.CLASS, include=JsonTypeInfo.As.WRAPPER_ARRAY) : In this case, the instance will be deserialized to a JSON array containing two fields: String with value com.example.MyClass Object with two (or more) properties With @JsonTypeInfo(use=JsonTypeInfo.Id.CLASS, include=JsonTypeInfo.As.WRAPPER_OBJECT) : In this case, the instance will be deserialized to a JSON map with a single field, com.example.MyClass , and the value as Object having two (or more) properties. 185.5.5. Security risk from polymorphic deserialization Applications that use the FasterXML jackson-databind library to instantiate Java objects by deserializing JSON content are potentially vulnerable to a remote code execution attack. The vulnerability is not automatic, however, and it can be avoided if you take the appropriate mitigation steps. At a minimum, the following prerequisites must all be satisfied before an attack becomes possible: You have enabled polymorphic type handling for deserialization of JSON content in jackson-databind . There are two alternative ways of enabling polymorphic type handling in Jackson JSON: Using a combination of the @JsonTypeInfo and @JsonSubTypes annotations. By calling the ObjectMapper.enableDefaultTyping() method. This option is particularly dangerous, as it effectively enables polymorphic typing globally. There are one or more gadget classes in your Java classpath. A gadget class is defined as any class that performs a sensitive (potentially exploitable) operation as a side effect of executing a constructor or a setter method (which are the methods that can be called during deserialization). One or more gadget classes in your Java classpath have not yet been blacklisted by the current version of jackson-databind . If you are using the standard distribution of the jackson-databind library, the gadget blacklist maintained by the Jackson JSON library is the last line of defence against the remote code execution vulnerability. (Red Hat distribution of jackson-databind library only) You explicitly added one of the gadget classes to the deserialization whitelist on the receiver (by setting the jackson.deserialization.whitelist.packages system property). As this is something you are unlikely to do, the whitelist mechanism provides effective protection against all gadget classes by default.
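Section 185.3 describes how Camel looks up a custom ObjectMapper but does not show the configuration itself. The sketch below is one possible way to wire it up, by passing your own mapper to the JacksonDataFormat used in a route; it assumes the MyClass POJO from the examples above, uses placeholder direct: and mock: endpoints, and the constructor signature should be verified against the camel-jackson version you are running. Registering the mapper as a single bean in the registry, as described in Section 185.3, is an equally valid alternative.
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jackson.JacksonDataFormat;

public class CustomMapperRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Build the ObjectMapper with whatever mapping configuration you need.
        ObjectMapper mapper = new ObjectMapper();
        mapper.enable(SerializationFeature.INDENT_OUTPUT);

        // Hand the mapper to the data format instead of relying on a registry lookup.
        // MyClass is the POJO used in the ObjectMapper examples in this chapter.
        JacksonDataFormat jackson = new JacksonDataFormat(mapper, MyClass.class);

        // Marshal POJOs to JSON on the way out ...
        from("direct:marshal").marshal(jackson).to("mock:json");

        // ... and unmarshal JSON back into MyClass instances on the way in.
        from("direct:unmarshal").unmarshal(jackson).to("mock:pojo");
    }
}
If you prefer the registry approach instead, make sure exactly one ObjectMapper bean is bound in the registry and leave useDefaultObjectMapper at its default of true.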
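Section 185.5.5 notes that polymorphic type handling can also be enabled with a combination of the @JsonTypeInfo and @JsonSubTypes annotations, while the earlier example only shows @JsonTypeInfo applied to each subtype. The sketch below shows the annotation combination declared once on the abstract Shape type from the example above; the logical type names and the @type property name are arbitrary choices for illustration, and this demonstrates plain Jackson behaviour only, not the Red Hat whitelist mechanism.
import com.fasterxml.jackson.annotation.JsonSubTypes;
import com.fasterxml.jackson.annotation.JsonTypeInfo;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ShapeMappingDemo {

    // Declare the type metadata once on the base class and list the permitted subtypes.
    @JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "@type")
    @JsonSubTypes({
        @JsonSubTypes.Type(value = Triangle.class, name = "triangle"),
        @JsonSubTypes.Type(value = Square.class, name = "square")
    })
    public abstract static class Shape { }

    public static class Triangle extends Shape { }

    public static class Square extends Shape { }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        // Serializes to {"@type":"triangle"} -- a logical name, not a class name.
        String json = mapper.writeValueAsString(new Triangle());

        // Because Shape declares its subtypes, deserializing against the abstract
        // type still yields a Triangle instance.
        Shape shape = mapper.readValue(json, Shape.class);
        System.out.println(shape.getClass().getSimpleName() + " <- " + json);
    }
}
Using logical names keeps fully qualified class names out of the payload, which is generally preferable to Id.CLASS from a security point of view, but it does not remove the need for the mitigation steps described in Section 185.5.5.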
[ "from(\"activemq:My.Queue\"). marshal().json(JsonLibrary.Jackson). to(\"mqseries:Another.Queue\");", "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jackson</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "ObjectMapper objectMapper = new ObjectMapper(); MyClass myobject = new MyClass(\"foo\", \"bar\"); objectMapper.writeValue(new File(\"myobject.json\"), myobject);", "ObjectMapper objectMapper = new ObjectMapper(); MyClass myobject = objectMapper.readValue(new File(\"myobject.json\"), MyClass.class);", "package com.example; public abstract class Shape { } public class Triangle extends Shape { } public class Square extends Shape { } public class ListOfShape { public List<Shape> shapes; }", "ObjectMapper objectMapper = new ObjectMapper(); ListOfShape shapeList = new ListOfShape(); shapeList.shapes = new ArrayList<Shape>(); shapeList.shapes.add(new Triangle()); shapeList.shapes.add(new Square()); String serialized = objectMapper.writeValueAsString(shapeList);", "MyClass myobject = objectMapper.readValue(serialized, ListOfShape.class); ObjectMapper objectMapper = new ObjectMapper();", "@JsonTypeInfo(use=JsonTypeInfo.Id.CLASS, include=JsonTypeInfo.As.PROPERTY) public class Triangle extends Shape { }", "{\"@class\":\"com.example.Triangle\", \"property1\":\"value1\", \"property2\":\"value2\", ...}", "-Djackson.deserialization.whitelist.packages=com.example.Triangle,com.example.Square", "-Djackson.deserialization.whitelist.packages=com.example", "{\"@class\":\"com.example.MyClass\", \"property1\":\"value1\", \"property2\":\"value2\", ...}", "[\"com.example.MyClass\", {\"property1\":\"value1\", \"property2\":\"value2\", ...}]", "{\"com.example.MyClass\":{\"property1\":\"value1\", \"property2\":\"value2\", ...}}" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/json-jackson-dataformat
3.13. Optimizations in User Space
3.13. Optimizations in User Space Reducing the amount of work performed by system hardware is fundamental to saving power. Therefore, although the changes described in Chapter 3, Core Infrastructure and Mechanics permit the system to operate in various states of reduced power consumption, applications in user space that request unnecessary work from system hardware prevent the hardware from entering those states. During the development of Red Hat Enterprise Linux 6, audits were undertaken in the following areas to reduce unnecessary demands on hardware: Reduced wakeups Red Hat Enterprise Linux 6 uses a tickless kernel (refer to Section 3.6, "Tickless Kernel" ), which allows the CPUs to remain in deeper idle states longer. However, the timer tick is not the only source of excessive CPU wakeups, and function calls from applications can also prevent the CPU from entering or remaining in idle states. Unnecessary function calls were reduced in over 50 applications. Reduced storage and network I/O Input or output (I/O) to storage devices and network interfaces forces devices to consume power. In storage and network devices that feature reduced power states when idle (for example, ALPM or ASPM), this traffic can prevent the device from entering or remaining in an idle state, and can prevent hard drives from spinning down when not in use. Excessive and unnecessary demands on storage have been minimized in several applications. In particular, those demands that prevented hard drives from spinning down. Initscript audit Services that start automatically whether required or not have great potential to waste system resources. Services instead should default to "off" or "on demand" wherever possible. For example, the BlueZ service that enables Bluetooth support previously ran automatically when the system started, whether Bluetooth hardware was present or not. The BlueZ initscript now checks that Bluetooth hardware is present on the system before starting the service.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/power_management_guide/optimizations_in_user_space
Chapter 5. Optimizing the replica topology
Chapter 5. Optimizing the replica topology A robust replica topology distributes workloads and reduces replication delays. Follow these guidelines to optimize the layout of your replica topology. 5.1. Guidelines for determining the appropriate number of IdM replicas in a topology Plan IdM topology to match your organization's requirements and ensure optimal performance and service availability. Set up at least two replicas in each data center Deploy at least two replicas in each data center to ensure that if one server fails, the replica can take over and handle requests. Set up a sufficient number of servers to serve your clients One Identity Management (IdM) server can provide services to 2000 - 3000 clients. This assumes the clients query the servers multiple times a day, but not, for example, every minute. If you expect frequent queries, plan for more servers. Set up a sufficient number of Certificate Authority (CA) replicas Only replicas with the CA role installed can replicate certificate data. If you use the IdM CA, ensure your environment has at least two CA replicas with certificate replication agreements between them. Set up a maximum of 60 replicas in a single IdM domain Red Hat supports environments with up to 60 replicas. 5.2. Guidelines for connecting IdM replicas in a topology Connect each replica to at least two other replicas This ensures that information is replicated not just between the initial replica and the first server you installed, but between other replicas as well. Connect a replica to a maximum of four other replicas (not a hard requirement) A large number of replication agreements per server does not add significant benefits. A receiving replica can only be updated by one other replica at a time and meanwhile, the other replication agreements are idle. More than four replication agreements per replica typically means a waste of resources. Note This recommendation applies to both certificate replication and domain replication agreements. There are two exceptions to the limit of four replication agreements per replica: You want failover paths if certain replicas are not online or responding. In larger deployments, you want additional direct links between specific nodes. Configuring a high number of replication agreements can have a negative impact on overall performance: when multiple replication agreements in the topology are sending updates, certain replicas can experience a high contention on the changelog database file between incoming updates and the outgoing updates. If you decide to use more replication agreements per replica, ensure that you do not experience replication issues and latency. However, note that large distances and high numbers of intermediate nodes can also cause latency problems. Connect the replicas in a data center with each other This ensures domain replication within the data center. Connect each data center to at least two other data centers This ensures domain replication between data centers. Connect data centers using at least a pair of replication agreements If data centers A and B have a replication agreement from A1 to B1, having a replication agreement from A2 to B2 ensures that if one of the servers is down, the replication can continue between the two data centers. 5.3. Replica topology examples You can create a reliable replica topology by using one of the following examples. Figure 5.1. Replica topology with four data centers, each with four servers that are connected with replication agreements Figure 5.2. 
Replica topology with three data centers, each with a different number of servers that are all interconnected through replication agreements 5.4. Uninstalling the IdM CA service from an IdM server If you have more than four Identity Management (IdM) replicas with the CA Role in your topology and you run into performance problems due to redundant certificate replication, remove redundant CA service instances from IdM replicas. To do this, you must first decommission the affected IdM replicas completely before re-installing IdM on them, this time without the CA service. Note While you can add the CA role to an IdM replica, IdM does not provide a method to remove only the CA role from an IdM replica: the ipa-ca-install command does not have an --uninstall option. Prerequisites You have the IdM CA service installed on more than four IdM servers in your topology. Procedure Identify the redundant CA service and follow the procedure in Uninstalling an IdM server on the IdM replica that hosts this service. On the same host, follow the procedure in Installing an IdM server: With integrated DNS, without a CA . 5.5. Additional resources Planning the replica topology . Managing replication topology .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/tuning_performance_in_identity_management/optimizing-the-replica-topology_tuning-performance-in-idm
function::symfileline
function::symfileline Name function::symfileline - Return the file name and line number of an address. Synopsis Arguments addr The address to translate. Description Returns the file name and the (approximate) line number of the given address, if known. If the file name or the line number cannot be found, the hex string representation of the address will be returned.
[ "symfileline:string(addr:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-symfileline
Appendix B. Revision history
Appendix B. Revision history 0.5-0 Tue March 18 2025, Gabriela Fialova ( [email protected] ) Added a Known Issue in RHEL-82566 (Installer) 0.2-8 Tue March 11 2025, Gabriela Fialova ( [email protected] ) Added a Feature in RHELDOCS-19755 (RHEL in cloud environments) Updated a Deprecated functionality in RHEL-30730 (Filesystems and storage) 0.2-7 Thu March 6 2025, Gabriela Fialova ( [email protected] ) Updated a Technology Preview in RHELPLAN-145900 (IdM) 0.2-6 Thu February 27 2025, Marc Muehlfeld ( [email protected] ) Added a Technology Preview in RHELDOCS-19773 (Networking) 0.2-5 Mon February 24 2025, Gabriela Fialova ( [email protected] ) Added a Known Issue in RHELDOCS-19626 (Security) Updated an Enhancement in RHEL-14941 (Dynamic programming languages) 0.2-4 Thu Jan 30 2025, Gabriela Fialova ( [email protected] ) Added an Known Issue RHELDOCS-19603 (IdM SSSD) 0.2-3 Mon Jan 20 2025, Gabriela Fialova ( [email protected] ) Added an Known Issue RHEL-13837 (Installer) 0.2-2 Wed Dec 4 2024, Gabriela Fialova ( [email protected] ) Updated the Customer Portal labs section Updated the Installation section 0.2-1 Fri Nov 22 2024, Gabi Fialova ( [email protected] ) Added a Bug Fix RHEL-56786 (IdM) 0.2-0 Tue Nov 19 2024, Gabi Fialova ( [email protected] ) Removed a Known Issue BZ-2057471 (IdM) Updated a Known Issue RHEL-4888 (IdM) 0.1-9 Thu Oct 31 2024, Gabriela Fialova ( [email protected] ) Added an Known Issue RHEL-17614 (Virtualization) 0.1-8 Wed Oct 09 2024, Gabriela Fialova ( [email protected] ) Updated a Deprecated Functionality RHELPLAN-100639 (Identity Management) 0.1-7 Thu Oct 03 2024, Gabriela Fialova ( [email protected] ) Added an Known Issue RHEL-56135 (Installer) 0.1-6 Fri Sep 27 2024, Gabriela Fialova ( [email protected] ) Added a new Known Issue RHELDOCS-18924 (Installer) 0.1-5 Wed Aug 14 2024, Gabriela Fialova ( [email protected] ) Added a new Technology Preview RHEL-7936 (File Systems and Storage) 0.1-4 Thu Aug 8 2024, Gabriela Fialova ( [email protected] ) Added a new Known Issue RHEL-45727 (Security) 0.1-3 Thu Jul 25 2024, Gabriela Fialova ( [email protected] ) Updated the text in Enhancement RHEL-14485 (Networking) 0.1-2 Thu Jul 18 2024, Gabriela Fialova ( [email protected] ) Updated a Deprecated Functionality BZ-1899167 (Installer) Updated the abstract in the Deprecated functionalities section 0.1-1 Thu Jul 11 2024, Lenka Spackova ( [email protected] ) Added a Known Issue RHEL-45705 (System Roles) 0.1-0 Mon Jul 08 2024, Lenka Spackova ( [email protected] ) Fixed formatting and reference in RHEL-23798 (Compilers and development tools) 0.0-9 Thu Jun 27 2024, Gabriela Fialova ( [email protected] ) Removed a Known Issue Jira-RHELDOCS-17720 (System roles) 0.0-8 Tue Jun 25 2024, Lenka Spackova ( [email protected] ) Added a Known Issue RHELDOCS-18435 (Dynamic programming languages, web and database servers) 0.0-7 Wed Jun 12 2024, Brian Angelica ( [email protected] ) Updated an Enhancement RHELPLAN-169666 (Identity Management) 0.0-6 Wed May 29 2024, Gabriela Fialova ( [email protected] ) Added an Enhancement RHEL-12490 (Compilers and development tools) Added an Enhancement RHEL-12491 (Compilers and development tools) Updated an Enhancement RHEL-13760 (RHEL System Roles) Updated the In-place upgrade section 0.0-5 Tue May 28 2024, Lenka Spackova ( [email protected] ) Fixed formatting of RHEL-16629 (Compilers and development tools) 0.0-4 Thu May 23 2024, Gabriela Fialova ( [email protected] ) Updated the Enhancement RHEL-23798 (Compilers and development tools) 0.0-3 Tue May 21 2024, Lenka 
Spackova ( [email protected] ) Added an Enhancement RHEL-35685 (Dynamic programming languages, web and database servers) 0.0-2 Thu May 16 2024, Gabriela Fialova ( [email protected] ) Added an Enhancement RHEL-16336 (RHEL System Roles) Added an Enhancement RHEL-13760 (RHEL System Roles) 0.0-1 Wed May 01 2024, Gabriela Fialova ( [email protected] ) Release of the Red Hat Enterprise Linux 9.4 Release Notes. 0.0-0 Wed March 27 2024, Gabriela Fialova ( [email protected] ) Release of the Red Hat Enterprise Linux 9.4 Beta Release Notes.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.4_release_notes/revision_history
Chapter 2. Why Use Virtualization?
Chapter 2. Why Use Virtualization? Virtualization can be useful both for server deployments and individual desktop stations. Desktop virtualization offers cost-efficient centralized management and better disaster recovery. In addition, by using connection tools such as ssh , it is possible to connect to a desktop remotely. When used for servers, virtualization can benefit not only larger networks, but also deployments with more than a single server. Virtualization provides live migration, high availability, fault tolerance, and streamlined backups. 2.1. Virtualization Costs Virtualization can be expensive to introduce, but it often saves money in the long term. Consider the following benefits: Less power Using virtualization negates much of the need for multiple physical platforms. This equates to less power being drawn for machine operation and cooling, resulting in reduced energy costs. The initial cost of purchasing multiple physical platforms, combined with the machines' power consumption and required cooling, is drastically cut by using virtualization. Less maintenance Provided that adequate planning is performed before migrating physical systems to virtualized ones, less time is needed to maintain them. This means less money needs to be spent on parts and labor. Extended life for installed software Older versions of software may not be able to run directly on more recent bare-metal machines. By running older software virtually on a larger, faster system, the life of the software may be extended while taking advantage of better performance from a newer system. Predictable costs A Red Hat Enterprise Linux subscription provides support for virtualization at a fixed rate, making it easy to predict costs. Less space Consolidating servers onto fewer machines means less physical space is required for computer systems.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_getting_started_guide/chap-virtualization_getting_started-advantages
Appendix A. Upgrading GFS
Appendix A. Upgrading GFS To upgrade a node to Red Hat GFS 6.1 from earlier versions of Red Hat GFS, you must convert the GFS cluster configuration archive (CCA) to a Red Hat Cluster Suite cluster configuration system (CCS) configuration file ( /etc/cluster/cluster.conf ) and convert GFS pool volumes to LVM2 volumes. This appendix contains instructions for upgrading from GFS 6.0 (or GFS 5.2.1) to Red Hat GFS 6.1, using GULM as the lock manager. Note You must retain GULM lock management for the upgrade to Red Hat GFS 6.1; that is, you cannot change from GULM lock management to DLM lock management during the upgrade to Red Hat GFS 6.1. However, after the upgrade to GFS 6.1, you can change lock managers. The following procedure demonstrates upgrading to Red Hat GFS 6.1 from a GFS 6.0 (or GFS 5.2.1) configuration with an example pool configuration for a pool volume named argus . Halt the GFS nodes and the lock server nodes as follows: Unmount GFS file systems from all nodes. Stop the lock servers; at each lock server node, stop the lock server as follows: Stop ccsd at all nodes; at each node, stop ccsd as follows: Deactivate pools; at each node, deactivate GFS pool volumes as follows: Uninstall Red Hat GFS RPMs. Install new software: Install Red Hat Enterprise Linux version 4 software (or verify that it is installed). Install Red Hat Cluster Suite and Red Hat GFS RPMs. At all GFS 6.1 nodes, create a cluster configuration file directory ( /etc/cluster ) and upgrade the CCA (in this example, located in /dev/pool/cca ) to the new Red Hat Cluster Suite CCS configuration file format by running the ccs_tool upgrade command as shown in the following example: At all GFS 6.1 nodes, start ccsd , run the lock_gulmd -c command, and start clvmd as shown in the following example: Note Ignore the warning message following the lock_gulmd -c command. Because the cluster name is already included in the converted configuration file, there is no need to specify a cluster name when issuing the lock_gulmd -c command. At all GFS 6.1 nodes, run vgscan as shown in the following example: At one GFS 6.1 node, convert the pool volume to an LVM2 volume by running the vgconvert command as shown in the following example: At all GFS 6.1 nodes, run vgchange -ay as shown in the following example: At the first node to mount a GFS file system, run the mount command with the upgrade option as shown in the following example: Note This step only needs to be done once - on the first mount of the GFS file system. Note If static minor numbers were used on pool volumes and the GFS 6.1 nodes are using LVM2 for other purposes (root file system) there may be problems activating the pool volumes under GFS 6.1. That is because of static minor conflicts. Refer to the following Bugzilla report for more information: https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=146035
[ "poolname argus subpools 1 subpool 0 512 1 gfs_data pooldevice 0 0 /dev/sda1", "service lock_gulmd stop", "service ccsd stop", "service pool stop", "mkdir /etc/cluster ccs_tool upgrade /dev/pool/cca > /etc/cluster/cluster.conf", "ccsd lock_gulmd -c Warning! You didn't specify a cluster name before --use_ccs Letting ccsd choose which cluster we belong to. clvmd", "vgscan Reading all physical volumes. This may take a while Found volume group \"argus\" using metadata type pool", "vgconvert -M2 argus Volume group argus successfully converted", "vgchange -ay 1 logical volume(s) in volume group \"argus\" now active", "mount -t gfs -o upgrade /dev/pool/argus /mnt/gfs1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/global_file_system/gfs_upgrade
Chapter 47. Managing host groups using Ansible playbooks
Chapter 47. Managing host groups using Ansible playbooks To learn more about host groups in Identity Management (IdM) and using Ansible to perform operations involving host groups in Identity Management (IdM), see the following: Host groups in IdM Ensuring the presence of IdM host groups Ensuring the presence of hosts in IdM host groups Nesting IdM host groups Ensuring the presence of member managers in IdM host groups Ensuring the absence of hosts from IdM host groups Ensuring the absence of nested host groups from IdM host groups Ensuring the absence of member managers from IdM host groups 47.1. Host groups in IdM IdM host groups can be used to centralize control over important management tasks, particularly access control. Definition of host groups A host group is an entity that contains a set of IdM hosts with common access control rules and other characteristics. For example, you can define host groups based on company departments, physical locations, or access control requirements. A host group in IdM can include: IdM servers and clients Other IdM host groups Host groups created by default By default, the IdM server creates the host group ipaservers for all IdM server hosts. Direct and indirect group members Group attributes in IdM apply to both direct and indirect members: when host group B is a member of host group A, all members of host group B are considered indirect members of host group A. 47.2. Ensuring the presence of IdM host groups using Ansible playbooks Follow this procedure to ensure the presence of host groups in Identity Management (IdM) using Ansible playbooks. Note Without Ansible, host group entries are created in IdM using the ipa hostgroup-add command. The result of adding a host group to IdM is the state of the host group being present in IdM. Because of the Ansible reliance on idempotence, to add a host group to IdM using Ansible, you must create a playbook in which you define the state of the host group as present: state: present . Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host group information. For example, to ensure the presence of a host group named databases , specify name: databases in the - ipahostgroup task. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/user/ensure-hostgroup-is-present.yml file. In the playbook, state: present signifies a request to add the host group to IdM unless it already exists there. Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about the host group whose presence in IdM you wanted to ensure: The databases host group exists in IdM. 47.3. 
Ensuring the presence of hosts in IdM host groups using Ansible playbooks Follow this procedure to ensure the presence of hosts in host groups in Identity Management (IdM) using Ansible playbooks. Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The hosts you want to reference in your Ansible playbook exist in IdM. For details, see Ensuring the presence of an IdM host entry using Ansible playbooks . The host groups you reference from the Ansible playbook file have been added to IdM. For details, see Ensuring the presence of IdM host groups using Ansible playbooks . Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host information. Specify the name of the host group using the name parameter of the ipahostgroup variable. Specify the name of the host with the host parameter of the ipahostgroup variable. To simplify this step, you can copy and modify the examples in the /usr/share/doc/ansible-freeipa/playbooks/hostgroup/ensure-hosts-and-hostgroups-are-present-in-hostgroup.yml file: This playbook adds the db.idm.example.com host to the databases host group. The action: member line indicates that when the playbook is run, no attempt is made to add the databases group itself. Instead, only an attempt is made to add db.idm.example.com to databases . Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about a host group to see which hosts are present in it: The db.idm.example.com host is present as a member of the databases host group. 47.4. Nesting IdM host groups using Ansible playbooks Follow this procedure to ensure the presence of nested host groups in Identity Management (IdM) host groups using Ansible playbooks. Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The host groups you reference from the Ansible playbook file exist in IdM. For details, see Ensuring the presence of IdM host groups using Ansible playbooks . Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host group information. 
To ensure that a nested host group A exists in a host group B : in the Ansible playbook, specify, among the - ipahostgroup variables, the name of the host group B using the name variable. Specify the name of the nested hostgroup A with the hostgroup variable. To simplify this step, you can copy and modify the examples in the /usr/share/doc/ansible-freeipa/playbooks/hostgroup/ensure-hosts-and-hostgroups-are-present-in-hostgroup.yml file: This Ansible playbook ensures the presence of the mysql-server and oracle-server host groups in the databases host group. The action: member line indicates that when the playbook is run, no attempt is made to add the databases group itself to IdM. Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about the host group in which nested host groups are present: The mysql-server and oracle-server host groups exist in the databases host group. 47.5. Ensuring the presence of member managers in IdM host groups using Ansible playbooks The following procedure describes ensuring the presence of member managers in IdM hosts and host groups using an Ansible playbook. Prerequisites On the control node: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You must have the name of the host or host group you are adding as member managers and the name of the host group you want them to manage. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the necessary host and host group member management information: Run the playbook: Verification You can verify if the group_name group contains example_member and project_admins as member managers by using the ipa group-show command: Log into ipaserver as administrator: Display information about testhostgroup : Additional resources See ipa hostgroup-add-member-manager --help . See the ipa man page on your system. 47.6. Ensuring the absence of hosts from IdM host groups using Ansible playbooks Follow this procedure to ensure the absence of hosts from host groups in Identity Management (IdM) using Ansible playbooks. Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The hosts you want to reference in your Ansible playbook exist in IdM. For details, see Ensuring the presence of an IdM host entry using Ansible playbooks . The host groups you reference from the Ansible playbook file exist in IdM. For details, see Ensuring the presence of IdM host groups using Ansible playbooks .
Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host and host group information. Specify the name of the host group using the name parameter of the ipahostgroup variable. Specify the name of the host whose absence from the host group you want to ensure using the host parameter of the ipahostgroup variable. To simplify this step, you can copy and modify the examples in the /usr/share/doc/ansible-freeipa/playbooks/hostgroup/ensure-hosts-and-hostgroups-are-absent-in-hostgroup.yml file: This playbook ensures the absence of the db.idm.example.com host from the databases host group. The action: member line indicates that when the playbook is run, no attempt is made to remove the databases group itself. Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about the host group and the hosts it contains: The db.idm.example.com host does not exist in the databases host group. 47.7. Ensuring the absence of nested host groups from IdM host groups using Ansible playbooks Follow this procedure to ensure the absence of nested host groups from outer host groups in Identity Management (IdM) using Ansible playbooks. Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The host groups you reference from the Ansible playbook file exist in IdM. For details, see Ensuring the presence of IdM host groups using Ansible playbooks . Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host group information. Specify, among the - ipahostgroup variables, the name of the outer host group using the name variable. Specify the name of the nested hostgroup with the hostgroup variable. To simplify this step, you can copy and modify the examples in the /usr/share/doc/ansible-freeipa/playbooks/hostgroup/ensure-hosts-and-hostgroups-are-absent-in-hostgroup.yml file: This playbook makes sure that the mysql-server and oracle-server host groups are absent from the databases host group. The action: member line indicates that when the playbook is run, no attempt is made to ensure the databases group itself is deleted from IdM. Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about the host group from which nested host groups should be absent: The output confirms that the mysql-server and oracle-server nested host groups are absent from the outer databases host group. 47.8. Ensuring the absence of IdM host groups using Ansible playbooks Follow this procedure to ensure the absence of host groups in Identity Management (IdM) using Ansible playbooks. Note Without Ansible, host group entries are removed from IdM using the ipa hostgroup-del command. 
The result of removing a host group from IdM is the state of the host group being absent from IdM. Because of the Ansible reliance on idempotence, to remove a host group from IdM using Ansible, you must create a playbook in which you define the state of the host group as absent: state: absent . Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it with the list of IdM servers to target: Create an Ansible playbook file with the necessary host group information. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/user/ensure-hostgroup-is-absent.yml file. This playbook ensures the absence of the databases host group from IdM. The state: absent means a request to delete the host group from IdM unless it is already deleted. Run the playbook: Verification Log into ipaserver as admin: Request a Kerberos ticket for admin: Display information about the host group whose absence you ensured: The databases host group does not exist in IdM. 47.9. Ensuring the absence of member managers from IdM host groups using Ansible playbooks The following procedure describes ensuring the absence of member managers in IdM hosts and host groups using an Ansible playbook. Prerequisites On the control node: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You must have the name of the user or user group you are removing as member managers and the name of the host group they are managing. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the necessary host and host group member management information: Run the playbook: Verification You can verify if the group_name group does not contain example_member or project_admins as member managers by using the ipa group-show command: Log into ipaserver as administrator: Display information about testhostgroup : Additional resources See ipa hostgroup-add-member-manager --help . See the ipa man page on your system.
[ "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure host-group databases is present - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases state: present", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hostgroup-is-present.yml", "ssh [email protected] Password: [admin@server /]USD", "kinit admin Password for [email protected]:", "ipa hostgroup-show databases Host-group: databases", "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure host-group databases is present - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases host: - db.idm.example.com action: member", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-or-hostgroups-are-present-in-hostgroup.yml", "ssh [email protected] Password: [admin@server /]USD", "kinit admin Password for [email protected]:", "ipa hostgroup-show databases Host-group: databases Member hosts: db.idm.example.com", "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure hosts and hostgroups are present in existing databases hostgroup - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases hostgroup: - mysql-server - oracle-server action: member", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-or-hostgroups-are-present-in-hostgroup.yml", "ssh [email protected] Password: [admin@server /]USD", "kinit admin Password for [email protected]:", "ipa hostgroup-show databases Host-group: databases Member hosts: db.idm.example.com Member host-groups: mysql-server, oracle-server", "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle host group membership management hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure member manager user example_member is present for group_name ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_name membermanager_user: example_member - name: Ensure member manager group project_admins is present for group_name ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_name membermanager_group: project_admins", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/add-member-managers-host-groups.yml", "ssh [email protected] Password: [admin@server /]USD", "ipaserver]USD ipa hostgroup-show group_name Host-group: group_name Member hosts: server.idm.example.com Member host-groups: testhostgroup2 Membership managed by groups: project_admins Membership managed by users: example_member", "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure host-group databases is absent - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases host: - db.idm.example.com action: member state: absent", "ansible-playbook --vault-password-file=password_file -v -i 
path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-or-hostgroups-are-absent-in-hostgroup.yml", "ssh [email protected] Password: [admin@server /]USD", "kinit admin Password for [email protected]:", "ipa hostgroup-show databases Host-group: databases Member host-groups: mysql-server, oracle-server", "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure hosts and hostgroups are absent in existing databases hostgroup - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases hostgroup: - mysql-server - oracle-server action: member state: absent", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-or-hostgroups-are-absent-in-hostgroup.yml", "ssh [email protected] Password: [admin@server /]USD", "kinit admin Password for [email protected]:", "ipa hostgroup-show databases Host-group: databases", "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - Ensure host-group databases is absent ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases state: absent", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hostgroup-is-absent.yml", "ssh [email protected] Password: [admin@server /]USD", "kinit admin Password for [email protected]:", "ipa hostgroup-show databases ipa: ERROR: databases: host group not found", "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle host group membership management hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure member manager host and host group members are absent for group_name ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_name membermanager_user: example_member membermanager_group: project_admins action: member state: absent", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-member-managers-host-groups-are-absent.yml", "ssh [email protected] Password: [admin@server /]USD", "ipaserver]USD ipa hostgroup-show group_name Host-group: group_name Member hosts: server.idm.example.com Member host-groups: testhostgroup2" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/managing-host-groups-using-Ansible-playbooks_managing-users-groups-hosts
Remediation, fencing, and maintenance
Remediation, fencing, and maintenance. Workload Availability for Red Hat OpenShift 24.4. Workload Availability remediation, fencing, and maintenance. Red Hat Customer Content Services.
null
https://docs.redhat.com/en/documentation/workload_availability_for_red_hat_openshift/24.4/html/remediation_fencing_and_maintenance/index
Chapter 4. Configuration
Chapter 4. Configuration This chapter describes the process for binding the AMQ OpenWire JMS implementation to your JMS application and setting configuration options. JMS uses the Java Naming Directory Interface (JNDI) to register and look up API implementations and other resources. This enables you to write code to the JMS API without tying it to a particular implementation. Configuration options are exposed as query parameters on the connection URI. For more information about configuring AMQ OpenWire JMS, see the ActiveMQ user guide . 4.1. Configuring the JNDI initial context JMS applications use a JNDI InitialContext object obtained from an InitialContextFactory to look up JMS objects such as the connection factory. AMQ OpenWire JMS provides an implementation of the InitialContextFactory in the org.apache.activemq.jndi.ActiveMQInitialContextFactory class. The InitialContextFactory implementation is discovered when the InitialContext object is instantiated: javax.naming.Context context = new javax.naming.InitialContext(); To find an implementation, JNDI must be configured in your environment. There are three ways of achieving this: using a jndi.properties file, using a system property, or using the initial context API. Using a jndi.properties file Create a file named jndi.properties and place it on the Java classpath. Add a property with the key java.naming.factory.initial . Example: Setting the JNDI initial context factory using a jndi.properties file java.naming.factory.initial = org.apache.activemq.jndi.ActiveMQInitialContextFactory In Maven-based projects, the jndi.properties file is placed in the <project-dir> /src/main/resources directory. Using a system property Set the java.naming.factory.initial system property. Example: Setting the JNDI initial context factory using a system property USD java -Djava.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory ... Using the initial context API Use the JNDI initial context API to set properties programatically. Example: Setting JNDI properties programatically Hashtable<Object, Object> env = new Hashtable<>(); env.put("java.naming.factory.initial", "org.apache.activemq.jndi.ActiveMQInitialContextFactory"); InitialContext context = new InitialContext(env); Note that you can use the same API to set the JNDI properties for connection factories, queues, and topics. 4.2. Configuring the connection factory The JMS connection factory is the entry point for creating connections. It uses a connection URI that encodes your application-specific configuration settings. To set the factory name and connection URI, create a property in the format below. You can store this configuration in a jndi.properties file or set the corresponding system property. The JNDI property format for connection factories connectionFactory. <lookup-name> = <connection-uri> For example, this is how you might configure a factory named app1 : Example: Setting the connection factory in a jndi.properties file connectionFactory.app1 = tcp://example.net:61616?jms.clientID=backend You can then use the JNDI context to look up your configured connection factory using the name app1 : ConnectionFactory factory = (ConnectionFactory) context.lookup("app1"); 4.3. Connection URIs Connections are configured using a connection URI. The connection URI specifies the remote host, port, and a set of configuration options, which are set as query parameters. For more information about the available options, see Chapter 5, Configuration options . 
The connection URI format The scheme is tcp for unencrypted connections and ssl for SSL/TLS connections. For example, the following is a connection URI that connects to host example.net at port 61616 and sets the client ID to backend : Example: A connection URI Failover URIs URIs used for reconnect and failover can contain multiple connection URIs. They take the following form: The failover URI format Transport options prefixed with nested. are applied to each connection URI in the list. 4.4. Configuring queue and topic names JMS provides the option of using JNDI to look up deployment-specific queue and topic resources. To set queue and topic names in JNDI, create properties in the following format. Either place this configuration in a jndi.properties file or set corresponding system properties. The JNDI property format for queues and topics queue. <lookup-name> = <queue-name> topic. <lookup-name> = <topic-name> For example, the following properties define the names jobs and notifications for two deployment-specific resources: Example: Setting queue and topic names in a jndi.properties file queue.jobs = app1/work-items topic.notifications = app1/updates You can then look up the resources by their JNDI names: Queue queue = (Queue) context.lookup("jobs"); Topic topic = (Topic) context.lookup("notifications");
[ "javax.naming.Context context = new javax.naming.InitialContext();", "java.naming.factory.initial = org.apache.activemq.jndi.ActiveMQInitialContextFactory", "java -Djava.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory", "Hashtable<Object, Object> env = new Hashtable<>(); env.put(\"java.naming.factory.initial\", \"org.apache.activemq.jndi.ActiveMQInitialContextFactory\"); InitialContext context = new InitialContext(env);", "connectionFactory. <lookup-name> = <connection-uri>", "connectionFactory.app1 = tcp://example.net:61616?jms.clientID=backend", "ConnectionFactory factory = (ConnectionFactory) context.lookup(\"app1\");", "<scheme>://<host>:<port>[?<option>=<value>[&<option>=<value>...]]", "tcp://example.net:61616?jms.clientID=backend", "failover:(<connection-uri>[,<connection-uri>])[?<option>=<value>[&<option>=<value>...]]", "queue. <lookup-name> = <queue-name> topic. <lookup-name> = <topic-name>", "queue.jobs = app1/work-items topic.notifications = app1/updates", "Queue queue = (Queue) context.lookup(\"jobs\"); Topic topic = (Topic) context.lookup(\"notifications\");" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_openwire_jms_client/configuration
Chapter 6. Functions
Chapter 6. Functions 6.1. Setting up OpenShift Serverless Functions To improve the process of deployment of your application code, you can use OpenShift Serverless to deploy stateless, event-driven functions as a Knative service on OpenShift Container Platform. If you want to develop functions, you must complete the set up steps. 6.1.1. Prerequisites To enable the use of OpenShift Serverless Functions on your cluster, you must complete the following steps: The OpenShift Serverless Operator and Knative Serving are installed on your cluster. Note Functions are deployed as a Knative service. If you want to use event-driven architecture with your functions, you must also install Knative Eventing. You have the oc CLI installed. You have the Knative ( kn ) CLI installed. Installing the Knative CLI enables the use of kn func commands which you can use to create and manage functions. You have installed Docker Container Engine or Podman version 3.4.7 or higher. You have access to an available image registry, such as the OpenShift Container Registry. If you are using Quay.io as the image registry, you must ensure that either the repository is not private, or that you have followed the OpenShift Container Platform documentation on Allowing pods to reference images from other secured registries . If you are using the OpenShift Container Registry, a cluster administrator must expose the registry . 6.1.2. Setting up Podman To use advanced container management features, you might want to use Podman with OpenShift Serverless Functions. To do so, you need to start the Podman service and configure the Knative ( kn ) CLI to connect to it. Procedure Start the Podman service that serves the Docker API on a UNIX socket at USD{XDG_RUNTIME_DIR}/podman/podman.sock : USD systemctl start --user podman.socket Note On most systems, this socket is located at /run/user/USD(id -u)/podman/podman.sock . Establish the environment variable that is used to build a function: USD export DOCKER_HOST="unix://USD{XDG_RUNTIME_DIR}/podman/podman.sock" Run the build command inside your function project directory with the -v flag to see verbose output. You should see a connection to your local UNIX socket: USD kn func build -v 6.1.3. Setting up Podman on macOS To use advanced container management features, you might want to use Podman with OpenShift Serverless Functions. To do so on macOS, you need to start the Podman machine and configure the Knative ( kn ) CLI to connect to it. Procedure Create the Podman machine: USD podman machine init --memory=8192 --cpus=2 --disk-size=20 Start the Podman machine, which serves the Docker API on a UNIX socket: USD podman machine start Starting machine "podman-machine-default" Waiting for VM ... Mounting volume... /Users/myuser:/Users/user [...truncated output...] You can still connect Docker API clients by setting DOCKER_HOST using the following command in your terminal session: export DOCKER_HOST='unix:///Users/myuser/.local/share/containers/podman/machine/podman-machine-default/podman.sock' Machine "podman-machine-default" started successfully Note On most macOS systems, this socket is located at /Users/myuser/.local/share/containers/podman/machine/podman-machine-default/podman.sock . Establish the environment variable that is used to build a function: USD export DOCKER_HOST='unix:///Users/myuser/.local/share/containers/podman/machine/podman-machine-default/podman.sock' Run the build command inside your function project directory with the -v flag to see verbose output. 
You should see a connection to your local UNIX socket: USD kn func build -v 6.1.4. steps For more information about Docker Container Engine or Podman, see Container build tool options . See Getting started with functions . 6.2. Getting started with functions Function lifecycle management includes creating, building, and deploying a function. Optionally, you can also test a deployed function by invoking it. You can do all of these operations on OpenShift Serverless using the kn func tool. 6.2.1. Prerequisites Before you can complete the following procedures, you must ensure that you have completed all of the prerequisite tasks in Setting up OpenShift Serverless Functions . 6.2.2. Creating functions Before you can build and deploy a function, you must create it by using the Knative ( kn ) CLI. You can specify the path, runtime, template, and image registry as flags on the command line, or use the -c flag to start the interactive experience in the terminal. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have installed the Knative ( kn ) CLI. Procedure Create a function project: USD kn func create -r <repository> -l <runtime> -t <template> <path> Accepted runtime values include quarkus , node , typescript , go , python , springboot , and rust . Accepted template values include http and cloudevents . Example command USD kn func create -l typescript -t cloudevents examplefunc Example output Created typescript function in /home/user/demo/examplefunc Alternatively, you can specify a repository that contains a custom template. Example command USD kn func create -r https://github.com/boson-project/templates/ -l node -t hello-world examplefunc Example output Created node function in /home/user/demo/examplefunc 6.2.3. Running a function locally You can use the kn func run command to run a function locally in the current directory or in the directory specified by the --path flag. If the function that you are running has never previously been built, or if the project files have been modified since the last time it was built, the kn func run command builds the function before running it by default. Example command to run a function in the current directory USD kn func run Example command to run a function in a directory specified as a path USD kn func run --path=<directory_path> You can also force a rebuild of an existing image before running the function, even if there have been no changes to the project files, by using the --build flag: Example run command using the build flag USD kn func run --build If you set the build flag as false, this disables building of the image, and runs the function using the previously built image: Example run command using the build flag USD kn func run --build=false You can use the help command to learn more about kn func run command options: Build help command USD kn func help run 6.2.4. Building functions Before you can run a function, you must build the function project. If you are using the kn func run command, the function is built automatically. However, you can use the kn func build command to build a function without running it, which can be useful for advanced users or debugging scenarios. The kn func build command creates an OCI container image that can be run locally on your computer or on an OpenShift Container Platform cluster. This command uses the function project name and the image registry name to construct a fully qualified image name for your function. 6.2.4.1. 
Image container types By default, kn func build creates a container image by using Red Hat Source-to-Image (S2I) technology. Example build command using Red Hat Source-to-Image (S2I) USD kn func build 6.2.4.2. Image registry types The OpenShift Container Registry is used by default as the image registry for storing function images. Example build command using OpenShift Container Registry USD kn func build Example output Building function image Function image has been built, image: registry.redhat.io/example/example-function:latest You can override using OpenShift Container Registry as the default image registry by using the --registry flag: Example build command overriding OpenShift Container Registry to use quay.io USD kn func build --registry quay.io/username Example output Building function image Function image has been built, image: quay.io/username/example-function:latest 6.2.4.3. Push flag You can add the --push flag to a kn func build command to automatically push the function image after it is successfully built: Example build command using OpenShift Container Registry USD kn func build --push 6.2.4.4. Help command You can use the help command to learn more about kn func build command options: Build help command USD kn func help build 6.2.5. Deploying functions You can deploy a function to your cluster as a Knative service by using the kn func deploy command. If the targeted function is already deployed, it is updated with a new container image that is pushed to a container image registry, and the Knative service is updated. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have installed the Knative ( kn ) CLI. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You must have already created and initialized the function that you want to deploy. Procedure Deploy a function: USD kn func deploy [-n <namespace> -p <path> -i <image>] Example output Function deployed at: http://func.example.com If no namespace is specified, the function is deployed in the current namespace. The function is deployed from the current directory, unless a path is specified. The Knative service name is derived from the project name, and cannot be changed using this command. 6.2.6. Invoking a deployed function with a test event You can use the kn func invoke CLI command to send a test request to invoke a function either locally or on your OpenShift Container Platform cluster. You can use this command to test that a function is working and able to receive events correctly. Invoking a function locally is useful for a quick test during function development. Invoking a function on the cluster is useful for testing that is closer to the production environment. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have installed the Knative ( kn ) CLI. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You must have already deployed the function that you want to invoke. Procedure Invoke a function: USD kn func invoke The kn func invoke command only works when there is either a local container image currently running, or when there is a function deployed in the cluster. 
The kn func invoke command executes on the local directory by default, and assumes that this directory is a function project. 6.2.7. Deleting a function You can delete a function by using the kn func delete command. This is useful when a function is no longer required, and can help to save resources on your cluster. Procedure Delete a function: USD kn func delete [<function_name> -n <namespace> -p <path>] If the name or path of the function to delete is not specified, the current directory is searched for a func.yaml file that is used to determine the function to delete. If the namespace is not specified, it defaults to the namespace value in the func.yaml file. 6.2.8. Additional resources Exposing a default registry manually Marketplace page for the Intellij Knative plugin Marketplace page for the Visual Studio Code Knative plugin 6.2.9. steps See Using functions with Knative Eventing 6.3. On-cluster function building and deploying Instead of building a function locally, you can build a function directly on the cluster. When using this workflow on a local development machine, you only need to work with the function source code. This is useful, for example, when you cannot install on-cluster function building tools, such as docker or podman. 6.3.1. Building and deploying functions on the cluster You can use the Knative ( kn ) CLI to initiate a function project build and then deploy the function directly on the cluster. To build a function project in this way, the source code for your function project must exist in a Git repository branch that is accessible to your cluster. Prerequisites Red Hat OpenShift Pipelines must be installed on your cluster. You have installed the OpenShift CLI ( oc ). You have installed the Knative ( kn ) CLI. Procedure In each namespace where you want to run Pipelines and deploy a function, you must create the following resources: Create the s2i Tekton task to be able to use Source-to-Image in the pipeline: USD oc apply -f https://raw.githubusercontent.com/openshift-knative/kn-plugin-func/serverless-1.28.0/pipelines/resources/tekton/task/func-s2i/0.1/func-s2i.yaml Create the kn func deploy Tekton task to be able to deploy the function in the pipeline: USD oc apply -f https://raw.githubusercontent.com/openshift-knative/kn-plugin-func/serverless-1.28.0/pipelines/resources/tekton/task/func-deploy/0.1/func-deploy.yaml Create a function: USD kn func create <function_name> -l <runtime> After you have created a new function project, you must add the project to a Git repository and ensure that the repository is available to the cluster. Information about this Git repository is used to update the func.yaml file in the step. Update the configuration in the func.yaml file for your function project to enable on-cluster builds for the Git repository: ... git: url: <git_repository_url> 1 revision: main 2 contextDir: <directory_path> 3 ... 1 Required. Specify the Git repository that contains your function's source code. 2 Optional. Specify the Git repository revision to be used. This can be a branch, tag, or commit. 3 Optional. Specify the function's directory path if the function is not located in the Git repository root folder. Implement the business logic of your function. Then, use Git to commit and push the changes. 
Deploy your function: USD kn func deploy --remote If you are not logged into the container registry referenced in your function configuration, you are prompted to provide credentials for the remote container registry that hosts the function image: Example output and prompts πŸ•• Creating Pipeline resources Please provide credentials for image registry used by Pipeline. ? Server: https://index.docker.io/v1/ ? Username: my-repo ? Password: ******** Function deployed at URL: http://test-function.default.svc.cluster.local To update your function, commit and push new changes by using Git, then run the kn func deploy --remote command again. 6.3.2. Specifying function revision When building and deploying a function on the cluster, you must specify the location of the function code by specifying the Git repository, branch, and subdirectory within the repository. You do not need to specify the branch if you use the main branch. Similarly, you do not need to specify the subdirectory if your function is at the root of the repository. You can specify these parameters in the func.yaml configuration file, or by using flags with the kn func deploy command. Prerequisites Red Hat OpenShift Pipelines must be installed on your cluster. You have installed the OpenShift ( oc ) CLI. You have installed the Knative ( kn ) CLI. Procedure Deploy your function: USD kn func deploy --remote \ 1 --git-url <repo-url> \ 2 [--git-branch <branch>] \ 3 [--git-dir <function-dir>] 4 1 With the --remote flag, the build runs remotely. 2 Substitute <repo-url> with the URL of the Git repository. 3 Substitute <branch> with the Git branch, tag, or commit. If using the latest commit on the main branch, you can skip this flag. 4 Substitute <function-dir> with the directory containing the function if it is different than the repository root directory. For example: USD kn func deploy --remote \ --git-url https://example.com/alice/myfunc.git \ --git-branch my-feature \ --git-dir functions/example-func/ 6.4. Developing Quarkus functions After you have created a Quarkus function project , you can modify the template files provided to add business logic to your function. This includes configuring function invocation and the returned headers and status codes. 6.4.1. Prerequisites Before you can develop functions, you must complete the setup steps in Setting up OpenShift Serverless Functions . 6.4.2. Quarkus function template structure When you create a Quarkus function by using the Knative ( kn ) CLI, the project directory looks similar to a typical Maven project. Additionally, the project contains the func.yaml file, which is used for configuring the function. Both http and event trigger functions have the same template structure: Template structure . β”œβ”€β”€ func.yaml 1 β”œβ”€β”€ mvnw β”œβ”€β”€ mvnw.cmd β”œβ”€β”€ pom.xml 2 β”œβ”€β”€ README.md └── src β”œβ”€β”€ main β”‚ β”œβ”€β”€ java β”‚ β”‚ └── functions β”‚ β”‚ β”œβ”€β”€ Function.java 3 β”‚ β”‚ β”œβ”€β”€ Input.java β”‚ β”‚ └── Output.java β”‚ └── resources β”‚ └── application.properties └── test └── java └── functions 4 β”œβ”€β”€ FunctionTest.java └── NativeFunctionIT.java 1 Used to determine the image name and registry. 2 The Project Object Model (POM) file contains project configuration, such as information about dependencies. You can add additional dependencies by modifying this file. Example of additional dependencies ... 
<dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.11</version> <scope>test</scope> </dependency> <dependency> <groupId>org.assertj</groupId> <artifactId>assertj-core</artifactId> <version>3.8.0</version> <scope>test</scope> </dependency> </dependencies> ... Dependencies are downloaded during the first compilation. 3 The function project must contain a Java method annotated with @Funq . You can place this method in the Function.java class. 4 Contains simple test cases that can be used to test your function locally. 6.4.3. About invoking Quarkus functions You can create a Quarkus project that responds to cloud events, or one that responds to simple HTTP requests. Cloud events in Knative are transported over HTTP as a POST request, so either function type can listen and respond to incoming HTTP requests. When an incoming request is received, Quarkus functions are invoked with an instance of a permitted type. Table 6.1. Function invocation options Invocation method Data type contained in the instance Example of data HTTP POST request JSON object in the body of the request { "customerId": "0123456", "productId": "6543210" } HTTP GET request Data in the query string ?customerId=0123456&productId=6543210 CloudEvent JSON object in the data property { "customerId": "0123456", "productId": "6543210" } The following example shows a function that receives and processes the customerId and productId purchase data that is listed in the table: Example Quarkus function public class Functions { @Funq public void processPurchase(Purchase purchase) { // process the purchase } } The corresponding Purchase JavaBean class that contains the purchase data looks as follows: Example class public class Purchase { private long customerId; private long productId; // getters and setters } 6.4.3.1. Invocation examples The following example code defines three functions named withBeans , withCloudEvent , and withBinary ; Example import io.quarkus.funqy.Funq; import io.quarkus.funqy.knative.events.CloudEvent; public class Input { private String message; // getters and setters } public class Output { private String message; // getters and setters } public class Functions { @Funq public Output withBeans(Input in) { // function body } @Funq public CloudEvent<Output> withCloudEvent(CloudEvent<Input> in) { // function body } @Funq public void withBinary(byte[] in) { // function body } } The withBeans function of the Functions class can be invoked by: An HTTP POST request with a JSON body: USD curl "http://localhost:8080/withBeans" -X POST \ -H "Content-Type: application/json" \ -d '{"message": "Hello there."}' An HTTP GET request with query parameters: USD curl "http://localhost:8080/withBeans?message=Hello%20there." -X GET A CloudEvent object in binary encoding: USD curl "http://localhost:8080/" -X POST \ -H "Content-Type: application/json" \ -H "Ce-SpecVersion: 1.0" \ -H "Ce-Type: withBeans" \ -H "Ce-Source: cURL" \ -H "Ce-Id: 42" \ -d '{"message": "Hello there."}' A CloudEvent object in structured encoding: USD curl http://localhost:8080/ \ -H "Content-Type: application/cloudevents+json" \ -d '{ "data": {"message":"Hello there."}, "datacontenttype": "application/json", "id": "42", "source": "curl", "type": "withBeans", "specversion": "1.0"}' The withCloudEvent function of the Functions class can be invoked by using a CloudEvent object, similarly to the withBeans function. However, unlike withBeans , withCloudEvent cannot be invoked with a plain HTTP request. 
The withBinary function of the Functions class can be invoked by: A CloudEvent object in binary encoding: A CloudEvent object in structured encoding: 6.4.4. CloudEvent attributes If you need to read or write the attributes of a CloudEvent, such as type or subject , you can use the CloudEvent<T> generic interface and the CloudEventBuilder builder. The <T> type parameter must be one of the permitted types. In the following example, CloudEventBuilder is used to return success or failure of processing the purchase: public class Functions { private boolean _processPurchase(Purchase purchase) { // do stuff } public CloudEvent<Void> processPurchase(CloudEvent<Purchase> purchaseEvent) { System.out.println("subject is: " + purchaseEvent.subject()); if (!_processPurchase(purchaseEvent.data())) { return CloudEventBuilder.create() .type("purchase.error") .build(); } return CloudEventBuilder.create() .type("purchase.success") .build(); } } 6.4.5. Quarkus function return values Functions can return an instance of any type from the list of permitted types. Alternatively, they can return the Uni<T> type, where the <T> type parameter can be of any type from the permitted types. The Uni<T> type is useful if a function calls asynchronous APIs, because the returned object is serialized in the same format as the received object. For example: If a function receives an HTTP request, then the returned object is sent in the body of an HTTP response. If a function receives a CloudEvent object in binary encoding, then the returned object is sent in the data property of a binary-encoded CloudEvent object. The following example shows a function that fetches a list of purchases: Example command public class Functions { @Funq public List<Purchase> getPurchasesByName(String name) { // logic to retrieve purchases } } Invoking this function through an HTTP request produces an HTTP response that contains a list of purchases in the body of the response. Invoking this function through an incoming CloudEvent object produces a CloudEvent response with a list of purchases in the data property. 6.4.5.1. Permitted types The input and output of a function can be any of the void , String , or byte[] types. Additionally, they can be primitive types and their wrappers, for example, int and Integer . They can also be the following complex objects: Javabeans, maps, lists, arrays, and the special CloudEvents<T> type. Maps, lists, arrays, the <T> type parameter of the CloudEvents<T> type, and attributes of Javabeans can only be of types listed here. Example public class Functions { public List<Integer> getIds(); public Purchase[] getPurchasesByName(String name); public String getNameById(int id); public Map<String,Integer> getNameIdMapping(); public void processImage(byte[] img); } 6.4.6. Testing Quarkus functions Quarkus functions can be tested locally on your computer. In the default project that is created when you create a function using kn func create , there is the src/test/ directory, which contains basic Maven tests. These tests can be extended as needed. Prerequisites You have created a Quarkus function. You have installed the Knative ( kn ) CLI. Procedure Navigate to the project folder for your function. Run the Maven tests: USD ./mvnw test 6.4.7. steps Build and deploy a function. 6.5. Developing Node.js functions After you have created a Node.js function project , you can modify the template files provided to add business logic to your function. 
This includes configuring function invocation and the returned headers and status codes. 6.5.1. Prerequisites Before you can develop functions, you must complete the steps in Setting up OpenShift Serverless Functions . 6.5.2. Node.js function template structure When you create a Node.js function using the Knative ( kn ) CLI, the project directory looks like a typical Node.js project. The only exception is the additional func.yaml file, which is used to configure the function. Both http and event trigger functions have the same template structure: Template structure . β”œβ”€β”€ func.yaml 1 β”œβ”€β”€ index.js 2 β”œβ”€β”€ package.json 3 β”œβ”€β”€ README.md └── test 4 β”œβ”€β”€ integration.js └── unit.js 1 The func.yaml configuration file is used to determine the image name and registry. 2 Your project must contain an index.js file which exports a single function. 3 You are not restricted to the dependencies provided in the template package.json file. You can add additional dependencies as you would in any other Node.js project. Example of adding npm dependencies npm install --save opossum When the project is built for deployment, these dependencies are included in the created runtime container image. 4 Integration and unit test scripts are provided as part of the function template. 6.5.3. About invoking Node.js functions When using the Knative ( kn ) CLI to create a function project, you can generate a project that responds to CloudEvents, or one that responds to simple HTTP requests. CloudEvents in Knative are transported over HTTP as a POST request, so both function types listen for and respond to incoming HTTP events. Node.js functions can be invoked with a simple HTTP request. When an incoming request is received, functions are invoked with a context object as the first parameter. 6.5.3.1. Node.js context objects Functions are invoked by providing a context object as the first parameter. This object provides access to the incoming HTTP request information. Example context object function handle(context, data) This information includes the HTTP request method, any query strings or headers sent with the request, the HTTP version, and the request body. Incoming requests that contain a CloudEvent attach the incoming instance of the CloudEvent to the context object so that it can be accessed by using context.cloudevent . 6.5.3.1.1. Context object methods The context object has a single method, cloudEventResponse() , that accepts a data value and returns a CloudEvent. In a Knative system, if a function deployed as a service is invoked by an event broker sending a CloudEvent, the broker examines the response. If the response is a CloudEvent, this event is handled by the broker. Example context object method // Expects to receive a CloudEvent with customer data function handle(context, customer) { // process the customer const processed = handle(customer); return context.cloudEventResponse(customer) .source('/handle') .type('fn.process.customer') .response(); } 6.5.3.1.2. CloudEvent data If the incoming request is a CloudEvent, any data associated with the CloudEvent is extracted from the event and provided as a second parameter. For example, if a CloudEvent is received that contains a JSON string in its data property that is similar to the following: { "customerId": "0123456", "productId": "6543210" } When invoked, the second parameter to the function, after the context object, will be a JavaScript object that has customerId and productId properties. 
Example signature function handle(context, data) The data parameter in this example is a JavaScript object that contains the customerId and productId properties. 6.5.4. Node.js function return values Functions can return any valid JavaScript type or can have no return value. When a function has no return value specified, and no failure is indicated, the caller receives a 204 No Content response. Functions can also return a CloudEvent or a Message object in order to push events into the Knative Eventing system. In this case, the developer is not required to understand or implement the CloudEvent messaging specification. Headers and other relevant information from the returned values are extracted and sent with the response. Example function handle(context, customer) { // process customer and return a new CloudEvent return new CloudEvent({ source: 'customer.processor', type: 'customer.processed' }) } 6.5.4.1. Returning headers You can set a response header by adding a headers property to the return object. These headers are extracted and sent with the response to the caller. Example response header function handle(context, customer) { // process customer and return custom headers // the response will be '204 No content' return { headers: { customerid: customer.id } }; } 6.5.4.2. Returning status codes You can set a status code that is returned to the caller by adding a statusCode property to the return object: Example status code function handle(context, customer) { // process customer if (customer.restricted) { return { statusCode: 451 } } } Status codes can also be set for errors that are created and thrown by the function: Example error status code function handle(context, customer) { // process customer if (customer.restricted) { const err = new Error('Unavailable for legal reasons'); err.statusCode = 451; throw err; } } 6.5.5. Testing Node.js functions Node.js functions can be tested locally on your computer. In the default project that is created when you create a function by using kn func create , there is a test folder that contains some simple unit and integration tests. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have installed the Knative ( kn ) CLI. You have created a function by using kn func create . Procedure Navigate to the test folder for your function. Run the tests: USD npm test 6.5.6. steps See the Node.js context object reference documentation. Build and deploy a function. 6.6. Developing TypeScript functions After you have created a TypeScript function project , you can modify the template files provided to add business logic to your function. This includes configuring function invocation and the returned headers and status codes. 6.6.1. Prerequisites Before you can develop functions, you must complete the steps in Setting up OpenShift Serverless Functions . 6.6.2. TypeScript function template structure When you create a TypeScript function using the Knative ( kn ) CLI, the project directory looks like a typical TypeScript project. The only exception is the additional func.yaml file, which is used for configuring the function. Both http and event trigger functions have the same template structure: Template structure . β”œβ”€β”€ func.yaml 1 β”œβ”€β”€ package.json 2 β”œβ”€β”€ package-lock.json β”œβ”€β”€ README.md β”œβ”€β”€ src β”‚ └── index.ts 3 β”œβ”€β”€ test 4 β”‚ β”œβ”€β”€ integration.ts β”‚ └── unit.ts └── tsconfig.json 1 The func.yaml configuration file is used to determine the image name and registry. 
2 You are not restricted to the dependencies provided in the template package.json file. You can add additional dependencies as you would in any other TypeScript project. Example of adding npm dependencies npm install --save opossum When the project is built for deployment, these dependencies are included in the created runtime container image. 3 Your project must contain an src/index.js file which exports a function named handle . 4 Integration and unit test scripts are provided as part of the function template. 6.6.3. About invoking TypeScript functions When using the Knative ( kn ) CLI to create a function project, you can generate a project that responds to CloudEvents or one that responds to simple HTTP requests. CloudEvents in Knative are transported over HTTP as a POST request, so both function types listen for and respond to incoming HTTP events. TypeScript functions can be invoked with a simple HTTP request. When an incoming request is received, functions are invoked with a context object as the first parameter. 6.6.3.1. TypeScript context objects To invoke a function, you provide a context object as the first parameter. Accessing properties of the context object can provide information about the incoming HTTP request. Example context object function handle(context:Context): string This information includes the HTTP request method, any query strings or headers sent with the request, the HTTP version, and the request body. Incoming requests that contain a CloudEvent attach the incoming instance of the CloudEvent to the context object so that it can be accessed by using context.cloudevent . 6.6.3.1.1. Context object methods The context object has a single method, cloudEventResponse() , that accepts a data value and returns a CloudEvent. In a Knative system, if a function deployed as a service is invoked by an event broker sending a CloudEvent, the broker examines the response. If the response is a CloudEvent, this event is handled by the broker. Example context object method // Expects to receive a CloudEvent with customer data export function handle(context: Context, cloudevent?: CloudEvent): CloudEvent { // process the customer const customer = cloudevent.data; const processed = processCustomer(customer); return context.cloudEventResponse(customer) .source('/customer/process') .type('customer.processed') .response(); } 6.6.3.1.2. Context types The TypeScript type definition files export the following types for use in your functions. Exported type definitions // Invokable is the expeted Function signature for user functions export interface Invokable { (context: Context, cloudevent?: CloudEvent): any } // Logger can be used for structural logging to the console export interface Logger { debug: (msg: any) => void, info: (msg: any) => void, warn: (msg: any) => void, error: (msg: any) => void, fatal: (msg: any) => void, trace: (msg: any) => void, } // Context represents the function invocation context, and provides // access to the event itself as well as raw HTTP objects. 
export interface Context { log: Logger; req: IncomingMessage; query?: Record<string, any>; body?: Record<string, any>|string; method: string; headers: IncomingHttpHeaders; httpVersion: string; httpVersionMajor: number; httpVersionMinor: number; cloudevent: CloudEvent; cloudEventResponse(data: string|object): CloudEventResponse; } // CloudEventResponse is a convenience class used to create // CloudEvents on function returns export interface CloudEventResponse { id(id: string): CloudEventResponse; source(source: string): CloudEventResponse; type(type: string): CloudEventResponse; version(version: string): CloudEventResponse; response(): CloudEvent; } 6.6.3.1.3. CloudEvent data If the incoming request is a CloudEvent, any data associated with the CloudEvent is extracted from the event and provided as a second parameter. For example, if a CloudEvent is received that contains a JSON string in its data property that is similar to the following: { "customerId": "0123456", "productId": "6543210" } When invoked, the second parameter to the function, after the context object, will be a JavaScript object that has customerId and productId properties. Example signature function handle(context: Context, cloudevent?: CloudEvent): CloudEvent The cloudevent parameter in this example is a JavaScript object that contains the customerId and productId properties. 6.6.4. TypeScript function return values Functions can return any valid JavaScript type or can have no return value. When a function has no return value specified, and no failure is indicated, the caller receives a 204 No Content response. Functions can also return a CloudEvent or a Message object in order to push events into the Knative Eventing system. In this case, the developer is not required to understand or implement the CloudEvent messaging specification. Headers and other relevant information from the returned values are extracted and sent with the response. Example export const handle: Invokable = function ( context: Context, cloudevent?: CloudEvent ): Message { // process customer and return a new CloudEvent const customer = cloudevent.data; return HTTP.binary( new CloudEvent({ source: 'customer.processor', type: 'customer.processed' }) ); }; 6.6.4.1. Returning headers You can set a response header by adding a headers property to the return object. These headers are extracted and sent with the response to the caller. Example response header export function handle(context: Context, cloudevent?: CloudEvent): Record<string, any> { // process customer and return custom headers const customer = cloudevent.data as Record<string, any>; return { headers: { 'customer-id': customer.id } }; } 6.6.4.2. Returning status codes You can set a status code that is returned to the caller by adding a statusCode property to the return object: Example status code export function handle(context: Context, cloudevent?: CloudEvent): Record<string, any> { // process customer const customer = cloudevent.data as Record<string, any>; if (customer.restricted) { return { statusCode: 451 } } // business logic, then return { statusCode: 240 } } Status codes can also be set for errors that are created and thrown by the function: Example error status code export function handle(context: Context, cloudevent?: CloudEvent): Record<string, string> { // process customer const customer = cloudevent.data as Record<string, any>; if (customer.restricted) { const err = new Error('Unavailable for legal reasons'); err.statusCode = 451; throw err; } } 6.6.5. 
Testing TypeScript functions TypeScript functions can be tested locally on your computer. In the default project that is created when you create a function using kn func create , there is a test folder that contains some simple unit and integration tests. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have installed the Knative ( kn ) CLI. You have created a function by using kn func create . Procedure If you have not previously run tests, install the dependencies first: USD npm install Navigate to the test folder for your function. Run the tests: USD npm test 6.6.6. steps See the TypeScript context object reference documentation. Build and deploy a function. See the Pino API documentation for more information about logging with functions. 6.7. Developing Python functions Important OpenShift Serverless Functions with Python is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . After you have created a Python function project , you can modify the template files provided to add business logic to your function. This includes configuring function invocation and the returned headers and status codes. 6.7.1. Prerequisites Before you can develop functions, you must complete the steps in Setting up OpenShift Serverless Functions . 6.7.2. Python function template structure When you create a Python function by using the Knative ( kn ) CLI, the project directory looks similar to a typical Python project. Python functions have very few restrictions. The only requirements are that your project contains a func.py file that contains a main() function, and a func.yaml configuration file. Developers are not restricted to the dependencies provided in the template requirements.txt file. Additional dependencies can be added as they would be in any other Python project. When the project is built for deployment, these dependencies will be included in the created runtime container image. Both http and event trigger functions have the same template structure: Template structure fn β”œβ”€β”€ func.py 1 β”œβ”€β”€ func.yaml 2 β”œβ”€β”€ requirements.txt 3 └── test_func.py 4 1 Contains a main() function. 2 Used to determine the image name and registry. 3 Additional dependencies can be added to the requirements.txt file as they are in any other Python project. 4 Contains a simple unit test that can be used to test your function locally. 6.7.3. About invoking Python functions Python functions can be invoked with a simple HTTP request. When an incoming request is received, functions are invoked with a context object as the first parameter. The context object is a Python class with two attributes: The request attribute is always present, and contains the Flask request object. The second attribute, cloud_event , is populated if the incoming request is a CloudEvent object. Developers can access any CloudEvent data from the context object. Example context object def main(context: Context): """ The context parameter contains the Flask request object and any CloudEvent received with the request. 
""" print(f"Method: {context.request.method}") print(f"Event data {context.cloud_event.data}") # ... business logic here 6.7.4. Python function return values Functions can return any value supported by Flask . This is because the invocation framework proxies these values directly to the Flask server. Example def main(context: Context): body = { "message": "Howdy!" } headers = { "content-type": "application/json" } return body, 200, headers Functions can set both headers and response codes as secondary and tertiary response values from function invocation. 6.7.4.1. Returning CloudEvents Developers can use the @event decorator to tell the invoker that the function return value must be converted to a CloudEvent before sending the response. Example @event("event_source"="/my/function", "event_type"="my.type") def main(context): # business logic here data = do_something() # more data processing return data This example sends a CloudEvent as the response value, with a type of "my.type" and a source of "/my/function" . The CloudEvent data property is set to the returned data variable. The event_source and event_type decorator attributes are both optional. 6.7.5. Testing Python functions You can test Python functions locally on your computer. The default project contains a test_func.py file, which provides a simple unit test for functions. Note The default test framework for Python functions is unittest . You can use a different test framework if you prefer. Prerequisites To run Python functions tests locally, you must install the required dependencies: USD pip install -r requirements.txt Procedure Navigate to the folder for your function that contains the test_func.py file. Run the tests: USD python3 test_func.py 6.7.6. steps Build and deploy a function. 6.8. Using functions with Knative Eventing Functions are deployed as Knative services on an OpenShift Container Platform cluster. You can connect functions to Knative Eventing components so that they can receive incoming events. 6.8.1. Connect an event source to a function using the Developer perspective Functions are deployed as Knative services on an OpenShift Container Platform cluster. When you create an event source by using the OpenShift Container Platform web console, you can specify a deployed function that events are sent to from that source. Prerequisites The OpenShift Serverless Operator, Knative Serving, and Knative Eventing are installed on your OpenShift Container Platform cluster. You have logged in to the web console and are in the Developer perspective. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have created and deployed a function. Procedure Create an event source of any type, by navigating to +Add Event Source and selecting the event source type that you want to create. In the Sink section of the Create Event Source form view, select your function in the Resource list. Click Create . Verification You can verify that the event source was created and is connected to the function by viewing the Topology page. In the Developer perspective, navigate to Topology . View the event source and click the connected function to see the function details in the right panel. 6.9. Function project configuration in func.yaml The func.yaml file contains the configuration for your function project. Values specified in func.yaml are used when you execute a kn func command. 
For example, when you run the kn func build command, the value in the build field is used. In some cases, you can override these values with command line flags or environment variables. 6.9.1. Configurable fields in func.yaml Many of the fields in func.yaml are generated automatically when you create, build, and deploy your function. However, there are also fields that you can modify manually to change things, such as the function name or the image name. 6.9.1.1. buildEnvs The buildEnvs field enables you to set environment variables to be available to the environment that builds your function. Unlike variables set using envs , a variable set using buildEnv is not available during function runtime. You can set a buildEnv variable directly from a value. In the following example, the buildEnv variable named EXAMPLE1 is directly assigned the value one : buildEnvs: - name: EXAMPLE1 value: one You can also set a buildEnv variable from a local environment variable. In the following example, the buildEnv variable named EXAMPLE2 is assigned the value of the LOCAL_ENV_VAR local environment variable: buildEnvs: - name: EXAMPLE2 value: '{{ env:LOCAL_ENV_VAR }}' 6.9.1.2. envs The envs field enables you to set environment variables to be available to your function at runtime. You can set an environment variable in several different ways: Directly from a value. From a value assigned to a local environment variable. See the section "Referencing local environment variables from func.yaml fields" for more information. From a key-value pair stored in a secret or config map. You can also import all key-value pairs stored in a secret or config map, with keys used as names of the created environment variables. This example demonstrates the different ways to set an environment variable: name: test namespace: "" runtime: go ... envs: - name: EXAMPLE1 1 value: value - name: EXAMPLE2 2 value: '{{ env:LOCAL_ENV_VALUE }}' - name: EXAMPLE3 3 value: '{{ secret:mysecret:key }}' - name: EXAMPLE4 4 value: '{{ configMap:myconfigmap:key }}' - value: '{{ secret:mysecret2 }}' 5 - value: '{{ configMap:myconfigmap2 }}' 6 1 An environment variable set directly from a value. 2 An environment variable set from a value assigned to a local environment variable. 3 An environment variable assigned from a key-value pair stored in a secret. 4 An environment variable assigned from a key-value pair stored in a config map. 5 A set of environment variables imported from key-value pairs of a secret. 6 A set of environment variables imported from key-value pairs of a config map. 6.9.1.3. builder The builder field specifies the strategy used by the function to build the image. It accepts values of pack or s2i . 6.9.1.4. build The build field indicates how the function should be built. The value local indicates that the function is built locally on your machine. The value git indicates that the function is built on a cluster by using the values specified in the git field. 6.9.1.5. volumes The volumes field enables you to mount secrets and config maps as a volume accessible to the function at the specified path, as shown in the following example: name: test namespace: "" runtime: go ... volumes: - secret: mysecret 1 path: /workspace/secret - configMap: myconfigmap 2 path: /workspace/configmap 1 The mysecret secret is mounted as a volume residing at /workspace/secret . 2 The myconfigmap config map is mounted as a volume residing at /workspace/configmap . 6.9.1.6.
options The options field enables you to modify Knative Service properties for the deployed function, such as autoscaling. If these options are not set, the default ones are used. These options are available: scale min : The minimum number of replicas. Must be a non-negative integer. The default is 0. max : The maximum number of replicas. Must be a non-negative integer. The default is 0, which means no limit. metric : Defines which metric type is watched by the Autoscaler. It can be set to concurrency , which is the default, or rps . target : Recommendation for when to scale up based on the number of concurrently incoming requests. The target option can be a float value greater than 0.01. The default is 100, unless the options.resources.limits.concurrency is set, in which case target defaults to its value. utilization : Percentage of concurrent requests utilization allowed before scaling up. It can be a float value between 1 and 100. The default is 70. resources requests cpu : A CPU resource request for the container with deployed function. memory : A memory resource request for the container with deployed function. limits cpu : A CPU resource limit for the container with deployed function. memory : A memory resource limit for the container with deployed function. concurrency : Hard Limit of concurrent requests to be processed by a single replica. It can be integer value greater than or equal to 0, default is 0 - meaning no limit. This is an example configuration of the scale options: name: test namespace: "" runtime: go ... options: scale: min: 0 max: 10 metric: concurrency target: 75 utilization: 75 resources: requests: cpu: 100m memory: 128Mi limits: cpu: 1000m memory: 256Mi concurrency: 100 6.9.1.7. image The image field sets the image name for your function after it has been built. You can modify this field. If you do, the time you run kn func build or kn func deploy , the function image will be created with the new name. 6.9.1.8. imageDigest The imageDigest field contains the SHA256 hash of the image manifest when the function is deployed. Do not modify this value. 6.9.1.9. labels The labels field enables you to set labels on a deployed function. You can set a label directly from a value. In the following example, the label with the role key is directly assigned the value of backend : labels: - key: role value: backend You can also set a label from a local environment variable. In the following example, the label with the author key is assigned the value of the USER local environment variable: labels: - key: author value: '{{ env:USER }}' 6.9.1.10. name The name field defines the name of your function. This value is used as the name of your Knative service when it is deployed. You can change this field to rename the function on subsequent deployments. 6.9.1.11. namespace The namespace field specifies the namespace in which your function is deployed. 6.9.1.12. runtime The runtime field specifies the language runtime for your function, for example, python . 6.9.2. Referencing local environment variables from func.yaml fields If you want to avoid storing sensitive information such as an API key in the function configuration, you can add a reference to an environment variable available in the local environment. You can do this by modifying the envs field in the func.yaml file. Prerequisites You need to have the function project created. The local environment needs to contain the variable that you want to reference. 
Procedure To refer to a local environment variable, use the following syntax: Substitute ENV_VAR with the name of the variable in the local environment that you want to use. For example, you might have the API_KEY variable available in the local environment. You can assign its value to the MY_API_KEY variable, which you can then directly use within your function: Example function name: test namespace: "" runtime: go ... envs: - name: MY_API_KEY value: '{{ env:API_KEY }}' ... 6.9.3. Additional resources Getting started with functions Accessing secrets and config maps from Serverless functions Knative documentation on Autoscaling Kubernetes documentation on managing resources for containers Knative documentation on configuring concurrency 6.10. Accessing secrets and config maps from functions After your functions have been deployed to the cluster, they can access data stored in secrets and config maps. This data can be mounted as volumes, or assigned to environment variables. You can configure this access interactively by using the Knative CLI, or by manually by editing the function configuration YAML file. Important To access secrets and config maps, the function must be deployed on the cluster. This functionality is not available to a function running locally. If a secret or config map value cannot be accessed, the deployment fails with an error message specifying the inaccessible values. 6.10.1. Modifying function access to secrets and config maps interactively You can manage the secrets and config maps accessed by your function by using the kn func config interactive utility. The available operations include listing, adding, and removing values stored in config maps and secrets as environment variables, as well as listing, adding, and removing volumes. This functionality enables you to manage what data stored on the cluster is accessible by your function. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have installed the Knative ( kn ) CLI. You have created a function. Procedure Run the following command in the function project directory: USD kn func config Alternatively, you can specify the function project directory using the --path or -p option. Use the interactive interface to perform the necessary operation. For example, using the utility to list configured volumes produces an output similar to this: USD kn func config ? What do you want to configure? Volumes ? What operation do you want to perform? List Configured Volumes mounts: - Secret "mysecret" mounted at path: "/workspace/secret" - Secret "mysecret2" mounted at path: "/workspace/secret2" This scheme shows all operations available in the interactive utility and how to navigate to them: Optional. Deploy the function to make the changes take effect: USD kn func deploy -p test 6.10.2. Modifying function access to secrets and config maps interactively by using specialized commands Every time you run the kn func config utility, you need to navigate the entire dialogue to select the operation you need, as shown in the section. 
To save steps, you can directly execute a specific operation by running a more specific form of the kn func config command: To list configured environment variables: USD kn func config envs [-p <function-project-path>] To add environment variables to the function configuration: USD kn func config envs add [-p <function-project-path>] To remove environment variables from the function configuration: USD kn func config envs remove [-p <function-project-path>] To list configured volumes: USD kn func config volumes [-p <function-project-path>] To add a volume to the function configuration: USD kn func config volumes add [-p <function-project-path>] To remove a volume from the function configuration: USD kn func config volumes remove [-p <function-project-path>] 6.10.3. Adding function access to secrets and config maps manually You can manually add configuration for accessing secrets and config maps to your function. This might be preferable to using the kn func config interactive utility and commands, for example when you have an existing configuration snippet. 6.10.3.1. Mounting a secret as a volume You can mount a secret as a volume. Once a secret is mounted, you can access it from the function as a regular file. This enables you to store data needed by the function on the cluster, for example, a list of URIs that need to be accessed by the function. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have installed the Knative ( kn ) CLI. You have created a function. Procedure Open the func.yaml file for your function. For each secret you want to mount as a volume, add the following YAML to the volumes section: name: test namespace: "" runtime: go ... volumes: - secret: mysecret path: /workspace/secret Substitute mysecret with the name of the target secret. Substitute /workspace/secret with the path where you want to mount the secret. For example, to mount the addresses secret, use the following YAML: name: test namespace: "" runtime: go ... volumes: - secret: addresses path: /workspace/secret-addresses Save the configuration. 6.10.3.2. Mounting a config map as a volume You can mount a config map as a volume. Once a config map is mounted, you can access it from the function as a regular file. This enables you to store data needed by the function on the cluster, for example, a list of URIs that need to be accessed by the function. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have installed the Knative ( kn ) CLI. You have created a function. Procedure Open the func.yaml file for your function. For each config map you want to mount as a volume, add the following YAML to the volumes section: name: test namespace: "" runtime: go ... volumes: - configMap: myconfigmap path: /workspace/configmap Substitute myconfigmap with the name of the target config map. Substitute /workspace/configmap with the path where you want to mount the config map. For example, to mount the addresses config map, use the following YAML: name: test namespace: "" runtime: go ... volumes: - configMap: addresses path: /workspace/configmap-addresses Save the configuration. 6.10.3.3. Setting environment variable from a key value defined in a secret You can set an environment variable from a key value defined as a secret. A value previously stored in a secret can then be accessed as an environment variable by the function at runtime.
This can be useful for getting access to a value stored in a secret, such as the ID of a user. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have installed the Knative ( kn ) CLI. You have created a function. Procedure Open the func.yaml file for your function. For each value from a secret key-value pair that you want to assign to an environment variable, add the following YAML to the envs section: name: test namespace: "" runtime: go ... envs: - name: EXAMPLE value: '{{ secret:mysecret:key }}' Substitute EXAMPLE with the name of the environment variable. Substitute mysecret with the name of the target secret. Substitute key with the key mapped to the target value. For example, to access the user ID that is stored in userdetailssecret , use the following YAML: name: test namespace: "" runtime: go ... envs: - value: '{{ secret:userdetailssecret:userid }}' Save the configuration. 6.10.3.4. Setting environment variable from a key value defined in a config map You can set an environment variable from a key value defined as a config map. A value previously stored in a config map can then be accessed as an environment variable by the function at runtime. This can be useful for getting access to a value stored in a config map, such as the ID of a user. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have installed the Knative ( kn ) CLI. You have created a function. Procedure Open the func.yaml file for your function. For each value from a config map key-value pair that you want to assign to an environment variable, add the following YAML to the envs section: name: test namespace: "" runtime: go ... envs: - name: EXAMPLE value: '{{ configMap:myconfigmap:key }}' Substitute EXAMPLE with the name of the environment variable. Substitute myconfigmap with the name of the target config map. Substitute key with the key mapped to the target value. For example, to access the user ID that is stored in userdetailsmap , use the following YAML: name: test namespace: "" runtime: go ... envs: - value: '{{ configMap:userdetailsmap:userid }}' Save the configuration. 6.10.3.5. Setting environment variables from all values defined in a secret You can set an environment variable from all values defined in a secret. Values previously stored in a secret can then be accessed as environment variables by the function at runtime. This can be useful for simultaneously getting access to a collection of values stored in a secret, for example, a set of data pertaining to a user. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have installed the Knative ( kn ) CLI. You have created a function. Procedure Open the func.yaml file for your function. For every secret for which you want to import all key-value pairs as environment variables, add the following YAML to the envs section: name: test namespace: "" runtime: go ... envs: - value: '{{ secret:mysecret }}' 1 1 Substitute mysecret with the name of the target secret. For example, to access all user data that is stored in userdetailssecret , use the following YAML: name: test namespace: "" runtime: go ... envs: - value: '{{ secret:userdetailssecret }}' Save the configuration. 6.10.3.6. Setting environment variables from all values defined in a config map You can set an environment variable from all values defined in a config map.
Values previously stored in a config map can then be accessed as environment variables by the function at runtime. This can be useful for simultaneously getting access to a collection of values stored in a config map, for example, a set of data pertaining to a user. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have installed the Knative ( kn ) CLI. You have created a function. Procedure Open the func.yaml file for your function. For every config map for which you want to import all key-value pairs as environment variables, add the following YAML to the envs section: name: test namespace: "" runtime: go ... envs: - value: '{{ configMap:myconfigmap }}' 1 1 Substitute myconfigmap with the name of the target config map. For example, to access all user data that is stored in userdetailsmap , use the following YAML: name: test namespace: "" runtime: go ... envs: - value: '{{ configMap:userdetailsmap }}' Save the file. 6.11. Adding annotations to functions You can add Kubernetes annotations to a deployed Serverless function. Annotations enable you to attach arbitrary metadata to a function, for example, a note about the function's purpose. Annotations are added to the annotations section of the func.yaml configuration file. There are two limitations of the function annotation feature: After a function annotation propagates to the corresponding Knative service on the cluster, it cannot be removed from the service by deleting it from the func.yaml file. You must remove the annotation from the Knative service by modifying the YAML file of the service directly, or by using the OpenShift Container Platform web console. You cannot set annotations that are set by Knative, for example, the autoscaling annotations. 6.11.1. Adding annotations to a function You can add annotations to a function. Similar to a label, an annotation is defined as a key-value map. Annotations are useful, for example, for providing metadata about a function, such as the function's author. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have installed the Knative ( kn ) CLI. You have created a function. Procedure Open the func.yaml file for your function. For every annotation that you want to add, add the following YAML to the annotations section: name: test namespace: "" runtime: go ... annotations: <annotation_name>: "<annotation_value>" 1 1 Substitute <annotation_name>: "<annotation_value>" with your annotation. For example, to indicate that a function was authored by Alice, you might include the following annotation: name: test namespace: "" runtime: go ... annotations: author: "[email protected]" Save the configuration. The time you deploy your function to the cluster, the annotations are added to the corresponding Knative service. 6.12. Functions development reference guide OpenShift Serverless Functions provides templates that can be used to create basic functions. A template initiates the function project boilerplate and prepares it for use with the kn func tool. Each function template is tailored for a specific runtime and follows its conventions. With a template, you can initiate your function project automatically. Templates for the following runtimes are available: Node.js Quarkus TypeScript 6.12.1. Node.js context object reference The context object has several properties that can be accessed by the function developer. 
Accessing these properties can provide information about HTTP requests and write output to the cluster logs. 6.12.1.1. log Provides a logging object that can be used to write output to the cluster logs. The log adheres to the Pino logging API . Example log function handle(context) { context.log.info("Processing customer"); } You can access the function by using the kn func invoke command: Example command USD kn func invoke --target 'http://example.function.com' Example output {"level":30,"time":1604511655265,"pid":3430203,"hostname":"localhost.localdomain","reqId":1,"msg":"Processing customer"} You can change the log level to one of fatal , error , warn , info , debug , trace , or silent . To do that, change the value of logLevel by assigning one of these values to the environment variable FUNC_LOG_LEVEL using the config command. 6.12.1.2. query Returns the query string for the request, if any, as key-value pairs. These attributes are also found on the context object itself. Example query function handle(context) { // Log the 'name' query parameter context.log.info(context.query.name); // Query parameters are also attached to the context context.log.info(context.name); } You can access the function by using the kn func invoke command: Example command USD kn func invoke --target 'http://example.com?name=tiger' Example output {"level":30,"time":1604511655265,"pid":3430203,"hostname":"localhost.localdomain","reqId":1,"msg":"tiger"} 6.12.1.3. body Returns the request body if any. If the request body contains JSON code, this will be parsed so that the attributes are directly available. Example body function handle(context) { // log the incoming request body's 'hello' parameter context.log.info(context.body.hello); } You can access the function by using the curl command to invoke it: Example command USD kn func invoke -d '{"Hello": "world"}' Example output {"level":30,"time":1604511655265,"pid":3430203,"hostname":"localhost.localdomain","reqId":1,"msg":"world"} 6.12.1.4. headers Returns the HTTP request headers as an object. Example header function handle(context) { context.log.info(context.headers["custom-header"]); } You can access the function by using the kn func invoke command: Example command USD kn func invoke --target 'http://example.function.com' Example output {"level":30,"time":1604511655265,"pid":3430203,"hostname":"localhost.localdomain","reqId":1,"msg":"some-value"} 6.12.1.5. HTTP requests method Returns the HTTP request method as a string. httpVersion Returns the HTTP version as a string. httpVersionMajor Returns the HTTP major version number as a string. httpVersionMinor Returns the HTTP minor version number as a string. 6.12.2. TypeScript context object reference The context object has several properties that can be accessed by the function developer. Accessing these properties can provide information about incoming HTTP requests and write output to the cluster logs. 6.12.2.1. log Provides a logging object that can be used to write output to the cluster logs. The log adheres to the Pino logging API . 
Example log export function handle(context: Context): string { // log the incoming request body's 'hello' parameter if (context.body) { context.log.info((context.body as Record<string, string>).hello); } else { context.log.info('No data received'); } return 'OK'; } You can access the function by using the kn func invoke command: Example command USD kn func invoke --target 'http://example.function.com' Example output {"level":30,"time":1604511655265,"pid":3430203,"hostname":"localhost.localdomain","reqId":1,"msg":"Processing customer"} You can change the log level to one of fatal , error , warn , info , debug , trace , or silent . To do that, change the value of logLevel by assigning one of these values to the environment variable FUNC_LOG_LEVEL using the config command. 6.12.2.2. query Returns the query string for the request, if any, as key-value pairs. These attributes are also found on the context object itself. Example query export function handle(context: Context): string { // log the 'name' query parameter if (context.query) { context.log.info((context.query as Record<string, string>).name); } else { context.log.info('No data received'); } return 'OK'; } You can access the function by using the kn func invoke command: Example command USD kn func invoke --target 'http://example.function.com' --data '{"name": "tiger"}' Example output {"level":30,"time":1604511655265,"pid":3430203,"hostname":"localhost.localdomain","reqId":1,"msg":"tiger"} {"level":30,"time":1604511655265,"pid":3430203,"hostname":"localhost.localdomain","reqId":1,"msg":"tiger"} 6.12.2.3. body Returns the request body, if any. If the request body contains JSON code, this will be parsed so that the attributes are directly available. Example body export function handle(context: Context): string { // log the incoming request body's 'hello' parameter if (context.body) { context.log.info((context.body as Record<string, string>).hello); } else { context.log.info('No data received'); } return 'OK'; } You can access the function by using the kn func invoke command: Example command USD kn func invoke --target 'http://example.function.com' --data '{"hello": "world"}' Example output {"level":30,"time":1604511655265,"pid":3430203,"hostname":"localhost.localdomain","reqId":1,"msg":"world"} 6.12.2.4. headers Returns the HTTP request headers as an object. Example header export function handle(context: Context): string { // log the incoming request body's 'hello' parameter if (context.body) { context.log.info((context.headers as Record<string, string>)['custom-header']); } else { context.log.info('No data received'); } return 'OK'; } You can access the function by using the curl command to invoke it: Example command USD curl -H'x-custom-header: some-value'' http://example.function.com Example output {"level":30,"time":1604511655265,"pid":3430203,"hostname":"localhost.localdomain","reqId":1,"msg":"some-value"} 6.12.2.5. HTTP requests method Returns the HTTP request method as a string. httpVersion Returns the HTTP version as a string. httpVersionMajor Returns the HTTP major version number as a string. httpVersionMinor Returns the HTTP minor version number as a string.
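The HTTP request properties can be read from the context object in the same way as the query , body , and headers properties. The following handler is a minimal sketch that assumes the same Context type used in the examples above; it logs the request method and HTTP version of the incoming request and returns a plain response. Example HTTP request properties export function handle(context: Context): string {
  // Log the HTTP request method and version of the incoming request
  context.log.info(`Method: ${context.method}`);
  context.log.info(`HTTP version: ${context.httpVersion}`);
  context.log.info(`Major: ${context.httpVersionMajor}, minor: ${context.httpVersionMinor}`);
  return 'OK';
}
You can invoke the function by using the kn func invoke command with the --target option and inspect the cluster logs for the logged values.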
[ "systemctl start --user podman.socket", "export DOCKER_HOST=\"unix://USD{XDG_RUNTIME_DIR}/podman/podman.sock\"", "kn func build -v", "podman machine init --memory=8192 --cpus=2 --disk-size=20", "podman machine start Starting machine \"podman-machine-default\" Waiting for VM Mounting volume... /Users/myuser:/Users/user [...truncated output...] You can still connect Docker API clients by setting DOCKER_HOST using the following command in your terminal session: export DOCKER_HOST='unix:///Users/myuser/.local/share/containers/podman/machine/podman-machine-default/podman.sock' Machine \"podman-machine-default\" started successfully", "export DOCKER_HOST='unix:///Users/myuser/.local/share/containers/podman/machine/podman-machine-default/podman.sock'", "kn func build -v", "kn func create -r <repository> -l <runtime> -t <template> <path>", "kn func create -l typescript -t cloudevents examplefunc", "Created typescript function in /home/user/demo/examplefunc", "kn func create -r https://github.com/boson-project/templates/ -l node -t hello-world examplefunc", "Created node function in /home/user/demo/examplefunc", "kn func run", "kn func run --path=<directory_path>", "kn func run --build", "kn func run --build=false", "kn func help run", "kn func build", "kn func build", "Building function image Function image has been built, image: registry.redhat.io/example/example-function:latest", "kn func build --registry quay.io/username", "Building function image Function image has been built, image: quay.io/username/example-function:latest", "kn func build --push", "kn func help build", "kn func deploy [-n <namespace> -p <path> -i <image>]", "Function deployed at: http://func.example.com", "kn func invoke", "kn func delete [<function_name> -n <namespace> -p <path>]", "oc apply -f https://raw.githubusercontent.com/openshift-knative/kn-plugin-func/serverless-1.28.0/pipelines/resources/tekton/task/func-s2i/0.1/func-s2i.yaml", "oc apply -f https://raw.githubusercontent.com/openshift-knative/kn-plugin-func/serverless-1.28.0/pipelines/resources/tekton/task/func-deploy/0.1/func-deploy.yaml", "kn func create <function_name> -l <runtime>", "git: url: <git_repository_url> 1 revision: main 2 contextDir: <directory_path> 3", "kn func deploy --remote", "πŸ•• Creating Pipeline resources Please provide credentials for image registry used by Pipeline. ? Server: https://index.docker.io/v1/ ? Username: my-repo ? Password: ******** Function deployed at URL: http://test-function.default.svc.cluster.local", "kn func deploy --remote \\ 1 --git-url <repo-url> \\ 2 [--git-branch <branch>] \\ 3 [--git-dir <function-dir>] 4", "kn func deploy --remote --git-url https://example.com/alice/myfunc.git --git-branch my-feature --git-dir functions/example-func/", ". 
β”œβ”€β”€ func.yaml 1 β”œβ”€β”€ mvnw β”œβ”€β”€ mvnw.cmd β”œβ”€β”€ pom.xml 2 β”œβ”€β”€ README.md └── src β”œβ”€β”€ main β”‚ β”œβ”€β”€ java β”‚ β”‚ └── functions β”‚ β”‚ β”œβ”€β”€ Function.java 3 β”‚ β”‚ β”œβ”€β”€ Input.java β”‚ β”‚ └── Output.java β”‚ └── resources β”‚ └── application.properties └── test └── java └── functions 4 β”œβ”€β”€ FunctionTest.java └── NativeFunctionIT.java", "<dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.11</version> <scope>test</scope> </dependency> <dependency> <groupId>org.assertj</groupId> <artifactId>assertj-core</artifactId> <version>3.8.0</version> <scope>test</scope> </dependency> </dependencies>", "public class Functions { @Funq public void processPurchase(Purchase purchase) { // process the purchase } }", "public class Purchase { private long customerId; private long productId; // getters and setters }", "import io.quarkus.funqy.Funq; import io.quarkus.funqy.knative.events.CloudEvent; public class Input { private String message; // getters and setters } public class Output { private String message; // getters and setters } public class Functions { @Funq public Output withBeans(Input in) { // function body } @Funq public CloudEvent<Output> withCloudEvent(CloudEvent<Input> in) { // function body } @Funq public void withBinary(byte[] in) { // function body } }", "curl \"http://localhost:8080/withBeans\" -X POST -H \"Content-Type: application/json\" -d '{\"message\": \"Hello there.\"}'", "curl \"http://localhost:8080/withBeans?message=Hello%20there.\" -X GET", "curl \"http://localhost:8080/\" -X POST -H \"Content-Type: application/json\" -H \"Ce-SpecVersion: 1.0\" -H \"Ce-Type: withBeans\" -H \"Ce-Source: cURL\" -H \"Ce-Id: 42\" -d '{\"message\": \"Hello there.\"}'", "curl http://localhost:8080/ -H \"Content-Type: application/cloudevents+json\" -d '{ \"data\": {\"message\":\"Hello there.\"}, \"datacontenttype\": \"application/json\", \"id\": \"42\", \"source\": \"curl\", \"type\": \"withBeans\", \"specversion\": \"1.0\"}'", "curl \"http://localhost:8080/\" -X POST -H \"Content-Type: application/octet-stream\" -H \"Ce-SpecVersion: 1.0\" -H \"Ce-Type: withBinary\" -H \"Ce-Source: cURL\" -H \"Ce-Id: 42\" --data-binary '@img.jpg'", "curl http://localhost:8080/ -H \"Content-Type: application/cloudevents+json\" -d \"{ \\\"data_base64\\\": \\\"USD(base64 --wrap=0 img.jpg)\\\", \\\"datacontenttype\\\": \\\"application/octet-stream\\\", \\\"id\\\": \\\"42\\\", \\\"source\\\": \\\"curl\\\", \\\"type\\\": \\\"withBinary\\\", \\\"specversion\\\": \\\"1.0\\\"}\"", "public class Functions { private boolean _processPurchase(Purchase purchase) { // do stuff } public CloudEvent<Void> processPurchase(CloudEvent<Purchase> purchaseEvent) { System.out.println(\"subject is: \" + purchaseEvent.subject()); if (!_processPurchase(purchaseEvent.data())) { return CloudEventBuilder.create() .type(\"purchase.error\") .build(); } return CloudEventBuilder.create() .type(\"purchase.success\") .build(); } }", "public class Functions { @Funq public List<Purchase> getPurchasesByName(String name) { // logic to retrieve purchases } }", "public class Functions { public List<Integer> getIds(); public Purchase[] getPurchasesByName(String name); public String getNameById(int id); public Map<String,Integer> getNameIdMapping(); public void processImage(byte[] img); }", "./mvnw test", ". 
β”œβ”€β”€ func.yaml 1 β”œβ”€β”€ index.js 2 β”œβ”€β”€ package.json 3 β”œβ”€β”€ README.md └── test 4 β”œβ”€β”€ integration.js └── unit.js", "npm install --save opossum", "function handle(context, data)", "// Expects to receive a CloudEvent with customer data function handle(context, customer) { // process the customer const processed = handle(customer); return context.cloudEventResponse(customer) .source('/handle') .type('fn.process.customer') .response(); }", "{ \"customerId\": \"0123456\", \"productId\": \"6543210\" }", "function handle(context, data)", "function handle(context, customer) { // process customer and return a new CloudEvent return new CloudEvent({ source: 'customer.processor', type: 'customer.processed' }) }", "function handle(context, customer) { // process customer and return custom headers // the response will be '204 No content' return { headers: { customerid: customer.id } }; }", "function handle(context, customer) { // process customer if (customer.restricted) { return { statusCode: 451 } } }", "function handle(context, customer) { // process customer if (customer.restricted) { const err = new Error('Unavailable for legal reasons'); err.statusCode = 451; throw err; } }", "npm test", ". β”œβ”€β”€ func.yaml 1 β”œβ”€β”€ package.json 2 β”œβ”€β”€ package-lock.json β”œβ”€β”€ README.md β”œβ”€β”€ src β”‚ └── index.ts 3 β”œβ”€β”€ test 4 β”‚ β”œβ”€β”€ integration.ts β”‚ └── unit.ts └── tsconfig.json", "npm install --save opossum", "function handle(context:Context): string", "// Expects to receive a CloudEvent with customer data export function handle(context: Context, cloudevent?: CloudEvent): CloudEvent { // process the customer const customer = cloudevent.data; const processed = processCustomer(customer); return context.cloudEventResponse(customer) .source('/customer/process') .type('customer.processed') .response(); }", "// Invokable is the expeted Function signature for user functions export interface Invokable { (context: Context, cloudevent?: CloudEvent): any } // Logger can be used for structural logging to the console export interface Logger { debug: (msg: any) => void, info: (msg: any) => void, warn: (msg: any) => void, error: (msg: any) => void, fatal: (msg: any) => void, trace: (msg: any) => void, } // Context represents the function invocation context, and provides // access to the event itself as well as raw HTTP objects. 
export interface Context { log: Logger; req: IncomingMessage; query?: Record<string, any>; body?: Record<string, any>|string; method: string; headers: IncomingHttpHeaders; httpVersion: string; httpVersionMajor: number; httpVersionMinor: number; cloudevent: CloudEvent; cloudEventResponse(data: string|object): CloudEventResponse; } // CloudEventResponse is a convenience class used to create // CloudEvents on function returns export interface CloudEventResponse { id(id: string): CloudEventResponse; source(source: string): CloudEventResponse; type(type: string): CloudEventResponse; version(version: string): CloudEventResponse; response(): CloudEvent; }", "{ \"customerId\": \"0123456\", \"productId\": \"6543210\" }", "function handle(context: Context, cloudevent?: CloudEvent): CloudEvent", "export const handle: Invokable = function ( context: Context, cloudevent?: CloudEvent ): Message { // process customer and return a new CloudEvent const customer = cloudevent.data; return HTTP.binary( new CloudEvent({ source: 'customer.processor', type: 'customer.processed' }) ); };", "export function handle(context: Context, cloudevent?: CloudEvent): Record<string, any> { // process customer and return custom headers const customer = cloudevent.data as Record<string, any>; return { headers: { 'customer-id': customer.id } }; }", "export function handle(context: Context, cloudevent?: CloudEvent): Record<string, any> { // process customer const customer = cloudevent.data as Record<string, any>; if (customer.restricted) { return { statusCode: 451 } } // business logic, then return { statusCode: 240 } }", "export function handle(context: Context, cloudevent?: CloudEvent): Record<string, string> { // process customer const customer = cloudevent.data as Record<string, any>; if (customer.restricted) { const err = new Error('Unavailable for legal reasons'); err.statusCode = 451; throw err; } }", "npm install", "npm test", "fn β”œβ”€β”€ func.py 1 β”œβ”€β”€ func.yaml 2 β”œβ”€β”€ requirements.txt 3 └── test_func.py 4", "def main(context: Context): \"\"\" The context parameter contains the Flask request object and any CloudEvent received with the request. \"\"\" print(f\"Method: {context.request.method}\") print(f\"Event data {context.cloud_event.data}\") # ... 
business logic here", "def main(context: Context): body = { \"message\": \"Howdy!\" } headers = { \"content-type\": \"application/json\" } return body, 200, headers", "@event(\"event_source\"=\"/my/function\", \"event_type\"=\"my.type\") def main(context): # business logic here data = do_something() # more data processing return data", "pip install -r requirements.txt", "python3 test_func.py", "buildEnvs: - name: EXAMPLE1 value: one", "buildEnvs: - name: EXAMPLE1 value: '{{ env:LOCAL_ENV_VAR }}'", "name: test namespace: \"\" runtime: go envs: - name: EXAMPLE1 1 value: value - name: EXAMPLE2 2 value: '{{ env:LOCAL_ENV_VALUE }}' - name: EXAMPLE3 3 value: '{{ secret:mysecret:key }}' - name: EXAMPLE4 4 value: '{{ configMap:myconfigmap:key }}' - value: '{{ secret:mysecret2 }}' 5 - value: '{{ configMap:myconfigmap2 }}' 6", "name: test namespace: \"\" runtime: go volumes: - secret: mysecret 1 path: /workspace/secret - configMap: myconfigmap 2 path: /workspace/configmap", "name: test namespace: \"\" runtime: go options: scale: min: 0 max: 10 metric: concurrency target: 75 utilization: 75 resources: requests: cpu: 100m memory: 128Mi limits: cpu: 1000m memory: 256Mi concurrency: 100", "labels: - key: role value: backend", "labels: - key: author value: '{{ env:USER }}'", "{{ env:ENV_VAR }}", "name: test namespace: \"\" runtime: go envs: - name: MY_API_KEY value: '{{ env:API_KEY }}'", "kn func config", "kn func config ? What do you want to configure? Volumes ? What operation do you want to perform? List Configured Volumes mounts: - Secret \"mysecret\" mounted at path: \"/workspace/secret\" - Secret \"mysecret2\" mounted at path: \"/workspace/secret2\"", "kn func config β”œβ”€> Environment variables β”‚ β”œβ”€> Add β”‚ β”‚ β”œβ”€> ConfigMap: Add all key-value pairs from a config map β”‚ β”‚ β”œβ”€> ConfigMap: Add value from a key in a config map β”‚ β”‚ β”œβ”€> Secret: Add all key-value pairs from a secret β”‚ β”‚ └─> Secret: Add value from a key in a secret β”‚ β”œβ”€> List: List all configured environment variables β”‚ └─> Remove: Remove a configured environment variable └─> Volumes β”œβ”€> Add β”‚ β”œβ”€> ConfigMap: Mount a config map as a volume β”‚ └─> Secret: Mount a secret as a volume β”œβ”€> List: List all configured volumes └─> Remove: Remove a configured volume", "kn func deploy -p test", "kn func config envs [-p <function-project-path>]", "kn func config envs add [-p <function-project-path>]", "kn func config envs remove [-p <function-project-path>]", "kn func config volumes [-p <function-project-path>]", "kn func config volumes add [-p <function-project-path>]", "kn func config volumes remove [-p <function-project-path>]", "name: test namespace: \"\" runtime: go volumes: - secret: mysecret path: /workspace/secret", "name: test namespace: \"\" runtime: go volumes: - configMap: addresses path: /workspace/secret-addresses", "name: test namespace: \"\" runtime: go volumes: - configMap: myconfigmap path: /workspace/configmap", "name: test namespace: \"\" runtime: go volumes: - configMap: addresses path: /workspace/configmap-addresses", "name: test namespace: \"\" runtime: go envs: - name: EXAMPLE value: '{{ secret:mysecret:key }}'", "name: test namespace: \"\" runtime: go envs: - value: '{{ configMap:userdetailssecret:userid }}'", "name: test namespace: \"\" runtime: go envs: - name: EXAMPLE value: '{{ configMap:myconfigmap:key }}'", "name: test namespace: \"\" runtime: go envs: - value: '{{ configMap:userdetailsmap:userid }}'", "name: test namespace: \"\" runtime: go envs: - value: '{{ 
secret:mysecret }}' 1", "name: test namespace: \"\" runtime: go envs: - value: '{{ configMap:userdetailssecret }}'", "name: test namespace: \"\" runtime: go envs: - value: '{{ configMap:myconfigmap }}' 1", "name: test namespace: \"\" runtime: go envs: - value: '{{ configMap:userdetailsmap }}'", "name: test namespace: \"\" runtime: go annotations: <annotation_name>: \"<annotation_value>\" 1", "name: test namespace: \"\" runtime: go annotations: author: \"[email protected]\"", "function handle(context) { context.log.info(\"Processing customer\"); }", "kn func invoke --target 'http://example.function.com'", "{\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"Processing customer\"}", "function handle(context) { // Log the 'name' query parameter context.log.info(context.query.name); // Query parameters are also attached to the context context.log.info(context.name); }", "kn func invoke --target 'http://example.com?name=tiger'", "{\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"tiger\"}", "function handle(context) { // log the incoming request body's 'hello' parameter context.log.info(context.body.hello); }", "kn func invoke -d '{\"Hello\": \"world\"}'", "{\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"world\"}", "function handle(context) { context.log.info(context.headers[\"custom-header\"]); }", "kn func invoke --target 'http://example.function.com'", "{\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"some-value\"}", "export function handle(context: Context): string { // log the incoming request body's 'hello' parameter if (context.body) { context.log.info((context.body as Record<string, string>).hello); } else { context.log.info('No data received'); } return 'OK'; }", "kn func invoke --target 'http://example.function.com'", "{\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"Processing customer\"}", "export function handle(context: Context): string { // log the 'name' query parameter if (context.query) { context.log.info((context.query as Record<string, string>).name); } else { context.log.info('No data received'); } return 'OK'; }", "kn func invoke --target 'http://example.function.com' --data '{\"name\": \"tiger\"}'", "{\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"tiger\"} {\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"tiger\"}", "export function handle(context: Context): string { // log the incoming request body's 'hello' parameter if (context.body) { context.log.info((context.body as Record<string, string>).hello); } else { context.log.info('No data received'); } return 'OK'; }", "kn func invoke --target 'http://example.function.com' --data '{\"hello\": \"world\"}'", "{\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"world\"}", "export function handle(context: Context): string { // log the incoming request body's 'hello' parameter if (context.body) { context.log.info((context.headers as Record<string, string>)['custom-header']); } else { context.log.info('No data received'); } return 'OK'; }", "curl -H'x-custom-header: some-value'' http://example.function.com", 
"{\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"some-value\"}" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/serverless/functions
3.3. Enabling IP Ports
3.3. Enabling IP Ports Before deploying the Red Hat High Availability Add-On, you must enable certain IP ports on the cluster nodes and on computers that run luci (the Conga user interface server). The following sections identify the IP ports to be enabled: Section 3.3.1, "Enabling IP Ports on Cluster Nodes" Section 3.3.2, "Enabling the IP Port for luci " The following section provides the iptables rules for enabling IP ports needed by the Red Hat High Availability Add-On: Section 3.3.3, "Configuring the iptables Firewall to Allow Cluster Components" 3.3.1. Enabling IP Ports on Cluster Nodes To allow the nodes in a cluster to communicate with each other, you must enable the IP ports assigned to certain Red Hat High Availability Add-On components. Table 3.1, "Enabled IP Ports on Red Hat High Availability Add-On Nodes" lists the IP port numbers, their respective protocols, and the components to which the port numbers are assigned. At each cluster node, enable IP ports for incoming traffic according to Table 3.1, "Enabled IP Ports on Red Hat High Availability Add-On Nodes" . You can use system-config-firewall to enable the IP ports. Table 3.1. Enabled IP Ports on Red Hat High Availability Add-On Nodes IP Port Number Protocol Component 5404, 5405 UDP corosync/cman (Cluster Manager) 11111 TCP ricci (propagates updated cluster information) 21064 TCP dlm (Distributed Lock Manager) 16851 TCP modclusterd
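If you prefer to configure the firewall from the command line instead of with system-config-firewall , you can open the same ports with iptables . The following commands are a minimal sketch based only on the ports listed in Table 3.1, "Enabled IP Ports on Red Hat High Availability Add-On Nodes" ; they do not restrict the source or destination addresses, so adjust them to your cluster subnet and overall firewall policy, and see Section 3.3.3, "Configuring the iptables Firewall to Allow Cluster Components" for the complete set of recommended rules.
iptables -I INPUT -m state --state NEW -p udp -m multiport --dports 5404,5405 -j ACCEPT   # corosync/cman
iptables -I INPUT -m state --state NEW -p tcp --dport 11111 -j ACCEPT                     # ricci
iptables -I INPUT -m state --state NEW -p tcp --dport 21064 -j ACCEPT                     # dlm
iptables -I INPUT -m state --state NEW -p tcp --dport 16851 -j ACCEPT                     # modclusterd
service iptables save ; service iptables restart
Run the commands on each cluster node and verify the resulting rule set with iptables -L -n before putting the cluster into production.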
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-iptables-ca
Appendix A. Consistent Network Device Naming
Appendix A. Consistent Network Device Naming Red Hat Enterprise Linux 6 provides consistent network device naming for network interfaces. This feature changes the name of network interfaces on a system in order to make locating and differentiating the interfaces easier. Traditionally, network interfaces in Linux are enumerated as eth[0123...] , but these names do not necessarily correspond to actual labels on the chassis. Modern server platforms with multiple network adapters can encounter non-deterministic and counter-intuitive naming of these interfaces. This affects both network adapters embedded on the motherboard ( Lan-on-Motherboard , or LOM ) and add-in (single and multiport) adapters. The new naming convention assigns names to network interfaces based on their physical location, whether embedded or in PCI slots. By converting to this naming convention, system administrators will no longer have to guess at the physical location of a network port, or modify each system to rename them into some consistent order. This feature, implemented via the biosdevname program, will change the name of all embedded network interfaces, PCI card network interfaces, and virtual function network interfaces from the existing eth[0123...] to the new naming convention as shown in Table A.1, "The new naming convention" . Table A.1. The new naming convention Device Old Name New Name Embedded network interface (LOM) eth[0123...] em[1234...] [a] PCI card network interface eth[0123...] p< slot >p< ethernet port > [b] Virtual function eth[0123...] p< slot >p< ethernet port >_< virtual interface > [c] [a] New enumeration starts at 1 . [b] For example: p3p4 [c] For example: p3p4_1 System administrators may continue to write rules in /etc/udev/rules.d/70-persistent-net.rules to change the device names to anything desired; those will take precedence over this physical location naming convention. A.1. Affected Systems Consistent network device naming is enabled by default for a set of Dell PowerEdge , C Series , and Precision Workstation systems. For more details regarding the impact on Dell systems, visit https://access.redhat.com/kb/docs/DOC-47318 . For all other systems, it will be disabled by default; see Section A.2, "System Requirements" and Section A.3, "Enabling and Disabling the Feature" for more details. Regardless of the type of system, Red Hat Enterprise Linux 6 guests running under Red Hat Enterprise Linux 5 hosts will not have devices renamed, since the virtual machine BIOS does not provide SMBIOS information. Upgrades from Red Hat Enterprise Linux 6.0 to Red Hat Enterprise Linux 6.1 are unaffected, and the old eth[0123...] naming convention will continue to be used.
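For reference, entries in /etc/udev/rules.d/70-persistent-net.rules typically take the following form; the MAC address shown here is a placeholder, and the exact set of match keys written on your system may vary, so copy an existing line from your own file and change only the NAME value when renaming an interface.
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
A rule of this form assigns the name given in NAME to the interface whose hardware address matches ATTR{address} , and, as noted above, such rules take precedence over the physical location naming convention.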
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/appe-Consistent_Network_Device_Naming
Chapter 37. Element Substitution
Chapter 37. Element Substitution Abstract XML Schema substitution groups allow you to define a group of elements that can replace a top level, or head, element. This is useful in cases where you have multiple elements that share a common base type or with elements that need to be interchangeable. 37.1. Substitution Groups in XML Schema Overview A substitution group is a feature of XML schema that allows you to specify elements that can replace another element in documents generated from that schema. The replaceable element is called the head element and must be defined in the schema's global scope. The elements of the substitution group must be of the same type as the head element or a type that is derived from the head element's type. In essence, a substitution group allows you to build a collection of elements that can be specified using a generic element. For example, if you are building an ordering system for a company that sells three types of widgets you might define a generic widget element that contains a set of common data for all three widget types. Then you can define a substitution group that contains a more specific set of data for each type of widget. In your contract you can then specify the generic widget element as a message part instead of defining a specific ordering operation for each type of widget. When the actual message is built, the message can contain any of the elements of the substitution group. Syntax Substitution groups are defined using the substitutionGroup attribute of the XML Schema element element. The value of the substitutionGroup attribute is the name of the element that the element being defined replaces. For example, if your head element is widget , adding the attribute substitutionGroup="widget" to an element named woodWidget specifies that anywhere a widget element is used, you can substitute a woodWidget element. This is shown in Example 37.1, "Using a Substitution Group" . Example 37.1. Using a Substitution Group Type restrictions The elements of a substitution group must be of the same type as the head element or of a type derived from the head element's type. For example, if the head element is of type xsd:int all members of the substitution group must be of type xsd:int or of a type derived from xsd:int . You can also define a substitution group similar to the one shown in Example 37.2, "Substitution Group with Complex Types" where the elements of the substitution group are of types derived from the head element's type. Example 37.2. Substitution Group with Complex Types The head element of the substitution group, widget , is defined as being of type widgetType . Each element of the substitution group extends widgetType to include data that is specific to ordering that type of widget. Based on the schema in Example 37.2, "Substitution Group with Complex Types" , the part elements in Example 37.3, "XML Document using a Substitution Group" are valid. Example 37.3. XML Document using a Substitution Group Abstract head elements You can define an abstract head element that can never appear in a document produced using your schema. Abstract head elements are similar to abstract classes in Java because they are used as the basis for defining more specific implementations of a generic class. Abstract heads also prevent the use of the generic element in the final product. You declare an abstract head element by setting the abstract attribute of an element element to true , as shown in Example 37.4, "Abstract Head Definition" . 
Using this schema, a valid review element can contain either a positiveComment element or a negativeComment element, but cannot contain a comment element. Example 37.4. Abstract Head Definition 37.2. Substitution Groups in Java Overview Apache CXF, as specified in the JAXB specification, supports substitution groups using Java's native class hierarchy in combination with the ability of the JAXBElement class' support for wildcard definitions. Because the members of a substitution group must all share a common base type, the classes generated to support the elements' types also share a common base type. In addition, Apache CXF maps instances of the head element to JAXBElement<? extends T> properties. Generated object factory methods The object factory generated to support a package containing a substitution group has methods for each of the elements in the substitution group. For each of the members of the substitution group, except for the head element, the @XmlElementDecl annotation decorating the object factory method includes two additional properties, as described in Table 37.1, "Properties for Declaring a JAXB Element is a Member of a Substitution Group" . Table 37.1. Properties for Declaring a JAXB Element is a Member of a Substitution Group Property Description substitutionHeadNamespace Specifies the namespace where the head element is defined. substitutionHeadName Specifies the value of the head element's name attribute. The object factory method for the head element of the substitution group's @XmlElementDecl contains only the default namespace property and the default name property. In addition to the element instantiation methods, the object factory contains a method for instantiating an object representing the head element. If the members of the substitution group are all of complex types, the object factory also contains methods for instantiating instances of each complex type used. Example 37.5, "Object Factory Method for a Substitution Group" shows the object factory method for the substitution group defined in Example 37.2, "Substitution Group with Complex Types" . Example 37.5. Object Factory Method for a Substitution Group Substitution groups in interfaces If the head element of a substitution group is used as a message part in one of an operation's messages, the resulting method parameter will be an object of the class generated to support that element. It will not necessarily be an instance of the JAXBElement<? extends T> class. The runtime relies on Java's native type hierarchy to support the type substitution, and Java will catch any attempts to use unsupported types. To ensure that the runtime knows all of the classes needed to support the element substitution, the SEI is decorated with the @XmlSeeAlso annotation. This annotation specifies a list of classes required by the runtime for marshalling. Fore more information on using the @XmlSeeAlso annotation see Section 32.4, "Adding Classes to the Runtime Marshaller" . Example 37.7, "Generated Interface Using a Substitution Group" shows the SEI generated for the interface shown in Example 37.6, "WSDL Interface Using a Substitution Group" . The interface uses the substitution group defined in Example 37.2, "Substitution Group with Complex Types" . Example 37.6. WSDL Interface Using a Substitution Group Example 37.7. Generated Interface Using a Substitution Group The SEI shown in Example 37.7, "Generated Interface Using a Substitution Group" lists the object factory in the @XmlSeeAlso annotation. 
Listing the object factory for a namespace provides access to all of the generated classes for that namespace. Substitution groups in complex types When the head element of a substitution group is used as an element in a complex type, the code generator maps the element to a JAXBElement<? extends T> property. It does not map it to a property containing an instance of the generated class generated to support the substitution group. For example, the complex type defined in Example 37.8, "Complex Type Using a Substitution Group" results in the Java class shown in Example 37.9, "Java Class for a Complex Type Using a Substitution Group" . The complex type uses the substitution group defined in Example 37.2, "Substitution Group with Complex Types" . Example 37.8. Complex Type Using a Substitution Group Example 37.9. Java Class for a Complex Type Using a Substitution Group Setting a substitution group property How you work with a substitution group depends on whether the code generator mapped the group to a straight Java class or to a JAXBElement<? extends T> class. When the element is simply mapped to an object of the generated value class, you work with the object the same way you work with other Java objects that are part of a type hierarchy. You can substitute any of the subclasses for the parent class. You can inspect the object to determine its exact class, and cast it appropriately. The JAXB specification recommends that you use the object factory methods for instantiating objects of the generated classes. When the code generators create a JAXBElement<? extends T> object to hold instances of a substitution group, you must wrap the element's value in a JAXBElement<? extends T> object. The best method to do this is to use the element creation methods provided by the object factory. They provide an easy means for creating an element based on its value. Example 37.10, "Setting a Member of a Substitution Group" shows code for setting an instance of a substitution group. Example 37.10. Setting a Member of a Substitution Group The code in Example 37.10, "Setting a Member of a Substitution Group" does the following: Instantiates an object factory. Instantiates a PlasticWidgetType object. Instantiates a JAXBElement<PlasticWidgetType> object to hold a plastic widget element. Instantiates a WidgetOrderInfo object. Sets the WidgetOrderInfo object's widget to the JAXBElement object holding the plastic widget element. Getting the value of a substitution group property The object factory methods do not help when extracting the element's value from a JAXBElement<? extends T> object. You must to use the JAXBElement<? extends T> object's getValue() method. The following options determine the type of object returned by the getValue() method: Use the isInstance() method of all the possible classes to determine the class of the element's value object. Use the JAXBElement<? extends T> object's getName() method to determine the element's name. The getName() method returns a QName. Using the local name of the element, you can determine the proper class for the value object. Use the JAXBElement<? extends T> object's getDeclaredType() method to determine the class of the value object. The getDeclaredType() method returns the Class object of the element's value object. Warning There is a possibility that the getDeclaredType() method will return the base class for the head element regardless of the actual class of the value object. 
Example 37.11, "Getting the Value of a Member of the Substitution Group" shows code retrieving the value from a substitution group. To determine the proper class of the element's value object, the example uses the element's getName() method. Example 37.11. Getting the Value of a Member of the Substitution Group 37.3. Widget Vendor Example 37.3.1. Widget Ordering Interface This section shows an example of using substitution groups in Apache CXF to solve a real-world problem. A service and consumer are developed using the widget substitution group defined in Example 37.2, "Substitution Group with Complex Types" . The service offers two operations: checkWidgets and placeWidgetOrder . Example 37.12, "Widget Ordering Interface" shows the interface for the ordering service. Example 37.12. Widget Ordering Interface Example 37.13, "Widget Ordering SEI" shows the generated Java SEI for the interface. Example 37.13. Widget Ordering SEI Note Because the example only demonstrates the use of substitution groups, some of the business logic is not shown. 37.3.2. The checkWidgets Operation Overview checkWidgets is a simple operation that has a parameter that is the head member of a substitution group. This operation demonstrates how to deal with individual parameters that are members of a substitution group. The consumer must ensure that the parameter is a valid member of the substitution group. The service must properly determine which member of the substitution group was sent in the request. Consumer implementation The generated method signature uses the Java class supporting the type of the substitution group's head element. Because the member elements of a substitution group are either of the same type as the head element or of a type derived from the head element's type, the Java classes generated to support the members of the substitution group inherit from the Java class generated to support the head element. Java's type hierarchy natively supports using subclasses in place of the parent class. Because of how Apache CXF generates the types for a substitution group, and because of Java's type hierarchy, the client can invoke checkWidgets() without using any special code. When developing the logic to invoke checkWidgets() , you can pass in an object of one of the classes generated to support the widget substitution group. Example 37.14, "Consumer Invoking checkWidgets() " shows a consumer invoking checkWidgets() . Example 37.14. Consumer Invoking checkWidgets() Service implementation The service's implementation of checkWidgets() gets a widget description as a WidgetType object, checks the inventory of widgets, and returns the number of widgets in stock. Because all of the classes used to implement the substitution group inherit from the same base class, you can implement checkWidgets() without using any JAXB-specific APIs. All of the classes generated to support the members of the substitution group for widget extend the WidgetType class. Because of this, you can use instanceof to determine what type of widget was passed in and cast the widgetPart object to the more restrictive type if appropriate. Once you have the proper type of object, you can check the inventory of the right kind of widget. Example 37.15, "Service Implementation of checkWidgets() " shows a possible implementation. Example 37.15. Service Implementation of checkWidgets() 37.3.3. The placeWidgetOrder Operation Overview placeWidgetOrder uses two complex types containing the substitution group.
This operation demonstrates how to use such a structure in a Java implementation. Both the consumer and the service must get and set members of a substitution group. Consumer implementation To invoke placeWidgetOrder() , the consumer must construct a widget order containing one element of the widget substitution group. When adding the widget to the order, the consumer should use the object factory methods generated for each element of the substitution group. This ensures that the runtime and the service can correctly process the order. For example, if an order is being placed for a plastic widget, the ObjectFactory.createPlasticWidget() method is used to create the element before adding it to the order. Example 37.16, "Setting a Substitution Group Member" shows consumer code for setting the widget property of the WidgetOrderInfo object. Example 37.16. Setting a Substitution Group Member Service implementation The placeWidgetOrder() method receives an order in the form of a WidgetOrderInfo object, processes the order, and returns a bill to the consumer in the form of a WidgetOrderBillInfo object. The orders can be for a plain widget, a plastic widget, or a wooden widget. The type of widget ordered is determined by what type of object is stored in the widgetOrderForm object's widget property. The widget property is a substitution group and can contain a widget element, a woodWidget element, or a plasticWidget element. The implementation must determine which of the possible elements is stored in the order. This can be accomplished using the JAXBElement<? extends T> object's getName() method to determine the element's QName. The QName can then be used to determine which element in the substitution group is in the order. Once the element included in the order is known, you can extract its value into the proper type of object. Example 37.17, "Implementation of placeWidgetOrder() " shows a possible implementation. Example 37.17. Implementation of placeWidgetOrder() The code in Example 37.17, "Implementation of placeWidgetOrder() " does the following: Instantiates an object factory to create elements. Instantiates a WidgetOrderBillInfo object to hold the bill. Gets the number of widgets ordered. Gets the local name of the element stored in the order. Checks to see if the element is a woodWidget element. Extracts the value of the element from the order to the proper type of object. Creates a JAXBElement<T> object to be placed into the bill. Sets the bill object's widget property. Sets the bill object's amountDue property.
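To complement the examples, the following end-to-end consumer sketch shows both halves of the exchange: the order is built with the object factory's element creation method, and the widget element returned in the bill is then inspected. The proxy parameter stands for an already-configured OrderWidgets client, and the getWidget() and getAmountDue() accessors on WidgetOrderBillInfo are assumed by analogy with the setters used in Example 37.17; they are not shown in the generated code in this chapter.

import javax.xml.bind.JAXBElement;
import com.widgetvendor.types.widgettypes.*;

// Hedged end-to-end sketch; error handling is reduced to "throws Exception".
public class WidgetOrderClient {

    public void orderWoodWidgets(OrderWidgets proxy, int amount) throws Exception {
        ObjectFactory of = new ObjectFactory();

        // Build the value and wrap it with the element creation method so the
        // correct member of the substitution group is sent on the wire.
        WoodWidgetType wood = of.createWoodWidgetType();
        wood.setShape("round");
        wood.setColor("blue");
        wood.setWoodType("elm");

        WidgetOrderInfo order = of.createWidgetOrderInfo();
        order.setAmount(amount);
        order.setWidget(of.createWoodWidget(wood));

        // Invoke the service and inspect the element returned in the bill.
        WidgetOrderBillInfo bill = proxy.placeWidgetOrder(order);
        JAXBElement<? extends WidgetType> element = bill.getWidget();
        System.out.println("Billed element: " + element.getName().getLocalPart()
                + ", amount due: " + bill.getAmountDue());
    }
}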
[ "<element name=\"widget\" type=\"xsd:string\" /> <element name=\"woodWidget\" type=\"xsd:string\" substitutionGroup=\"widget\" />", "<complexType name=\"widgetType\"> <sequence> <element name=\"shape\" type=\"xsd:string\" /> <element name=\"color\" type=\"xsd:string\" /> </sequence> </complexType> <complexType name=\"woodWidgetType\"> <complexContent> <extension base=\"widgetType\"> <sequence> <element name=\"woodType\" type=\"xsd:string\" /> </sequence> </extension> </complexContent> </complexType> <complexType name=\"plasticWidgetType\"> <complexContent> <extension base=\"widgetType\"> <sequence> <element name=\"moldProcess\" type=\"xsd:string\" /> </sequence> </extension> </complexContent> </complexType> <element name=\"widget\" type=\"widgetType\" /> <element name=\"woodWidget\" type=\"woodWidgetType\" substitutionGroup=\"widget\" /> <element name=\"plasticWidget\" type=\"plasticWidgetType\" substitutionGroup=\"widget\" /> <complexType name=\"partType\"> <sequence> <element ref=\"widget\" /> </sequence> </complexType> <element name=\"part\" type=\"partType\" />", "<part> <widget> <shape>round</shape> <color>blue</color> </widget> </part> <part> <plasticWidget> <shape>round</shape> <color>blue</color> <moldProcess>sandCast</moldProcess> </plasticWidget> </part> <part> <woodWidget> <shape>round</shape> <color>blue</color> <woodType>elm</woodType> </woodWidget> </part>", "<element name=\"comment\" type=\"xsd:string\" abstract=\"true\" /> <element name=\"positiveComment\" type=\"xsd:string\" substitutionGroup=\"comment\" /> <element name=\"negtiveComment\" type=\"xsd:string\" substitutionGroup=\"comment\" /> <element name=\"review\"> <complexContent> <all> <element name=\"custName\" type=\"xsd:string\" /> <element name=\"impression\" ref=\"comment\" /> </all> </complexContent> </element>", "public class ObjectFactory { private final static QName _Widget_QNAME = new QName(...); private final static QName _PlasticWidget_QNAME = new QName(...); private final static QName _WoodWidget_QNAME = new QName(...); public ObjectFactory() { } public WidgetType createWidgetType() { return new WidgetType(); } public PlasticWidgetType createPlasticWidgetType() { return new PlasticWidgetType(); } public WoodWidgetType createWoodWidgetType() { return new WoodWidgetType(); } @XmlElementDecl(namespace=\"...\", name = \"widget\") public JAXBElement<WidgetType> createWidget(WidgetType value) { return new JAXBElement<WidgetType>(_Widget_QNAME, WidgetType.class, null, value); } @XmlElementDecl(namespace = \"...\", name = \"plasticWidget\", substitutionHeadNamespace = \"...\", substitutionHeadName = \"widget\") public JAXBElement<PlasticWidgetType> createPlasticWidget(PlasticWidgetType value) { return new JAXBElement<PlasticWidgetType>(_PlasticWidget_QNAME, PlasticWidgetType.class, null, value); } @XmlElementDecl(namespace = \"...\", name = \"woodWidget\", substitutionHeadNamespace = \"...\", substitutionHeadName = \"widget\") public JAXBElement<WoodWidgetType> createWoodWidget(WoodWidgetType value) { return new JAXBElement<WoodWidgetType>(_WoodWidget_QNAME, WoodWidgetType.class, null, value); } }", "<message name=\"widgetMessage\"> <part name=\"widgetPart\" element=\"xsd1:widget\" /> </message> <message name=\"numWidgets\"> <part name=\"numInventory\" type=\"xsd:int\" /> </message> <message name=\"badSize\"> <part name=\"numInventory\" type=\"xsd:int\" /> </message> <portType name=\"orderWidgets\"> <operation name=\"placeWidgetOrder\"> <input message=\"tns:widgetOrder\" name=\"order\" /> <output 
message=\"tns:widgetOrderBill\" name=\"bill\" /> <fault message=\"tns:badSize\" name=\"sizeFault\" /> </operation> <operation name=\"checkWidgets\"> <input message=\"tns:widgetMessage\" name=\"request\" /> <output message=\"tns:numWidgets\" name=\"response\" /> </operation> </portType>", "@WebService(targetNamespace = \"...\", name = \"orderWidgets\") @XmlSeeAlso({com.widgetvendor.types.widgettypes.ObjectFactory.class}) public interface OrderWidgets { @SOAPBinding(parameterStyle = SOAPBinding.ParameterStyle.BARE) @WebResult(name = \"numInventory\", targetNamespace = \"\", partName = \"numInventory\") @WebMethod public int checkWidgets( @WebParam(partName = \"widgetPart\", name = \"widget\", targetNamespace = \"...\") com.widgetvendor.types.widgettypes.WidgetType widgetPart ); }", "<complexType name=\"widgetOrderInfo\"> <sequence> <element name=\"amount\" type=\"xsd:int\"/> <element ref=\"xsd1:widget\"/> </sequence> </complexType>", "@XmlAccessorType(XmlAccessType.FIELD) @XmlType(name = \"widgetOrderInfo\", propOrder = {\"amount\",\"widget\",}) public class WidgetOrderInfo { protected int amount; @XmlElementRef(name = \"widget\", namespace = \"...\", type = JAXBElement.class) protected JAXBElement<? extends WidgetType> widget; public int getAmount() { return amount; } public void setAmount(int value) { this.amount = value; } public JAXBElement<? extends WidgetType> getWidget() { return widget; } public void setWidget(JAXBElement<? extends WidgetType> value) { this.widget = ((JAXBElement<? extends WidgetType> ) value); } }", "ObjectFactory of = new ObjectFactory(); PlasticWidgetType pWidget = of.createPlasticWidgetType(); pWidget.setShape = \"round'; pWidget.setColor = \"green\"; pWidget.setMoldProcess = \"injection\"; JAXBElement<PlasticWidgetType> widget = of.createPlasticWidget(pWidget); WidgetOrderInfo order = of.createWidgetOrderInfo(); order.setWidget(widget);", "String elementName = order.getWidget().getName().getLocalPart(); if (elementName.equals(\"woodWidget\") { WoodWidgetType widget=order.getWidget().getValue(); } else if (elementName.equals(\"plasticWidget\") { PlasticWidgetType widget=order.getWidget().getValue(); } else { WidgetType widget=order.getWidget().getValue(); }", "<message name=\"widgetOrder\"> <part name=\"widgetOrderForm\" type=\"xsd1:widgetOrderInfo\"/> </message> <message name=\"widgetOrderBill\"> <part name=\"widgetOrderConformation\" type=\"xsd1:widgetOrderBillInfo\"/> </message> <message name=\"widgetMessage\"> <part name=\"widgetPart\" element=\"xsd1:widget\" /> </message> <message name=\"numWidgets\"> <part name=\"numInventory\" type=\"xsd:int\" /> </message> <portType name=\"orderWidgets\"> <operation name=\"placeWidgetOrder\"> <input message=\"tns:widgetOrder\" name=\"order\"/> <output message=\"tns:widgetOrderBill\" name=\"bill\"/> </operation> <operation name=\"checkWidgets\"> <input message=\"tns:widgetMessage\" name=\"request\" /> <output message=\"tns:numWidgets\" name=\"response\" /> </operation> </portType>", "@WebService(targetNamespace = \"http://widgetVendor.com/widgetOrderForm\", name = \"orderWidgets\") @XmlSeeAlso({com.widgetvendor.types.widgettypes.ObjectFactory.class}) public interface OrderWidgets { @SOAPBinding(parameterStyle = SOAPBinding.ParameterStyle.BARE) @WebResult(name = \"numInventory\", targetNamespace = \"\", partName = \"numInventory\") @WebMethod public int checkWidgets( @WebParam(partName = \"widgetPart\", name = \"widget\", targetNamespace = \"http://widgetVendor.com/types/widgetTypes\") 
com.widgetvendor.types.widgettypes.WidgetType widgetPart ); @SOAPBinding(parameterStyle = SOAPBinding.ParameterStyle.BARE) @WebResult(name = \"widgetOrderConformation\", targetNamespace = \"\", partName = \"widgetOrderConformation\") @WebMethod public com.widgetvendor.types.widgettypes.WidgetOrderBillInfo placeWidgetOrder( @WebParam(partName = \"widgetOrderForm\", name = \"widgetOrderForm\", targetNamespace = \"\") com.widgetvendor.types.widgettypes.WidgetOrderInfo widgetOrderForm ) throws BadSize; }", "System.out.println(\"What type of widgets do you want to order?\"); System.out.println(\"1 - Normal\"); System.out.println(\"2 - Wood\"); System.out.println(\"3 - Plastic\"); System.out.println(\"Selection [1-3]\"); String selection = reader.readLine(); String trimmed = selection.trim(); char widgetType = trimmed.charAt(0); switch (widgetType) { case '1': { WidgetType widget = new WidgetType(); break; } case '2': { WoodWidgetType widget = new WoodWidgetType(); break; } case '3': { PlasticWidgetType widget = new PlasticWidgetType(); break; } default : System.out.println(\"Invaid Widget Selection!!\"); } proxy.checkWidgets(widgets);", "public int checkWidgets(WidgetType widgetPart) { if (widgetPart instanceof WidgetType) { return checkWidgetInventory(widgetType); } else if (widgetPart instanceof WoodWidgetType) { WoodWidgetType widget = (WoodWidgetType)widgetPart; return checkWoodWidgetInventory(widget); } else if (widgetPart instanceof PlasticWidgetType) { PlasticWidgetType widget = (PlasticWidgetType)widgetPart; return checkPlasticWidgetInventory(widget); } }", "ObjectFactory of = new ObjectFactory(); WidgetOrderInfo order = new of.createWidgetOrderInfo(); System.out.println(); System.out.println(\"What color widgets do you want to order?\"); String color = reader.readLine(); System.out.println(); System.out.println(\"What shape widgets do you want to order?\"); String shape = reader.readLine(); System.out.println(); System.out.println(\"What type of widgets do you want to order?\"); System.out.println(\"1 - Normal\"); System.out.println(\"2 - Wood\"); System.out.println(\"3 - Plastic\"); System.out.println(\"Selection [1-3]\"); String selection = reader.readLine(); String trimmed = selection.trim(); char widgetType = trimmed.charAt(0); switch (widgetType) { case '1': { WidgetType widget = of.createWidgetType(); widget.setColor(color); widget.setShape(shape); JAXB<WidgetType> widgetElement = of.createWidget(widget); order.setWidget(widgetElement); break; } case '2': { WoodWidgetType woodWidget = of.createWoodWidgetType(); woodWidget.setColor(color); woodWidget.setShape(shape); System.out.println(); System.out.println(\"What type of wood are your widgets?\"); String wood = reader.readLine(); woodWidget.setWoodType(wood); JAXB<WoodWidgetType> widgetElement = of.createWoodWidget(woodWidget); order.setWoodWidget(widgetElement); break; } case '3': { PlasticWidgetType plasticWidget = of.createPlasticWidgetType(); plasticWidget.setColor(color); plasticWidget.setShape(shape); System.out.println(); System.out.println(\"What type of mold to use for your widgets?\"); String mold = reader.readLine(); plasticWidget.setMoldProcess(mold); JAXB<WidgetType> widgetElement = of.createPlasticWidget(plasticWidget); order.setPlasticWidget(widgetElement); break; } default : System.out.println(\"Invaid Widget Selection!!\"); }", "public com.widgetvendor.types.widgettypes.WidgetOrderBillInfo placeWidgetOrder(WidgetOrderInfo widgetOrderForm) { ObjectFactory of = new ObjectFactory(); WidgetOrderBillInfo bill = new 
WidgetOrderBillInfo(); // Copy the shipping address and the number of widgets // ordered from widgetOrderForm to bill int numOrdered = widgetOrderForm.getAmount(); String elementName = widgetOrderForm.getWidget().getName().getLocalPart(); if (elementName.equals(\"woodWidget\")) { WoodWidgetType widget = (WoodWidgetType) widgetOrderForm.getWidget().getValue(); buildWoodWidget(widget, numOrdered); // Add the widget info to bill JAXBElement<WoodWidgetType> widgetElement = of.createWoodWidget(widget); bill.setWidget(widgetElement); float amtDue = numOrdered * 0.75f; bill.setAmountDue(amtDue); } else if (elementName.equals(\"plasticWidget\")) { PlasticWidgetType widget = (PlasticWidgetType) widgetOrderForm.getWidget().getValue(); buildPlasticWidget(widget, numOrdered); // Add the widget info to bill JAXBElement<PlasticWidgetType> widgetElement = of.createPlasticWidget(widget); bill.setWidget(widgetElement); float amtDue = numOrdered * 0.90f; bill.setAmountDue(amtDue); } else { WidgetType widget = widgetOrderForm.getWidget().getValue(); buildWidget(widget, numOrdered); // Add the widget info to bill JAXBElement<WidgetType> widgetElement = of.createWidget(widget); bill.setWidget(widgetElement); float amtDue = numOrdered * 0.30f; bill.setAmountDue(amtDue); } return(bill); }" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/jaxwselementsubstitution
Chapter 7. Configure storage for OpenShift Container Platform services
Chapter 7. Configure storage for OpenShift Container Platform services You can use OpenShift Data Foundation to provide storage for OpenShift Container Platform services such as image registry, monitoring, and logging. The process for configuring storage for these services depends on the infrastructure used in your OpenShift Data Foundation deployment. Warning Always ensure that you have plenty of storage capacity for these services. If the storage for these critical services runs out of space, the cluster becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Configuring the Curator schedule and the Modifying retention time for Prometheus metrics data sub section of Configuring persistent storage in the OpenShift Container Platform documentation for details. If you do run out of storage space for these services, contact Red Hat Customer Support. 7.1. Configuring Image Registry to use OpenShift Data Foundation OpenShift Container Platform provides a built in Container Image Registry which runs as a standard workload on the cluster. A registry is typically used as a publication target for images built on the cluster as well as a source of images for workloads running on the cluster. Follow the instructions in this section to configure OpenShift Data Foundation as storage for the Container Image Registry. On Google Cloud, it is not required to change the storage for the registry. Warning This process does not migrate data from an existing image registry to the new image registry. If you already have container images in your existing registry, back up your registry before you complete this process, and re-register your images when this process is complete. Prerequisites You have administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In OpenShift Web Console, click Operators Installed Operators to view installed operators. Image Registry Operator is installed and running in the openshift-image-registry namespace. In OpenShift Web Console, click Administration Cluster Settings Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.cephfs.csi.ceph.com is available. In OpenShift Web Console, click Storage StorageClasses to view available storage classes. Procedure Create a Persistent Volume Claim for the Image Registry to use. In the OpenShift Web Console, click Storage Persistent Volume Claims . Set the Project to openshift-image-registry . Click Create Persistent Volume Claim . From the list of available storage classes retrieved above, specify the Storage Class with the provisioner openshift-storage.cephfs.csi.ceph.com . Specify the Persistent Volume Claim Name , for example, ocs4registry . Specify an Access Mode of Shared Access (RWX) . Specify a Size of at least 100 GB. Click Create . Wait until the status of the new Persistent Volume Claim is listed as Bound . Configure the cluster's Image Registry to use the new Persistent Volume Claim. Click Administration Custom Resource Definitions . Click the Config custom resource definition associated with the imageregistry.operator.openshift.io group. Click the Instances tab. Beside the cluster instance, click the Action Menu (...) Edit Config . Add the new Persistent Volume Claim as persistent storage for the Image Registry. Add the following under spec: , replacing the existing storage: section if necessary. 
For example: Click Save . Verify that the new configuration is being used. Click Workloads Pods . Set the Project to openshift-image-registry . Verify that the new image-registry-* pod appears with a status of Running , and that the previous image-registry-* pod terminates. Click the new image-registry-* pod to view pod details. Scroll down to Volumes and verify that the registry-storage volume has a Type that matches your new Persistent Volume Claim, for example, ocs4registry . 7.2. Configuring monitoring to use OpenShift Data Foundation OpenShift Data Foundation provides a monitoring stack that comprises Prometheus and Alert Manager. Follow the instructions in this section to configure OpenShift Data Foundation as storage for the monitoring stack. Important Monitoring will not function if it runs out of storage space. Always ensure that you have plenty of storage capacity for monitoring. Red Hat recommends configuring a short retention interval for this service. See the Modifying retention time for Prometheus metrics data section of the Monitoring guide in the OpenShift Container Platform documentation for details. Prerequisites You have administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In the OpenShift Web Console, click Operators Installed Operators to view installed operators. Monitoring Operator is installed and running in the openshift-monitoring namespace. In the OpenShift Web Console, click Administration Cluster Settings Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.rbd.csi.ceph.com is available. In the OpenShift Web Console, click Storage StorageClasses to view available storage classes. Procedure In the OpenShift Web Console, go to Workloads Config Maps . Set the Project dropdown to openshift-monitoring . Click Create Config Map . Define a new cluster-monitoring-config Config Map using the following example. Replace the content in angle brackets ( < , > ) with your own values, for example, retention: 24h or storage: 40Gi . Replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com . In the example given below, the name of the storageclass is ocs-storagecluster-ceph-rbd . Example cluster-monitoring-config Config Map Click Create to save and create the Config Map. Verification steps Verify that the Persistent Volume Claims are bound to the pods. Go to Storage Persistent Volume Claims . Set the Project dropdown to openshift-monitoring . Verify that 5 Persistent Volume Claims are visible with a state of Bound , attached to three alertmanager-main-* pods, and two prometheus-k8s-* pods. Figure 7.1. Monitoring storage created and bound Verify that the new alertmanager-main-* pods appear with a state of Running . Go to Workloads Pods . Click the new alertmanager-main-* pods to view the pod details. Scroll down to Volumes and verify that the volume has a Type , ocs-alertmanager-claim , that matches one of your new Persistent Volume Claims, for example, ocs-alertmanager-claim-alertmanager-main-0 . Figure 7.2. Persistent Volume Claims attached to alertmanager-main-* pod Verify that the new prometheus-k8s-* pods appear with a state of Running . Click the new prometheus-k8s-* pods to view the pod details. Scroll down to Volumes and verify that the volume has a Type , ocs-prometheus-claim , that matches one of your new Persistent Volume Claims, for example, ocs-prometheus-claim-prometheus-k8s-0 . Figure 7.3.
Persistent Volume Claims attached to prometheus-k8s-* pod 7.3. Cluster logging for OpenShift Data Foundation You can deploy cluster logging to aggregate logs for a range of OpenShift Container Platform services. For information about how to deploy cluster logging, see Deploying cluster logging . Upon initial OpenShift Container Platform deployment, OpenShift Data Foundation is not configured by default and the OpenShift Container Platform cluster solely relies on the default storage available from the nodes. You can edit the default configuration of OpenShift logging (Elasticsearch) so that it is backed by OpenShift Data Foundation. Important Always ensure that you have plenty of storage capacity for these services. If you run out of storage space for these critical services, the logging application becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Cluster logging curator in the OpenShift Container Platform documentation for details. If you run out of storage space for these services, contact Red Hat Customer Support. 7.3.1. Configuring persistent storage You can configure a persistent storage class and size for the Elasticsearch cluster using the storage class name and size parameters. The Cluster Logging Operator creates a Persistent Volume Claim for each data node in the Elasticsearch cluster based on these parameters. For example: This example specifies that each data node in the cluster is bound to a Persistent Volume Claim that requests 200GiB of ocs-storagecluster-ceph-rbd storage. Each primary shard is backed by a single replica. With the single redundancy policy, a copy of each shard is replicated across the nodes, so the copy is always available and can be recovered if at least two nodes exist. For information about Elasticsearch replication policies, see Elasticsearch replication policy in About deploying and configuring cluster logging . Note Omission of the storage block results in a deployment backed by default storage. For example: For more information, see Configuring cluster logging . 7.3.2. Configuring cluster logging to use OpenShift Data Foundation Follow the instructions in this section to configure OpenShift Data Foundation as storage for OpenShift cluster logging. Note You can obtain all the logs when you configure logging for the first time in OpenShift Data Foundation. However, after you uninstall and reinstall logging, the old logs are removed and only the new logs are processed. Prerequisites You have administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. Cluster logging Operator is installed and running in the openshift-logging namespace. Procedure Click Administration Custom Resource Definitions from the left pane of the OpenShift Web Console. On the Custom Resource Definitions page, click ClusterLogging . On the Custom Resource Definition Overview page, select View Instances from the Actions menu or click the Instances tab. On the Cluster Logging page, click Create Cluster Logging . You might have to refresh the page to load the data. In the YAML, replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com .
In the example given below, the name of the storageclass is ocs-storagecluster-ceph-rbd : If you have tainted the OpenShift Data Foundation nodes, you must add a toleration to enable scheduling of the daemonset pods for logging. Click Save . Verification steps Verify that the Persistent Volume Claims are bound to the elasticsearch pods. Go to Storage Persistent Volume Claims . Set the Project dropdown to openshift-logging . Verify that Persistent Volume Claims are visible with a state of Bound , attached to elasticsearch- * pods. Figure 7.4. Cluster logging created and bound Verify that the new cluster logging is being used. Click Workloads Pods . Set the Project to openshift-logging . Verify that the new elasticsearch- * pods appear with a state of Running . Click the new elasticsearch- * pod to view pod details. Scroll down to Volumes and verify that the elasticsearch volume has a Type that matches your new Persistent Volume Claim, for example, elasticsearch-elasticsearch-cdm-9r624biv-3 . Click the Persistent Volume Claim name and verify the storage class name in the PersistentVolumeClaim Overview page. Note Make sure to use a shorter Curator time to avoid a PV full scenario on PVs attached to Elasticsearch pods. You can configure Curator to delete Elasticsearch data based on retention settings. It is recommended that you set the following index data retention of 5 days as the default. For more details, see Curation of Elasticsearch Data . Note To uninstall the cluster logging backed by a Persistent Volume Claim, use the procedure for removing the cluster logging operator from OpenShift Data Foundation in the uninstall chapter of the respective deployment guide.
[ "storage: pvc: claim: <new-pvc-name>", "storage: pvc: claim: ocs4registry", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: <time to retain monitoring files, e.g. 24h> volumeClaimTemplate: metadata: name: ocs-prometheus-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi> alertmanagerMain: volumeClaimTemplate: metadata: name: ocs-alertmanager-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi>", "spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"ocs-storagecluster-ceph-rbd\" size: \"200G\"", "spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: ocs-storagecluster-ceph-rbd size: 200G # Change as per your requirement redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: replicas: 1 curation: type: \"curator\" curator: schedule: \"30 3 * * *\" collection: logs: type: \"fluentd\" fluentd: {}", "spec: [...] collection: logs: fluentd: tolerations: - effect: NoSchedule key: node.ocs.openshift.io/storage value: 'true' type: fluentd", "config.yaml: | openshift-storage: delete: days: 5" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_and_managing_openshift_data_foundation_using_google_cloud/configure-storage-for-openshift-container-platform-services_rhodf
Chapter 2. Securing applications deployed on JBoss EAP with Single Sign-On
Chapter 2. Securing applications deployed on JBoss EAP with Single Sign-On You can secure applications with Single Sign-on (SSO) to delegate authentication to an SSO provider such as Red Hat build of Keycloak. You can use either OpenID Connect (OIDC) or Security Assertion Markup Language v2 (SAML v2) as the SSO protocols. To secure applications with SSO, follow these procedures: Create an example application to secure with Single sign-on : Use this procedure to create a simple web-application for securing with SSO. If you already have an application to secure with SSO, skip this step. Create a realm and users in Red Hat build of Keycloak Secure your application with SSO by using either OIDC or SAML as the protocol: Secure applications with OIDC Secure applications with SAML 2.1. Creating an example application to secure with Single sign-on Create a web-application to deploy on JBoss EAP and secure it with Single sign-on (SSO) with OpenID Connect (OIDC) or Security Assertion Markup Language (SAML). Note The following procedures are provided as an example only. If you already have an application that you want to secure, you can skip these and go directly to Creating a realm and users in Red Hat build of Keycloak . 2.1.1. Creating a Maven project for web-application development To create a web-application, create a Maven project with the required dependencies and the directory structure. Important The following procedure is provided only as an example and should not be used in a production environment. For information about creating applications for JBoss EAP, see Getting started with developing applications for JBoss EAP deployment . Prerequisites You have installed Maven. For more information, see Downloading Apache Maven . Procedure Set up a Maven project using the mvn command. The command creates the directory structure for the project and the pom.xml configuration file.
Syntax Example Navigate to the application root directory: Syntax Example Replace the content of the generated pom.xml file with the following text: <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.example.app</groupId> <artifactId>simple-webapp-example</artifactId> <version>1.0-SNAPSHOT</version> <packaging>war</packaging> <name>simple-webapp-example Maven Webapp</name> <!-- FIXME change it to the project's website --> <url>http://www.example.com</url> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <maven.compiler.source>11</maven.compiler.source> <maven.compiler.target>11</maven.compiler.target> <version.maven.war.plugin>3.4.0</version.maven.war.plugin> </properties> <dependencies> <dependency> <groupId>jakarta.servlet</groupId> <artifactId>jakarta.servlet-api</artifactId> <version>6.0.0</version> <scope>provided</scope> </dependency> </dependencies> <build> <finalName>USD{project.artifactId}</finalName> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-war-plugin</artifactId> <version>USD{version.maven.war.plugin}</version> </plugin> <plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-maven-plugin</artifactId> <version>4.2.2.Final</version> </plugin> </plugins> </build> </project> Verification In the application root directory, enter the following command: You get an output similar to the following: steps Creating a web application 2.1.2. Creating a web application Create a web application containing a servlet that returns the user name obtained from the logged-in user's principal. If there is no logged-in user, the servlet returns the text "NO AUTHENTICATED USER". In this procedure, <application_home> refers to the directory that contains the pom.xml configuration file for the application. Prerequisites You have created a Maven project. For more information, see Creating a Maven project for web-application development . JBoss EAP is running. Procedure Create a directory to store the Java files. Syntax Example Navigate to the new directory. Syntax Example Create a file SecuredServlet.java with the following content: package com.example.app; import java.io.IOException; import java.io.PrintWriter; import java.security.Principal; import jakarta.servlet.ServletException; import jakarta.servlet.annotation.WebServlet; import jakarta.servlet.http.HttpServlet; import jakarta.servlet.http.HttpServletRequest; import jakarta.servlet.http.HttpServletResponse; /** * A simple secured HTTP servlet. It returns the user name of obtained * from the logged-in user's Principal. If there is no logged-in user, * it returns the text "NO AUTHENTICATED USER". */ @WebServlet("/secured") public class SecuredServlet extends HttpServlet { @Override protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { try (PrintWriter writer = resp.getWriter()) { writer.println("<html>"); writer.println(" <head><title>Secured Servlet</title></head>"); writer.println(" <body>"); writer.println(" <h1>Secured Servlet</h1>"); writer.println(" <p>"); writer.print(" Current Principal '"); Principal user = req.getUserPrincipal(); writer.print(user != null ? 
user.getName() : "NO AUTHENTICATED USER"); writer.print("'"); writer.println(" </p>"); writer.println(" </body>"); writer.println("</html>"); } } } In the application root directory, compile your application with the following command: Deploy the application. Verification In a browser, navigate to http://localhost:8080/simple-webapp-example/secured . You get the following message: Because no authentication mechanism is added, you can access the application. steps Creating a realm and users in Red Hat build of Keycloak 2.2. Creating a realm and users in Red Hat build of Keycloak A realm in Red Hat build of Keycloak is equivalent to a tenant. Each realm allows an administrator to create isolated groups of applications and users. The following procedure outlines the minimum steps required to get started with securing applications deployed to JBoss EAP with Red Hat build of Keycloak for testing purposes. For detailed configurations, see the Red Hat build of Keycloak Server Administration Guide . Note The following procedure is provided as an example only. If you already have configured a realm and users in Red Hat build of Keycloak, you can skip this procedure and go directly to securing applications. For more information, see: Securing applications with OIDC Securing applications with SAML Prerequisites You have administrator access to Red Hat build of Keycloak. Procedure Start the Red Hat build of Keycloak server at a port other than 8080 because JBoss EAP default port is 8080. Note The start-dev command is not meant for production environments. For more information, see Trying Red Hat build of Keycloak in development mode in the Red Hat build of Keycloak Server Guide. Syntax Example Log in to the Admin Console at http://localhost:<port>/ . For example, http://localhost:8180/ . Create a realm. Hover over Master , and click Create Realm . Enter a name for the realm. For example, example_realm . Ensure that Enabled is set to ON . Click Create . For more information, see Creating a realm in the Red Hat build of Keycloak Server Administration Guide. Create a user. Click Users , then click Add user , Enter a user name. For example, user1 . Click Create . For more information, see Creating users in the Red Hat build of Keycloak Server Administration Guide. Set credentials for the user. Click Credentials . Set a password for the user. For example, passwordUser1 . Toggle Temporary to OFF and click Set Password . In the confirmation prompt, click Save . For more information, see Defining user credentials in the Red Hat build of Keycloak Server Administration Guide. Create a role. This is the role name you configure in JBoss EAP for authorization. Click Realm Roles , then Create role . Enter a role name, such as Admin . Click Save . Assign the role to the user. Click Users . Click the user to which you want to assign the role. Click Role Mapping . Click Assign role . Select the role to assign. For example, Admin . Click Assign . For more information, see Creating a realm role in the Red Hat build of Keycloak Server Administration Guide. steps To use this realm to secure applications deployed to JBoss EAP, follow these procedures: Securing applications with OIDC Securing applications with SAML Additional resources Red Hat build of Keycloak Getting Started Guide 2.3. Securing applications with OIDC Use the JBoss EAP native OpenID Connect (OIDC) client to secure your applications using an external OpenID provider. 
OIDC is an identity layer that enables clients, such as JBoss EAP, to verify a user's identity based on authentication performed by an OpenID provider. For example, you can secure your JBoss EAP applications using Red Hat build of Keycloak as the OpenID provider. To secure applications with OIDC, follow these procedures: Creating an OIDC client in JBoss EAP Securing a web application using OpenID Connect 2.3.1. Application security with OpenID Connect in JBoss EAP When you secure your applications using an OpenID provider, you do not need to configure any security domain resources locally. The elytron-oidc-client subsystem provides a native OpenID Connect (OIDC) client in JBoss EAP to connect with OpenID providers (OP). JBoss EAP automatically creates a virtual security domain for your application, based on your OpenID provider configurations. The elytron-oidc-client subsystem acts as the Relying Party (RP). Note The JBoss EAP native OIDC client does not support RP-Initiated logout. Important It is recommended to use the OIDC client with Red Hat build of Keycloak. You can use other OpenID providers if they can be configured to use access tokens that are JSON Web Tokens (JWTs) and can be configured to use the RS256, RS384, RS512, ES256, ES384, or ES512 signature algorithm. To enable the use of OIDC, you can configure either the elytron-oidc-client subsystem or an application itself. JBoss EAP activates the OIDC authentication as follows: When you deploy an application to JBoss EAP, the elytron-oidc-client subsystem scans the deployment to detect if the OIDC authentication mechanism is required. If the subsystem detects OIDC configuration for the deployment in either the elytron-oidc-client subsystem or the application deployment descriptor, JBoss EAP enables the OIDC authentication mechanism for the application. If the subsystem detects OIDC configuration in both places, the configuration in the elytron-oidc-client subsystem secure-deployment attribute takes precedence over the configuration in the application deployment descriptor. Deployment configuration To secure an application with OIDC by using a deployment descriptor, update the application's deployment configuration as follows: Set the auth-method property to OIDC in the application deployment descriptor web.xml file. Example deployment descriptor update <login-config> <auth-method>OIDC</auth-method> </login-config> Create a file called oidc.json in the WEB-INF directory with the OIDC configuration information. Example oidc.json contents { "client-id" : "customer-portal", 1 "provider-url" : "http://localhost:8180/realms/demo", 2 "ssl-required" : "external", 3 "credentials" : { "secret" : "234234-234234-234234" 4 } } 1 The name to identify the OIDC client with the OpenID provider. 2 The OpenID provider URL. 3 Require HTTPS for external requests. 4 The client secret that was registered with the OpenID provider. Subsystem configuration You can secure applications with OIDC by configuring the elytron-oidc-client subsystem in the following ways: Create a single configuration for multiple deployments if you use the same OpenID provider for each application. Create a different configuration for each deployment if you use different OpenID providers for different applications. 
Example XML configuration for a single deployment: <subsystem xmlns="urn:wildfly:elytron-oidc-client:1.0"> <secure-deployment name="DEPLOYMENT_RUNTIME_NAME.war"> 1 <client-id>customer-portal</client-id> 2 <provider-url>http://localhost:8180/realms/demo</provider-url> 3 <ssl-required>external</ssl-required> 4 <credential name="secret" secret="0aa31d98-e0aa-404c-b6e0-e771dba1e798" /> 5 </secure-deployment> </subsystem> 1 The deployment runtime name. 2 The name to identify the OIDC client with the OpenID provider. 3 The OpenID provider URL. 4 Require HTTPS for external requests. 5 The client secret that was registered with the OpenID provider. To secure multiple applications using the same OpenID provider, configure the provider separately, as shown in the example: <subsystem xmlns="urn:wildfly:elytron-oidc-client:1.0"> <provider name=" ${OpenID_provider_name} "> <provider-url>http://localhost:8080/realms/demo</provider-url> <ssl-required>external</ssl-required> </provider> <secure-deployment name="customer-portal.war"> 1 <provider> ${OpenID_provider_name} </provider> <client-id>customer-portal</client-id> <credential name="secret" secret="0aa31d98-e0aa-404c-b6e0-e771dba1e798" /> </secure-deployment> <secure-deployment name="product-portal.war"> 2 <provider> ${OpenID_provider_name} </provider> <client-id>product-portal</client-id> <credential name="secret" secret="0aa31d98-e0aa-404c-b6e0-e771dba1e798" /> </secure-deployment> </subsystem> 1 A deployment: customer-portal.war 2 Another deployment: product-portal.war Additional resources OpenID Connect specification elytron-oidc-client subsystem attributes OpenID Connect Libraries 2.3.2. Creating an OIDC client in Red Hat build of Keycloak Create an OpenID Connect (OIDC) client in Red Hat build of Keycloak to use with JBoss EAP to secure applications. The following procedure outlines the minimum steps required to get started with securing applications deployed to JBoss EAP with Red Hat build of Keycloak for testing purposes. For detailed configurations, see Managing OpenID Connect clients in the Red Hat build of Keycloak Server Administration Guide. Prerequisites You have created a realm and defined users in Red Hat build of Keycloak. For more information, see Creating a realm and users in JBoss EAP Procedure Navigate to the Red Hat build of Keycloak Admin Console. Create a client. Click Clients , then click Create client . Ensure that Client type is set to OpenID Connect . Enter a client ID. For example, jbeap-oidc . Click Next . In the Capability Config tab, ensure that Authentication Flow is set to Standard flow and Direct access grants . Click Next . In the Login settings tab, enter the value for Valid redirect URIs . Enter the URL where the page should redirect after successful authentication, for example, http://localhost:8080/simple-webapp-example/secured/* . Click Save . View the adapter configuration. Click Action , then Download adapter config . Select Keycloak OIDC JSON as the Format Option to see the connection parameters.
{ "realm": "example_realm", "auth-server-url": "http://localhost:8180/", "ssl-required": "external", "resource": "jbeap-oidc", "public-client": true, "confidential-port": 0 } When configuring your JBoss EAP application to use Red Hat build of Keycloak as the identity provider, you use the parameters as follows: "provider-url" : "http://localhost:8180/realms/example_realm", "ssl-required": "external", "client-id": "jbeap-oidc", "public-client": true, "confidential-port": 0 steps Securing a web application using OpenID Connect Additional resources Securing Applications and Services Guide 2.3.3. Securing a web application using OpenID Connect You can secure an application by either updating its deployment configuration or by configuring the elytron-oidc-client subsystem . If you use the application created in the procedure, Creating a web application , the value of the Principal comes from the ID token from the OpenID provider. By default, the Principal is the value of the "sub" claim from the token. You can also use the value of "email", "preferred_username", "name", "given_name", "family_name", or "nickname" claims as the Principal. Specify which claim value from the ID token is to be used as the Principal in one of the following places: The elytron-oidc-client subsystem attribute principal-attribute . The oidc.json file . There are two ways in which you can configure applications to use OIDC: By configuring the elytron-oidc-client subsystem. Use this method if you do not want to add configuration to the application deployment. By updating the deployment configuration Use this method if you do not want to add configuration to the server and prefer to keep the configuration within the application deployment. Prerequisites You have deployed applications on JBoss EAP. Procedure Configure the application's web.xml to protect the application resources. <?xml version="1.0" encoding="UTF-8"?> <web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" metadata-complete="false"> <security-constraint> <web-resource-collection> <web-resource-name>secured</web-resource-name> <url-pattern>/secured</url-pattern> </web-resource-collection> <auth-constraint> <role-name>Admin</role-name> 1 </auth-constraint> </security-constraint> <security-role> <role-name>*</role-name> </security-role> </web-app> 1 Only allow the users with the role Admin to access the application. To allow users with any role to access the application, use the wildcard ** as the value for role-name . To secure the application with OpenID Connect, either update the deployment configuration or configure the elytron-oidc-client subsystem. Note If you configure OpenID Connect in both the deployment configuration and the elytron-oidc-client subsystem, the configuration in the elytron-oidc-client subsystem secure-deployment attribute takes precedence over the configuration in the application deployment descriptor. Updating the deployment configuration. Add login configuration to the application's web.xml specifying authentication method as OIDC. <web-app> ... <login-config> <auth-method>OIDC</auth-method> 1 </login-config> ... </web-app> 1 Use OIDC to secure the application. 
Create a file oidc.json in the WEB-INF directory, like this: { "provider-url" : "http://localhost:8180/realms/example_realm", "ssl-required": "external", "client-id": "jbeap-oidc", "public-client": true, "confidential-port": 0 } Configuring the elytron-oidc-client subsystem: To secure your application, use the following management CLI command: In the application root directory, compile your application with the following command: Deploy the application. Verification In a browser, navigate to http://localhost:8080/simple-webapp-example/secured . You are redirected to Red Hat build of Keycloak login page. You can log in with your credentials for the user you defined in Red Hat build of Keycloak. Your application is now secured using OIDC. Additional resources elytron-oidc-client subsystem attributes 2.4. Securing applications with SAML You can use the Galleon layers provided by the Keycloak SAML adapter feature pack to secure web applications with Security Assertion Markup Language (SAML). For information about Keycloak SAML adapter feature pack, see Keycloak SAML adapter feature pack for securing applications using SAML . To secure applications with SAML, follow these procedures: Securing web applications using SAML 2.4.1. Application security with SAML in JBoss EAP Keycloak SAML adapter Galleon pack is a Galleon feature pack that includes three layers: keycloak-saml , keycloak-client-saml , and keycloak-client-saml-ejb . Use the layers in the feature pack to install the necessary modules and configurations in JBoss EAP to use Red Hat build of Keycloak as an identity provider for single sign-on using Security Assertion Markup Language (SAML). The following table describes the use cases for each layer. Layer Applicable for Description keycloak-saml OpenShift Use this layer for Source to Image (s2i) with automatic registration of the SAML client. You must use this layer along with the cloud-default-config layer. keycloak-client-saml Bare metal, OpenShift Use this layer for web-applications on bare metal, and for Source to Image (s2i) with keycloak-saml subsystem configuration provided in a CLI script or in the deployment configuration. keycloak-client-saml-ejb Bare metal Use this layer for applications where you want to propagate identities to Jakarta Enterprise Beans. To enable the use of SAML, you can configure either the keycloak-saml subsystem or an application itself. Deployment configuration To secure an application with SAML by using a deployment descriptor, update the application's deployment configuration as follows: Set the auth-method property to SAML in the application deployment descriptor web.xml file. Example deployment descriptor update <login-config> <auth-method>SAML</auth-method> </login-config> Create a file called keycloak-saml.xml in the WEB-INF directory with the SAML configuration information. You can obtain this file from the SAML provider. 
Example keycloak-saml.xml <keycloak-saml-adapter> <SP entityID="" sslPolicy="EXTERNAL" logoutPage="SPECIFY YOUR LOGOUT PAGE!"> <Keys> <Key signing="true"> <PrivateKeyPem>PRIVATE KEY NOT SET UP OR KNOWN</PrivateKeyPem> <CertificatePem>...</CertificatePem> </Key> </Keys> <IDP entityID="idp" signatureAlgorithm="RSA_SHA256" signatureCanonicalizationMethod="http://www.w3.org/2001/10/xml-exc-c14n#"> <SingleSignOnService signRequest="true" validateResponseSignature="true" validateAssertionSignature="false" requestBinding="POST" bindingUrl="http://localhost:8180/realms/example_saml_realm/protocol/saml"/> <SingleLogoutService signRequest="true" signResponse="true" validateRequestSignature="true" validateResponseSignature="true" requestBinding="POST" responseBinding="POST" postBindingUrl="http://localhost:8180/realms/example_saml_realm/protocol/saml" redirectBindingUrl="http://localhost:8180/realms/example_saml_realm/protocol/saml"/> </IDP> </SP> </keycloak-saml-adapter> The values of PrivateKeyPem , and CertificatePem are unique for each client. Subsystem configuration You can secure applications with SAML by configuring the keycloak-saml subsystem. You can obtain the client configuration file containing the subsystem configuration commands from Red Hat build of Keycloak. For more information, see Generating client adapter config . 2.4.2. Creating a SAML client in Red Hat build of Keycloak Create a Security Assertion Markup Language (SAML) client in Red Hat build of Keycloak to use with JBoss EAP to secure applications. The following procedure outlines the minimum steps required to get started with securing applications deployed to JBoss EAP with Red Hat build of Keycloak for testing purposes. For detailed configurations, see Creating a SAML client in the Red Hat build of Keycloak Server Administration Guide. Prerequisites You have created a realm and defined users in Red Hat build of Keycloak. For more information, see Creating a realm and users in JBoss EAP Procedure Navigate to the Red Hat build of Keycloak Admin Console. Create a client. Click Clients , then click Create client . Select SAML as the Client type . Enter the URL for the application you want to secure as the Client ID . For example, http://localhost:8080/simple-webapp-example/secured/ . Important The client ID must exactly match the URL of your application. If the client ID does not match, you get an error similar to the following: Enter a client name. For example, jbeap-saml . Click . Enter the following information: Root URL : The URL for your application, for example, http://localhost:8080/simple-webapp-example/ . Home URL : The URL for your application, for example, http://localhost:8080/simple-webapp-example/ . Important If you do not set the Home URL, SP entityID in the client configuration remains blank and causes errors. If using the management CLI commands, you get the following error: You can resolve the error by defining the value for SP entityID in the respective configuration files. Valid Redirect URIs : The URIs that are allowed after a user logs in, for example, http://localhost:8080/simple-webapp-example/secured/* . Master SAML Processing URL : The URL for your application followed by saml . For example, http://localhost:8080/simple-webapp-example/saml . Important If you do not append saml to the URL, you get a redirection error. For more information, see Creating a SAML client . You can now use the configured client to secure web applications deployed on JBoss EAP. 
For more information, see Securing web applications using SAML . steps Securing web applications using SAML Additional resources Red Hat build of Keycloak Server Administration Guide 2.4.3. Securing web applications using SAML The Keycloak SAML adapter feature pack provides two layers for non-OpenShift deployments: keycloak-client-saml , and keycloak-client-saml-ejb . Use the keycloak-client-saml layer to secure servlet based-web applications, and the keycloak-client-saml-ejb to secure Jakarta Enterprise Beans applications. There are two ways in which you can configure applications to use SAML: By configuring the keycloak-saml subsystem. Use this method if you do not want to add configuration to the application deployment. By updating the deployment configuration Use this method if you do not want to add configuration to the server and prefer to keep the configuration within the application deployment. Prerequisites A SAML client has been created in Red Hat build of Keycloak. For more information, see Creating a SAML client in Red Hat build of Keycloak . JBoss EAP has been installed by using the jboss-eap-installation-manager . For more information, see Installing JBoss EAP 8.0 using the jboss-eap-installation-manager in the Red Hat JBoss Enterprise Application Platform Installation Methods guide. Procedure Add the required Keycloak SAML adapter layer to the server by using jboss-eap-installation-manager . Following are the details of the available layers: Feature pack: org.keycloak:keycloak-saml-adapter-galleon-pack . Layers: keycloak-client-saml : Use this layer to secure servlets. keycloak-client-saml-ejb : Use this layer to propagate identities from servlets to Jakarta Enterprise Beans. For information about adding feature packs and layers in JBoss EAP, see Adding Feature Packs to existing JBoss EAP Servers using the jboss-eap-installation-manager in the Red Hat JBoss Enterprise Application Platform Installation Methods guide. Configure the application's web.xml to protect the application resources. <?xml version="1.0" encoding="UTF-8"?> <web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" metadata-complete="false"> <security-constraint> <web-resource-collection> <web-resource-name>secured</web-resource-name> <url-pattern>/secured</url-pattern> </web-resource-collection> <auth-constraint> <role-name>Admin</role-name> 1 </auth-constraint> </security-constraint> <security-role> <role-name>*</role-name> </security-role> </web-app> 1 Only allow the users with the role Admin to access the application. To allow users with any role to access the application, use the wildcard ** as the value for role-name . Secure your applications with SAML either by using the management CLI, or by updating the application deployment. By updating the application deployment. Add login configuration to the application's web.xml specifying authentication method as SAML. <web-app> ... <login-config> <auth-method>SAML</auth-method> 1 </login-config> ... </web-app> 1 Use SAML to secure the application. Download the configuration keycloak-saml.xml file from Red Hat build of Keycloak and save it in the WEB-INF/ directory of your application. For more information, see Generating client adapter config . 
Example keycloak-saml.xml <keycloak-saml-adapter> <SP entityID="" sslPolicy="EXTERNAL" logoutPage="SPECIFY YOUR LOGOUT PAGE!"> <Keys> <Key signing="true"> <PrivateKeyPem>PRIVATE KEY NOT SET UP OR KNOWN</PrivateKeyPem> <CertificatePem>...</CertificatePem> </Key> </Keys> <IDP entityID="idp" signatureAlgorithm="RSA_SHA256" signatureCanonicalizationMethod="http://www.w3.org/2001/10/xml-exc-c14n#"> <SingleSignOnService signRequest="true" validateResponseSignature="true" validateAssertionSignature="false" requestBinding="POST" bindingUrl="http://localhost:8180/realms/example_saml_realm/protocol/saml"/> <SingleLogoutService signRequest="true" signResponse="true" validateRequestSignature="true" validateResponseSignature="true" requestBinding="POST" responseBinding="POST" postBindingUrl="http://localhost:8180/realms/example_saml_realm/protocol/saml" redirectBindingUrl="http://localhost:8180/realms/example_saml_realm/protocol/saml"/> </IDP> </SP> </keycloak-saml-adapter> The values of PrivateKeyPem , and CertificatePem are unique for each client. By using the management CLI. Download the client configuration file keycloak-saml-subsystem.cli from Red Hat build of Keycloak. For more information, see Generating client adapter config . Example keycloak-saml-subsystem.cli The values of PrivateKeyPem , and CertificatePem are unique for each client. Update every occurrence of YOUR-WAR.war in the client configuration file with the name of your application WAR, for example simple-webapp-example.war . Note The generated CLI script has a missing ) at the end of the second statement: You must add the missing ) Configure JBoss EAP by running keycloak-saml-subsystem.cli script using the management CLI. Deploy the application. Verification In a browser, navigate to the application URL. For example, http://localhost:8080/simple-webapp-example/secured . You are redirected to the Red Hat build of Keycloak login page. You can log in with your credentials for the user you defined in Red Hat build of Keycloak. Your application is now secured using SAML. Additional resources Keycloak SAML adapter feature pack for securing applications using SAML
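As noted in the procedure above, the keycloak-saml-subsystem.cli script generated by Red Hat build of Keycloak is missing a closing parenthesis at the end of its second statement. For reference, the corrected statement looks like the following; the WAR name and SP entityID shown here are the examples used in this chapter and will differ in your deployment. /subsystem=keycloak-saml/secure-deployment=simple-webapp-example.war/SP="http://localhost:8080/simple-webapp-example/"/:add(sslPolicy=EXTERNAL,logoutPage="SPECIFY YOUR LOGOUT PAGE!")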
[ "mvn archetype:generate -DgroupId= USD{group-to-which-your-application-belongs} -DartifactId= USD{name-of-your-application} -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false", "mvn archetype:generate -DgroupId=com.example.app -DartifactId=simple-webapp-example -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false", "cd <name-of-your-application>", "cd simple-webapp-example", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\"> <modelVersion>4.0.0</modelVersion> <groupId>com.example.app</groupId> <artifactId>simple-webapp-example</artifactId> <version>1.0-SNAPSHOT</version> <packaging>war</packaging> <name>simple-webapp-example Maven Webapp</name> <!-- FIXME change it to the project's website --> <url>http://www.example.com</url> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <maven.compiler.source>11</maven.compiler.source> <maven.compiler.target>11</maven.compiler.target> <version.maven.war.plugin>3.4.0</version.maven.war.plugin> </properties> <dependencies> <dependency> <groupId>jakarta.servlet</groupId> <artifactId>jakarta.servlet-api</artifactId> <version>6.0.0</version> <scope>provided</scope> </dependency> </dependencies> <build> <finalName>USD{project.artifactId}</finalName> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-war-plugin</artifactId> <version>USD{version.maven.war.plugin}</version> </plugin> <plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-maven-plugin</artifactId> <version>4.2.2.Final</version> </plugin> </plugins> </build> </project>", "mvn install", "[INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 0.795 s [INFO] Finished at: 2022-04-28T17:39:48+05:30 [INFO] ------------------------------------------------------------------------", "mkdir -p src/main/java/<path_based_on_artifactID>", "mkdir -p src/main/java/com/example/app", "cd src/main/java/<path_based_on_artifactID>", "cd src/main/java/com/example/app", "package com.example.app; import java.io.IOException; import java.io.PrintWriter; import java.security.Principal; import jakarta.servlet.ServletException; import jakarta.servlet.annotation.WebServlet; import jakarta.servlet.http.HttpServlet; import jakarta.servlet.http.HttpServletRequest; import jakarta.servlet.http.HttpServletResponse; /** * A simple secured HTTP servlet. It returns the user name of obtained * from the logged-in user's Principal. If there is no logged-in user, * it returns the text \"NO AUTHENTICATED USER\". */ @WebServlet(\"/secured\") public class SecuredServlet extends HttpServlet { @Override protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { try (PrintWriter writer = resp.getWriter()) { writer.println(\"<html>\"); writer.println(\" <head><title>Secured Servlet</title></head>\"); writer.println(\" <body>\"); writer.println(\" <h1>Secured Servlet</h1>\"); writer.println(\" <p>\"); writer.print(\" Current Principal '\"); Principal user = req.getUserPrincipal(); writer.print(user != null ? 
user.getName() : \"NO AUTHENTICATED USER\"); writer.print(\"'\"); writer.println(\" </p>\"); writer.println(\" </body>\"); writer.println(\"</html>\"); } } }", "mvn package [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 1.015 s [INFO] Finished at: 2022-04-28T17:48:53+05:30 [INFO] ------------------------------------------------------------------------", "mvn wildfly:deploy", "Secured Servlet Current Principal 'NO AUTHENTICATED USER'", "<path_to_rhbk> /bin/kc.sh start-dev --http-port <offset-number>", "/home/servers/rhbk-22.0/bin/kc.sh start-dev --http-port 8180", "<login-config> <auth-method>OIDC</auth-method> </login-config>", "{ \"client-id\" : \"customer-portal\", 1 \"provider-url\" : \"http://localhost:8180/realms/demo\", 2 \"ssl-required\" : \"external\", 3 \"credentials\" : { \"secret\" : \"234234-234234-234234\" 4 } }", "<subsystem xmlns=\"urn:wildfly:elytron-oidc-client:1.0\"> <secure-deployment name=\"DEPLOYMENT_RUNTIME_NAME.war\"> 1 <client-id>customer-portal</client-id> 2 <provider-url>http://localhost:8180/realms/demo</provider-url> 3 <ssl-required>external</ssl-required> 4 <credential name=\"secret\" secret=\"0aa31d98-e0aa-404c-b6e0-e771dba1e798\" /> 5 </secure-deployment </subsystem>", "<subsystem xmlns=\"urn:wildfly:elytron-oidc-client:1.0\"> <provider name=\" USD{OpenID_provider_name} \"> <provider-url>http://localhost:8080/realms/demo</provider-url> <ssl-required>external</ssl-required> </provider> <secure-deployment name=\"customer-portal.war\"> 1 <provider> USD{OpenID_provider_name} </provider> <client-id>customer-portal</client-id> <credential name=\"secret\" secret=\"0aa31d98-e0aa-404c-b6e0-e771dba1e798\" /> </secure-deployment> <secure-deployment name=\"product-portal.war\"> 2 <provider> USD{OpenID_provider_name} </provider> <client-id>product-portal</client-id> <credential name=\"secret\" secret=\"0aa31d98-e0aa-404c-b6e0-e771dba1e798\" /> </secure-deployment> </subsystem>", "{ \"realm\": \"example_realm\", \"auth-server-url\": \"http://localhost:8180/\", \"ssl-required\": \"external\", \"resource\": \"jbeap-oidc\", \"public-client\": true, \"confidential-port\": 0 }", "\"provider-url\" : \"http://localhost:8180/realms/example_realm\", \"ssl-required\": \"external\", \"client-id\": \"jbeap-oidc\", \"public-client\": true, \"confidential-port\": 0", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <web-app version=\"2.5\" xmlns=\"http://java.sun.com/xml/ns/javaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd\" metadata-complete=\"false\"> <security-constraint> <web-resource-collection> <web-resource-name>secured</web-resource-name> <url-pattern>/secured</url-pattern> </web-resource-collection> <auth-constraint> <role-name>Admin</role-name> 1 </auth-constraint> </security-constraint> <security-role> <role-name>*</role-name> </security-role> </web-app>", "<web-app> <login-config> <auth-method>OIDC</auth-method> 1 </login-config> </web-app>", "{ \"provider-url\" : \"http://localhost:8180/realms/example_realm\", \"ssl-required\": \"external\", \"client-id\": \"jbeap-oidc\", \"public-client\": true, \"confidential-port\": 0 }", 
"/subsystem=elytron-oidc-client/secure-deployment=simple-oidc-example.war/:add(client-id=jbeap-oidc,provider-url=http://localhost:8180/realms/example_realm,public-client=true,ssl-required=external)", "mvn package", "mvn wildfly:deploy", "<login-config> <auth-method>SAML</auth-method> </login-config>", "<keycloak-saml-adapter> <SP entityID=\"\" sslPolicy=\"EXTERNAL\" logoutPage=\"SPECIFY YOUR LOGOUT PAGE!\"> <Keys> <Key signing=\"true\"> <PrivateKeyPem>PRIVATE KEY NOT SET UP OR KNOWN</PrivateKeyPem> <CertificatePem>...</CertificatePem> </Key> </Keys> <IDP entityID=\"idp\" signatureAlgorithm=\"RSA_SHA256\" signatureCanonicalizationMethod=\"http://www.w3.org/2001/10/xml-exc-c14n#\"> <SingleSignOnService signRequest=\"true\" validateResponseSignature=\"true\" validateAssertionSignature=\"false\" requestBinding=\"POST\" bindingUrl=\"http://localhost:8180/realms/example_saml_realm/protocol/saml\"/> <SingleLogoutService signRequest=\"true\" signResponse=\"true\" validateRequestSignature=\"true\" validateResponseSignature=\"true\" requestBinding=\"POST\" responseBinding=\"POST\" postBindingUrl=\"http://localhost:8180/realms/example_saml_realm/protocol/saml\" redirectBindingUrl=\"http://localhost:8180/realms/example_saml_realm/protocol/saml\"/> </IDP> </SP> </keycloak-saml-adapter>", "2023-05-17 19:54:31,586 WARN [org.keycloak.events] (executor-thread-0) type=LOGIN_ERROR, realmId=eba0f106-389f-4216-a676-05fcd0c0c72e, clientId=null, userId=null, ipAddress=127.0.0.1, error=client_not_found, reason=Cannot_match_source_hash", "Can't reset to root in the middle of the path @72", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <web-app version=\"2.5\" xmlns=\"http://java.sun.com/xml/ns/javaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd\" metadata-complete=\"false\"> <security-constraint> <web-resource-collection> <web-resource-name>secured</web-resource-name> <url-pattern>/secured</url-pattern> </web-resource-collection> <auth-constraint> <role-name>Admin</role-name> 1 </auth-constraint> </security-constraint> <security-role> <role-name>*</role-name> </security-role> </web-app>", "<web-app> <login-config> <auth-method>SAML</auth-method> 1 </login-config> </web-app>", "<keycloak-saml-adapter> <SP entityID=\"\" sslPolicy=\"EXTERNAL\" logoutPage=\"SPECIFY YOUR LOGOUT PAGE!\"> <Keys> <Key signing=\"true\"> <PrivateKeyPem>PRIVATE KEY NOT SET UP OR KNOWN</PrivateKeyPem> <CertificatePem>...</CertificatePem> </Key> </Keys> <IDP entityID=\"idp\" signatureAlgorithm=\"RSA_SHA256\" signatureCanonicalizationMethod=\"http://www.w3.org/2001/10/xml-exc-c14n#\"> <SingleSignOnService signRequest=\"true\" validateResponseSignature=\"true\" validateAssertionSignature=\"false\" requestBinding=\"POST\" bindingUrl=\"http://localhost:8180/realms/example_saml_realm/protocol/saml\"/> <SingleLogoutService signRequest=\"true\" signResponse=\"true\" validateRequestSignature=\"true\" validateResponseSignature=\"true\" requestBinding=\"POST\" responseBinding=\"POST\" postBindingUrl=\"http://localhost:8180/realms/example_saml_realm/protocol/saml\" redirectBindingUrl=\"http://localhost:8180/realms/example_saml_realm/protocol/saml\"/> </IDP> </SP> </keycloak-saml-adapter>", "/subsystem=keycloak-saml/secure-deployment=YOUR-WAR.war/:add /subsystem=keycloak-saml/secure-deployment=YOUR-WAR.war/SP=\"http://localhost:8080/simple-webapp-example/\"/:add(sslPolicy=EXTERNAL,logoutPage=\"SPECIFY YOUR LOGOUT PAGE!\" 
/subsystem=keycloak-saml/secure-deployment=YOUR-WAR.war/SP=\"http://localhost:8080/simple-webapp-example/\"/Key=KEY1:add(signing=true, PrivateKeyPem=\"...\", CertificatePem=\"...\") /subsystem=keycloak-saml/secure-deployment=YOUR-WAR.war/SP=\"http://localhost:8080/simple-webapp-example/\"/IDP=idp/:add( SingleSignOnService={ signRequest=true, validateResponseSignature=true, validateAssertionSignature=false, requestBinding=POST, bindingUrl=http://localhost:8180/realms/example-saml-realm/protocol/saml}, SingleLogoutService={ signRequest=true, signResponse=true, validateRequestSignature=true, validateResponseSignature=true, requestBinding=POST, responseBinding=POST, postBindingUrl=http://localhost:8180/realms/example-saml-realm/protocol/saml, redirectBindingUrl=http://localhost:8180/realms/example-saml-realm/protocol/saml} ) /subsystem=keycloak-saml/secure-deployment=YOUR-WAR.war/SP=\"http://localhost:8080/simple-webapp-example/\"/IDP=idp/:write-attribute(name=signatureAlgorithm,value=RSA_SHA256) /subsystem=keycloak-saml/secure-deployment=YOUR-WAR.war/SP=\"http://localhost:8080/simple-webapp-example/\"/IDP=idp/:write-attribute(name=signatureCanonicalizationMethod,value=http://www.w3.org/2001/10/xml-exc-c14n#)", "/subsystem=keycloak-saml/secure-deployment=YOUR-WAR.war/SP=\"\"/:add(sslPolicy=EXTERNAL,logoutPage=\"SPECIFY YOUR LOGOUT PAGE!\"", "<EAP_HOME> /bin/jboss-cli.sh -c --file=<path_to_the_file>/keycloak-saml-subsystem.cli", "mvn wildfly:deploy" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/using_single_sign-on_with_jboss_eap/securing-applications-deployed-on-server-with-single-sign-on_default
Chapter 21. Browse
Chapter 21. Browse Both producer and consumer are supported The Browse component provides a simple BrowsableEndpoint which can be useful for testing, visualisation tools or debugging. The exchanges sent to the endpoint are all available to be browsed. 21.1. Dependencies When using browse with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-browse-starter</artifactId> </dependency> 21.2. URI format Where someName can be any string to uniquely identify the endpoint. 21.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 21.3.1. Configuring Component Options At the component level, you set general and shared configurations that are, then, inherited by the endpoints. It is the highest configuration level. For example, a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. You can configure components using: the Component DSL . in a configuration file (application.properties, *.yaml files, etc). directly in the Java code. 21.3.2. Configuring Endpoint Options You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders . Property placeholders provide a few benefits: They help prevent using hardcoded urls, port numbers, sensitive information, and other settings. They allow externalizing the configuration from the code. They help the code to become more flexible and reusable. The following two sections list all the options, firstly for the component followed by the endpoint. 21.4. Component Options The Browse component supports 3 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 21.5. Endpoint Options The Browse endpoint is configured using URI syntax: with the following path and query parameters: 21.5.1. Path Parameters (1 parameters) Name Description Default Type name (common) Required A name which can be any string to uniquely identify the endpoint. String 21.5.2. Query Parameters (4 parameters) Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean 21.6. Sample In the route below, we insert a browse: component to be able to browse the Exchanges that are passing through: from("activemq:order.in").to("browse:orderReceived").to("bean:processOrder"); We can now inspect the received exchanges from within the Java code: private CamelContext context; public void inspectReceivedOrders() { BrowsableEndpoint browse = context.getEndpoint("browse:orderReceived", BrowsableEndpoint.class); List<Exchange> exchanges = browse.getExchanges(); // then we can inspect the list of received exchanges from Java for (Exchange exchange : exchanges) { String payload = exchange.getIn().getBody(); // do something with payload } } 21.7. Spring Boot Auto-Configuration The component supports 4 options, which are listed below. Name Description Default Type camel.component.browse.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.browse.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.browse.enabled Whether to enable auto configuration of the browse component. This is enabled by default. Boolean camel.component.browse.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
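For example, a minimal application.properties sketch that sets these auto-configuration options might look like the following; the property names come from the table above and the values are illustrative only. # Auto configuration of the browse component is enabled by default camel.component.browse.enabled=true # Route consumer exceptions to the Camel routing error handler instead of logging and ignoring them camel.component.browse.bridge-error-handler=true # Create and start producers eagerly rather than on the first message (the default) camel.component.browse.lazy-start-producer=false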
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-browse-starter</artifactId> </dependency>", "browse:someName[?options]", "browse:name", "from(\"activemq:order.in\").to(\"browse:orderReceived\").to(\"bean:processOrder\");", "private CamelContext context; public void inspectReceivedOrders() { BrowsableEndpoint browse = context.getEndpoint(\"browse:orderReceived\", BrowsableEndpoint.class); List<Exchange> exchanges = browse.getExchanges(); // then we can inspect the list of received exchanges from Java for (Exchange exchange : exchanges) { String payload = exchange.getIn().getBody(); // do something with payload } }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-browse-component-starter
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback. To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com . Click the following link to open a Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_the_bare_metal_provisioning_service/proc_providing-feedback-on-red-hat-documentation