title | content | commands | url
---|---|---|---|
Chapter 8. Uninstalling a cluster on IBM Power Virtual Server | Chapter 8. Uninstalling a cluster on IBM Power Virtual Server You can remove a cluster that you deployed to IBM Power(R) Virtual Server. 8.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. You have configured the ccoctl binary. You have installed the IBM Cloud(R) CLI and installed or updated the VPC infrastructure service plugin. For more information see "Prerequisites" in the IBM Cloud(R) CLI documentation . Procedure If the following conditions are met, this step is required: The installer created a resource group as part of the installation process. You or one of your applications created persistent volume claims (PVCs) after the cluster was deployed. In which case, the PVCs are not removed when uninstalling the cluster, which might prevent the resource group from being successfully removed. To prevent a failure: Log in to the IBM Cloud(R) using the CLI. To list the PVCs, run the following command: USD ibmcloud is volumes --resource-group-name <infrastructure_id> For more information about listing volumes, see the IBM Cloud(R) CLI documentation . To delete the PVCs, run the following command: USD ibmcloud is volume-delete --force <volume_id> For more information about deleting volumes, see the IBM Cloud(R) CLI documentation . Export the API key that was created as part of the installation process. USD export IBMCLOUD_API_KEY=<api_key> Note You must set the variable name exactly as specified. The installation program expects the variable name to be present to remove the service IDs that were created when the cluster was installed. From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. You might have to run the openshift-install destroy command up to three times to ensure a proper cleanup. Remove the manual CCO credentials that were created for the cluster: USD ccoctl ibmcloud delete-service-id \ --credentials-requests-dir <path_to_credential_requests_directory> \ --name <cluster_name> Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. | [
"ibmcloud is volumes --resource-group-name <infrastructure_id>",
"ibmcloud is volume-delete --force <volume_id>",
"export IBMCLOUD_API_KEY=<api_key>",
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2",
"ccoctl ibmcloud delete-service-id --credentials-requests-dir <path_to_credential_requests_directory> --name <cluster_name>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_ibm_power_virtual_server/uninstalling-cluster-ibm-power-vs |
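The uninstall procedure above can be run end to end from a shell. The following is a minimal sketch under the assumption that the placeholder paths and names are substituted for your environment; it only re-sequences the documented commands:

```
# The installer expects this exact variable name so that it can remove the
# service IDs created at install time
export IBMCLOUD_API_KEY=<api_key>

# Destroy the cluster; rerun up to three times if cleanup is incomplete
./openshift-install destroy cluster --dir <installation_directory> --log-level info

# Remove the manual CCO credentials (service IDs) created for the cluster;
# add --enable-tech-preview if the cluster uses TechPreviewNoUpgrade features
ccoctl ibmcloud delete-service-id \
    --credentials-requests-dir <path_to_credential_requests_directory> \
    --name <cluster_name>
```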
Chapter 7. Deploying the RHEL bootc images | Chapter 7. Deploying the RHEL bootc images You can deploy the rhel-bootc container image by using the following different mechanisms. Anaconda bootc-image-builder bootc install The following bootc image types are available: Disk images that you generated by using the bootc image-builder such as: QCOW2 (QEMU copy-on-write, virtual disk) Raw (Mac Format) AMI (Amazon Cloud) ISO: Unattended installation method, by using an USB Sticks or Install-on-boot. After you have created a layered image that you can deploy, there are several ways that the image can be installed to a host: You can use RHEL installer and Kickstart to install the layered image to a bare metal system, by using the following mechanisms: Deploy by using USB PXE You can also use bootc-image-builder to convert the container image to a bootc image and deploy it to a bare metal or to a cloud environment. The installation method happens only one time. After you deploy your image, any future updates will apply directly from the container registry as the updates are published. Figure 7.1. Deploying a bootc image by using a basic build installer bootc install , or deploying a container image by using Anaconda and Kickstart Figure 7.2. Using bootc-image-builder to create disk images from bootc images and deploying disk images in different environments, such as the edge, servers, and clouds by using Anaconda, bootc-image-builder or bootc install 7.1. Deploying a container image by using KVM with a QCOW2 disk image After creating a QEMU disk image from a RHEL bootc image by using the bootc-image-builder tool, you can use a virtualization software to boot it. Prerequisites You created a container image. See Creating QCOW2 images by using bootc-image-builder . You pushed the container image to an accessible repository. Procedure Run the container image that you create by using either libvirt . See Creating virtual machines by using the command line for more details. The following example uses libvirt : Verification Connect to the VM in which you are running the container image. See Connecting to virtual machines for more details. steps You can make updates to the image and push the changes to a registry. See Managing RHEL bootc images . Additional resources Configuring and managing virtualization 7.2. Deploying a container image to AWS with an AMI disk image After using the bootc-image-builder tool to create an AMI from a bootc image, and uploading it to a AWS s3 bucket, you can deploy a container image to AWS with the AMI disk image. Prerequisites You created an Amazon Machine Image (AMI) from a bootc image. See Creating AMI images by using bootc-image-builder and uploading it to AWS . cloud-init is available in the Containerfile that you previously created so that you can create a layered image for your use case. Procedure In a browser, access Service->EC2 and log in. On the AWS console dashboard menu, choose the correct region. The image must have the Available status, to indicate that it was correctly uploaded. On the AWS dashboard, select your image and click Launch . In the new window that opens, choose an instance type according to the resources you need to start your image. Click Review and Launch . Review your instance details. You can edit each section if you need to make any changes. Click Launch . Before you start the instance, select a public key to access it. You can either use the key pair you already have or you can create a new key pair. 
Click Launch Instance to start your instance. You can check the status of the instance, which displays as Initializing . After the instance status is Running , the Connect button becomes available. Click Connect . A window appears with instructions on how to connect by using SSH. Run the following command to set the permissions of your private key file so that only you can read it. See Connect to your Linux instance . Connect to your instance by using its Public DNS: Note Your instance continues to run unless you stop it. Verification After launching your image, you can: Try to connect to http:// <your_instance_ip_address> in a browser. Check if you are able to perform any action while connected to your instance by using SSH. steps After you deploy your image, you can make updates to the image and push the changes to a registry. See Managing RHEL bootc images . Additional resources Pushing images to AWS Cloud AMI Amazon Machine Images (AMI) 7.3. Deploying a container image by using Anaconda and Kickstart You can deploy the RHEL ISO image that you downloaded from Red Hat by using Anaconda and Kickstart to install your container image. Warning The use of rpm-ostree to make changes, or install content, is not supported. Prerequisites You have downloaded the 9.4 Boot ISO for your architecture from Red Hat. See Downloading RH boot images . Procedure Create an ostreecontainer Kickstart file. For example: Boot a system by using the 9.4 Boot ISO installation media. Append the Kickstart file with the following to the kernel argument: Press CTRL+X to boot the system. steps After you deploy your container image, you can make updates to the image and push the changes to a registry. See Managing RHEL bootc images . Additional resources ostreecontainer documentation bootc upgrade fails when using local rpm-ostree modifications (Red Hat Knowledgebase) 7.4. Deploying a custom ISO container image After you build an ISO image by using bootc-image-builder , the resulting image is a system similar to the RHEL ISOs available for download, except that your container image content is embedded in the ISO disk image. You do not need to have access to the network during installation. You can install the resulting ISO disk image to a bare metal system. See Creating ISO images by using bootc-image-builder . Prerequisites You have created an ISO image with your bootc image embedded. Procedure Copy your ISO disk image to a USB flash drive. Perform a bare-metal installation by using the content in the USB stick into a disconnected environment. steps After you deploy your container image, you can make updates to the image and push the changes to a registry. See Managing RHEL bootc images . 7.5. Deploying an ISO bootc image over PXE boot You can use a network installation to deploy the RHEL ISO image over PXE boot to run your ISO bootc image. Prerequisites You have downloaded the 9.4 Boot ISO for your architecture from Red Hat. See Downloading RH boot images . You have configured the server for the PXE boot. Choose one of the following options: For HTTP clients, see Configuring the DHCPv4 server for HTTP and PXE boot . For UEFI-based clients, see Configuring a TFTP server for UEFI-based clients . For BIOS-based clients, see Configuring a TFTP server for BIOS-based clients . You have a client, also known as the system to which you are installing your ISO image. Procedure Export the RHEL installation ISO image to the HTTP server. The PXE boot server is now ready to serve PXE clients. 
Boot the client and start the installation. Select PXE Boot when prompted to specify a boot source. If the boot options are not displayed, press the Enter key on your keyboard or wait until the boot window opens. From the Red Hat Enterprise Linux boot window, select the boot option that you want, and press Enter. Start the network installation. steps You can make updates to the image and push the changes to a registry. See Managing RHEL bootc images . Additional resources Preparing to install from the network using PXE Booting the installation from a network by using PXE 7.6. Building, configuring, and launching disk images with bootc-image-builder You can inject configuration into a custom image by using a Containerfile. Procedure Create a disk image. The following example shows how to add a user to the disk image. name - User name. Mandatory password - Nonencrypted password. Not mandatory key - Public SSH key contents. Not mandatory groups - An array of groups to add the user into. Not mandatory Run bootc-image-builder and pass the following arguments: Launch a VM, for example, by using virt-install : Verification Access the system with SSH: steps After you deploy your container image, you can make updates to the image and push the changes to a registry. See Managing RHEL bootable images . 7.7. Deploying a container image by using bootc With bootc , you have a container that is the source of truth. It contains a basic build installer and it is available as bootc install to-disk or bootc install to-filesystem . By using the bootc install methods you do not need to perform any additional steps to deploy the container image, because the container images include a basic installer. With image mode for RHEL, you can install unconfigured images, for example, images that do not have a default password or SSH key. Perform a bare-metal installation to a device by using a RHEL ISO image. Prerequisites You have downloaded the 9.4 Boot ISO for your architecture from Red Hat. See Downloading RH boot images . You have created a configuration file. Procedure inject a configuration into the running ISO image, for example: steps After you deploy your container image, you can make updates to the image and push the changes to a registry. See Managing RHEL bootable images . 7.8. Advanced installation with to-filesystem The bootc install contains two subcommands: bootc install to-disk and bootc install to-filesystem . The bootc-install-to-filesystem performs installation to the target filesystem. The bootc install to-disk subcommand consists of a set of opinionated lower level tools that you can also call independently. The command consist of the following tools: mkfs.USDfs /dev/disk mount /dev/disk /mnt bootc install to-filesystem --karg=root=UUID= <uuid of /mnt> --imgref USDself /mnt 7.8.1. Using bootc install to-existing-root The bootc install to-existing-root is a variant of install to-filesystem . You can use it to convert an existing system into the target container image. Warning This conversion eliminates the /boot and /boot/efi partitions and can delete the existing Linux installation. The conversion process reuses the filesystem, and even though the user data is preserved, the system no longer boots in package mode. Prerequisites You must have root permissions to complete the procedure. You must match the host environment and the target container version, for example, if your host is a RHEL 9 host, then you must have a RHEL 9 container. 
For example, installing a RHEL container on a Fedora host that uses btrfs does not work, because the RHEL kernel does not support that filesystem. Procedure Run the following command to convert an existing system into the target container image. Pass the target root filesystem by using the -v /:/target option. This command deletes the data in /boot , but everything else in the existing operating system is not automatically deleted. This can be useful because the new image can automatically import data from the host system. Consequently, container images, databases, user home directory data, and configuration files in /etc are all available after the subsequent reboot in /sysroot . You can also use the --root-ssh-authorized-keys flag to inherit the root user SSH keys by adding --root-ssh-authorized-keys /target/root/.ssh/authorized_keys . For example: | [
"sudo virt-install --name bootc --memory 4096 --vcpus 2 --disk qcow2/disk.qcow2 --import --os-variant rhel9-unknown",
"chmod 400 <your-instance-name.pem>",
"ssh -i <your-instance-name.pem> ec2-user@ <your-instance-IP-address>",
"Basic setup text network --bootproto=dhcp --device=link --activate Basic partitioning clearpart --all --initlabel --disklabel=gpt reqpart --add-boot part / --grow --fstype xfs Reference the container image to install - The kickstart has no %packages section. A container image is being installed. ostreecontainer --url registry.redhat.io/rhel9/rhel-bootc:9.4 firewall --disabled services --enabled=sshd Only inject a SSH key for root rootpw --iscrypted locked sshkey --username root \"<your key here>\" reboot",
"inst.ks=http:// <path_to_your_kickstart>",
"[[blueprint.customizations.user]] name = \"user\" password = \"pass\" key = \"ssh-rsa AAA ... [email protected]\" groups = [\"wheel\"]",
"sudo podman run --rm -it --privileged --pull=newer --security-opt label=type:unconfined_t -v USD(pwd)/config.toml:/config.toml -v USD(pwd)/output:/output registry.redhat.io/rhel9/bootc-image-builder:latest --type qcow2 --config config.toml quay.io/ <namespace> / <image> : <tag>",
"sudo virt-install --name bootc --memory 4096 --vcpus 2 --disk qcow2/disk.qcow2 --import --os-variant rhel9",
"ssh -i / <path_to_private_ssh-key> <user1> @ <ip-address>",
"podman run --rm --privileged --pid=host -v /var/lib/containers:/var/lib/containers --security-opt label=type:unconfined_t <image> bootc install to-disk <path-to-disk>",
"podman run --rm --privileged -v /dev:/dev -v /var/lib/containers:/var/lib/containers -v /:/target --pid=host --security-opt label=type:unconfined_t <image> bootc install to-existing-root",
"podman run --rm --privileged -v /dev:/dev -v /var/lib/containers:/var/lib/containers -v /:/target --pid=host --security-opt label=type:unconfined_t <image> bootc install to-existing-root --root-ssh-authorized-keys /target/root/.ssh/authorized_keys"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_image_mode_for_rhel_to_build_deploy_and_manage_operating_systems/deploying-the-rhel-bootc-images_using-image-mode-for-rhel-to-build-deploy-and-manage-operating-systems |
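Section 7.8 above lists the lower-level tools that bootc install to-disk wraps. A minimal sketch of running them by hand follows; the xfs filesystem type and the <image> reference are assumptions standing in for the $fs and $self values in the original text:

```
# Create a filesystem on the target disk and mount it
mkfs.xfs /dev/<disk>
mount /dev/<disk> /mnt

# Install the container image to the mounted filesystem, pointing the
# kernel root= argument at the new filesystem's UUID
bootc install to-filesystem --karg=root=UUID=<uuid_of_mnt> --imgref <image> /mnt
```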
Chapter 2. Configuring an Azure Stack Hub account | Chapter 2. Configuring an Azure Stack Hub account Before you can install OpenShift Container Platform, you must configure a Microsoft Azure account. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 2.1. Azure Stack Hub account limits The OpenShift Container Platform cluster uses a number of Microsoft Azure Stack Hub components, and the default Quota types in Azure Stack Hub affect your ability to install OpenShift Container Platform clusters. The following table summarizes the Azure Stack Hub components whose limits can impact your ability to install and run OpenShift Container Platform clusters. Component Number of components required by default Description vCPU 56 A default cluster requires 56 vCPUs, so you must increase the account limit. By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane machines Three compute machines Because the bootstrap, control plane, and worker machines use Standard_DS4_v2 virtual machines, which use 8 vCPUs, a default cluster requires 56 vCPUs. The bootstrap node VM is used only during installation. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. VNet 1 Each default cluster requires one Virtual Network (VNet), which contains two subnets. Network interfaces 7 Each default cluster requires seven network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces. Network security groups 2 Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane and for the compute node subnets: controlplane Allows the control plane machines to be reached on port 6443 from anywhere node Allows worker nodes to be reached from the internet on ports 80 and 443 Network load balancers 3 Each cluster creates the following load balancers : default Public IP address that load balances requests to ports 80 and 443 across worker machines internal Private IP address that load balances requests to ports 6443 and 22623 across control plane machines external Public IP address that load balances requests to port 6443 across control plane machines If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers. Public IP addresses 2 The public load balancer uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation. Private IP addresses 7 The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address. Additional resources Optimizing storage . 2.2. Configuring a DNS zone in Azure Stack Hub To successfully install OpenShift Container Platform on Azure Stack Hub, you must create DNS records in an Azure Stack Hub DNS zone. The DNS zone must be authoritative for the domain. 
To delegate a registrar's DNS zone to Azure Stack Hub, see Microsoft's documentation for Azure Stack Hub datacenter DNS integration . 2.3. Required Azure Stack Hub roles Your Microsoft Azure Stack Hub account must have the following roles for the subscription that you use: Owner To set roles on the Azure portal, see the Manage access to resources in Azure Stack Hub with role-based access control in the Microsoft documentation. 2.4. Creating a service principal Because OpenShift Container Platform and its installation program create Microsoft Azure resources by using the Azure Resource Manager, you must create a service principal to represent it. Prerequisites Install or update the Azure CLI . Your Azure account has the required roles for the subscription that you use. Procedure Register your environment: USD az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> 1 1 Specify the Azure Resource Manager endpoint, `https://management.<region>.<fqdn>/`. See the Microsoft documentation for details. Set the active environment: USD az cloud set -n AzureStackCloud Update your environment configuration to use the specific API version for Azure Stack Hub: USD az cloud update --profile 2019-03-01-hybrid Log in to the Azure CLI: USD az login If you are in a multitenant environment, you must also supply the tenant ID. If your Azure account uses subscriptions, ensure that you are using the right subscription: View the list of available accounts and record the tenantId value for the subscription you want to use for your cluster: USD az account list --refresh Example output [ { "cloudName": AzureStackCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", "user": { "name": "[email protected]", "type": "user" } } ] View your active account details and confirm that the tenantId value matches the subscription you want to use: USD az account show Example output { "environmentName": AzureStackCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", 1 "user": { "name": "[email protected]", "type": "user" } } 1 Ensure that the value of the tenantId parameter is the correct subscription ID. If you are not using the right subscription, change the active subscription: USD az account set -s <subscription_id> 1 1 Specify the subscription ID. Verify the subscription ID update: USD az account show Example output { "environmentName": AzureStackCloud", "id": "33212d16-bdf6-45cb-b038-f6565b61edda", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee", "user": { "name": "[email protected]", "type": "user" } } Record the tenantId and id parameter values from the output. You need these values during the OpenShift Container Platform installation. Create the service principal for your account: USD az ad sp create-for-rbac --role Contributor --name <service_principal> \ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3 1 Specify the service principal name. 2 Specify the subscription ID. 3 Specify the number of years. By default, a service principal expires in one year. By using the --years option you can extend the validity of your service principal. 
Example output Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { "appId": "ac461d78-bf4b-4387-ad16-7e32e328aec6", "displayName": "<service_principal>", "password": "00000000-0000-0000-0000-000000000000", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee" } Record the values of the appId and password parameters from the output. You need these values during OpenShift Container Platform installation. Additional resources For more information about CCO modes, see About the Cloud Credential Operator . 2.5. Next steps Install an OpenShift Container Platform cluster: Installing a cluster quickly on Azure Stack Hub . Install an OpenShift Container Platform cluster on Azure Stack Hub with user-provisioned infrastructure by following Installing a cluster on Azure Stack Hub using ARM templates . | [
"az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> 1",
"az cloud set -n AzureStackCloud",
"az cloud update --profile 2019-03-01-hybrid",
"az login",
"az account list --refresh",
"[ { \"cloudName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]",
"az account show",
"{ \"environmentName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az account set -s <subscription_id> 1",
"az account show",
"{ \"environmentName\": AzureStackCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az ad sp create-for-rbac --role Contributor --name <service_principal> \\ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3",
"Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\" }"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_azure_stack_hub/installing-azure-stack-hub-account |
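Taken together, the service principal procedure in this chapter reduces to the following sequence; this sketch only re-orders the documented commands, with placeholders left for your values:

```
# Register and select the Azure Stack Hub environment
az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint>
az cloud set -n AzureStackCloud
az cloud update --profile 2019-03-01-hybrid

# Log in and make sure the correct subscription is active
az login
az account set -s <subscription_id>

# Create the service principal; record appId and password from the output
az ad sp create-for-rbac --role Contributor --name <service_principal> \
    --scopes /subscriptions/<subscription_id> --years <years>
```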
probe::scheduler.wakeup_new | probe::scheduler.wakeup_new Name probe::scheduler.wakeup_new - Newly created task is woken up for the first time Synopsis scheduler.wakeup_new Values name name of the probe point task_state state of the task woken up task_pid PID of the new task woken up task_tid TID of the new task woken up task_priority priority of the new task task_cpu cpu of the task woken up | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-scheduler-wakeup-new |
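A minimal SystemTap one-liner that exercises this probe point might look like the following; the output format is illustrative only:

```
# Print the probe name, PID, and CPU each time a newly created task is woken
stap -e 'probe scheduler.wakeup_new { printf("%s: pid=%d cpu=%d\n", name, task_pid, task_cpu) }'
```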
Chapter 10. Removing Windows nodes | Chapter 10. Removing Windows nodes You can remove a Windows node by deleting its host Windows machine. 10.1. Deleting a specific machine You can delete a specific machine. Important Do not delete a control plane machine unless your cluster uses a control plane machine set. Prerequisites Install an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure View the machines that are in the cluster by running the following command: USD oc get machine -n openshift-machine-api The command output contains a list of machines in the <clusterid>-<role>-<cloud_region> format. Identify the machine that you want to delete. Delete the machine by running the following command: USD oc delete machine <machine> -n openshift-machine-api Important By default, the machine controller tries to drain the node that is backed by the machine until it succeeds. In some situations, such as with a misconfigured pod disruption budget, the drain operation might not be able to succeed. If the drain operation fails, the machine controller cannot proceed removing the machine. You can skip draining the node by annotating machine.openshift.io/exclude-node-draining in a specific machine. If the machine that you delete belongs to a machine set, a new machine is immediately created to satisfy the specified number of replicas. | [
"oc get machine -n openshift-machine-api",
"oc delete machine <machine> -n openshift-machine-api"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/windows_container_support_for_openshift/removing-windows-nodes |
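The note above mentions skipping the node drain by annotating a specific machine. A hedged sketch of that follows; the annotation value is an assumption, because only the presence of the key is described in the text:

```
# Tell the machine controller to skip draining the backing node, then delete
oc annotate machine <machine> -n openshift-machine-api \
    machine.openshift.io/exclude-node-draining=true
oc delete machine <machine> -n openshift-machine-api
```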
Chapter 68. role | Chapter 68. role This chapter describes the commands under the role command. 68.1. role add Adds a role assignment to a user or group on the system, a domain, or a project Usage: Table 68.1. Positional Arguments Value Summary <role> Role to add to <user> (name or id) Table 68.2. Optional Arguments Value Summary -h, --help Show this help message and exit --system <system> Include <system> (all) --domain <domain> Include <domain> (name or id) --project <project> Include <project> (name or id) --user <user> Include <user> (name or id) --group <group> Include <group> (name or id) --group-domain <group-domain> Domain the group belongs to (name or id). this can be used in case collisions between group names exist. --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --user-domain <user-domain> Domain the user belongs to (name or id). this can be used in case collisions between user names exist. --inherited Specifies if the role grant is inheritable to the sub projects --role-domain <role-domain> Domain the role belongs to (name or id). this must be specified when the name of a domain specific role is used. 68.2. role assignment list List role assignments Usage: Table 68.3. Optional Arguments Value Summary -h, --help Show this help message and exit --effective Returns only effective role assignments --role <role> Role to filter (name or id) --role-domain <role-domain> Domain the role belongs to (name or id). this must be specified when the name of a domain specific role is used. --names Display names instead of ids --user <user> User to filter (name or id) --user-domain <user-domain> Domain the user belongs to (name or id). this can be used in case collisions between user names exist. --group <group> Group to filter (name or id) --group-domain <group-domain> Domain the group belongs to (name or id). this can be used in case collisions between group names exist. --domain <domain> Domain to filter (name or id) --project <project> Project to filter (name or id) --system <system> Filter based on system role assignments --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --inherited Specifies if the role grant is inheritable to the sub projects --auth-user Only list assignments for the authenticated user --auth-project Only list assignments for the project to which the authenticated user's token is scoped Table 68.4. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 68.5. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 68.6. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 68.7. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 
68.3. role create Create new role Usage: Table 68.8. Positional Arguments Value Summary <role-name> New role name Table 68.9. Optional Arguments Value Summary -h, --help Show this help message and exit --domain <domain> Domain the role belongs to (name or id) --or-show Return existing role Table 68.10. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 68.11. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 68.12. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 68.13. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 68.4. role delete Delete role(s) Usage: Table 68.14. Positional Arguments Value Summary <role> Role(s) to delete (name or id) Table 68.15. Optional Arguments Value Summary -h, --help Show this help message and exit --domain <domain> Domain the role belongs to (name or id) 68.5. role list List roles Usage: Table 68.16. Optional Arguments Value Summary -h, --help Show this help message and exit --domain <domain> Include <domain> (name or id) Table 68.17. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 68.18. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 68.19. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 68.20. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 68.6. role remove Removes a role assignment from system/domain/project : user/group Usage: Table 68.21. Positional Arguments Value Summary <role> Role to remove (name or id) Table 68.22. Optional Arguments Value Summary -h, --help Show this help message and exit --system <system> Include <system> (all) --domain <domain> Include <domain> (name or id) --project <project> Include <project> (name or id) --user <user> Include <user> (name or id) --group <group> Include <group> (name or id) --group-domain <group-domain> Domain the group belongs to (name or id). this can be used in case collisions between group names exist. --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --user-domain <user-domain> Domain the user belongs to (name or id). this can be used in case collisions between user names exist. 
--inherited Specifies if the role grant is inheritable to the sub projects --role-domain <role-domain> Domain the role belongs to (name or id). this must be specified when the name of a domain specific role is used. 68.7. role set Set role properties Usage: Table 68.23. Positional Arguments Value Summary <role> Role to modify (name or id) Table 68.24. Optional Arguments Value Summary -h, --help Show this help message and exit --domain <domain> Domain the role belongs to (name or id) --name <name> Set role name 68.8. role show Display role details Usage: Table 68.25. Positional Arguments Value Summary <role> Role to display (name or id) Table 68.26. Optional Arguments Value Summary -h, --help Show this help message and exit --domain <domain> Domain the role belongs to (name or id) Table 68.27. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 68.28. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 68.29. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 68.30. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack role add [-h] [--system <system> | --domain <domain> | --project <project>] [--user <user> | --group <group>] [--group-domain <group-domain>] [--project-domain <project-domain>] [--user-domain <user-domain>] [--inherited] [--role-domain <role-domain>] <role>",
"openstack role assignment list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--effective] [--role <role>] [--role-domain <role-domain>] [--names] [--user <user>] [--user-domain <user-domain>] [--group <group>] [--group-domain <group-domain>] [--domain <domain> | --project <project> | --system <system>] [--project-domain <project-domain>] [--inherited] [--auth-user] [--auth-project]",
"openstack role create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--domain <domain>] [--or-show] <role-name>",
"openstack role delete [-h] [--domain <domain>] <role> [<role> ...]",
"openstack role list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--domain <domain>]",
"openstack role remove [-h] [--system <system> | --domain <domain> | --project <project>] [--user <user> | --group <group>] [--group-domain <group-domain>] [--project-domain <project-domain>] [--user-domain <user-domain>] [--inherited] [--role-domain <role-domain>] <role>",
"openstack role set [-h] [--domain <domain>] [--name <name>] <role>",
"openstack role show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--domain <domain>] <role>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/role |
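As a usage sketch, the subcommands above combine into a typical role lifecycle; the names are placeholders:

```
# Create a role, grant it to a user on a project, inspect the assignment,
# and revoke it again
openstack role create <role-name>
openstack role add --user <user> --project <project> <role-name>
openstack role assignment list --user <user> --project <project> --names
openstack role remove --user <user> --project <project> <role-name>
```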
13.2.2. Preparing a Driver Disc | 13.2.2. Preparing a Driver Disc You can create a driver update disc on CD or DVD. 13.2.2.1. Creating a driver update disc on CD or DVD Important CD/DVD Creator is part of the GNOME desktop. If you use a different Linux desktop, or a different operating system altogether, you will need to use another piece of software to create the CD or DVD. The steps will be generally similar. Make sure that the software that you choose can create CDs or DVDs from image files. While this is true of most CD and DVD burning software, exceptions exist. Look for a button or menu entry labeled burn from image or similar. If your software lacks this feature, or you do not select it, the resulting disc will hold only the image file itself, instead of the contents of the image file. Use the desktop file manager to locate the ISO image file of the driver disc, supplied to you by Red Hat or your hardware vendor. Figure 13.2. A typical .iso file displayed in a file manager window Right-click on this file and choose Write to disc . You will see a window similar to the following: Figure 13.3. CD/DVD Creator's Write to Disc dialog Click the Write button. If a blank disc is not already in the drive, CD/DVD Creator will prompt you to insert one. After you burn a driver update disc CD or DVD, verify that the disc was created successfully by inserting it into your system and browsing to it using the file manager. You should see a single file named rhdd3 and a directory named rpms : Figure 13.4. Contents of a typical driver update disc on CD or DVD If you see only a single file ending in .iso , then you have not created the disc correctly and should try again. Ensure that you choose an option similar to burn from image if you use a Linux desktop other than GNOME or if you use a different operating system. Refer to Section 13.3.2, "Let the Installer Prompt You for a Driver Update" and Section 13.3.3, "Use a Boot Option to Specify a Driver Update Disk" to learn how to use the driver update disc during installation. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sect-preparing_a_driver_update_disk-ppc |
Installing Red Hat Developer Hub on OpenShift Container Platform | Installing Red Hat Developer Hub on OpenShift Container Platform Red Hat Developer Hub 1.3 Red Hat Customer Content Services | [
"global: auth: backend: enabled: true clusterRouterBase: apps.<clusterName>.com # other Red Hat Developer Hub Helm Chart configurations",
"Loaded config from app-config-from-configmap.yaml, env 2023-07-24T19:44:46.223Z auth info Configuring \"database\" as KeyStore provider type=plugin Backend failed to start up Error: Missing required config value at 'backend.database.client'",
"NAMESPACE=<emphasis><rhdh></emphasis> new-project USD{NAMESPACE} || oc project USD{NAMESPACE}",
"helm upgrade redhat-developer-hub -i https://github.com/openshift-helm-charts/charts/releases/download/redhat-redhat-developer-hub-1.3.5/redhat-developer-hub-1.3.5.tgz",
"PASSWORD=USD(oc get secret redhat-developer-hub-postgresql -o jsonpath=\"{.data.password}\" | base64 -d) CLUSTER_ROUTER_BASE=USD(oc get route console -n openshift-console -o=jsonpath='{.spec.host}' | sed 's/^[^.]*\\.//') helm upgrade redhat-developer-hub -i \"https://github.com/openshift-helm-charts/charts/releases/download/redhat-redhat-developer-hub-1.3.5/redhat-developer-hub-1.3.5.tgz\" --set global.clusterRouterBase=\"USDCLUSTER_ROUTER_BASE\" --set global.postgresql.auth.password=\"USDPASSWORD\"",
"echo \"https://redhat-developer-hub-USDNAMESPACE.USDCLUSTER_ROUTER_BASE\""
] | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html-single/installing_red_hat_developer_hub_on_openshift_container_platform/index |
Chapter 3. Upgrading to the Red Hat JBoss Core Services Apache HTTP Server 2.4.57 | Chapter 3. Upgrading to the Red Hat JBoss Core Services Apache HTTP Server 2.4.57 The steps to upgrade to the latest Red Hat JBoss Core Services (JBCS) release differ depending on whether you previously installed JBCS from RPM packages or from an archive file. Upgrading JBCS when installed from RPM packages If you installed an earlier release of the JBCS Apache HTTP Server from RPM packages on RHEL 7 or RHEL 8 by using the yum groupinstall command, you can upgrade to the latest release. You can use the yum groupupdate command to upgrade to the 2.4.57 release on RHEL 7 or RHEL 8. Note JBCS does not provide an RPM distribution of the Apache HTTP Server on RHEL 9. Upgrading JBCS when installed from an archive file If you installed an earlier release of the JBCS Apache HTTP Server from an archive file, you must perform the following steps to upgrade to the Apache HTTP Server 2.4.57: Install the Apache HTTP Server 2.4.57. Set up the Apache HTTP Server 2.4.57. Remove the earlier version of Apache HTTP Server. The following procedure describes the recommended steps for upgrading a JBCS Apache HTTP Server 2.4.51 release that you installed from archive files to the latest 2.4.57 release. Prerequisites If you are using Red Hat Enterprise Linux, you have root user access. If you are using Windows Server, you have administrative access. The Red Hat JBoss Core Services Apache HTTP Server 2.4.51 or earlier was previously installed in your system from an archive file. Procedure Shut down any running instances of Red Hat JBoss Core Services Apache HTTP Server 2.4.51. Back up the Red Hat JBoss Core Services Apache HTTP Server 2.4.51 installation and configuration files. Install the Red Hat JBoss Core Services Apache HTTP Server 2.4.57 using the .zip installation method for the current system (see Additional Resources below). Migrate your configuration from the Red Hat JBoss Core Services Apache HTTP Server version 2.4.51 to version 2.4.57. Note The Apache HTTP Server configuration files might have changed since the Apache HTTP Server 2.4.51 release. Consider updating the 2.4.57 version configuration files rather than overwrite them with the configuration files from a different version, such as the Apache HTTP Server 2.4.51. Remove the Red Hat JBoss Core Services Apache HTTP Server 2.4.51 root directory. Additional Resources Installing the JBCS Apache HTTP Server on RHEL from archive files Installing the JBCS Apache HTTP Server on RHEL from RPM packages Installing the JBCS Apache HTTP Server on Windows Server | null | https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_apache_http_server_2.4.57_service_pack_3_release_notes/upgrading-to-the-jbcs-http-2.4.57-release-notes |
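For the RPM installation path described above, the upgrade is a single group update; the group name and configuration path below are placeholders rather than values taken from this document:

```
# Back up the existing configuration, then update the JBCS package group
cp -a <jbcs_httpd_config_directory> <backup_location>
yum groupupdate <jbcs_httpd_group>
```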
3.6. Saving Network Packet Filter Settings | 3.6. Saving Network Packet Filter Settings After configuring the appropriate network packet filters for your situation, save the settings so that they can be restored after a reboot. For iptables , enter the following command: To ensure that the iptables service is started at system start, enter the following command: You can verify whether the changes persist by restarting the service with the following command and checking whether the rules remain: See the Red Hat Enterprise Linux 7 Security Guide for more information on working with iptables in Red Hat Enterprise Linux 7. | [
"iptables-save > /etc/sysconfig/iptables",
"systemctl enable iptables",
"systemctl restart iptables"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/load_balancer_administration/s1-lvs-fwm-sav-vsa |
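Putting the steps together, a quick persistence check after saving the rules could look like this; the final listing command is an illustrative verification and is not part of the original text:

```
# Save the current rules, enable the service at boot, restart it, and list
# the active rules to confirm they were restored from /etc/sysconfig/iptables
iptables-save > /etc/sysconfig/iptables
systemctl enable iptables
systemctl restart iptables
iptables -L -n
```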
Chapter 129. KafkaMirrorMakerConsumerSpec schema reference | Chapter 129. KafkaMirrorMakerConsumerSpec schema reference Used in: KafkaMirrorMakerSpec Full list of KafkaMirrorMakerConsumerSpec schema properties Configures a MirrorMaker consumer. 129.1. numStreams Use the consumer.numStreams property to configure the number of streams for the consumer. You can increase the throughput in mirroring topics by increasing the number of consumer threads. Consumer threads belong to the consumer group specified for Kafka MirrorMaker. Topic partitions are assigned across the consumer threads, which consume messages in parallel. 129.2. offsetCommitInterval Use the consumer.offsetCommitInterval property to configure an offset auto-commit interval for the consumer. You can specify the regular time interval at which an offset is committed after Kafka MirrorMaker has consumed data from the source Kafka cluster. The time interval is set in milliseconds, with a default value of 60,000. 129.3. config Use the consumer.config properties to configure Kafka options for the consumer as keys. The values can be one of the following JSON types: String Number Boolean Exceptions You can specify and configure the options listed in the Apache Kafka configuration documentation for consumers . However, Streams for Apache Kafka takes care of configuring and managing options related to the following, which cannot be changed: Kafka cluster bootstrap address Security (encryption, authentication, and authorization) Consumer group identifier Interceptors Properties with the following prefixes cannot be set: bootstrap.servers group.id interceptor.classes sasl. security. ssl. If the config property contains an option that cannot be changed, it is disregarded, and a warning message is logged to the Cluster Operator log file. All other supported options are forwarded to MirrorMaker, including the following exceptions to the options configured by Streams for Apache Kafka: Any ssl configuration for supported TLS versions and cipher suites Important The Cluster Operator does not validate keys or values in the config object provided. If an invalid configuration is provided, the MirrorMaker cluster might not start or might become unstable. In this case, fix the configuration so that the Cluster Operator can roll out the new configuration to all MirrorMaker nodes. 129.4. groupId Use the consumer.groupId property to configure a consumer group identifier for the consumer. Kafka MirrorMaker uses a Kafka consumer to consume messages, behaving like any other Kafka consumer client. Messages consumed from the source Kafka cluster are mirrored to a target Kafka cluster. A group identifier is required, as the consumer needs to be part of a consumer group for the assignment of partitions. 129.5. KafkaMirrorMakerConsumerSpec schema properties Property Property type Description numStreams integer Specifies the number of consumer stream threads to create. offsetCommitInterval integer Specifies the offset auto-commit interval in ms. Default value is 60000. bootstrapServers string A list of host:port pairs for establishing the initial connection to the Kafka cluster. groupId string A unique string that identifies the consumer group this consumer belongs to. authentication KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha256 , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth Authentication configuration for connecting to the cluster. 
tls ClientTls TLS configuration for connecting MirrorMaker to the cluster. config map The MirrorMaker consumer config. Properties with the following prefixes cannot be set: ssl., bootstrap.servers, group.id, sasl., security., interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-kafkamirrormakerconsumerspec-reference |
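A hedged sketch of how the consumer properties described above fit together in a KafkaMirrorMaker resource follows; the cluster address, group ID, and config values are placeholders, not values from this document:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  # ... producer and other settings omitted ...
  consumer:
    bootstrapServers: my-source-cluster-kafka-bootstrap:9092
    groupId: my-source-group-id
    numStreams: 2
    offsetCommitInterval: 120000
    config:
      # any supported consumer option except the managed prefixes listed above
      max.poll.records: 100
      receive.buffer.bytes: 32768
```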
function::string_quoted | function::string_quoted Name function::string_quoted - Quotes a given string Synopsis Arguments str The kernel address to retrieve the string from Description Returns the quoted string version of the given string, with characters where any ASCII characters that are not printable are replaced by the corresponding escape sequence in the returned string. Note that the string will be surrounded by double quotes. | [
"string_quoted:string(str:string)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-string-quoted |
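A one-shot SystemTap example of the function follows; the synopsis shows a string argument, so the example passes a literal, and it should be treated as a sketch:

```
# Print the quoted form of a string containing a tab character;
# the begin probe fires once when the script starts
stap -e 'probe begin { println(string_quoted("tab\there")); exit() }'
```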
Chapter 1. About Tooling guide | Chapter 1. About Tooling guide This guide introduces VS Code extensions for Red Hat build of Apache Camel and how to install and use Camel CLI. Important The VS Code extensions for Apache Camel are listed as development support. For more information about scope of development support, see Development Support Scope of Coverage for Red Hat Build of Apache Camel . VS Code extensions for Red Hat build of Apache Camel. Following VS Code extensions are explained in this guide. Language Support for Apache Camel The Language Support for Apache Camel extension adds the language support for Apache Camel for Java, Yaml, or XML DSL code. Debug Adapter for Apache Camel The Debug Adapter for Apache Camel adds Camel Debugger power by attaching to a running Camel route written in Java, Yaml, or XML DSL. Camel CLI This is a JBang based Camel application that you can use for running your Camel routes. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/tooling_guide_for_red_hat_build_of_apache_camel/introduction_tooling_guide |
Chapter 8. Securing Kafka | Chapter 8. Securing Kafka A secure deployment of AMQ Streams can encompass: Encryption for data exchange Authentication to prove identity Authorization to allow or decline actions executed by users 8.1. Encryption AMQ Streams supports Transport Layer Security (TLS), a protocol for encrypted communication. Communication is always encrypted for communication between: Kafka brokers ZooKeeper nodes Operators and Kafka brokers Operators and ZooKeeper nodes Kafka Exporter You can also configure TLS between Kafka brokers and clients by applying TLS encryption to the listeners of the Kafka broker. TLS is specified for external clients when configuring an external listener. AMQ Streams components and Kafka clients use digital certificates for encryption. The Cluster Operator sets up certificates to enable encryption within the Kafka cluster. You can provide your own server certificates, referred to as Kafka listener certificates , for communication between Kafka clients and Kafka brokers, and inter-cluster communication. AMQ Streams uses Secrets to store the certificates and private keys required for TLS in PEM and PKCS #12 format. A TLS Certificate Authority (CA) issues certificates to authenticate the identity of a component. AMQ Streams verifies the certificates for the components against the CA certificate. AMQ Streams components are verified against the cluster CA Certificate Authority (CA) Kafka clients are verified against the clients CA Certificate Authority (CA) 8.2. Authentication Kafka listeners use authentication to ensure a secure client connection to the Kafka cluster. Supported authentication mechanisms: Mutual TLS client authentication (on listeners with TLS enabled encryption) SASL SCRAM-SHA-512 OAuth 2.0 token based authentication Custom authentication The User Operator manages user credentials for TLS and SCRAM authentication, but not OAuth 2.0. For example, through the User Operator you can create a user representing a client that requires access to the Kafka cluster, and specify TLS as the authentication type. Using OAuth 2.0 token-based authentication, application clients can access Kafka brokers without exposing account credentials. An authorization server handles the granting of access and inquiries about access. Custom authentication allows for any type of kafka-supported authentication. It can provide more flexibility, but also adds complexity. 8.3. Authorization Kafka clusters use authorization to control the operations that are permitted on Kafka brokers by specific clients or users. If applied to a Kafka cluster, authorization is enabled for all listeners used for client connection. If a user is added to a list of super users in a Kafka broker configuration, the user is allowed unlimited access to the cluster regardless of any authorization constraints implemented through authorization mechanisms. Supported authorization mechanisms: Simple authorization OAuth 2.0 authorization (if you are using OAuth 2.0 token-based authentication) Open Policy Agent (OPA) authorization Custom authorization Simple authorization uses AclAuthorizer , the default Kafka authorization plugin. AclAuthorizer uses Access Control Lists (ACLs) to define which users have access to which resources. For custom authorization, you configure your own Authorizer plugin to enforce ACL rules. OAuth 2.0 and OPA provide policy-based control from an authorization server. 
Security policies and permissions used to grant access to resources on Kafka brokers are defined in the authorization server. URLs are used to connect to the authorization server and verify that an operation requested by a client or user is allowed or denied. Users and clients are matched against the policies created in the authorization server that permit access to perform specific actions on Kafka brokers. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/amq_streams_on_openshift_overview/security-overview_str |
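As a concrete illustration of the User Operator workflow described above, the following sketch creates a KafkaUser with TLS authentication and two simple-authorization ACLs. The cluster, user, and topic names are placeholders, the API version reflects recent AMQ Streams releases, and the ACL set is an example rather than a recommended policy.

# Sketch only: my-cluster, my-user, and my-topic are illustrative names.
cat <<'EOF' | oc apply -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster   # ties the user to an existing Kafka cluster
spec:
  authentication:
    type: tls                        # the User Operator issues a client certificate
  authorization:
    type: simple                     # ACL-based authorization (AclAuthorizer)
    acls:
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Read
        host: "*"
      - resource:
          type: topic
          name: my-topic
          patternType: literal
        operation: Describe
        host: "*"
EOF

When the User Operator reconciles this resource, it generates a Secret with the same name as the user, containing the client certificate and key that the application presents when it connects to a TLS-enabled listener.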
Chapter 1. Checking which version you have installed | Chapter 1. Checking which version you have installed To begin troubleshooting, determine which version of Red Hat build of MicroShift you have installed. 1.1. Checking the version using the command-line interface To begin troubleshooting, you must know your MicroShift version. One way to get this information is by using the CLI. Procedure Run the following command to check the version information: USD microshift version Example output Red Hat build of MicroShift Version: 4.18-0.microshift-e6980e25 Base OCP Version: 4.18 1.2. Checking the MicroShift version using the API To begin troubleshooting, you must know your MicroShift version. One way to get this information is by using the API. Procedure To get the version number using the OpenShift CLI ( oc ), view the kube-public/microshift-version config map by running the following command: USD oc get configmap -n kube-public microshift-version -o yaml Example output apiVersion: v1 data: major: "4" minor: "13" version: 4.13.8-0.microshift-fa441af87431 kind: ConfigMap metadata: creationTimestamp: "2023-08-03T21:06:11Z" name: microshift-version namespace: kube-public 1.3. Checking the etcd version You can get the version information for the etcd database included with your MicroShift by using one or both of the following methods, depending on the level of information that you need. Procedure To display the base database version information, run the following command: USD microshift-etcd version Example output microshift-etcd Version: 4.17.1 Base etcd Version: 3.5.13 To display the full database version information, run the following command: USD microshift-etcd version -o json Example output { "major": "4", "minor": "16", "gitVersion": "4.17.1~rc.1", "gitCommit": "140777711962eb4e0b765c39dfd325fb0abb3622", "gitTreeState": "clean", "buildDate": "2024-05-10T16:37:53Z", "goVersion": "go1.21.9", "compiler": "gc", "platform": "linux/amd64", "patch": "", "etcdVersion": "3.5.13" } | [
"microshift version",
"Red Hat build of MicroShift Version: 4.18-0.microshift-e6980e25 Base OCP Version: 4.18",
"oc get configmap -n kube-public microshift-version -o yaml",
"apiVersion: v1 data: major: \"4\" minor: \"13\" version: 4.13.8-0.microshift-fa441af87431 kind: ConfigMap metadata: creationTimestamp: \"2023-08-03T21:06:11Z\" name: microshift-version namespace: kube-public",
"microshift-etcd version",
"microshift-etcd Version: 4.17.1 Base etcd Version: 3.5.13",
"microshift-etcd version -o json",
"{ \"major\": \"4\", \"minor\": \"16\", \"gitVersion\": \"4.17.1~rc.1\", \"gitCommit\": \"140777711962eb4e0b765c39dfd325fb0abb3622\", \"gitTreeState\": \"clean\", \"buildDate\": \"2024-05-10T16:37:53Z\", \"goVersion\": \"go1.21.9\" \"compiler\": \"gc\", \"platform\": \"linux/amd64\", \"patch\": \"\", \"etcdVersion\": \"3.5.13\" }"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/troubleshooting/microshift-version |
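When only the raw version fields are needed, for example in a health-check script, the config map shown above can be queried with a JSONPath expression instead of printing the full YAML. The expressions below are a sketch based solely on the major, minor, and version keys visible in the example output.

# Print only the full MicroShift version string from the config map shown above.
oc get configmap -n kube-public microshift-version -o jsonpath='{.data.version}{"\n"}'

# Print the major.minor pair, for example 4.13 for the sample output above.
oc get configmap -n kube-public microshift-version -o jsonpath='{.data.major}.{.data.minor}{"\n"}'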
Chapter 1. Integrating an overcloud with containerized Red Hat Ceph Storage | Chapter 1. Integrating an overcloud with containerized Red Hat Ceph Storage You can use Red Hat OpenStack Platform (RHOSP) director to integrate your cloud environment, which director calls the overcloud, with Red Hat Ceph Storage. You manage and scale the cluster itself outside of the overcloud configuration. For more information about Red Hat Ceph Storage, see the Red Hat Ceph Storage Architecture Guide . This guide contains instructions for deploying a containerized Red Hat Ceph Storage cluster with your overcloud. Director uses Ansible playbooks provided through the ceph-ansible package to deploy a containerized Ceph Storage cluster. The director also manages the configuration and scaling operations of the cluster. For more information about containerized services in the Red Hat OpenStack Platform, see Configuring a basic overcloud with the CLI tools in Director Installation and Usage . 1.1. Ceph Storage clusters Red Hat Ceph Storage is a distributed data object store designed to provide excellent performance, reliability, and scalability. Distributed object stores accommodate unstructured data so clients can use modern object interfaces and legacy interfaces simultaneously. At the core of every Ceph deployment is the Ceph Storage cluster, which consists of several types of daemons, but primarily, these two: Ceph OSD (Object Storage Daemon) Ceph OSDs store data on behalf of Ceph clients. Additionally, Ceph OSDs use the CPU and memory of Ceph nodes to perform data replication, rebalancing, recovery, monitoring, and reporting functions. Ceph Monitor A Ceph monitor maintains a master copy of the Ceph storage cluster map with the current state of the storage cluster. For more information about Red Hat Ceph Storage, see the Red Hat Ceph Storage Architecture Guide . 1.2. Requirements to deploy a containerized Ceph Storage cluster with your overcloud Before you deploy a containerized Ceph Storage cluster with your overcloud, your environment must contain the following configuration: An undercloud host with the Red Hat OpenStack Platform (RHOSP) director installed. See Installing director on the undercloud in Director Installation and Usage . Any additional hardware recommended for Red Hat Ceph Storage. For more information about recommended hardware, see the Red Hat Ceph Storage Hardware Guide . Important The Ceph monitor service installs on the overcloud Controller nodes, so you must provide adequate resources to avoid performance issues. Ensure that the Controller nodes in your environment use at least 16GB of RAM for memory and solid-state drive (SSD) storage for the Ceph monitor data. For a medium to large Ceph installation, provide at least 500GB of Ceph monitor data. This space is necessary to avoid levelDB growth if the cluster becomes unstable. The following examples are common sizes for Ceph Storage clusters: Small: 250 terabytes Medium: 1 petabyte Large: 2 petabytes or more. Note Using jumbo frames for the Storage and Storage Management networks is not mandatory but the increase in MTU size improves storage performance. For more information, see Configuring jumbo frames . 1.2.1. Configuring jumbo frames Jumbo frames are frames with an MTU of 9,000. Jumbo frames are not mandatory for the Storage and Storage Management networks but the increase in MTU size improves storage performance. If you want to use jumbo frames, you must configure all network switch ports in the data path to support jumbo frames. 
Important Network configuration changes such as MTU settings must be completed during the initial deployment. They cannot be applied to an existing deployment. Procedure Log in to the undercloud node as the stack user. Locate the network definition file. Modify the network definition file to extend the template to include the StorageMgmtIpSubnet and StorageMgmtNetworkVlanID attributes of the Storage Management network. Set the mtu attribute of the interfaces to 9000 . The following is an example of implementing these interface settings: Save the changes to the network definition file. Note All network switch ports between servers using the interface with the new MTU setting must be updated to support jumbo frames. If these switch changes are not made, problems will develop at the application layer that can cause the Red Hat Ceph Storage cluster to not reach quorum. If these settings are made and these problems are still observed, verify all hosts using the network configured for jumbo frames can communicate at the configured MTU setting. Use a command like the following example to perform this task: ping -M do -s 8972 172.16.1.11 1.3. Ceph Storage node requirements If you use Red Hat OpenStack Platform (RHOSP) director to create Red Hat Ceph Storage nodes, there are additional requirements. For information about how to select a processor, memory, network interface cards (NICs), and disk layout for Ceph Storage nodes, see Hardware selection recommendations for Red Hat Ceph Storage in the Red Hat Ceph Storage Hardware Guide . Each Ceph Storage node also requires a supported power management interface, such as Intelligent Platform Management Interface (IPMI) functionality, on the motherboard of the server. Note RHOSP director uses ceph-ansible , which does not support installing the OSD on the root disk of Ceph Storage nodes. This means that you need at least two disks for a supported Ceph Storage node. Ceph Storage nodes and RHEL compatibility RHOSP 16.2 is supported on RHEL 8.4. Before upgrading to RHOSP 16.1 and later, review the Red Hat Knowledgebase article Red Hat Ceph Storage: Supported configurations . Red Hat Ceph Storage compatibility RHOSP 16.2 supports Red Hat Ceph Storage 4. Placement Groups (PGs) Ceph Storage uses placement groups (PGs) to facilitate dynamic and efficient object tracking at scale. In the case of OSD failure or cluster rebalancing, Ceph can move or replicate a placement group and its contents, which means a Ceph Storage cluster can rebalance and recover efficiently. The default placement group count that director creates is not always optimal, so it is important to calculate the correct placement group count according to your requirements. You can use the placement group calculator to calculate the correct count. To use the PG calculator, enter the predicted storage usage per service as a percentage, as well as other properties about your Ceph cluster, such as the number OSDs. The calculator returns the optimal number of PGs per pool. For more information, see Placement Groups (PGs) per Pool Calculator . Auto-scaling is an alternative way to manage placement groups. With the auto-scale feature, you set the expected Ceph Storage requirements per service as a percentage instead of a specific number of placement groups. Ceph automatically scales placement groups based on how the cluster is used. For more information, see Auto-scaling placement groups in the Red Hat Ceph Storage Strategies Guide . 
Processor 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions. Network Interface Cards A minimum of one 1 Gbps Network Interface Cards (NICs), although Red Hat recommends that you use at least two NICs in a production environment. Use additional NICs for bonded interfaces or to delegate tagged VLAN traffic. Use a 10 Gbps interface for storage nodes, especially if you want to create a Red Hat OpenStack Platform (RHOSP) environment that serves a high volume of traffic. Power management Each Controller node requires a supported power management interface, such as Intelligent Platform Management Interface (IPMI) functionality on the motherboard of the server. 1.4. Ansible playbooks to deploy Ceph Storage The /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml environment file instructs director to use playbooks derived from the ceph-ansible project. These playbooks are installed in /usr/share/ceph-ansible/ of the undercloud. In particular, the following file contains all the default settings that the playbooks apply: /usr/share/ceph-ansible/group_vars/all.yml.sample Warning Although ceph-ansible uses playbooks to deploy containerized Ceph Storage, do not edit these files to customize your deployment. Instead, use heat environment files to override the defaults set by these playbooks. If you edit the ceph-ansible playbooks directly, your deployment fails. For information about the default settings applied by director for containerized Ceph Storage, see the heat templates in /usr/share/openstack-tripleo-heat-templates/deployment/ceph-ansible . Note Reading these templates requires a deeper understanding of how environment files and heat templates work in director. for more information, see Understanding Heat Templates and Environment Files . For more information about containerized services in RHOSP, see Configuring a basic overcloud with the CLI tools in the Director Installation and Usage guide. | [
"- type: interface name: em2 use_dhcp: false mtu: 9000 - type: vlan device: em2 mtu: 9000 use_dhcp: false vlan_id: {get_param: StorageMgmtNetworkVlanID} addresses: - ip_netmask: {get_param: StorageMgmtIpSubnet} - type: vlan device: em2 mtu: 9000 use_dhcp: false vlan_id: {get_param: StorageNetworkVlanID} addresses: - ip_netmask: {get_param: StorageIpSubnet}"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/deploying_an_overcloud_with_containerized_red_hat_ceph/assembly_integrating-an-overcloud-with-red-hat-ceph-storage_deployingcontainerizedrhcs |
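Because a single switch port that drops jumbo frames is enough to keep the Red Hat Ceph Storage cluster from reaching quorum, it can be useful to wrap the ping check from the jumbo frames procedure in a small loop. The sketch below uses illustrative Storage network addresses; 8972 bytes of ICMP payload plus 28 bytes of headers exercises the full 9000-byte MTU, and -M do forbids fragmentation so an undersized path fails loudly instead of fragmenting silently.

# Sketch only: replace the illustrative addresses with the Storage and
# Storage Management IPs used in your deployment.
for host in 172.16.1.11 172.16.1.12 172.16.1.13; do
  if ping -M do -s 8972 -c 3 -W 2 "$host" > /dev/null 2>&1; then
    echo "jumbo frames OK to $host"
  else
    echo "jumbo frames FAILED to $host" >&2
  fi
done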
Chapter 5. Gathering data about your cluster | Chapter 5. Gathering data about your cluster When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. It is recommended to provide: Data gathered using the oc adm must-gather command The unique cluster ID 5.1. About the must-gather tool The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, including: Resource definitions Service logs By default, the oc adm must-gather command uses the default plugin image and writes into ./must-gather.local . Alternatively, you can collect specific information by running the command with the appropriate arguments as described in the following sections: To collect data related to one or more specific features, use the --image argument with an image, as listed in a following section. For example: USD oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.15.9 To collect the audit logs, use the -- /usr/bin/gather_audit_logs argument, as described in a following section. For example: USD oc adm must-gather -- /usr/bin/gather_audit_logs Note Audit logs are not collected as part of the default set of information to reduce the size of the files. When you run oc adm must-gather , a new pod with a random name is created in a new project on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local in the current working directory. For example: NAMESPACE NAME READY STATUS RESTARTS AGE ... openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s ... Optionally, you can run the oc adm must-gather command in a specific namespace by using the --run-namespace option. For example: USD oc adm must-gather --run-namespace <namespace> \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.15.9 5.1.1. Gathering data about your cluster for Red Hat Support You can gather debugging information about your cluster by using the oc adm must-gather CLI command. Prerequisites You have access to the cluster as a user with the cluster-admin role. The OpenShift Container Platform CLI ( oc ) is installed. Procedure Navigate to the directory where you want to store the must-gather data. Note If your cluster is in a disconnected environment, you must take additional steps. If your mirror registry has a trusted CA, you must first add the trusted CA to the cluster. For all clusters in disconnected environments, you must import the default must-gather image as an image stream. USD oc import-image is/must-gather -n openshift Run the oc adm must-gather command: USD oc adm must-gather Important If you are in a disconnected environment, use the --image flag as part of must-gather and point to the payload image. Note Because this command picks a random control plane node by default, the pod might be scheduled to a control plane node that is in the NotReady and SchedulingDisabled state. If this command fails, for example, if you cannot schedule a pod on your cluster, then use the oc adm inspect command to gather information for particular resources. Note Contact Red Hat Support for the recommended resources to gather. Create a compressed file from the must-gather directory that was just created in your working directory. 
For example, on a computer that uses a Linux operating system, run the following command: USD tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1 1 Make sure to replace must-gather-local.5421342344627712289/ with the actual directory name. Attach the compressed file to your support case on the the Customer Support page of the Red Hat Customer Portal. 5.1.2. Gathering data about specific features You can gather debugging information about specific features by using the oc adm must-gather CLI command with the --image or --image-stream argument. The must-gather tool supports multiple images, so you can gather data about more than one feature by running a single command. Table 5.1. Supported must-gather images Image Purpose registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.15.9 Data collection for OpenShift Virtualization. registry.redhat.io/openshift-serverless-1/svls-must-gather-rhel8 Data collection for OpenShift Serverless. registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:<installed_version_service_mesh> Data collection for Red Hat OpenShift Service Mesh. registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v<installed_version_migration_toolkit> Data collection for the Migration Toolkit for Containers. registry.redhat.io/odf4/odf-must-gather-rhel9:v<installed_version_ODF> Data collection for Red Hat OpenShift Data Foundation. registry.redhat.io/openshift-logging/cluster-logging-rhel9-operator:v<installed_version_logging> Data collection for logging. quay.io/netobserv/must-gather Data collection for the Network Observability Operator. registry.redhat.io/openshift4/ose-csi-driver-shared-resource-mustgather-rhel8 Data collection for OpenShift Shared Resource CSI Driver. registry.redhat.io/openshift4/ose-local-storage-mustgather-rhel9:v<installed_version_LSO> Data collection for Local Storage Operator. registry.redhat.io/openshift-sandboxed-containers/osc-must-gather-rhel8:v<installed_version_sandboxed_containers> Data collection for OpenShift sandboxed containers. registry.redhat.io/workload-availability/node-healthcheck-must-gather-rhel8:v<installed-version-NHC> Data collection for the Red Hat Workload Availability Operators, including the Self Node Remediation (SNR) Operator, the Fence Agents Remediation (FAR) Operator, the Machine Deletion Remediation (MDR) Operator, the Node Health Check Operator (NHC) Operator, and the Node Maintenance Operator (NMO) Operator. registry.redhat.io/numaresources/numaresources-must-gather-rhel9:v<installed-version-nro> Data collection for the NUMA Resources Operator (NRO). registry.redhat.io/openshift4/ptp-must-gather-rhel8:v<installed-version-ptp> Data collection for the PTP Operator. registry.redhat.io/openshift-gitops-1/must-gather-rhel8:v<installed_version_GitOps> Data collection for Red Hat OpenShift GitOps. registry.redhat.io/openshift4/ose-secrets-store-csi-mustgather-rhel8:v<installed_version_secret_store> Data collection for the Secrets Store CSI Driver Operator. registry.redhat.io/lvms4/lvms-must-gather-rhel9:v<installed_version_LVMS> Data collection for the LVM Operator. registry.redhat.io/compliance/openshift-compliance-must-gather-rhel8:<digest-version> Data collection for the Compliance Operator. registry.redhat.io/rhacm2/acm-must-gather-rhel9:v<ACM_version> Data collection for Red Hat Advanced Cluster Management (RHACM) 2.10 and later. registry.redhat.io/rhacm2/acm-must-gather-rhel8:v<ACM_version> Data collection for RHACM 2.9 and earlier. 
<registry_name:port_number>/rhacm2/acm-must-gather-rhel9:v<ACM_version> Data collection for RHACM 2.10 and later in a disconnected environment. <registry_name:port_number>/rhacm2/acm-must-gather-rhel8:v<ACM_version> Data collection for RHACM 2.9 and earlier in a disconnected environment. Note To determine the latest version for an OpenShift Container Platform component's image, see the OpenShift Operator Life Cycles web page on the Red Hat Customer Portal. Prerequisites You have access to the cluster as a user with the cluster-admin role. The OpenShift Container Platform CLI ( oc ) is installed. Procedure Navigate to the directory where you want to store the must-gather data. Run the oc adm must-gather command with one or more --image or --image-stream arguments. Note To collect the default must-gather data in addition to specific feature data, add the --image-stream=openshift/must-gather argument. For information on gathering data about the Custom Metrics Autoscaler, see the Additional resources section that follows. For example, the following command gathers both the default cluster data and information specific to OpenShift Virtualization: USD oc adm must-gather \ --image-stream=openshift/must-gather \ 1 --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.15.9 2 1 The default OpenShift Container Platform must-gather image 2 The must-gather image for OpenShift Virtualization You can use the must-gather tool with additional arguments to gather data that is specifically related to OpenShift Logging and the Red Hat OpenShift Logging Operator in your cluster. For OpenShift Logging, run the following command: USD oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator \ -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}') Example 5.1. 
Example must-gather output for OpenShift Logging βββ cluster-logging β βββ clo β β βββ cluster-logging-operator-74dd5994f-6ttgt β β βββ clusterlogforwarder_cr β β βββ cr β β βββ csv β β βββ deployment β β βββ logforwarding_cr β βββ collector β β βββ fluentd-2tr64 β βββ eo β β βββ csv β β βββ deployment β β βββ elasticsearch-operator-7dc7d97b9d-jb4r4 β βββ es β β βββ cluster-elasticsearch β β β βββ aliases β β β βββ health β β β βββ indices β β β βββ latest_documents.json β β β βββ nodes β β β βββ nodes_stats.json β β β βββ thread_pool β β βββ cr β β βββ elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms β β βββ logs β β βββ elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms β βββ install β β βββ co_logs β β βββ install_plan β β βββ olmo_logs β β βββ subscription β βββ kibana β βββ cr β βββ kibana-9d69668d4-2rkvz βββ cluster-scoped-resources β βββ core β βββ nodes β β βββ ip-10-0-146-180.eu-west-1.compute.internal.yaml β βββ persistentvolumes β βββ pvc-0a8d65d9-54aa-4c44-9ecc-33d9381e41c1.yaml βββ event-filter.html βββ gather-debug.log βββ namespaces βββ openshift-logging β βββ apps β β βββ daemonsets.yaml β β βββ deployments.yaml β β βββ replicasets.yaml β β βββ statefulsets.yaml β βββ batch β β βββ cronjobs.yaml β β βββ jobs.yaml β βββ core β β βββ configmaps.yaml β β βββ endpoints.yaml β β βββ events β β β βββ elasticsearch-im-app-1596020400-gm6nl.1626341a296c16a1.yaml β β β βββ elasticsearch-im-audit-1596020400-9l9n4.1626341a2af81bbd.yaml β β β βββ elasticsearch-im-infra-1596020400-v98tk.1626341a2d821069.yaml β β β βββ elasticsearch-im-app-1596020400-cc5vc.1626341a3019b238.yaml β β β βββ elasticsearch-im-audit-1596020400-s8d5s.1626341a31f7b315.yaml β β β βββ elasticsearch-im-infra-1596020400-7mgv8.1626341a35ea59ed.yaml β β βββ events.yaml β β βββ persistentvolumeclaims.yaml β β βββ pods.yaml β β βββ replicationcontrollers.yaml β β βββ secrets.yaml β β βββ services.yaml β βββ openshift-logging.yaml β βββ pods β β βββ cluster-logging-operator-74dd5994f-6ttgt β β β βββ cluster-logging-operator β β β β βββ cluster-logging-operator β β β β βββ logs β β β β βββ current.log β β β β βββ .insecure.log β β β β βββ .log β β β βββ cluster-logging-operator-74dd5994f-6ttgt.yaml β β βββ cluster-logging-operator-registry-6df49d7d4-mxxff β β β βββ cluster-logging-operator-registry β β β β βββ cluster-logging-operator-registry β β β β βββ logs β β β β βββ current.log β β β β βββ .insecure.log β β β β βββ .log β β β βββ cluster-logging-operator-registry-6df49d7d4-mxxff.yaml β β β βββ mutate-csv-and-generate-sqlite-db β β β βββ mutate-csv-and-generate-sqlite-db β β β βββ logs β β β βββ current.log β β β βββ .insecure.log β β β βββ .log β β βββ elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms β β βββ elasticsearch-im-app-1596030300-bpgcx β β β βββ elasticsearch-im-app-1596030300-bpgcx.yaml β β β βββ indexmanagement β β β βββ indexmanagement β β β βββ logs β β β βββ current.log β β β βββ .insecure.log β β β βββ .log β β βββ fluentd-2tr64 β β β βββ fluentd β β β β βββ fluentd β β β β βββ logs β β β β βββ current.log β β β β βββ .insecure.log β β β β βββ .log β β β βββ fluentd-2tr64.yaml β β β βββ fluentd-init β β β βββ fluentd-init β β β βββ logs β β β βββ current.log β β β βββ .insecure.log β β β βββ .log β β βββ kibana-9d69668d4-2rkvz β β β βββ kibana β β β β βββ kibana β β β β βββ logs β β β β βββ current.log β β β β βββ .insecure.log β β β β βββ .log β β β βββ kibana-9d69668d4-2rkvz.yaml β β β βββ kibana-proxy β β β βββ kibana-proxy β β β βββ logs β β β βββ current.log β β β βββ .insecure.log β β β βββ .log β βββ 
route.openshift.io β βββ routes.yaml βββ openshift-operators-redhat βββ ... Run the oc adm must-gather command with one or more --image or --image-stream arguments. For example, the following command gathers both the default cluster data and information specific to KubeVirt: USD oc adm must-gather \ --image-stream=openshift/must-gather \ 1 --image=quay.io/kubevirt/must-gather 2 1 The default OpenShift Container Platform must-gather image 2 The must-gather image for KubeVirt Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command: USD tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1 1 Make sure to replace must-gather-local.5421342344627712289/ with the actual directory name. Attach the compressed file to your support case on the Customer Support page of the Red Hat Customer Portal. 5.2. Additional resources Gathering debugging data for the Custom Metrics Autoscaler. Red Hat OpenShift Container Platform Life Cycle Policy 5.2.1. Gathering network logs You can gather network logs on all nodes in a cluster. Procedure Run the oc adm must-gather command with -- gather_network_logs : USD oc adm must-gather -- gather_network_logs Note By default, the must-gather tool collects the OVN nbdb and sbdb databases from all of the nodes in the cluster. Add the -- gather_network_logs option to include additional logs that contain OVN-Kubernetes transactions for the OVN nbdb database. Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command: USD tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 1 1 Replace must-gather-local.472290403699006248 with the actual directory name. Attach the compressed file to your support case on the Customer Support page of the Red Hat Customer Portal. 5.2.2. Changing the must-gather storage limit When using the oc adm must-gather command to collect data, the default maximum storage for the information is 30% of the storage capacity of the container. After the 30% limit is reached, the container is killed and the gathering process stops. Information already gathered is downloaded to your local storage. To run the must-gather command again, you need either a container with more storage capacity or to adjust the maximum volume percentage. If the container reaches the storage limit, an error message similar to the following example is generated. Example output Disk usage exceeds the volume percentage of 30% for mounted directory. Exiting... Prerequisites You have access to the cluster as a user with the cluster-admin role. The OpenShift CLI ( oc ) is installed. Procedure Run the oc adm must-gather command with the volume-percentage flag. The new value cannot exceed 100. USD oc adm must-gather --volume-percentage <storage_percentage> 5.3. Obtaining your cluster ID When providing information to Red Hat Support, it is helpful to provide the unique identifier for your cluster. You can have your cluster ID autofilled by using the OpenShift Container Platform web console. You can also manually obtain your cluster ID by using the web console or the OpenShift CLI ( oc ). Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the web console or the OpenShift CLI ( oc ) installed.
Procedure To open a support case and have your cluster ID autofilled using the web console: From the toolbar, navigate to (?) Help and select Share Feedback from the list. Click Open a support case from the Tell us about your experience window. To manually obtain your cluster ID using the web console: Navigate to Home Overview . The value is available in the Cluster ID field of the Details section. To obtain your cluster ID using the OpenShift CLI ( oc ), run the following command: USD oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}' 5.4. About sosreport sosreport is a tool that collects configuration details, system information, and diagnostic data from Red Hat Enterprise Linux (RHEL) and Red Hat Enterprise Linux CoreOS (RHCOS) systems. sosreport provides a standardized way to collect diagnostic information relating to a node, which can then be provided to Red Hat Support for issue diagnosis. In some support interactions, Red Hat Support may ask you to collect a sosreport archive for a specific OpenShift Container Platform node. For example, it might sometimes be necessary to review system logs or other node-specific data that is not included within the output of oc adm must-gather . 5.5. Generating a sosreport archive for an OpenShift Container Platform cluster node The recommended way to generate a sosreport for an OpenShift Container Platform 4.15 cluster node is through a debug pod. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have SSH access to your hosts. You have installed the OpenShift CLI ( oc ). You have a Red Hat standard or premium Subscription. You have a Red Hat Customer Portal account. You have an existing Red Hat Support case ID. Procedure Obtain a list of cluster nodes: USD oc get nodes Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug : USD oc debug node/my-cluster-node To enter into a debug session on the target node that is tainted with the NoExecute effect, add a toleration to a dummy namespace, and start the debug pod in the dummy namespace: USD oc new-project dummy USD oc patch namespace dummy --type=merge -p '{"metadata": {"annotations": { "scheduler.alpha.kubernetes.io/defaultTolerations": "[{\"operator\": \"Exists\"}]"}}}' USD oc debug node/my-cluster-node Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host Note OpenShift Container Platform 4.15 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead. Start a toolbox container, which includes the required binaries and plugins to run sosreport : # toolbox Note If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start... . Remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container, to avoid issues with sosreport plugins. Collect a sosreport archive. 
Run the sos report command to collect necessary troubleshooting data on crio and podman : # sos report -k crio.all=on -k crio.logs=on -k podman.all=on -k podman.logs=on 1 1 -k enables you to define sosreport plugin parameters outside of the defaults. Optional: To include information on OVN-Kubernetes networking configurations from a node in your report, run the following command: # sos report --all-logs Press Enter when prompted, to continue. Provide the Red Hat Support case ID. sosreport adds the ID to the archive's file name. The sosreport output provides the archive's location and checksum. The following sample output references support case ID 01234567 : Your sosreport has been generated and saved in: /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz 1 The checksum is: 382ffc167510fd71b4f12a4f40b97a4e 1 The sosreport archive's file path is outside of the chroot environment because the toolbox container mounts the host's root directory at /host . Provide the sosreport archive to Red Hat Support for analysis, using one of the following methods. Upload the file to an existing Red Hat support case. Concatenate the sosreport archive by running the oc debug node/<node_name> command and redirect the output to a file. This command assumes you have exited the oc debug session: USD oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz' > /tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz 1 1 The debug container mounts the host's root directory at /host . Reference the absolute path from the debug container's root directory, including /host , when specifying target files for concatenation. Note OpenShift Container Platform 4.15 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Transferring a sosreport archive from a cluster node by using scp is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to copy a sosreport archive from a node by running scp core@<node>.<cluster_name>.<base_domain>:<file_path> <local_path> . Navigate to an existing support case within the Customer Support page of the Red Hat Customer Portal. Select Attach files and follow the prompts to upload the file. 5.6. Querying bootstrap node journal logs If you experience bootstrap-related issues, you can gather bootkube.service journald unit logs and container logs from the bootstrap node. Prerequisites You have SSH access to your bootstrap node. You have the fully qualified domain name of the bootstrap node. Procedure Query bootkube.service journald unit logs from a bootstrap node during OpenShift Container Platform installation. Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name: USD ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service Note The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes. After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop. Collect logs from the bootstrap node containers using podman on the bootstrap node. 
Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name: USD ssh core@<bootstrap_fqdn> 'for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done' 5.7. Querying cluster node journal logs You can gather journald unit logs and other logs within /var/log on individual cluster nodes. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Your API service is still functional. You have SSH access to your hosts. Procedure Query kubelet journald unit logs from OpenShift Container Platform cluster nodes. The following example queries control plane nodes only: USD oc adm node-logs --role=master -u kubelet 1 1 Replace kubelet as appropriate to query other unit logs. Collect logs from specific subdirectories under /var/log/ on cluster nodes. Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/openshift-apiserver/ on all control plane nodes: USD oc adm node-logs --role=master --path=openshift-apiserver Inspect a specific log within a /var/log/ subdirectory. The following example outputs /var/log/openshift-apiserver/audit.log contents from all control plane nodes: USD oc adm node-logs --role=master --path=openshift-apiserver/audit.log If the API is not functional, review the logs on each node using SSH instead. The following example tails /var/log/openshift-apiserver/audit.log : USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log Note OpenShift Container Platform 4.15 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> . 5.8. Network trace methods Collecting network traces, in the form of packet capture records, can assist Red Hat Support with troubleshooting network issues. OpenShift Container Platform supports two ways of performing a network trace. Review the following table and choose the method that meets your needs. Table 5.2. Supported methods of collecting a network trace Method Benefits and capabilities Collecting a host network trace You perform a packet capture for a duration that you specify on one or more nodes at the same time. The packet capture files are transferred from nodes to the client machine when the specified duration is met. You can troubleshoot why a specific action triggers network communication issues. Run the packet capture, perform the action that triggers the issue, and use the logs to diagnose the issue. Collecting a network trace from an OpenShift Container Platform node or container You perform a packet capture on one node or one container. You run the tcpdump command interactively, so you can control the duration of the packet capture. You can start the packet capture manually, trigger the network communication issue, and then stop the packet capture manually. This method uses the cat command and shell redirection to copy the packet capture data from the node or container to the client machine. 5.9. 
Collecting a host network trace Sometimes, troubleshooting a network-related issue is simplified by tracing network communication and capturing packets on multiple nodes at the same time. You can use a combination of the oc adm must-gather command and the registry.redhat.io/openshift4/network-tools-rhel8 container image to gather packet captures from nodes. Analyzing packet captures can help you troubleshoot network communication issues. The oc adm must-gather command is used to run the tcpdump command in pods on specific nodes. The tcpdump command records the packet captures in the pods. When the tcpdump command exits, the oc adm must-gather command transfers the files with the packet captures from the pods to your client machine. Tip The sample command in the following procedure demonstrates performing a packet capture with the tcpdump command. However, you can run any command in the container image that is specified in the --image argument to gather troubleshooting information from multiple nodes at the same time. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Run a packet capture from the host network on some nodes by running the following command: USD oc adm must-gather \ --dest-dir /tmp/captures \ <.> --source-dir '/tmp/tcpdump/' \ <.> --image registry.redhat.io/openshift4/network-tools-rhel8:latest \ <.> --node-selector 'node-role.kubernetes.io/worker' \ <.> --host-network=true \ <.> --timeout 30s \ <.> -- \ tcpdump -i any \ <.> -w /tmp/tcpdump/%Y-%m-%dT%H:%M:%S.pcap -W 1 -G 300 <.> The --dest-dir argument specifies that oc adm must-gather stores the packet captures in directories that are relative to /tmp/captures on the client machine. You can specify any writable directory. <.> When tcpdump is run in the debug pod that oc adm must-gather starts, the --source-dir argument specifies that the packet captures are temporarily stored in the /tmp/tcpdump directory on the pod. <.> The --image argument specifies a container image that includes the tcpdump command. <.> The --node-selector argument and example value specifies to perform the packet captures on the worker nodes. As an alternative, you can specify the --node-name argument instead to run the packet capture on a single node. If you omit both the --node-selector and the --node-name argument, the packet captures are performed on all nodes. <.> The --host-network=true argument is required so that the packet captures are performed on the network interfaces of the node. <.> The --timeout argument and value specify to run the debug pod for 30 seconds. If you do not specify the --timeout argument and a duration, the debug pod runs for 10 minutes. <.> The -i any argument for the tcpdump command specifies to capture packets on all network interfaces. As an alternative, you can specify a network interface name. Perform the action, such as accessing a web application, that triggers the network communication issue while the network trace captures packets. Review the packet capture files that oc adm must-gather transferred from the pods to your client machine: tmp/captures βββ event-filter.html βββ ip-10-0-192-217-ec2-internal 1 β βββ registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca... β βββ 2022-01-13T19:31:31.pcap βββ ip-10-0-201-178-ec2-internal 2 β βββ registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca... β βββ 2022-01-13T19:31:30.pcap βββ ip-... 
βββ timestamp 1 2 The packet captures are stored in directories that identify the hostname, container, and file name. If you did not specify the --node-selector argument, then the directory level for the hostname is not present. 5.10. Collecting a network trace from an OpenShift Container Platform node or container When investigating potential network-related OpenShift Container Platform issues, Red Hat Support might request a network packet trace from a specific OpenShift Container Platform cluster node or from a specific container. The recommended method to capture a network trace in OpenShift Container Platform is through a debug pod. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You have an existing Red Hat Support case ID. You have a Red Hat standard or premium Subscription. You have a Red Hat Customer Portal account. You have SSH access to your hosts. Procedure Obtain a list of cluster nodes: USD oc get nodes Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug : USD oc debug node/my-cluster-node Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host Note OpenShift Container Platform 4.15 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead. From within the chroot environment console, obtain the node's interface names: # ip ad Start a toolbox container, which includes the required binaries and plugins to run sosreport : # toolbox Note If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start... . To avoid tcpdump issues, remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container. Initiate a tcpdump session on the cluster node and redirect output to a capture file. This example uses ens5 as the interface name: USD tcpdump -nn -s 0 -i ens5 -w /host/var/tmp/my-cluster-node_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1 1 The tcpdump capture file's path is outside of the chroot environment because the toolbox container mounts the host's root directory at /host . If a tcpdump capture is required for a specific container on the node, follow these steps. Determine the target container ID. The chroot host command precedes the crictl command in this step because the toolbox container mounts the host's root directory at /host : # chroot /host crictl ps Determine the container's process ID. In this example, the container ID is a7fe32346b120 : # chroot /host crictl inspect --output yaml a7fe32346b120 | grep 'pid' | awk '{print USD2}' Initiate a tcpdump session on the container and redirect output to a capture file. This example uses 49628 as the container's process ID and ens5 as the interface name. The nsenter command enters the namespace of a target process and runs a command in its namespace. 
because the target process in this example is a container's process ID, the tcpdump command is run in the container's namespace from the host: # nsenter -n -t 49628 -- tcpdump -nn -i ens5 -w /host/var/tmp/my-cluster-node-my-container_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1 1 The tcpdump capture file's path is outside of the chroot environment because the toolbox container mounts the host's root directory at /host . Provide the tcpdump capture file to Red Hat Support for analysis, using one of the following methods. Upload the file to an existing Red Hat support case. Concatenate the sosreport archive by running the oc debug node/<node_name> command and redirect the output to a file. This command assumes you have exited the oc debug session: USD oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-tcpdump-capture-file.pcap' > /tmp/my-tcpdump-capture-file.pcap 1 1 The debug container mounts the host's root directory at /host . Reference the absolute path from the debug container's root directory, including /host , when specifying target files for concatenation. Note OpenShift Container Platform 4.15 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Transferring a tcpdump capture file from a cluster node by using scp is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to copy a tcpdump capture file from a node by running scp core@<node>.<cluster_name>.<base_domain>:<file_path> <local_path> . Navigate to an existing support case within the Customer Support page of the Red Hat Customer Portal. Select Attach files and follow the prompts to upload the file. 5.11. Providing diagnostic data to Red Hat Support When investigating OpenShift Container Platform issues, Red Hat Support might ask you to upload diagnostic data to a support case. Files can be uploaded to a support case through the Red Hat Customer Portal. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You have SSH access to your hosts. You have a Red Hat standard or premium Subscription. You have a Red Hat Customer Portal account. You have an existing Red Hat Support case ID. Procedure Upload diagnostic data to an existing Red Hat support case through the Red Hat Customer Portal. Concatenate a diagnostic file contained on an OpenShift Container Platform node by using the oc debug node/<node_name> command and redirect the output to a file. The following example copies /host/var/tmp/my-diagnostic-data.tar.gz from a debug container to /var/tmp/my-diagnostic-data.tar.gz : USD oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-diagnostic-data.tar.gz' > /var/tmp/my-diagnostic-data.tar.gz 1 1 The debug container mounts the host's root directory at /host . Reference the absolute path from the debug container's root directory, including /host , when specifying target files for concatenation. Note OpenShift Container Platform 4.15 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Transferring files from a cluster node by using scp is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. 
In such situations, it is possible to copy diagnostic files from a node by running scp core@<node>.<cluster_name>.<base_domain>:<file_path> <local_path> . Navigate to an existing support case within the Customer Support page of the Red Hat Customer Portal. Select Attach files and follow the prompts to upload the file. 5.12. About toolbox toolbox is a tool that starts a container on a Red Hat Enterprise Linux CoreOS (RHCOS) system. The tool is primarily used to start a container that includes the required binaries and plugins that are needed to run commands such as sosreport . The primary purpose for a toolbox container is to gather diagnostic information and to provide it to Red Hat Support. However, if additional diagnostic tools are required, you can add RPM packages or run an image that is an alternative to the standard support tools image. Installing packages to a toolbox container By default, running the toolbox command starts a container with the registry.redhat.io/rhel8/support-tools:latest image. This image contains the most frequently used support tools. If you need to collect node-specific data that requires a support tool that is not part of the image, you can install additional packages. Prerequisites You have accessed a node with the oc debug node/<node_name> command. Procedure Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host Start the toolbox container: # toolbox Install the additional package, such as wget : # dnf install -y <package_name> Starting an alternative image with toolbox By default, running the toolbox command starts a container with the registry.redhat.io/rhel8/support-tools:latest image. You can start an alternative image by creating a .toolboxrc file and specifying the image to run. Prerequisites You have accessed a node with the oc debug node/<node_name> command. Procedure Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host Create a .toolboxrc file in the home directory for the root user ID: # vi ~/.toolboxrc REGISTRY=quay.io 1 IMAGE=fedora/fedora:33-x86_64 2 TOOLBOX_NAME=toolbox-fedora-33 3 1 Optional: Specify an alternative container registry. 2 Specify an alternative image to start. 3 Optional: Specify an alternative name for the toolbox container. Start a toolbox container with the alternative image: # toolbox Note If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start... . Remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container, to avoid issues with sosreport plugins. | [
"oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.15.9",
"oc adm must-gather -- /usr/bin/gather_audit_logs",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s",
"oc adm must-gather --run-namespace <namespace> --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.15.9",
"oc import-image is/must-gather -n openshift",
"oc adm must-gather",
"tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1",
"oc adm must-gather --image-stream=openshift/must-gather \\ 1 --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.15.9 2",
"oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')",
"βββ cluster-logging β βββ clo β β βββ cluster-logging-operator-74dd5994f-6ttgt β β βββ clusterlogforwarder_cr β β βββ cr β β βββ csv β β βββ deployment β β βββ logforwarding_cr β βββ collector β β βββ fluentd-2tr64 β βββ eo β β βββ csv β β βββ deployment β β βββ elasticsearch-operator-7dc7d97b9d-jb4r4 β βββ es β β βββ cluster-elasticsearch β β β βββ aliases β β β βββ health β β β βββ indices β β β βββ latest_documents.json β β β βββ nodes β β β βββ nodes_stats.json β β β βββ thread_pool β β βββ cr β β βββ elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms β β βββ logs β β βββ elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms β βββ install β β βββ co_logs β β βββ install_plan β β βββ olmo_logs β β βββ subscription β βββ kibana β βββ cr β βββ kibana-9d69668d4-2rkvz βββ cluster-scoped-resources β βββ core β βββ nodes β β βββ ip-10-0-146-180.eu-west-1.compute.internal.yaml β βββ persistentvolumes β βββ pvc-0a8d65d9-54aa-4c44-9ecc-33d9381e41c1.yaml βββ event-filter.html βββ gather-debug.log βββ namespaces βββ openshift-logging β βββ apps β β βββ daemonsets.yaml β β βββ deployments.yaml β β βββ replicasets.yaml β β βββ statefulsets.yaml β βββ batch β β βββ cronjobs.yaml β β βββ jobs.yaml β βββ core β β βββ configmaps.yaml β β βββ endpoints.yaml β β βββ events β β β βββ elasticsearch-im-app-1596020400-gm6nl.1626341a296c16a1.yaml β β β βββ elasticsearch-im-audit-1596020400-9l9n4.1626341a2af81bbd.yaml β β β βββ elasticsearch-im-infra-1596020400-v98tk.1626341a2d821069.yaml β β β βββ elasticsearch-im-app-1596020400-cc5vc.1626341a3019b238.yaml β β β βββ elasticsearch-im-audit-1596020400-s8d5s.1626341a31f7b315.yaml β β β βββ elasticsearch-im-infra-1596020400-7mgv8.1626341a35ea59ed.yaml β β βββ events.yaml β β βββ persistentvolumeclaims.yaml β β βββ pods.yaml β β βββ replicationcontrollers.yaml β β βββ secrets.yaml β β βββ services.yaml β βββ openshift-logging.yaml β βββ pods β β βββ cluster-logging-operator-74dd5994f-6ttgt β β β βββ cluster-logging-operator β β β β βββ cluster-logging-operator β β β β βββ logs β β β β βββ current.log β β β β βββ previous.insecure.log β β β β βββ previous.log β β β βββ cluster-logging-operator-74dd5994f-6ttgt.yaml β β βββ cluster-logging-operator-registry-6df49d7d4-mxxff β β β βββ cluster-logging-operator-registry β β β β βββ cluster-logging-operator-registry β β β β βββ logs β β β β βββ current.log β β β β βββ previous.insecure.log β β β β βββ previous.log β β β βββ cluster-logging-operator-registry-6df49d7d4-mxxff.yaml β β β βββ mutate-csv-and-generate-sqlite-db β β β βββ mutate-csv-and-generate-sqlite-db β β β βββ logs β β β βββ current.log β β β βββ previous.insecure.log β β β βββ previous.log β β βββ elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms β β βββ elasticsearch-im-app-1596030300-bpgcx β β β βββ elasticsearch-im-app-1596030300-bpgcx.yaml β β β βββ indexmanagement β β β βββ indexmanagement β β β βββ logs β β β βββ current.log β β β βββ previous.insecure.log β β β βββ previous.log β β βββ fluentd-2tr64 β β β βββ fluentd β β β β βββ fluentd β β β β βββ logs β β β β βββ current.log β β β β βββ previous.insecure.log β β β β βββ previous.log β β β βββ fluentd-2tr64.yaml β β β βββ fluentd-init β β β βββ fluentd-init β β β βββ logs β β β βββ current.log β β β βββ previous.insecure.log β β β βββ previous.log β β βββ kibana-9d69668d4-2rkvz β β β βββ kibana β β β β βββ kibana β β β β βββ logs β β β β βββ current.log β β β β βββ previous.insecure.log β β β β βββ previous.log β β β βββ kibana-9d69668d4-2rkvz.yaml β β β βββ kibana-proxy β β β βββ kibana-proxy β β β βββ logs β β β 
βββ current.log β β β βββ previous.insecure.log β β β βββ previous.log β βββ route.openshift.io β βββ routes.yaml βββ openshift-operators-redhat βββ",
"oc adm must-gather --image-stream=openshift/must-gather \\ 1 --image=quay.io/kubevirt/must-gather 2",
"tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1",
"oc adm must-gather -- gather_network_logs",
"tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 1",
"Disk usage exceeds the volume percentage of 30% for mounted directory. Exiting",
"oc adm must-gather --volume-percentage <storage_percentage>",
"oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'",
"oc get nodes",
"oc debug node/my-cluster-node",
"oc new-project dummy",
"oc patch namespace dummy --type=merge -p '{\"metadata\": {\"annotations\": { \"scheduler.alpha.kubernetes.io/defaultTolerations\": \"[{\\\"operator\\\": \\\"Exists\\\"}]\"}}}'",
"oc debug node/my-cluster-node",
"chroot /host",
"toolbox",
"sos report -k crio.all=on -k crio.logs=on -k podman.all=on -k podman.logs=on 1",
"sos report --all-logs",
"Your sosreport has been generated and saved in: /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz 1 The checksum is: 382ffc167510fd71b4f12a4f40b97a4e",
"oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz' > /tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz 1",
"ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service",
"ssh core@<bootstrap_fqdn> 'for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done'",
"oc adm node-logs --role=master -u kubelet 1",
"oc adm node-logs --role=master --path=openshift-apiserver",
"oc adm node-logs --role=master --path=openshift-apiserver/audit.log",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log",
"oc adm must-gather --dest-dir /tmp/captures \\ <.> --source-dir '/tmp/tcpdump/' \\ <.> --image registry.redhat.io/openshift4/network-tools-rhel8:latest \\ <.> --node-selector 'node-role.kubernetes.io/worker' \\ <.> --host-network=true \\ <.> --timeout 30s \\ <.> -- tcpdump -i any \\ <.> -w /tmp/tcpdump/%Y-%m-%dT%H:%M:%S.pcap -W 1 -G 300",
"tmp/captures βββ event-filter.html βββ ip-10-0-192-217-ec2-internal 1 β βββ registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca β βββ 2022-01-13T19:31:31.pcap βββ ip-10-0-201-178-ec2-internal 2 β βββ registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca β βββ 2022-01-13T19:31:30.pcap βββ ip- βββ timestamp",
"oc get nodes",
"oc debug node/my-cluster-node",
"chroot /host",
"ip ad",
"toolbox",
"tcpdump -nn -s 0 -i ens5 -w /host/var/tmp/my-cluster-node_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1",
"chroot /host crictl ps",
"chroot /host crictl inspect --output yaml a7fe32346b120 | grep 'pid' | awk '{print USD2}'",
"nsenter -n -t 49628 -- tcpdump -nn -i ens5 -w /host/var/tmp/my-cluster-node-my-container_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1",
"oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-tcpdump-capture-file.pcap' > /tmp/my-tcpdump-capture-file.pcap 1",
"oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-diagnostic-data.tar.gz' > /var/tmp/my-diagnostic-data.tar.gz 1",
"chroot /host",
"toolbox",
"dnf install -y <package_name>",
"chroot /host",
"vi ~/.toolboxrc",
"REGISTRY=quay.io 1 IMAGE=fedora/fedora:33-x86_64 2 TOOLBOX_NAME=toolbox-fedora-33 3",
"toolbox"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/support/gathering-cluster-data |
5.31. control-center | 5.31. control-center 5.31.1. RHBA-2012:0950 - control-center bug fix and enhancement update Updated control-center packages that fix one bug and add various enhancements are now available for Red Hat Enterprise Linux 6. The control-center packages contain various configuration utilities for the GNOME desktop. These utilities allow the user to configure accessibility options, desktop fonts, keyboard and mouse properties, sound setup, desktop theme and background, user interface properties, screen resolution, and other settings. Bug Fix BZ# 771600 versions of the control-center package contained gnome-at-mobility, a script that requires a software component that is not distributed with Red Hat Enterprise Linux 6 nor is present in any of the available channels. With this update, the non-functional gnome-at-mobility script has been removed and is no longer distributed as part of the control-center package. Enhancements BZ# 524942 The background configuration tool now uses the XDG Base Directory Specification to determine where to store its data file. By default, this file is located at ~/.config/gnome-control-center/backgrounds.xml. Users can change the ~/.config/ prefix by setting the XDG_DATA_HOME environment variable, or set the GNOMECC_USE_OLD_BG_PATH environment variable to 1 to restore the old behavior and use the ~/.gnome2/backgrounds.xml file. BZ# 632680 The control-center-extra package now includes a GNOME Control Center shell. This shell provides a user interface for launching the various Control Center utilities. BZ# 769465 , BZ# 801363 The GNOME Control Center now provides a configuration utility for Wacom graphics tablets, which replaces the wacompl utility. All users of control-center are advised to upgrade to these updated packages, which fix this bug and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/control-center |
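As an illustration of the background-file variables described above, either of the following could be set in the user's environment before the GNOME session starts. This is a minimal sketch; the /srv/gnome-data prefix is a hypothetical example, not a value from this document:
export XDG_DATA_HOME=/srv/gnome-data
export GNOMECC_USE_OLD_BG_PATH=1
The first variable changes the prefix under which the backgrounds.xml data file is stored, while the second restores the old ~/.gnome2/backgrounds.xml behavior.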
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.15/html/installation_guide/making-open-source-more-inclusive |
Chapter 27. Managing servers | Chapter 27. Managing servers Note For step by step instructions on how to publish a Camel project to Red Hat Fuse, see Chapter 28, Publishing Fuse Integration Projects to a Server . 27.1. Adding a Server Overview For the tooling to manage a server, you need to add the server to the Servers list. Once added, the server appears in the Servers view, where you can connect to it and publish your Fuse Integration projects. Note If adding a Red Hat Fuse server, it is recommended that you edit its installDir /etc/users.properties file and add user information, in the form of user=password,role , to enable the tooling to establish an SSH connection to the server. Procedure There are three ways to add a new server to the Servers view: In the Servers view, click No servers are available. Click this link to create a new server... . Note This link appears in the Servers view only when no server has been defined. If you defined and added a server when you first created your project, the Servers view displays that server. In the Servers view, right-click to open the context menu and select New Server . On the menu bar, select File New Other Server Server . In the Define a New Server dialog, to add a new server: Expand the Red Hat JBoss Middleware node to expose the list of available server options: Click the server that you want to add. In the Server's host name field, accept the default ( localhost ). Note The address of localhost is 0.0.0.0 . In the Server name field, accept the default, or enter a different name for the runtime server. For Server runtime environment , accept the default or click Add to open the server's runtime definition page: Note If the server is not already installed on your machine, you can install it now by clicking Download and install runtime... and following the site's download instructions. Depending on the site, you might be required to provide valid credentials before you can continue the download process. Accept the default for the installation Name . In the Home Directory field, enter the path where the server runtime is installed, or click Browse to find and select it. to Execution Environment , select the runtime JRE from the drop-down menu. If the version you want does not appear in the list, click Environments and select the version from the list that appears. The JRE version you select must be installed on your machine. Note See Red Hat Fuse Supported Configurations for the required Java version. Leave the Alternate JRE option as is. Click to save the server's runtime definition and open its Configuration details page: Accept the default for SSH Port ( 8101 ). The runtime uses the SSH port to connect to the server's Karaf shell. If this default is incorrect for your setup, you can discover the correct port number by looking in the server's installDir /etc/org.apache.karaf.shell.cfg file. In the User Name field, enter the name used to log into the server. For Red Hat Fuse, this is a user name stored in the Red Hat Fuse installDir /etc/users.properties file. Note If the default user has been activated (uncommented) in the /etc/users.properties file, the tooling autofills the User Name and Password fields with the default user's name and password, as shown in [servCnfigDetails] . 
If a user has not been set up, you can either add one to that file by using the format user=password,role (for example, joe=secret,Administrator ), or you can set one using the karaf jaas command set: jaas:realms - to list the realms jaas:manage --index 1 - to edit the first (server) realm jaas:useradd <username> <password> - to add a user and associated password jaas:roleadd <username> Administrator - to specify the new user's role jaas:update - to update the realm with the new user information If a jaas realm has already been selected for the server, you can discover the user name by issuing the command JBossFuse:karaf@root> jaas:users . In the Password field, enter the password required for User Name to log into the server. Click Finish to save the server's configuration details. The server runtime appears in the Servers view. Expanding the server node exposes the server's JMX node: 27.2. Starting a Server Overview When you start a configured server, the tooling opens the server's remote management console in the Terminal view. This allows you to easily manage the container while testing your application. Procedure To start a server: In the Servers view, select the server you want to start. Click . The Console view opens and displays a message asking you to wait while the container is starting, for example: Note If you did not properly configure the user name and password for opening the remote console, a dialog opens asking you to enter the proper credentials. See Section 27.1, "Adding a Server" . After the container has started up, the Terminal view opens to display the container's management console. The running server appears in the Servers view: The running server also appears in the JMX Navigator view under Server Connections : Note If the server is running on the same machine as the tooling, the server also has an entry under Local Processes . 27.3. Connecting to a Running Server Overview After you start a configured server, it appears in the Servers view and in the JMX Navigator view under the Server Connections node. You may need to expand the Server Connections node to see the server. To publish and test your Fuse project application on the running server, you must first connect to it. You can connect to a running server either in the Servers view or in the JMX Navigator view. Note The Servers view and the JMX Navigator view are synchronized with regards to server connections. That is, connecting to a server in the Servers view also connects it in the JMX Navigator view, and vice versa. Connecting to a running server in the Servers view In the Servers view, expand the server runtime to expose its JMX[Disconnected] node. Double-click the JMX[Disconnected] node: Connecting to a running server in the JMX Navigator view In the JMX Navigator view, under the Server Connections node, select the server to which you want to connect. Double-click the selected server: Viewing bundles installed on the connected server In either the Servers view or the JMX Navigator view, expand the server runtime tree to expose the Bundles node, and select it. The tooling populates the Properties view with a list of bundles that are installed on the server: Using the Properties view's Search tool, you can search for bundles by their Symbolic Name or by their Identifier , if you know it. As you type the symbolic name or the identifier, the list updates, showing only the bundles that match the current search string. 
Note Alternatively, you can issue the osgi:list command in the Terminal view to see a generated list of bundles installed on the Red Hat Fuse server runtime. The tooling uses a different naming scheme for OSGi bundles displayed by the osgi:list command. In the <build> section of project's pom.xml file, you can find the bundle's symbolic name and its bundle name (OSGi) listed in the maven-bundle-plugin entry. For more details, see the section called "Verifying the project was published to the server" . 27.4. Disconnecting from a Server Overview When you are done testing your application, you can disconnect from the server without stopping it. Note The Servers view and the JMX Navigator view are synchronized with regards to server connections. That is, disconnecting from a server in the Servers view also disconnects it in the JMX Navigator view, and vice versa. Disconnecting from a server in the Servers view In the Servers view, expand the server runtime to expose its JMX[Connected] node. Right-click the JMX[Connected] node to open the context menu, and then select Disconnect . Disconnecting from a server in the JMX Navigator view In the JMX Navigator view, under Server Connections , select the server from which you want to disconnect. Right-click the selected server to open the context menu, and then select Disconnect . 27.5. Stopping a Server Overview You can shut down a server in the Servers view or in the server's remote console in the Terminal view. Using the Servers view To stop a server: In the Servers view, select the server you want to stop. Click . Using the remote console To stop a server: Open the Terminal view that is hosting the server's remote console. Press: CTRL + D 27.6. Deleting a Server Overview When you are finished with a configured server, or if you misconfigure a server, you can delete it and its configuration. First, delete the server from the Servers view or from the JMX Navigator view. , delete the server's configuration. Deleting a server In the Servers view, right-click the server you want to delete to open the context menu. Select Delete . Click OK . Deleting the server's configuration On Linux and Windows machines, select Window Preferences . Expand the Server folder, and then select Runtime Environments to open the Server Runtime Environments page. From the list, select the runtime environment of the server that you previously deleted from the Servers view, and then click Remove . Click OK . | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/tooling_user_guide/ridermanageservers |
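As a usage sketch of the console-based user setup described in Section 27.1, the jaas command sequence might look like the following when run in the server's Karaf shell. The user name admin and password secret are placeholders, and the realm index can differ on your installation:
jaas:realms
jaas:manage --index 1
jaas:useradd admin secret
jaas:roleadd admin Administrator
jaas:update
Equivalently, the same user can be defined directly in installDir/etc/users.properties with a line such as admin=secret,Administrator, using the user=password,role format shown above.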
6.3. Resource-Specific Parameters | 6.3. Resource-Specific Parameters For any individual resource, you can use the following command to display the parameters you can set for that resource. For example, the following command displays the parameters you can set for a resource of type LVM . | [
"pcs resource describe standard:provider:type | type",
"pcs resource describe LVM Resource options for: LVM volgrpname (required): The name of volume group. exclusive: If set, the volume group will be activated exclusively. partial_activation: If set, the volume group will be activated even only partial of the physical volumes available. It helps to set to true, when you are using mirroring logical volumes."
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-genresourceparams-HAAR |
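Once the available parameters are known, they are normally supplied when the resource is created. The following is an illustrative sketch only, not taken from this document: the resource name my_lvm and the volume group my_vg are placeholders, and the option names come from the pcs resource describe LVM output shown above:
pcs resource create my_lvm LVM volgrpname=my_vg exclusive=true
Here volgrpname is the required option and exclusive requests exclusive activation of the volume group on one node.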
Chapter 5. Configuration options | Chapter 5. Configuration options This chapter lists the available configuration options for Red Hat build of Apache Qpid JMS. JMS configuration options are set as query parameters on the connection URI. For more information, see Section 4.3, "Connection URIs" . 5.1. JMS options These options control the behaviour of JMS objects such as Connection , Session , MessageConsumer , and MessageProducer . jms.username The user name the client uses to authenticate the connection. jms.password The password the client uses to authenticate the connection. jms.clientID The client ID that the client applies to the connection. jms.forceAsyncSend If enabled, all messages from a MessageProducer are sent asynchronously. Otherwise, only certain kinds, such as non-persistent messages or those inside a transaction, are sent asynchronously. It is disabled by default. jms.forceSyncSend If enabled, all messages from a MessageProducer are sent synchronously. It is disabled by default. jms.forceAsyncAcks If enabled, all message acknowledgments are sent asynchronously. It is disabled by default. jms.localMessageExpiry If enabled, any expired messages received by a MessageConsumer are filtered out and not delivered. It is enabled by default. jms.localMessagePriority If enabled, prefetched messages are reordered locally based on their message priority value. It is disabled by default. jms.validatePropertyNames If enabled, message property names are required to be valid Java identifiers. It is enabled by default. jms.receiveLocalOnly If enabled, calls to receive with a timeout argument check a consumer's local message buffer only. Otherwise, if the timeout expires, the remote peer is checked to ensure there are really no messages. It is disabled by default. jms.receiveNoWaitLocalOnly If enabled, calls to receiveNoWait check a consumer's local message buffer only. Otherwise, the remote peer is checked to ensure there are really no messages available. It is disabled by default. jms.queuePrefix An optional prefix value added to the name of any Queue created from a Session . jms.topicPrefix An optional prefix value added to the name of any Topic created from a Session . jms.closeTimeout The time in milliseconds for which the client waits for normal resource closure before returning. The default is 60000 (60 seconds). jms.connectTimeout The time in milliseconds for which the client waits for connection establishment before returning with an error. The default is 15000 (15 seconds). jms.sendTimeout The time in milliseconds for which the client waits for completion of a synchronous message send before returning an error. By default the client waits indefinitely for a send to complete. jms.requestTimeout The time in milliseconds for which the client waits for completion of various synchronous interactions like opening a producer or consumer (excluding send) with the remote peer before returning an error. By default the client waits indefinitely for a request to complete. jms.clientIDPrefix An optional prefix value used to generate client ID values when a new Connection is created by the ConnectionFactory . The default is ID: . jms.connectionIDPrefix An optional prefix value used to generate connection ID values when a new Connection is created by the ConnectionFactory . This connection ID is used when logging some information from the Connection object, so a configurable prefix can make breadcrumbing the logs easier. The default is ID: . 
jms.populateJMSXUserID If enabled, populate the JMSXUserID property for each sent message using the authenticated user name from the connection. It is disabled by default. jms.awaitClientID If enabled, a connection with no client ID configured in the URI waits for a client ID to be set programmatically, or for confirmation that none can be set, before sending the AMQP connection "open". It is enabled by default. jms.useDaemonThread If enabled, a connection uses a daemon thread for its executor, rather than a non-daemon thread. It is disabled by default. jms.tracing The name of a tracing provider. Supported values are opentracing and noop . The default is noop . Prefetch policy options Prefetch policy determines how many messages each MessageConsumer fetches from the remote peer and holds in a local "prefetch" buffer. jms.prefetchPolicy.queuePrefetch The default is 1000. jms.prefetchPolicy.topicPrefetch The default is 1000. jms.prefetchPolicy.queueBrowserPrefetch The default is 1000. jms.prefetchPolicy.durableTopicPrefetch The default is 1000. jms.prefetchPolicy.all This can be used to set all prefetch values at once. The value of prefetch can affect the distribution of messages to multiple consumers on a queue or shared subscription. A higher value can result in larger batches sent at once to each consumer. To achieve more even round-robin distribution, use a lower value. Redelivery policy options Redelivery policy controls how redelivered messages are handled on the client. jms.redeliveryPolicy.maxRedeliveries Controls when an incoming message is rejected based on the number of times it has been redelivered. A value of 0 indicates that no message redeliveries are accepted. A value of 5 allows a message to be redelivered five times, and so on. The default is -1, meaning no limit. jms.redeliveryPolicy.outcome Controls the outcome applied to a message once it has exceeded the configured maxRedeliveries value. Supported values are: ACCEPTED , REJECTED , RELEASED , MODIFIED_FAILED and MODIFIED_FAILED_UNDELIVERABLE . The default value is MODIFIED_FAILED_UNDELIVERABLE . Message ID policy options Message ID policy controls the data type of the message ID assigned to messages sent from the client. jms.messageIDPolicy.messageIDType By default, a generated String value is used for the message ID on outgoing messages. Other available types are UUID , UUID_STRING , and PREFIXED_UUID_STRING . Presettle policy options Presettle policy controls when a producer or consumer instance is configured to use AMQP presettled messaging semantics. jms.presettlePolicy.presettleAll If enabled, all producers and non-transacted consumers created operate in presettled mode. It is disabled by default. jms.presettlePolicy.presettleProducers If enabled, all producers operate in presettled mode. It is disabled by default. jms.presettlePolicy.presettleTopicProducers If enabled, any producer that is sending to a Topic or TemporaryTopic destination operates in presettled mode. It is disabled by default. jms.presettlePolicy.presettleQueueProducers If enabled, any producer that is sending to a Queue or TemporaryQueue destination operates in presettled mode. It is disabled by default. jms.presettlePolicy.presettleTransactedProducers If enabled, any producer that is created in a transacted Session operates in presettled mode. It is disabled by default. jms.presettlePolicy.presettleConsumers If enabled, all consumers operate in presettled mode. It is disabled by default. 
jms.presettlePolicy.presettleTopicConsumers If enabled, any consumer that is receiving from a Topic or TemporaryTopic destination operates in presettled mode. It is disabled by default. jms.presettlePolicy.presettleQueueConsumers If enabled, any consumer that is receiving from a Queue or TemporaryQueue destination operates in presettled mode. It is disabled by default. Deserialization policy options Deserialization policy provides a means of controlling which Java types are trusted to be deserialized from the object stream while retrieving the body from an incoming ObjectMessage composed of serialized Java Object content. By default all types are trusted during an attempt to deserialize the body. The default deserialization policy provides URI options that allow specifying a whitelist and a blacklist of Java class or package names. jms.deserializationPolicy.whiteList A comma-separated list of class and package names that should be allowed when deserializing the contents of an ObjectMessage , unless overridden by blackList . The names in this list are not pattern values. The exact class or package name must be configured, as in java.util.Map or java.util . Package matches include sub-packages. The default is to allow all. jms.deserializationPolicy.blackList A comma-separated list of class and package names that should be rejected when deserializing the contents of a ObjectMessage . The names in this list are not pattern values. The exact class or package name must be configured, as in java.util.Map or java.util . Package matches include sub-packages. The default is to prevent none. 5.2. TCP options When connected to a remote server using plain TCP, the following options specify the behavior of the underlying socket. These options are appended to the connection URI along with any other configuration options. Example: A connection URI with transport options The complete set of TCP transport options is listed below. transport.sendBufferSize The send buffer size in bytes. The default is 65536 (64 KiB). transport.receiveBufferSize The receive buffer size in bytes. The default is 65536 (64 KiB). transport.trafficClass The default is 0. transport.connectTimeout The default is 60 seconds. transport.soTimeout The default is -1. transport.soLinger The default is -1. transport.tcpKeepAlive The default is false. transport.tcpNoDelay If enabled, do not delay and buffer TCP sends. It is enabled by default. transport.useEpoll When available, use the native epoll IO layer instead of the NIO layer. This can improve performance. It is enabled by default. 5.3. SSL/TLS options The SSL/TLS transport is enabled by using the amqps URI scheme. Because the SSL/TLS transport extends the functionality of the TCP-based transport, all of the TCP transport options are valid on an SSL/TLS transport URI. Example: A simple SSL/TLS connection URI The complete set of SSL/TLS transport options is listed below. transport.keyStoreLocation The path to the SSL/TLS key store. If unset, the value of the javax.net.ssl.keyStore system property is used. transport.keyStorePassword The password for the SSL/TLS key store. If unset, the value of the javax.net.ssl.keyStorePassword system property is used. transport.trustStoreLocation The path to the SSL/TLS trust store. If unset, the value of the javax.net.ssl.trustStore system property is used. transport.trustStorePassword The password for the SSL/TLS trust store. If unset, the value of the javax.net.ssl.trustStorePassword system property is used. 
transport.keyStoreType If unset, the value of the javax.net.ssl.keyStoreType system property is used. If the system property is unset, the default is JKS . transport.trustStoreType If unset, the value of the javax.net.ssl.trustStoreType system property is used. If the system property is unset, the default is JKS . transport.storeType Sets both keyStoreType and trustStoreType to the same value. If unset, keyStoreType and trustStoreType default to the values specified above. transport.contextProtocol The protocol argument used when getting an SSLContext. The default is TLS , or TLSv1.2 if using OpenSSL. transport.enabledCipherSuites A comma-separated list of cipher suites to enable. If unset, the context-default ciphers are used. Any disabled ciphers are removed from this list. transport.disabledCipherSuites A comma-separated list of cipher suites to disable. Ciphers listed here are removed from the enabled ciphers. transport.enabledProtocols A comma-separated list of protocols to enable. If unset, the context-default protocols are used. Any disabled protocols are removed from this list. transport.disabledProtocols A comma-separated list of protocols to disable. Protocols listed here are removed from the enabled protocol list. The default is SSLv2Hello,SSLv3 . transport.trustAll If enabled, trust the provided server certificate implicitly, regardless of any configured trust store. It is disabled by default. transport.verifyHost If enabled, verify that the connection hostname matches the provided server certificate. It is enabled by default. transport.keyAlias The alias to use when selecting a key pair from the key store if required to send a client certificate to the server. transport.useOpenSSL If enabled, use native OpenSSL libraries for SSL/TLS connections if available. It is disabled by default. For more information, see Section 7.1, "Enabling OpenSSL support" . 5.4. AMQP options The following options apply to aspects of behavior related to the AMQP wire protocol. amqp.idleTimeout The time in milliseconds after which the connection is failed if the peer sends no AMQP frames. The default is 60000 (1 minute). amqp.vhost The virtual host to connect to. This is used to populate the SASL and AMQP hostname fields. The default is the main hostname from the connection URI. amqp.saslLayer If enabled, SASL is used when establishing connections. It is enabled by default. amqp.saslMechanisms A comma-separated list of SASL mechanisms the client should allow selection of, if offered by the server and usable with the configured credentials. The supported mechanisms are EXTERNAL, SCRAM-SHA-256, SCRAM-SHA-1, CRAM-MD5, PLAIN, ANONYMOUS, and GSSAPI for Kerberos. The default is to allow selection from all mechanisms except GSSAPI, which must be explicitly included here to enable. amqp.maxFrameSize The maximum AMQP frame size in bytes allowed by the client. This value is advertised to the remote peer. The default is 1048576 (1 MiB). amqp.drainTimeout The time in milliseconds that the client waits for a response from the remote peer when a consumer drain request is made. If no response is seen in the allotted timeout period, the link is considered failed and the associated consumer is closed. The default is 60000 (1 minute). amqp.allowNonSecureRedirects If enabled, allow AMQP redirects to alternative hosts when the existing connection is secure and the alternative connection is not. For example, if enabled this would permit redirecting an SSL/TLS connection to a raw TCP connection. It is disabled by default. 
5.5. Failover options Failover URIs start with the prefix failover: and contain a comma-separated list of connection URIs inside parentheses. Additional options are specified at the end. Options prefixed with jms. are applied to the overall failover URI, outside of parentheses, and affect the Connection object for its lifetime. Example: A failover URI with failover options The individual broker details within the parentheses can use the transport. or amqp. options defined earlier. These are applied as each host is connected to. Example: A failover URI with per-connection transport and AMQP options All of the configuration options for failover are listed below. failover.initialReconnectDelay The time in milliseconds the client waits before the first attempt to reconnect to a remote peer. The default is 0, meaning the first attempt happens immediately. failover.reconnectDelay The time in milliseconds between reconnection attempts. If the backoff option is not enabled, this value remains constant. The default is 10. failover.maxReconnectDelay The maximum time that the client waits before attempting to reconnect. This value is only used when the backoff feature is enabled to ensure that the delay does not grow too large. The default is 30 seconds. failover.useReconnectBackOff If enabled, the time between reconnection attempts grows based on a configured multiplier. It is enabled by default. failover.reconnectBackOffMultiplier The multiplier used to grow the reconnection delay value. The default is 2.0. failover.maxReconnectAttempts The number of reconnection attempts allowed before reporting the connection as failed to the client. The default is -1, meaning no limit. failover.startupMaxReconnectAttempts For a client that has never connected to a remote peer before, this option controls how many attempts are made to connect before reporting the connection as failed. If unset, the value of maxReconnectAttempts is used. failover.warnAfterReconnectAttempts The number of failed reconnection attempts until a warning is logged. The default is 10. failover.randomize If enabled, the set of failover URIs is randomly shuffled before attempting to connect to one of them. This can help to distribute client connections more evenly across multiple remote peers. It is disabled by default. failover.amqpOpenServerListAction Controls how the failover transport behaves when the connection "open" frame from the server provides a list of failover hosts to the client. Valid values are REPLACE , ADD , or IGNORE . If REPLACE is configured, all failover URIs other than the one for the current server are replaced with those provided by the server. If ADD is configured, the URIs provided by the server are added to the existing set of failover URIs, with deduplication. If IGNORE is configured, any updates from the server are ignored and no changes are made to the set of failover URIs in use. The default is REPLACE . The failover URI also supports defining nested options as a means of specifying AMQP and transport option values applicable to all the individual nested broker URIs. This is accomplished using the same transport. and amqp. URI options outlined earlier for a non-failover broker URI but prefixed with failover.nested. . For example, to apply the same value for the amqp.vhost option to every broker connected to you might have a URI like the following: Example: A failover URI with shared transport and AMQP options 5.6. 
Discovery options The client has an optional discovery module that provides a customized failover layer where the broker URIs to connect to are not given in the initial URI but instead are discovered by interacting with a discovery agent. There are currently two discovery agent implementations: a file watcher that loads URIs from a file and a multicast listener that works with ActiveMQ 5.x brokers that are configured to broadcast their broker addresses for listening clients. The general set of failover-related options when using discovery are the same as those detailed earlier, with the main prefix changed from failover. to discovery. , and with the nested prefix used to supply URI options common to all the discovered broker URIs. For example, without the agent URI details, a general discovery URI might look like the following: Example: A discovery URI To use the file watcher discovery agent, create an agent URI like the following: Example: A discovery URI using the file watcher agent The URI options for the file watcher discovery agent are listed below. updateInterval The time in milliseconds between checks for file changes. The default is 30000 (30 seconds). To use the multicast discovery agent with an ActiveMQ 5.x broker, create an agent URI like the following: Example: A discovery URI using the multicast listener agent Note that the use of default as the host in the multicast agent URI above is a special value that is substituted by the agent with the default 239.255.2.3:6155 . You can change this to specify the actual IP address and port in use with your multicast configuration. The URI option for the multicast discovery agent is listed below. group The multicast group used to listen for updates. The default is default . | [
"amqp://localhost:5672?jms.clientID=foo&transport.connectTimeout=30000",
"amqps://myhost.mydomain:5671",
"failover:(amqp://host1:5672,amqp://host2:5672)?jms.clientID=foo&failover.maxReconnectAttempts=20",
"failover:(amqp://host1:5672?amqp.option=value,amqp://host2:5672?transport.option=value)?jms.clientID=foo",
"failover:(amqp://host1:5672,amqp://host2:5672)?jms.clientID=foo&failover.nested.amqp.vhost=myhost",
"discovery:(<agent-uri>)?discovery.maxReconnectAttempts=20&discovery.discovered.jms.clientID=foo",
"discovery:(file:///path/to/monitored-file?updateInterval=60000)",
"discovery:(multicast://default?group=default)"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_qpid_jms/2.4/html/using_qpid_jms/configuration_options |
Chapter 1. OpenShift Data Foundation deployed using dynamic devices | Chapter 1. OpenShift Data Foundation deployed using dynamic devices 1.1. OpenShift Data Foundation deployed on AWS To replace an operational node, see: Section 1.1.1, "Replacing an operational AWS node on user-provisioned infrastructure" . Section 1.1.2, "Replacing an operational AWS node on installer-provisioned infrastructure" . To replace a failed node, see: Section 1.1.3, "Replacing a failed AWS node on user-provisioned infrastructure" . Section 1.1.4, "Replacing a failed AWS node on installer-provisioned infrastructure" . 1.1.1. Replacing an operational AWS node on user-provisioned infrastructure Prerequisites Ensure that the replacement nodes are configured with similar infrastructure and resources to the node that you replace. You must be logged into the OpenShift Container Platform cluster. Note When replacing an AWS node on user-provisioned infrastructure, the new node needs to be created in the same AWS zone as the original node. Procedure Identify the node that you need to replace. Mark the node as unschedulable: <node_name> Specify the name of node that you need to replace. Drain the node: Important This activity might take at least 5 - 10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Delete the node: Create a new Amazon Web Service (AWS) machine instance with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform node using the new AWS machine instance. Check for the Certificate Signing Requests (CSRs) related to OpenShift Container Platform that are in Pending state: Approve all the required OpenShift Container Platform CSRs for the new node: <certificate_name> Specify the name of the CSR. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 1.1.2. Replacing an operational AWS node on installer-provisioned infrastructure Procedure Log in to the OpenShift Web Console, and click Compute Nodes . Identify the node that you need to replace. Take a note of its Machine Name . Mark the node as unschedulable: <node_name> Specify the name of node that you need to replace. Drain the node: Important This activity might take at least 5 - 10 minutes or more. 
Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Click Compute Machines . Search for the required machine. Besides the required machine, click Action menu (...) Delete Machine . Click Delete to confirm that the machine is deleted. A new machine is automatically created. Wait for the new machine to start and transition into Running state. Important This activity might take at least 5 - 10 minutes or more. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 1.1.3. Replacing a failed AWS node on user-provisioned infrastructure Prerequisites Ensure that the replacement nodes are configured with similar infrastructure and resources to the node that you replace. You must be logged into the OpenShift Container Platform cluster. Procedure Identify the Amazon Web Service (AWS) machine instance of the node that you need to replace. Log in to AWS, and terminate the AWS machine instance that you identified. Create a new AWS machine instance with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform node using the new AWS machine instance. Check for the Certificate Signing Requests (CSRs) related to OpenShift Container Platform that are in Pending state: Approve all the required OpenShift Container Platform CSRs for the new node: <certificate_name> Specify the name of the CSR. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Execute the following command to apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. 
Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 1.1.4. Replacing a failed AWS node on installer-provisioned infrastructure Procedure Log in to the OpenShift Web Console, and click Compute Nodes . Identify the faulty node, and click on its Machine Name . Click Actions Edit Annotations , and click Add More . Add machine.openshift.io/exclude-node-draining , and click Save . Click Actions Delete Machine , and click Delete . A new machine is automatically created, wait for new machine to start. Important This activity might take at least 5 - 10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Optional: If the failed Amazon Web Service (AWS) instance is not removed automatically, terminate the instance from the AWS console. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 1.2. OpenShift Data Foundation deployed on VMware To replace an operational node, see: Section 1.2.1, "Replacing an operational VMware node on user-provisioned infrastructure" . Section 1.2.2, "Replacing an operational VMware node on installer-provisioned infrastructure" . To replace a failed node, see: Section 1.2.3, "Replacing a failed VMware node on user-provisioned infrastructure" . Section 1.2.4, "Replacing a failed VMware node on installer-provisioned infrastructure" . 1.2.1. Replacing an operational VMware node on user-provisioned infrastructure Prerequisites Ensure that the replacement nodes are configured with similar infrastructure and resources to the node that you replace. You must be logged into the OpenShift Container Platform cluster. Procedure Identify the node and its Virtual Machine (VM) that you need replace. Mark the node as unschedulable: <node_name> Specify the name of node that you need to replace. 
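For example, marking the node as unschedulable is typically done with the standard OpenShift CLI cordon command. The command itself is not reproduced in this excerpt, so the following is a sketch of the usual form, with <node_name> as described above:
oc adm cordon <node_name>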
Drain the node: Important This activity might take at least 5 - 10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Delete the node: Log in to VMware vSphere, and terminate the VM that you identified: Important Delete the VM only from the inventory and not from the disk. Create a new VM on VMware vSphere with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new VM. Check for the Certificate Signing Requests (CSRs) related to OpenShift Container Platform that are in Pending state: Approve all the required OpenShift Container Platform CSRs for the new node: <certificate_name> Specify the name of the CSR. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 1.2.2. Replacing an operational VMware node on installer-provisioned infrastructure Procedure Log in to the OpenShift Web Console, and click Compute Nodes . Identify the node that you need to replace. Take a note of its Machine Name . Mark the node as unschedulable: <node_name> Specify the name of node that you need to replace. Drain the node: Important This activity might take at least 5 - 10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Click Compute Machines . Search for the required machine. Besides the required machine, click Action menu (...) Delete Machine . Click Delete to confirm the machine is deleted. A new machine is automatically created. Wait for the new machine to start and transition into Running state. Important This activity might take at least 5 - 10 minutes or more. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Verification steps Verify that the new node is present in the output: Click Workloads Pods . 
Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 1.2.3. Replacing a failed VMware node on user-provisioned infrastructure Prerequisites Ensure that the replacement nodes are configured with similar infrastructure and resources to the node that you replace. You must be logged into the OpenShift Container Platform cluster. Procedure Identify the node and its Virtual Machine (VM) that you need to replace. Delete the node: <node_name> Specify the name of node that you need to replace. Log in to VMware vSphere and terminate the VM that you identified. Important Delete the VM only from the inventory and not from the disk. Create a new VM on VMware vSphere with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new VM. Check for the Certificate Signing Requests (CSRs) related to OpenShift Container Platform that are in Pending state: Approve all the required OpenShift Container Platform CSRs for the new node: <certificate_name> Specify the name of the CSR. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 1.2.4. Replacing a failed VMware node on installer-provisioned infrastructure Procedure Log in to the OpenShift Web Console, and click Compute Nodes . Identify the faulty node, and click on its Machine Name . Click Actions Edit Annotations , and click Add More . Add machine.openshift.io/exclude-node-draining , and click Save . Click Actions Delete Machine , and click Delete . A new machine is automatically created. Wait for te new machine to start. Important This activity might take at least 5 - 10 minutes or more. 
Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Optional: If the failed Virtual Machine (VM) is not removed automatically, terminate the VM from VMware vSphere. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 1.3. OpenShift Data Foundation deployed on Microsoft Azure 1.3.1. Replacing operational nodes on Azure installer-provisioned infrastructure Procedure Log in to the OpenShift Web Console, and click Compute Nodes . Identify the node that you need to replace. Take a note of its Machine Name . Mark the node as unschedulable: <node_name> Specify the name of node that you need to replace. Drain the node: Important This activity might take at least 5 - 10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Click Compute Machines . Search for the required machine. Besides the required machine, click the Action menu (...) Delete Machine . Click Delete to confirm the machine is deleted. A new machine is automatically created. Wait for the new machine to start and transition into Running state. Important This activity might take at least 5 - 10 minutes or more. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Execute the following command to apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Verification steps Verify that the new node is present in the output: Click Workloads -> Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. 
For each of the new nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 1.3.2. Replacing failed nodes on Azure installer-provisioned infrastructure Procedure Log in to the OpenShift Web Console, and click Compute Nodes . Identify the faulty node, and click on its Machine Name . Click Actions Edit Annotations , and click Add More . Add machine.openshift.io/exclude-node-draining , and click Save . Click Actions Delete Machine , and click Delete . A new machine is automatically created. Wait for the new machine to start. Important This activity might take at least 5 - 10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Optional: If the failed Azure instance is not removed automatically, terminate the instance from the Azure console. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 1.4. OpenShift Data Foundation deployed on Google cloud 1.4.1. Replacing operational nodes on Google Cloud installer-provisioned infrastructure Procedure Log in to OpenShift Web Console and click Compute Nodes . Identify the node that needs to be replaced. Take a note of its Machine Name . Mark the node as unschedulable using the following command: Drain the node using the following command: Important This activity may take at least 5-10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when the new node is labeled and functional. Click Compute Machines . Search for the required machine. Beside the required machine, click the Action menu (...) Delete Machine . Click Delete to confirm the machine deletion. A new machine is automatically created. Wait for the new machine to start and transition into Running state. Important This activity may take at least 5-10 minutes or more. Click Compute Nodes , and confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...)
Edit Labels Add cluster.ocs.openshift.io/openshift-storage and click Save . From the command-line interface Execute the following command to apply the OpenShift Data Foundation label to the new node: Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 1.4.2. Replacing failed nodes on Google Cloud installer-provisioned infrastructure Procedure Log in to OpenShift Web Console and click Compute Nodes . Identify the faulty node and click on its Machine Name . Click Actions Edit Annotations , and click Add More . Add machine.openshift.io/exclude-node-draining and click Save . Click Actions Delete Machine , and click Delete . A new machine is automatically created. Wait for the new machine to start. Important This activity may take at least 5-10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when the new node is labeled and functional. Click Compute Nodes , and confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the web user interface For the new node, click Action Menu (...) Edit Labels Add cluster.ocs.openshift.io/openshift-storage and click Save . From the command line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Optional: If the failed Google Cloud instance is not removed automatically, terminate the instance from the Google Cloud console. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support .
"oc adm cordon <node_name>",
"oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets",
"oc delete nodes <node_name>",
"oc get csr",
"oc adm certificate approve <certificate_name>",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc adm cordon <node_name>",
"oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc get csr",
"oc adm certificate approve <certificate_name>",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc adm cordon <node_name>",
"oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets",
"oc delete nodes <node_name>",
"oc get csr",
"oc adm certificate approve <certificate_name>",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc adm cordon <node_name>",
"oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc delete nodes <node_name>",
"oc get csr",
"oc adm certificate approve <certificate_name>",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc adm cordon <node_name>",
"oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc adm cordon <node_name>",
"oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/replacing_nodes/openshift_data_foundation_deployed_using_dynamic_devices |
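The VMware, Azure, and Google Cloud procedures above all finish with the same label-and-verify flow. The following is a minimal, hedged shell sketch that strings those documented oc commands together for a single replacement node; the script name, the NEW_NODE variable, and the decision to approve every pending CSR in one pass are illustrative assumptions, not part of the official procedure.

#!/bin/bash
# Hedged helper: run the documented post-replacement steps for one new node.
# NEW_NODE is an assumed placeholder; review pending CSRs before approving in production.
set -euo pipefail
NEW_NODE="${1:?usage: verify-replacement.sh <new_node_name>}"

# Approve any CSRs still in Pending state (same as 'oc get csr' / 'oc adm certificate approve').
oc get csr | awk '$NF == "Pending" {print $1}' | xargs -r oc adm certificate approve

# Apply the OpenShift Data Foundation label to the new node.
oc label node "$NEW_NODE" cluster.ocs.openshift.io/openshift-storage="" --overwrite

# Verification: the node carries the storage label and runs the OSD and CSI pods.
oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
oc get pods -o wide -n openshift-storage | egrep -i "$NEW_NODE" | egrep 'osd|csi-cephfsplugin|csi-rbdplugin'

# Optional encryption check: look for the crypt keyword beside the ocs-deviceset devices.
oc debug node/"$NEW_NODE" -- chroot /host lsblk

If any step in the sketch fails, fall back to the full manual verification steps above and contact Red Hat Support.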
Red Hat Software Certification Workflow Guide | Red Hat Software Certification Workflow Guide Red Hat Software Certification 2025 For Use with Red Hat Enterprise Linux and Red Hat OpenShift Red Hat Customer Content Services | [
"For example: {Partner Certification} Error occurred while submitting certification test results using the Red Hat Certification application.",
"subscription-manager register",
"subscription-manager list --available*",
"subscription-manager attach --pool= <pool_ID >",
"subscription-manager repos --enable=cert-1-for-rhel-8- <HOSTTYPE> -rpms",
"uname -m",
"subscription-manager repos --enable=cert-1-for-rhel-8-x86_64-rpms",
"subscription-manager repos --enable=cert-1-for-rhel-9- <HOSTTYPE> -rpms",
"uname -m",
"subscription-manager repos --enable=cert-1-for-rhel-9-x86_64-rpms",
"dnf install redhat-certification-software",
"rhcert-provision <path_to_test_plan_document>",
"rhcert-provision",
"rhcert-run",
"rhcert-save",
"dnf install redhat-certification-cockpit",
"podman login --username <your_username> --password <your_password> --authfile ./temp-authfile.json <registry>",
"preflight check container registry.example.org/<namespace>/<image_name>:<image_tag>",
"preflight support",
"preflight check container registry.example.org/<namespace>/<image_name>:<image_tag> --submit --pyxis-api-token=<api_token> --certification-project-id=<project_id> --docker-config=./temp-authfile.json",
"chcon -Rv -t container_file_t \"storage_path(/.*)?\"",
"βββ operators βββ my-operator βββ v1.0",
"βββ config.yaml βββ operators βββ my-operator βββ v1.4.8 β βββ manifests β β βββ cache.example.com_my-operators.yaml β β βββ my-operator-controller-manager-metrics-service_v1_service.yaml β β βββ my-operator-manager-config_v1_configmap.yaml β β βββ my-operator-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml β β βββ my-operator.clusterserviceversion.yaml β βββ metadata β βββ annotations.yaml βββ ci.yaml",
"new-project oco",
"export KUBECONFIG=/path/to/your/cluster/kubeconfig",
"create secret generic kubeconfig --from-file=kubeconfig=USDKUBECONFIG",
"create secret generic github-api-token --from-literal GITHUB_TOKEN=<github token>",
"create secret generic pyxis-api-secret --from-literal pyxis_api_key=< API KEY >",
"apiVersion: v1 kind: PersistentVolume metadata: name: my-local-pv spec: capacity: storage: 5Gi volumeMode: Filesystem accessModes: - ReadWriteOnce persistentVoumeReclaimPolicy: Delete local: path: /dev/vda4 <- use a path from your cluster nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - crc-8k6jw-master-0 <- use the name of one of your cluster's node",
"get pods -n openshift-marketplace",
"get logs -f -n openshift-operators <pod name> manager",
"cat <<EOF> workspace-template.yml spec: accessModes: - ReadWriteOnce resources: requests: storage: 5Gi EOF",
"export KUBECONFIG=/path/to/your/cluster/kubeconfig",
"adm new-project <my-project-name> # create the project project <my-project-name> # switch into the project",
"create secret generic kubeconfig --from-file=kubeconfig=USDKUBECONFIG",
"import-image certified-operator-index --from=registry.redhat.io/redhat/certified-operator-index --reference-policy local --scheduled --confirm --all",
"USDgit clone https://github.com/redhat-openshift-ecosystem/operator-pipelines USDcd operator-pipelines USDoc apply -R -f ansible/roles/operator-pipeline/templates/openshift/pipelines USDoc apply -R -f ansible/roles/operator-pipeline/templates/openshift/tasks",
"create secret generic github-api-token --from-literal GITHUB_TOKEN=<github token>",
"create secret generic pyxis-api-secret --from-literal pyxis_api_key=< API KEY >",
"base64 /path/to/private/key",
"cat << EOF > ssh-secret.yml kind: Secret apiVersion: v1 metadata: name: github-ssh-credentials data: id_rsa: | <base64 encoded private key> EOF",
"create -f ssh-secret.yml",
"create secret docker-registry registry-dockerconfig-secret --docker-server=quay.io --docker-username=<registry username> --docker-password=<registry password> --docker-email=<registry email>",
"apply -f ansible/roles/operator-pipeline/templates/openshift/openshift-pipelines-custom-scc.yml",
"adm policy add-scc-to-user pipelines-custom-scc -z pipeline",
"GIT_REPO_URL=<Git URL to your certified-operators fork > BUNDLE_PATH=<path to the bundle in the Git Repo> (For example - operators/my-operator/1.2.8) tkn pipeline start operator-ci-pipeline --param git_repo_url=USDGIT_REPO_URL --param git_branch=main --param bundle_path=USDBUNDLE_PATH --param env=prod --workspace name=pipeline,volumeClaimTemplateFile=templates/workspace-template.yml --showlog",
"--param kubeconfig_secret_name=kubeconfig --param kubeconfig_secret_key=kubeconfig",
"GIT_REPO_URL=<Git URL to your certified-operators fork > BUNDLE_PATH=<path to the bundle in the Git Repo> (ie: operators/my-operator/1.2.8) GIT_USERNAME=<your github username> GIT_EMAIL=<your github email address> tkn pipeline start operator-ci-pipeline --param git_repo_url=USDGIT_REPO_URL --param git_branch=main --param bundle_path=USDBUNDLE_PATH --param env=prod --param pin_digests=true --param git_username=USDGIT_USERNAME --param git_email=USDGIT_EMAIL --workspace name=pipeline,volumeClaimTemplateFile=templates/workspace-template.yml --workspace name=ssh-dir,secret=github-ssh-credentials --showlog",
"GIT_REPO_URL=<Git URL to your certified-operators fork > BUNDLE_PATH=<path to the bundle in the Git Repo> (ie: operators/my-operator/1.2.8) GIT_USERNAME=<your github username> GIT_EMAIL=<your github email address> REGISTRY=<your image registry. ie: quay.io> IMAGE_NAMESPACE=<namespace in the container registry> tkn pipeline start operator-ci-pipeline --param git_repo_url=USDGIT_REPO_URL --param git_branch=main --param bundle_path=USDBUNDLE_PATH --param env=prod --param pin_digests=true --param git_username=USDGIT_USERNAME --param git_email=USDGIT_EMAIL --param registry=USDREGISTRY --param image_namespace=USDIMAGE_NAMESPACE --workspace name=pipeline,volumeClaimTemplateFile=templates/workspace-template.yml --workspace name=ssh-dir,secret=github-ssh-credentials --workspace name=registry-credentials,secret=registry-docker config-secret --showlog \\",
"-param upstream_repo_name=USDUPSTREAM_REPO_NAME #Repo where Pull Request (PR) will be opened --param submit=true",
"--param pyxis_api_key_secret_name=pyxis-api-secret --param pyxis_api_key_secret_key=pyxis_api_key",
"GIT_REPO_URL=<Git URL to your certified-operators fork > BUNDLE_PATH=<path to the bundle in the Git Repo> (ie: operators/my-operator/1.2.8) tkn pipeline start operator-ci-pipeline --param git_repo_url=USDGIT_REPO_URL --param git_branch=main --param bundle_path=USDBUNDLE_PATH --param upstream_repo_name=redhat-openshift-ecosystem/certified-operators --param submit=true --param env=prod --workspace name=pipeline,volumeClaimTemplateFile=templates/workspace-template.yml --showlog",
"GIT_REPO_URL=<Git URL to your certified-operators fork > BUNDLE_PATH=<path to the bundle in the Git Repo> (ie: operators/my-operator/1.2.8) GIT_USERNAME=<your github username> GIT_EMAIL=<your github email address> tkn pipeline start operator-ci-pipeline --param git_repo_url=USDGIT_REPO_URL --param git_branch=main --param bundle_path=USDBUNDLE_PATH --param env=prod --param pin_digests=true --param git_username=USDGIT_USERNAME --param git_email=USDGIT_EMAIL --param upstream_repo_name=redhat-openshift-ecosystem/certified-operators --param submit=true --workspace name=pipeline,volumeClaimTemplateFile=templates/workspace-template.yml --workspace name=ssh-dir,secret=github-ssh-credentials --showlog",
"GIT_REPO_URL=<Git URL to your certified-operators fork > BUNDLE_PATH=<path to the bundle in the Git Repo> (ie: operators/my-operator/1.2.8) GIT_USERNAME=<your github username> GIT_EMAIL=<your github email address> REGISTRY=<your image registry. ie: quay.io> IMAGE_NAMESPACE=<namespace in the container registry> tkn pipeline start operator-ci-pipeline --param git_repo_url=USDGIT_REPO_URL --param git_branch=main --param bundle_path=USDBUNDLE_PATH --param env=prod --param pin_digests=true --param git_username=USDGIT_USERNAME --param git_email=USDGIT_EMAIL --param registry=USDREGISTRY --param image_namespace=USDIMAGE_NAMESPACE --param upstream_repo_name=redhat-openshift-ecosystem/certified-operators --param submit=true --workspace name=pipeline,volumeClaimTemplateFile=templates/workspace-template.yml --workspace name=ssh-dir,secret=github-ssh-credentials --workspace name=registry-credentials,secret=registry-docker config-secret --showlog",
"GIT_REPO_URL=<Git URL to your certified-operators fork > BUNDLE_PATH=<path to the bundle in the Git Repo> (ie: operators/my-operator/1.2.8) GIT_USERNAME=<your github username> GIT_EMAIL=<your github email address> REGISTRY=<your image registry. ie: quay.io> IMAGE_NAMESPACE=<namespace in the container registry> tkn pipeline start operator-ci-pipeline --param git_repo_url=USDGIT_REPO_URL --param git_branch=main --param bundle_path=USDBUNDLE_PATH --param env=prod --param pin_digests=true --param git_username=USDGIT_USERNAME --param git_email=USDGIT_EMAIL --param upstream_repo_name=redhat-openshift-ecosystem/certified-operators --param registry=USDREGISTRY --param image_namespace=USDIMAGE_NAMESPACE --param submit=true --workspace name=pipeline,volumeClaimTemplateFile=templates/workspace-template.yml --workspace name=ssh-dir,secret=github-ssh-credentials --workspace name=registry-credentials,secret=registry-docker config-secret --showlog",
"βββ operators βββ my-operator βββ v1.0",
"βββ config.yaml βββ operators βββ my-operator βββ v1.4.8 β βββ manifests β β βββ cache.example.com_my-operators.yaml β β βββ my-operator-controller-manager-metrics-service_v1_service.yaml β β βββ my-operator-manager-config_v1_configmap.yaml β β βββ my-operator-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml β β βββ my-operator.clusterserviceversion.yaml β βββ metadata β βββ annotations.yaml βββ ci.yaml",
". βββ src βββ Chart.yaml βββ README.md βββ templates β βββ deployment.yaml β βββ _helpers.tpl β βββ hpa.yaml β βββ ingress.yaml β βββ NOTES.txt β βββ serviceaccount.yaml β βββ service.yaml β βββ tests β βββ test-connection.yaml βββ values.schema.json βββ values.yaml",
"helm package <helmchart folder>",
"podman run --rm -i -e KUBECONFIG=/.kube/config -v \"USD{HOME}/.kube\":/.kube \"quay.io/redhat-certification/chart-verifier\" verify <chart-uri>",
"podman run --rm -e KUBECONFIG=/.kube/config -v \"USD{HOME}/.kube\":/.kube -v USD(pwd):/charts \"quay.io/redhat-certification/chart-verifier\" verify /charts/<chart>",
"podman run -it --rm quay.io/redhat-certification/chart-verifier verify --help",
"Verifies a Helm chart by checking some of its characteristics Usage: chart-verifier verify <chart-uri> [flags] Flags: -S, --chart-set strings set values for the chart (can specify multiple or separate values with commas: key1=val1,key2=val2) -G, --chart-set-file strings set values from respective files specified via the command line (can specify multiple or separate values with commas: key1=path1,key2=path2) -X, --chart-set-string strings set STRING values for the chart (can specify multiple or separate values with commas: key1=val1,key2=val2) -F, --chart-values strings specify values in a YAML file or a URL (can specify multiple) --debug enable verbose output -x, --disable strings all checks will be enabled except the informed ones -e, --enable strings only the informed checks will be enabled --helm-install-timeout duration helm install timeout (default 5m0s) -h, --help help for verify --kube-apiserver string the address and the port for the Kubernetes API server --kube-as-group stringArray group to impersonate for the operation, this flag can be repeated to specify multiple groups. --kube-as-user string username to impersonate for the operation --kube-ca-file string the certificate authority file for the Kubernetes API server connection --kube-context string name of the kubeconfig context to use --kube-token string bearer token used for authentication --kubeconfig string path to the kubeconfig file -n, --namespace string namespace scope for this request -V, --openshift-version string set the value of certifiedOpenShiftVersions in the report -o, --output string the output format: default, json or yaml -k, --pgp-public-key string file containing gpg public key of the key used to sign the chart -W, --web-catalog-only set this to indicate that the distribution method is web catalog only (default: true) --registry-config string path to the registry config file (default \"/home/baiju/.config/helm/registry.json\") --repository-cache string path to the file containing cached repository indexes (default \"/home/baiju/.cache/helm/repository\") --repository-config string path to the file containing repository names and URLs (default \"/home/baiju/.config/helm/repositories.yaml\") -s, --set strings overrides a configuration, e.g: dummy.ok=false -f, --set-values strings specify application and check configuration values in a YAML file or a URL (can specify multiple) -E, --suppress-error-log suppress the error log (default: written to ./chartverifier/verifier-<timestamp>.log) --timeout duration time to wait for completion of chart install and test (default 30m0s) -w, --write-to-file write report to ./chartverifier/report.yaml (default: stdout) Global Flags: --config string config file (default is USDHOME/.chart-verifier.yaml)",
"podman run --rm -i -e KUBECONFIG=/.kube/config -v \"USD{HOME}/.kube\":/.kube \"quay.io/redhat-certification/chart-verifier\" verify -enable images-are-certified,helm-lint <chart-uri>",
"podman run --rm -i -e KUBECONFIG=/.kube/config -v \"USD{HOME}/.kube\":/.kube \"quay.io/redhat-certification/chart-verifier\" verify -disable images-are-certified,helm-lint <chart-uri>",
"podman run --rm -i -e KUBECONFIG=/.kube/config -v \"USD{HOME}/.kube\":/.kube \"quay.io/redhat-certification/chart-verifier\" verify -chart-set default.port=8080 <chart-uri>",
"podman run --rm -i -e KUBECONFIG=/.kube/config -v \"USD{HOME}/.kube\":/.kube -v USD(pwd):/values \"quay.io/redhat-certification/chart-verifier\" verify -chart-values /values/overrides.yaml <chart-uri>",
"podman run --rm -i -e KUBECONFIG=/.kube/config -v \"USD{HOME}/.kube\":/.kube -v USD(pwd):/values \"quay.io/redhat-certification/chart-verifier\" verify --timeout 40m <chart-uri>",
"podman run --rm -i -e KUBECONFIG=/.kube/config -v \"USD{HOME}/.kube\":/.kube \"quay.io/redhat-certification/chart-verifier\" verify -enable images-are-certified,helm-lint <chart-uri> > report.yaml",
"podman run --rm -i -e KUBECONFIG=/.kube/config -v \"USD{HOME}/.kube\":/.kube -v USD(pwd)/chartverifier:/app/chartverifier -w \"quay.io/redhat-certification/chart-verifier\" verify -enable images-are-certified,helm-lint <chart-uri>",
"podman run --rm -i -e KUBECONFIG=/.kube/config -v \"USD{HOME}/.kube\":/.kube -v USD(pwd)/chartverifier:/app/chartverifier \"quay.io/redhat-certification/chart-verifier\" verify -enable images-are-certified,helm-lint <chart-uri> > report.yaml",
"tar zxvf <tarball>",
"./chart-verifier verify <chart-uri>",
"chart: name: awesome shortDescription: A Helm chart for Awesomeness publicPgpKey: null providerDelivery: False users: - githubUsername: <username-one> - githubUsername: <username-two> vendor: label: acme name: ACME Inc.",
"charts/partners/acme/awesome/0.1.0/",
"charts/partners/acme/awesome/0.1.0/awesome-0.1.0.tgz charts/partners/acme/awesome/0.1.0/awesome-0.1.0.tgz.prov",
"awesome-0.1.0.tgz awesome-0.1.0.tgz.prov awesome-0.1.0.tgz.key report.yaml",
". βββ src βββ Chart.yaml βββ README.md βββ templates β βββ deployment.yaml β βββ _helpers.tpl β βββ hpa.yaml β βββ ingress.yaml β βββ NOTES.txt β βββ serviceaccount.yaml β βββ service.yaml β βββ tests β βββ test-connection.yaml βββ values.schema.json βββ values.yaml",
"gpg --sign --armor --detach-sign --output report.yaml.asc report.yaml",
"awesome-0.1.0.tgz.key report.yaml",
"<partner-label>-<chart-name>-<version-number> index.yaml (#<PR-number>) (e.g, acme-psql-service-0.1.1 index.yaml (#7)).",
"mkdir -p test-results; cd test-results podman run -v \"USD(pwd):/data:z\" -w /data --rm -it USD(KUBECONFIG=USD(pwd)/kubeconfig.yaml oc adm release info --image-for=tests) sh -c \"KUBECONFIG=/data/kubeconfig.yaml /usr/bin/openshift-tests run openshift/network/third-party -o /data/results.txt\"",
"curl -L https://github.com/kubevirt/kubevirt/releases/download/v<KubeVirt version>/conformance.yaml -o kubevirt-conformance.yaml",
"sonobuoy run --skip-preflight --plugin kubevirt-conformance.yaml",
"sonobuoy status",
"sonobuoy retrieve",
"sonobuoy results <tarball>",
"Plugin: kubevirt-conformance Status: passed Total: 637 Passed: 9 Failed: 0 Skipped: 628",
"describe",
"oc config view --raw > kubeconfig.yaml",
"mkdir -p test-results; cd test-results podman run -v \"USD(pwd):/data:z\" -w /data --rm -it USD(KUBECONFIG=USD(pwd)/kubeconfig.yaml oc adm release info --image-for=tests) sh -c \"KUBECONFIG=/data/kubeconfig.yaml TEST_CSI_DRIVER_FILES=/data/manifest.yaml /usr/bin/openshift-tests run openshift/csi -o /data/results.txt\"",
"run -v `pwd`:/data:z --rm -it registry.redhat.io/openshift4/ose-tests sh -c \"KUBECONFIG=/data/kubeconfig.yaml TEST_CSI_DRIVER_FILES=/data/manifest.yaml /usr/bin/openshift-tests run --dry-run openshift/csi",
"oc get clusterversion -o yaml",
"podman image list registry.redhat.io/openshift4/ose-tests",
"oc get configmap storage-checkup-config -n <target_namespace> -o yaml"
] | https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html-single/red_hat_software_certification_workflow_guide/index |
Chapter 8. Verifying your IdM and AD trust configuration using IdM Healthcheck | Chapter 8. Verifying your IdM and AD trust configuration using IdM Healthcheck Learn more about identifying issues with IdM and an Active Directory trust in Identity Management (IdM) by using the Healthcheck tool. Prerequisites The Healthcheck tool is only available on RHEL 8.1 or newer 8.1. IdM and AD trust Healthcheck tests The Healthcheck tool includes several tests for testing the status of your Identity Management (IdM) and Active Directory (AD) trust. To see all trust tests, run ipa-healthcheck with the --list-sources option: You can find all tests under the ipahealthcheck.ipa.trust source: IPATrustAgentCheck This test checks the SSSD configuration when the machine is configured as a trust agent. For each domain in /etc/sssd/sssd.conf where id_provider=ipa ensure that ipa_server_mode is True . IPATrustDomainsCheck This test checks if the trust domains match SSSD domains by comparing the list of domains in sssctl domain-list with the list of domains from ipa trust-find excluding the IPA domain. IPATrustCatalogCheck This test resolves an AD user, Administrator@REALM . This populates the AD Global catalog and AD Domain Controller values in sssctl domain-status output. For each trust domain look up the user with the id of the SID + 500 (the administrator) and then check the output of sssctl domain-status <domain> --active-server to ensure that the domain is active. IPAsidgenpluginCheck This test verifies that the sidgen plugin is enabled in the IPA 389-ds instance. The test also verifies that the IPA SIDGEN and ipa-sidgen-task plugins in cn=plugins,cn=config include the nsslapd-pluginEnabled option. IPATrustAgentMemberCheck This test verifies that the current host is a member of cn=adtrust agents,cn=sysaccounts,cn=etc,SUFFIX . IPATrustControllerPrincipalCheck This test verifies that the current host is a member of cn=adtrust agents,cn=sysaccounts,cn=etc,SUFFIX . IPATrustControllerServiceCheck This test verifies that the current host starts the ADTRUST service in ipactl. IPATrustControllerConfCheck This test verifies that ldapi is enabled for the passdb backend in the output of net conf list. IPATrustControllerGroupSIDCheck This test verifies that the admins group's SID ends with 512 (Domain Admins RID). IPATrustPackageCheck This test verifies that the trust-ad package is installed if the trust controller and AD trust are not enabled. Note Run these tests on all IdM servers when trying to find an issue. 8.2. Screening the trust with the Healthcheck tool Follow this procedure to run a standalone manual test of an Identity Management (IdM) and Active Directory (AD) trust health check using the Healthcheck tool. The Healthcheck tool includes many tests; therefore, you can shorten the results by: Excluding all successful tests: --failures-only Including only trust tests: --source=ipahealthcheck.ipa.trust Procedure To run Healthcheck with warnings, errors and critical issues in the trust, enter: A successful test displays empty brackets: Additional resources See man ipa-healthcheck . | [
"ipa-healthcheck --list-sources",
"ipa-healthcheck --source=ipahealthcheck.ipa.trust --failures-only",
"ipa-healthcheck --source=ipahealthcheck.ipa.trust --failures-only []"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_idm_healthcheck_to_monitor_your_idm_environment/verifying-your-idm-and-ad-trust-configuration-using-idm-healthcheck_using-idm-healthcheck-to-monitor-your-idm-environment |
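As a supplement to the manual screening procedure above, the following hedged shell sketch runs only the trust checks and prints a short summary. The output-file path and the use of jq are illustrative assumptions; the JSON field names (check, result, kw.msg) reflect typical ipa-healthcheck output but should be verified against your version.

#!/bin/bash
# Hedged sketch: screen the IdM/AD trust and summarize any failures.
set -euo pipefail
OUT=/var/log/ipa/healthcheck-trust.json   # assumed location; any writable path works

# --failures-only keeps the report short; an empty list ([]) means all checks passed.
# ipa-healthcheck exits non-zero when checks fail, so tolerate that here.
ipa-healthcheck --source=ipahealthcheck.ipa.trust --failures-only --output-file "$OUT" || true

if [ "$(jq 'length' "$OUT")" -eq 0 ]; then
    echo "All IdM and AD trust checks passed."
else
    # One line per failing check: its name, severity, and message (if any).
    jq -r '.[] | "\(.check): \(.result) - \(.kw.msg // "no message")"' "$OUT"
fi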
Chapter 6. Creating content | Chapter 6. Creating content Use the guidelines in this section of the Creator Guide to learn more about developing the content you will use in Red Hat Ansible Automation Platform. 6.1. Creating playbooks Playbooks contain one or more plays. A basic play contains the following sections: Name: a brief description of the overall function of the playbook, which assists in keeping it readable and organized for all users. Hosts: identifies the target(s) for Ansible to run against. Become statements: this optional statement can be set to true / yes to enable privilege escalation using a become plugin (such as sudo , su , pfexec , doas , pbrun , dzdo , ksu ). Tasks: this is the list of actions that get executed against each host in the play. Example playbook 6.2. Creating collections You can create your own Collections locally with the Ansible Galaxy CLI tool. All of the Collection-specific commands can be activated by using the collection subcommand. Prerequisites You have Ansible version 2.9 or newer installed in your development environment. Procedure In your terminal, navigate to where you want your namespace root directory to be. For simplicity, this should be a path in COLLECTIONS_PATH but that is not required. Run the following command, replacing my_namespace and my_collection_name with the values you choose: USD ansible-galaxy collection init <my_namespace>.<my_collection_name> Note Make sure you have the proper permissions to upload to a namespace by checking under the "My Content" tab on galaxy.ansible.com or cloud.redhat.com/ansible/automation-hub The above command will create a directory named from the namespace argument above (if one does not already exist) and then create a directory under that with the Collection name. Inside of that directory will be the default or "skeleton" Collection. This is where you can add your roles or plugins and start working on developing your own Collection. In relation to execution environments, Collection developers can declare requirements for their content by providing the appropriate metadata in Ansible Builder. Requirements from a Collection can be recognized in these ways: A file meta/execution-environment.yml references the Python and/or bindep requirements files A file named requirements.txt , which contains information on the Python dependencies and can sometimes be found at the root level of the Collection A file named bindep.txt , which contains system-level dependencies and can sometimes be found at the root level of the Collection If any of these files are in the build_ignore of the Collection, Ansible Builder will not pick up on these since this section is used to filter any files or directories that should not be included in the build artifact. Collection maintainers can verify that ansible-builder recognizes the requirements they expect by using the introspect command: USD ansible-builder introspect --sanitize ~/.ansible/collections/ Additional resources For more information about creating collections, see Creating collections in the Ansible Developer Guide . 6.3. Creating roles You can create roles by using the Ansible Galaxy CLI tool. Role-specific commands can be accessed from the roles subcommand. ansible-galaxy role init <role_name> Standalone roles outside of Collections are still supported, but new roles should be created inside of a Collection to take advantage of all the features Ansible Automation Platform has to offer.
Procedure In your terminal, navigate to the roles directory inside a collection. Create a role called role_name inside the collection created previously: USD ansible-galaxy role init my_role The collection now contains a role named my_role inside the roles directory: ~/.ansible/collections/ansible_collections/<my_namespace>/<my_collection_name> ... └── roles/ └── my_role/ ├── .travis.yml ├── README.md ├── defaults/ │ └── main.yml ├── files/ ├── handlers/ │ └── main.yml ├── meta/ │ └── main.yml ├── tasks/ │ └── main.yml ├── templates/ ├── tests/ │ ├── inventory │ └── test.yml └── vars/ └── main.yml A custom role skeleton directory can be supplied using the --role-skeleton argument. This allows organizations to create standardized templates for new roles to suit their needs. ansible-galaxy role init my_role --role-skeleton ~/role_skeleton This will create a role named my_role by copying the contents of ~/role_skeleton into my_role . The contents of role_skeleton can be any files or folders that are valid inside a role directory. Additional resources For more information about creating roles, see Creating roles in the Ansible Galaxy documentation. 6.4. Creating automation execution environments An automation execution environment definition file specifies: An Ansible version A Python version (defaults to system Python) A set of required Python libraries Zero or more Content Collections (optional) Python dependencies for those specific Collections The concept of specifying a set of Collections for an environment is to resolve and install their dependencies. The Collections themselves are not required to be installed on the machine that you are generating the automation execution environment on. An automation execution environment is built from this definition, and the result is a container image. Please read the Ansible Builder documentation to learn the steps involved in creating these images. A minimal example definition file is sketched below, after the command listing for this section. | [
"- name: Set Up a Project and Job Template hosts: host.name.ip become: true tasks: - name: Create a Project ansible.controller.project: name: Job Template Test Project state: present scm_type: git scm_url: https://github.com/ansible/ansible-tower-samples.git - name: Create a Job Template ansible.controller.job_template: name: my-job-1 project: Job Template Test Project inventory: Demo Inventory playbook: hello_world.yml job_type: run state: present",
"ansible-galaxy collection init <my_namespace>.<my_collection_name>",
"ansible-builder introspect --sanitize ~/.ansible/collections/",
"ansible-galaxy role init <role_name>",
"ansible-galaxy role init my_role",
"~/.ansible/collections/ansible_collections/<my_namespace>/<my_collection_name> βββ roles/ βββ my_role/ βββ .travis.yml βββ README.md βββ defaults/ β βββ main.yml βββ files/ βββ handlers/ β βββ main.yml βββ meta/ β βββ main.yml βββ tasks/ β βββ main.yml βββ templates/ βββ tests/ β βββ inventory β βββ test.yml βββ vars/ βββ main.yml",
"ansible-galaxy role init my_role --role-skeleton ~/role_skeleton"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_creator_guide/creating-content |
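Section 6.4 above describes the execution environment definition file without showing one. The following is a minimal, hedged sketch: the collection name, requirements files, and image tag are invented examples, and the version 1 schema shown here should be checked against the Ansible Builder release you actually use.

# Write an assumed-minimal execution environment definition and build it.
cat <<'EOF' > execution-environment.yml
---
version: 1
dependencies:
  galaxy: requirements.yml   # Content Collections to include
  python: requirements.txt   # Python dependencies (pip format)
  system: bindep.txt         # system-level dependencies (bindep format)
EOF

cat <<'EOF' > requirements.yml
---
collections:
  - name: my_namespace.my_collection_name   # hypothetical example collection
EOF

# Build the container image with Ansible Builder; the tag is an example value.
ansible-builder build --tag registry.example.com/custom-ee:latest --context ./context

Collections listed in requirements.yml contribute their own requirements.txt and bindep.txt entries, which is why the introspect command in section 6.2 is useful to run before building.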
Chapter 3. CSINode [storage.k8s.io/v1] | Chapter 3. CSINode [storage.k8s.io/v1] Description CSINode holds information about all CSI drivers installed on a node. CSI drivers do not need to create the CSINode object directly. As long as they use the node-driver-registrar sidecar container, the kubelet will automatically populate the CSINode object for the CSI driver as part of kubelet plugin registration. CSINode has the same name as a node. If the object is missing, it means either there are no CSI Drivers available on the node, or the Kubelet version is low enough that it doesn't create this object. CSINode has an OwnerReference that points to the corresponding node object. Type object Required spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. metadata.name must be the Kubernetes node name. spec object CSINodeSpec holds information about the specification of all CSI drivers installed on a node 3.1.1. .spec Description CSINodeSpec holds information about the specification of all CSI drivers installed on a node Type object Required drivers Property Type Description drivers array drivers is a list of information of all CSI Drivers existing on a node. If all drivers in the list are uninstalled, this can become empty. drivers[] object CSINodeDriver holds information about the specification of one CSI driver installed on a node 3.1.2. .spec.drivers Description drivers is a list of information of all CSI Drivers existing on a node. If all drivers in the list are uninstalled, this can become empty. Type array 3.1.3. .spec.drivers[] Description CSINodeDriver holds information about the specification of one CSI driver installed on a node Type object Required name nodeID Property Type Description allocatable object VolumeNodeResources is a set of resource limits for scheduling of volumes. name string name represents the name of the CSI driver that this object refers to. This MUST be the same name returned by the CSI GetPluginName() call for that driver. nodeID string nodeID of the node from the driver point of view. This field enables Kubernetes to communicate with storage systems that do not share the same nomenclature for nodes. For example, Kubernetes may refer to a given node as "node1", but the storage system may refer to the same node as "nodeA". When Kubernetes issues a command to the storage system to attach a volume to a specific node, it can use this field to refer to the node name using the ID that the storage system will understand, e.g. "nodeA" instead of "node1". This field is required. topologyKeys array (string) topologyKeys is the list of keys supported by the driver. When a driver is initialized on a cluster, it provides a set of topology keys that it understands (e.g. "company.com/zone", "company.com/region"). When a driver is initialized on a node, it provides the same topology keys along with values. 
Kubelet will expose these topology keys as labels on its own node object. When Kubernetes does topology aware provisioning, it can use this list to determine which labels it should retrieve from the node object and pass back to the driver. It is possible for different nodes to use different topology keys. This can be empty if driver does not support topology. 3.1.4. .spec.drivers[].allocatable Description VolumeNodeResources is a set of resource limits for scheduling of volumes. Type object Property Type Description count integer count indicates the maximum number of unique volumes managed by the CSI driver that can be used on a node. A volume that is both attached and mounted on a node is considered to be used once, not twice. The same rule applies for a unique volume that is shared among multiple pods on the same node. If this field is not specified, then the supported number of volumes on this node is unbounded. 3.2. API endpoints The following API endpoints are available: /apis/storage.k8s.io/v1/csinodes DELETE : delete collection of CSINode GET : list or watch objects of kind CSINode POST : create a CSINode /apis/storage.k8s.io/v1/watch/csinodes GET : watch individual changes to a list of CSINode. deprecated: use the 'watch' parameter with a list operation instead. /apis/storage.k8s.io/v1/csinodes/{name} DELETE : delete a CSINode GET : read the specified CSINode PATCH : partially update the specified CSINode PUT : replace the specified CSINode /apis/storage.k8s.io/v1/watch/csinodes/{name} GET : watch changes to an object of kind CSINode. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 3.2.1. /apis/storage.k8s.io/v1/csinodes HTTP method DELETE Description delete collection of CSINode Table 3.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind CSINode Table 3.3. HTTP responses HTTP code Reponse body 200 - OK CSINodeList schema 401 - Unauthorized Empty HTTP method POST Description create a CSINode Table 3.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.5. Body parameters Parameter Type Description body CSINode schema Table 3.6. HTTP responses HTTP code Reponse body 200 - OK CSINode schema 201 - Created CSINode schema 202 - Accepted CSINode schema 401 - Unauthorized Empty 3.2.2. /apis/storage.k8s.io/v1/watch/csinodes HTTP method GET Description watch individual changes to a list of CSINode. deprecated: use the 'watch' parameter with a list operation instead. Table 3.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.3. /apis/storage.k8s.io/v1/csinodes/{name} Table 3.8. Global path parameters Parameter Type Description name string name of the CSINode HTTP method DELETE Description delete a CSINode Table 3.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.10. HTTP responses HTTP code Reponse body 200 - OK CSINode schema 202 - Accepted CSINode schema 401 - Unauthorized Empty HTTP method GET Description read the specified CSINode Table 3.11. HTTP responses HTTP code Reponse body 200 - OK CSINode schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CSINode Table 3.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.13. HTTP responses HTTP code Reponse body 200 - OK CSINode schema 201 - Created CSINode schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CSINode Table 3.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.15. Body parameters Parameter Type Description body CSINode schema Table 3.16. HTTP responses HTTP code Reponse body 200 - OK CSINode schema 201 - Created CSINode schema 401 - Unauthorized Empty 3.2.4. /apis/storage.k8s.io/v1/watch/csinodes/{name} Table 3.17. Global path parameters Parameter Type Description name string name of the CSINode HTTP method GET Description watch changes to an object of kind CSINode. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 3.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/storage_apis/csinode-storage-k8s-io-v1 |
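The reference above lists the raw REST endpoints; in practice most inspection goes through the CLI. The following hedged examples assume the oc client and an invented driver name, csi.example.com.

# List CSINode objects; the kubelet creates one per node with a registered CSI driver.
oc get csinodes

# Full spec for one node, including drivers[], nodeID, topologyKeys, and allocatable.
oc get csinode <node_name> -o yaml

# Per-node allocatable volume count for a hypothetical driver named csi.example.com.
oc get csinodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.drivers[?(@.name=="csi.example.com")].allocatable.count}{"\n"}{end}'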
function::user_string_n_warn | function::user_string_n_warn Name function::user_string_n_warn - Retrieves string from user space Synopsis Arguments addr the user space address to retrieve the string from n the maximum length of the string (if not null terminated) Description Returns up to n characters of a C string from a given user space memory address. Reports " <unknown> " in the rare cases when userspace data is not accessible and warns (but does not abort) about the failure. | [
"user_string_n_warn:string(addr:long,n:long)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-user-string-n-warn |
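A small, hedged usage sketch follows; the probe point and the 64-byte limit are illustrative choices, not part of the function's definition.

# Print up to 64 bytes of the buffer passed to write(2); inaccessible user memory
# is reported as "<unknown>" with a warning instead of aborting the script.
stap -e '
probe syscall.write {
  printf("%s wrote: %s\n", execname(), user_string_n_warn(buf_uaddr, 64))
}
' -c "echo hello"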
Chapter 9. Organizations, locations, and lifecycle environments | Chapter 9. Organizations, locations, and lifecycle environments Red Hat Satellite takes a consolidated approach to Organization and Location management. System administrators define multiple Organizations and multiple Locations in a single Satellite Server. For example, a company might have three Organizations (Finance, Marketing, and Sales) across three countries (United States, United Kingdom, and Japan). In this example, Satellite Server manages all Organizations across all geographical Locations, creating nine distinct contexts for managing systems. In addition, users can define specific locations and nest them to create a hierarchy. For example, Satellite administrators might divide the United States into specific cities, such as Boston, Phoenix, or San Francisco. Figure 9.1. Example topology for Red Hat Satellite Satellite Server defines all locations and organizations. Each respective Satellite Capsule Server synchronizes content and handles configuration of systems in a different location. The main Satellite Server retains the management function, while the content and configuration are synchronized between the main Satellite Server and a Satellite Capsule Server assigned to certain locations. 9.1. Organizations Organizations divide Red Hat Satellite resources into logical groups based on ownership, purpose, content, security level, or other divisions. You can create and manage multiple organizations through Red Hat Satellite, then divide and assign your subscriptions to each individual organization. This provides a method of managing the content of several individual organizations under one management system. 9.2. Locations Locations divide organizations into logical groups based on geographical location. Each location is created and used by a single account, although each account can manage multiple locations and organizations. 9.3. Lifecycle environments Application lifecycles are divided into lifecycle environments, which represent each stage of the application lifecycle. Lifecycle environments are linked to form an environment path . You can promote content along the environment path to the next lifecycle environment when required. For example, if development ends on a particular version of an application, you can promote this version to the testing environment and start development on the next version. Figure 9.2. An environment path containing four environments | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/overview_concepts_and_deployment_considerations/chap-architecture_guide-org_loc_and_lifecycle_environments
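The chapter above is conceptual; as a hedged illustration, the following hammer CLI sketch models the Finance organization and a Library -> Testing -> Production environment path. All names, the content view, and its version are invented examples, and flag spellings should be confirmed against your Satellite version.

# Create an example organization and locations (location nesting can also be set in the web UI).
hammer organization create --name "Finance"
hammer location create --name "United States"
hammer location create --name "Boston"

# Chain lifecycle environments with --prior to form the environment path.
hammer lifecycle-environment create --organization "Finance" --name "Testing" --prior "Library"
hammer lifecycle-environment create --organization "Finance" --name "Production" --prior "Testing"

# Promote a content view version along the path, for example into Testing.
hammer content-view version promote --organization "Finance" --content-view "Example CV" --version "1.0" --to-lifecycle-environment "Testing"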
10.4. Driver Changes | 10.4. Driver Changes This section describes the driver changes in Red Hat Enterprise Linux 6. Note that all drivers are now loaded to initramfs by default. 10.4.1. Discontinued Drivers aic7xxx_old atp870u cpqarray DAC960 dc395x gdth hfs hfsplus megaraid net/tokenring/ paride qla1280 sound/core/oss sound/drivers/opl3/* sound/pci/nm256 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/migration_planning_guide/sect-migration_guide-package_changes-driver_changes |
10.4. The Multi-Source Column in System Metadata | 10.4. The Multi-Source Column in System Metadata The pseudo column is by default not present in your actual metadata; it is not added on source tables/procedures when you import the metadata. If you would like to use the multi-source column in your transformations to control which sources are accessed or updated or you would like the column reported via metadata facilities, there are several options: With either VDB type to make the multi-source column present in the system metadata, you can set the model property multisource.addColumn to true on a multi-source model. Care must be taken though when using this property in Teiid Designer as any transformation logic (views/procedures) that you have defined will not have been aware of the multi-source column and may fail validation upon server deployment. If using Teiid Designer, you can manually add the multi-source column. If using Dynamic VDBs, the pseudo-column will already be available to transformations, but will not be present in your System metadata by default. If you are using DDL and you would like to be selective (rather than using the multisource.addColumn property), you can manually add the column via DDL. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/the_multi-source_column_in_system_metadata |
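As a hedged illustration of the options above, the following writes a dynamic VDB fragment that enables multi-source bindings and the multisource.addColumn property; the VDB, model, source names, and JNDI bindings are invented examples.

# Assumed-example dynamic VDB with the multi-source pseudo-column enabled.
cat <<'EOF' > example-vdb.xml
<vdb name="example" version="1">
  <model name="Accounts">
    <property name="supports-multi-source-bindings" value="true"/>
    <property name="multisource.addColumn" value="true"/>
    <source name="east" translator-name="oracle" connection-jndi-name="java:/eastDS"/>
    <source name="west" translator-name="oracle" connection-jndi-name="java:/westDS"/>
  </model>
</vdb>
EOF

With the column added, transformations can filter or route on it (typically exposed as SOURCE_NAME) to control which source a query or update reaches.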
probe::signal.procmask | probe::signal.procmask Name probe::signal.procmask - Examining or changing blocked signals Synopsis signal.procmask Values name Name of the probe point sigset The actual value to be set for sigset_t how Indicates how to change the blocked signals; possible values are SIG_BLOCK=0 (for blocking signals), SIG_UNBLOCK=1 (for unblocking signals), and SIG_SETMASK=2 (for setting the signal mask). sigset_addr The address of the signal set (sigset_t) to be implemented oldsigset_addr The old address of the signal set (sigset_t) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-signal-procmask
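A brief, hedged example of attaching to this probe; the output format is an arbitrary choice.

# Report which processes change their blocked-signal mask and how.
stap -e '
probe signal.procmask {
  printf("%s(%d) %s how=%d sigset=0x%x\n", execname(), pid(), name, how, sigset)
}
'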
Deployment Recommendations for Specific Red Hat OpenStack Platform Services | Deployment Recommendations for Specific Red Hat OpenStack Platform Services Red Hat OpenStack Platform 16.2 Maximizing the performance of the Red Hat OpenStack Platform Telemetry and Object Storage services OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/deployment_recommendations_for_specific_red_hat_openstack_platform_services/index |
Chapter 1. Introduction to scaling storage | Chapter 1. Introduction to scaling storage Red Hat OpenShift Data Foundation is a highly scalable storage system. OpenShift Data Foundation allows you to scale by adding disks in multiples of three, or by any number of disks, depending on the deployment type. For internal (dynamic provisioning) deployment mode, you can increase the capacity by adding 3 disks at a time. For internal-attached (Local Storage Operator based) mode, you can deploy with less than 3 failure domains. With flexible scale deployment enabled, you can scale up by adding any number of disks. For deployment with 3 failure domains, you can scale up by adding disks in multiples of 3. For scaling your storage in external mode, see Red Hat Ceph Storage documentation . Note You can use a maximum of nine storage devices per node. A higher number of storage devices leads to a longer recovery time during the loss of a node. This recommendation ensures that nodes stay below the cloud provider dynamic storage device attachment limits, and limits the recovery time after node failure with local storage devices. While scaling, you must ensure that there are enough CPU and memory resources to meet the scaling requirements. Supported storage classes by default gp2-csi on AWS thin on VMware managed_premium on Microsoft Azure 1.1. Supported Deployments for Red Hat OpenShift Data Foundation User-provisioned infrastructure: Amazon Web Services (AWS) VMware Bare metal IBM Power IBM Z or IBM(R) LinuxONE Installer-provisioned infrastructure: Amazon Web Services (AWS) Microsoft Azure VMware Bare metal | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/scaling_storage/scaling-overview_rhodf
Release Notes for AMQ Streams 2.5 on RHEL | Release Notes for AMQ Streams 2.5 on RHEL Red Hat Streams for Apache Kafka 2.5 Highlights of what's new and what's changed with this release of AMQ Streams on Red Hat Enterprise Linux | [
"strimzi.authorization.grants.max.idle.time.seconds=\"300\" strimzi.authorization.grants.gc.period.seconds=\"300\" strimzi.authorization.reuse.grants=\"false\"",
"listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required ; # oauth.username.claim=\"['user.info'].['user.id']\" \\ 1 oauth.fallback.username.claim=\"['client.info'].['client.id']\" \\ 2 #",
"client.quota.callback.class= io.strimzi.kafka.quotas.StaticQuotaCallback client.quota.callback.static.produce= 1000000 client.quota.callback.static.fetch= 1000000 client.quota.callback.static.storage.soft= 400000000000 client.quota.callback.static.storage.hard= 500000000000 client.quota.callback.static.storage.check-interval= 5"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html-single/release_notes_for_amq_streams_2.5_on_rhel/%7BBookURLDeploying%7D |
31.4. Performance Testing Procedures | 31.4. Performance Testing Procedures The goal of this section is to construct a performance profile of the device with VDO installed. Each test should be run with and without VDO installed, so that VDO's performance can be evaluated relative to the performance of the base system. 31.4.1. Phase 1: Effects of I/O Depth, Fixed 4 KB Blocks The goal of this test is to determine the I/O depth that produces the optimal throughput and the lowest latency for your appliance. VDO uses a 4 KB sector size rather than the traditional 512 B used on legacy storage devices. The larger sector size allows it to support higher-capacity storage, improve performance, and match the cache buffer size used by most operating systems. Perform four-corner testing at 4 KB I/O, and I/O depth of 1, 8, 16, 32, 64, 128, 256, 512, 1024: Sequential 100% reads, at fixed 4 KB * Sequential 100% write, at fixed 4 KB Random 100% reads, at fixed 4 KB * Random 100% write, at fixed 4 KB ** * Prefill any areas that may be read during the read test by performing a write fio job first ** Re-create the VDO volume after 4 KB random write I/O runs Example shell test input stimulus (write): Record throughput and latency at each data point, and then graph. Repeat test to complete four-corner testing: --rw=randwrite , --rw=read , and --rw=randread . The result is a graph as shown below. Points of interest are the behavior across the range and the points of inflection where increased I/O depth proves to provide diminishing throughput gains. Likely, sequential access and random access will peak at different values, but it may be different for all types of storage configurations. In Figure 31.1, "I/O Depth Analysis" notice the "knee" in each performance curve. Marker 1 identifies the peak sequential throughput at point X, and marker 2 identifies peak random 4 KB throughput at point Z. This particular appliance does not benefit from sequential 4 KB I/O depth > X. Beyond that depth, there are diminishing bandwidth gains, and average request latency will increase 1:1 for each additional I/O request. This particular appliance does not benefit from random 4 KB I/O depth > Z. Beyond that depth, there are diminishing bandwidth gains, and average request latency will increase 1:1 for each additional I/O request. Figure 31.1. I/O Depth Analysis Figure 31.2, "Latency Response of Increasing I/O for Random Writes" shows an example of the random write latency after the "knee" of the curve in Figure 31.1, "I/O Depth Analysis" . Benchmarking practice should test at these points for maximum throughput that incurs the least response time penalty. As we move forward in the test plan for this example appliance, we will collect additional data with I/O depth = Z. Figure 31.2. Latency Response of Increasing I/O for Random Writes 31.4.2. Phase 2: Effects of I/O Request Size The goal of this test is to understand the block size that produces the best performance of the system under test at the optimal I/O depth determined in the previous step. Perform four-corner testing at fixed I/O depth, with varied block size (powers of 2) over the range 8 KB to 1 MB. Remember to prefill any areas to be read and to recreate volumes between tests. Set the I/O Depth to the value determined in Section 31.4.1, "Phase 1: Effects of I/O Depth, Fixed 4 KB Blocks" . Example test input stimulus (write): Record throughput and latency at each data point, and then graph.
Repeat test to complete four-corner testing: --rw=randwrite , --rw=read , and --rw=randread . There are several points of interest that you may find in the results. In this example: Sequential writes reach a peak throughput at request size Y. This curve demonstrates how applications that are configurable or naturally dominated by certain request sizes may perceive performance. Larger request sizes often provide more throughput because 4 KB I/Os may benefit from merging. Sequential reads reach a similar peak throughput at point Z. Remember that after these peaks, overall latency before the I/O completes will increase with no additional throughput. It would be wise to tune the device to not accept I/Os larger than this size. Random reads achieve peak throughput at point X. Some devices may achieve near-sequential throughput rates at large request size random accesses, while others suffer more penalty when varying from purely sequential access. Random writes achieve peak throughput at point Y. Random writes involve the most interaction of a deduplication device, and VDO achieves high performance especially when request sizes and/or I/O depths are large. The results from this test Figure 31.3, "Request Size vs. Throughput Analysis and Key Inflection Points" help in understanding the characteristics of the storage device and the user experience for specific applications. Consult with a Red Hat Sales Engineer to determine if there may be further tuning needed to increase performance at different request sizes. Figure 31.3. Request Size vs. Throughput Analysis and Key Inflection Points 31.4.3. Phase 3: Effects of Mixing Read & Write I/Os The goal of this test is to understand how your appliance with VDO behaves when presented with mixed I/O loads (read/write), analyzing the effects of read/write mix at the optimal random queue depth and request sizes from 4 KB to 1 MB. You should use whatever is appropriate in your case. Perform four-corner testing at fixed I/O depth, varied block size (powers of 2) over the 8 KB to 256 KB range, and set read percentage at 10% increments, beginning with 0%. Remember to prefill any areas to be read and to recreate volumes between tests. Set the I/O Depth to the value determined in Section 31.4.1, "Phase 1: Effects of I/O Depth, Fixed 4 KB Blocks" . Example test input stimulus (read/write mix): Record throughput and latency at each data point, and then graph. Figure 31.4, "Performance Is Consistent across Varying Read/Write Mixes" shows an example of how VDO may respond to I/O loads: Figure 31.4. Performance Is Consistent across Varying Read/Write Mixes Performance (aggregate) and latency (aggregate) are relatively consistent across the range of mixing reads and writes, trending from the lower max write throughput to the higher max read throughput. This behavior may vary with different storage, but the important observation is that the performance is consistent under varying loads and/or that you can understand performance expectation for applications that demonstrate specific read/write mixes. If you discover any unexpected results, Red Hat Sales Engineers will be able to help you understand if it is VDO or the storage device itself that needs modification. Note: Systems that do not exhibit a similar response consistency often signify a sub-optimal configuration. Contact your Red Hat Sales Engineer if this occurs. 31.4.4. 
Phase 4: Application Environments The goal of these final tests is to understand how the system with VDO behaves when deployed in a real application environment. If possible, use real applications and use the knowledge learned so far; consider limiting the permissible queue depth on your appliance, and if possible tune the application to issue requests with those block sizes most beneficial to VDO performance. Request sizes, I/O loads, read/write patterns, etc., are generally hard to predict, as they will vary by application use case (i.e., filers vs. virtual desktops vs. database), and applications often vary in the types of I/O based on the specific operation or due to multi-tenant access. The final test shows general VDO performance in a mixed environment. If more specific details are known about your expected environment, test those settings as well. Example test input stimulus (read/write mix): Record throughput and latency at each data point, and then graph ( Figure 31.5, "Mixed Environment Performance" ). Figure 31.5. Mixed Environment Performance | [
"for depth in 1 2 4 8 16 32 64 128 256 512 1024 2048; do fio --rw=write --bs=4096 --name=vdo --filename=/dev/mapper/vdo0 --ioengine=libaio --numjobs=1 --thread --norandommap --runtime=300 --direct=1 --iodepth=USDdepth --scramble_buffers=1 --offset=0 --size=100g done",
"z= [see previous step] for iosize in 4 8 16 32 64 128 256 512 1024; do fio --rw=write --bs=USDiosize\\k --name=vdo --filename=/dev/mapper/vdo0 --ioengine=libaio --numjobs=1 --thread --norandommap --runtime=300 --direct=1 --iodepth=USDz --scramble_buffers=1 --offset=0 --size=100g done",
"z= [see previous step] for readmix in 0 10 20 30 40 50 60 70 80 90 100; do for iosize in 4 8 16 32 64 128 256 512 1024; do fio --rw=rw --rwmixread=USDreadmix --bs=USDiosize\\k --name=vdo --filename=/dev/mapper/vdo0 --ioengine=libaio --numjobs=1 --thread --norandommap --runtime=300 --direct=0 --iodepth=USDz --scramble_buffers=1 --offset=0 --size=100g done done",
"for readmix in 20 50 80; do for iosize in 4 8 16 32 64 128 256 512 1024; do fio --rw=rw --rwmixread=USDreadmix --bsrange=4k-256k --name=vdo --filename=/dev/mapper/vdo0 --ioengine=libaio --numjobs=1 --thread --norandommap --runtime=300 --direct=0 --iodepth=USDiosize --scramble_buffers=1 --offset=0 --size=100g done done"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/vdo-ev-performance-testing |
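Each phase above asks you to record throughput and latency at every data point and then graph the results. The following Python sketch is one way to automate that collection for the Phase 1 sweep, assuming fio 3.x (for JSON output with clat_ns), matplotlib, and the /dev/mapper/vdo0 device path used in the shell examples; adjust the device, runtime, and depth range to match your test plan.

```python
#!/usr/bin/env python3
# Sketch: sweep I/O depth with fio and collect throughput/latency for graphing.
# Assumptions: fio >= 3.x (JSON output with clat_ns), a VDO volume at
# /dev/mapper/vdo0, and matplotlib installed; adjust paths and ranges as needed.
import json
import subprocess

import matplotlib.pyplot as plt

DEVICE = "/dev/mapper/vdo0"   # device path taken from the shell examples above
DEPTHS = [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048]

def run_fio(depth: int) -> dict:
    """Run one 4 KB sequential-write job and return the parsed JSON result."""
    cmd = [
        "fio", "--rw=write", "--bs=4096", "--name=vdo", f"--filename={DEVICE}",
        "--ioengine=libaio", "--numjobs=1", "--thread", "--norandommap",
        "--runtime=300", "--direct=1", f"--iodepth={depth}",
        "--scramble_buffers=1", "--offset=0", "--size=100g",
        "--output-format=json",
    ]
    return json.loads(subprocess.run(cmd, capture_output=True, check=True).stdout)

bandwidth, latency = [], []
for depth in DEPTHS:
    job = run_fio(depth)["jobs"][0]["write"]
    bandwidth.append(job["bw"] / 1024)            # KiB/s -> MiB/s
    latency.append(job["clat_ns"]["mean"] / 1e6)  # ns -> ms

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(DEPTHS, bandwidth, marker="o")
ax1.set(xscale="log", xlabel="I/O depth", ylabel="Throughput (MiB/s)")
ax2.plot(DEPTHS, latency, marker="o")
ax2.set(xscale="log", xlabel="I/O depth", ylabel="Mean completion latency (ms)")
fig.savefig("iodepth-sweep.png")
```

The same loop can be repeated with --rw=randwrite, --rw=read, and --rw=randread to cover the remaining corners, and extended with block-size and read/write-mix parameters for Phases 2 through 4.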
Chapter 4. Enabling and configuring Data Grid statistics and JMX monitoring | Chapter 4. Enabling and configuring Data Grid statistics and JMX monitoring Data Grid can provide Cache Manager and cache statistics as well as export JMX MBeans. 4.1. Configuring Data Grid metrics Data Grid generates metrics that are compatible with any monitoring system. Gauges provide values such as the average number of nanoseconds for write operations or JVM uptime. Histograms provide details about operation execution times such as read, write, and remove times. By default, Data Grid generates gauges when you enable statistics but you can also configure it to generate histograms. Note Data Grid metrics are provided at the vendor scope. Metrics related to the JVM are provided in the base scope. Procedure Open your Data Grid configuration for editing. Add the metrics element or object to the cache container. Enable or disable gauges with the gauges attribute or field. Enable or disable histograms with the histograms attribute or field. Save and close your client configuration. Metrics configuration XML <infinispan> <cache-container statistics="true"> <metrics gauges="true" histograms="true" /> </cache-container> </infinispan> JSON { "infinispan" : { "cache-container" : { "statistics" : "true", "metrics" : { "gauges" : "true", "histograms" : "true" } } } } YAML infinispan: cacheContainer: statistics: "true" metrics: gauges: "true" histograms: "true" Additional resources Micrometer Prometheus 4.2. Registering JMX MBeans Data Grid can register JMX MBeans that you can use to collect statistics and perform administrative operations. You must also enable statistics otherwise Data Grid provides 0 values for all statistic attributes in JMX MBeans. Procedure Open your Data Grid configuration for editing. Add the jmx element or object to the cache container and specify true as the value for the enabled attribute or field. Add the domain attribute or field and specify the domain where JMX MBeans are exposed, if required. Save and close your client configuration. JMX configuration XML <infinispan> <cache-container statistics="true"> <jmx enabled="true" domain="example.com"/> </cache-container> </infinispan> JSON { "infinispan" : { "cache-container" : { "statistics" : "true", "jmx" : { "enabled" : "true", "domain" : "example.com" } } } } YAML infinispan: cacheContainer: statistics: "true" jmx: enabled: "true" domain: "example.com" 4.2.1. Enabling JMX remote ports Provide unique remote JMX ports to expose Data Grid MBeans through connections in JMXServiceURL format. You can enable remote JMX ports using one of the following approaches: Enable remote JMX ports that require authentication to one of the Data Grid Server security realms. Enable remote JMX ports manually using the standard Java management configuration options. Prerequisites For remote JMX with authentication, define JMX specific user roles using the default security realm. Users must have controlRole with read/write access or the monitorRole with read-only access to access any JMX resources. Procedure Start Data Grid Server with a remote JMX port enabled using one of the following ways: Enable remote JMX through port 9999 . Warning Using remote JMX with SSL disabled is not intended for production environments. Pass the following system properties to Data Grid Server at startup. Warning Enabling remote JMX with no authentication or SSL is not secure and not recommended in any environment. 
Disabling authentication and SSL allows unauthorized users to connect to your server and access the data hosted there. Additional resources Creating security realms 4.2.2. Data Grid MBeans Data Grid exposes JMX MBeans that represent manageable resources. org.infinispan:type=Cache Attributes and operations available for cache instances. org.infinispan:type=CacheManager Attributes and operations available for Cache Managers, including Data Grid cache and cluster health statistics. For a complete list of available JMX MBeans along with descriptions and available operations and attributes, see the Data Grid JMX Components documentation. Additional resources Data Grid JMX Components 4.2.3. Registering MBeans in custom MBean servers Data Grid includes an MBeanServerLookup interface that you can use to register MBeans in custom MBeanServer instances. Prerequisites Create an implementation of MBeanServerLookup so that the getMBeanServer() method returns the custom MBeanServer instance. Configure Data Grid to register JMX MBeans. Procedure Open your Data Grid configuration for editing. Add the mbean-server-lookup attribute or field to the JMX configuration for the Cache Manager. Specify fully qualified name (FQN) of your MBeanServerLookup implementation. Save and close your client configuration. JMX MBean server lookup configuration XML <infinispan> <cache-container statistics="true"> <jmx enabled="true" domain="example.com" mbean-server-lookup="com.example.MyMBeanServerLookup"/> </cache-container> </infinispan> JSON { "infinispan" : { "cache-container" : { "statistics" : "true", "jmx" : { "enabled" : "true", "domain" : "example.com", "mbean-server-lookup" : "com.example.MyMBeanServerLookup" } } } } YAML infinispan: cacheContainer: statistics: "true" jmx: enabled: "true" domain: "example.com" mbeanServerLookup: "com.example.MyMBeanServerLookup" 4.3. Exporting metrics during a state transfer operation You can export time metrics for clustered caches that Data Grid redistributes across nodes. A state transfer operation occurs when a clustered cache topology changes, such as a node joining or leaving a cluster. During a state transfer operation, Data Grid exports metrics from each cache, so that you can determine a cache's status. A state transfer exposes attributes as properties, so that Data Grid can export metrics from each cache. Note You cannot perform a state transfer operation in invalidation mode. Data Grid generates time metrics that are compatible with the REST API and the JMX API. Prerequisites Configure Data Grid metrics. Enable metrics for your cache type, such as embedded cache or remote cache. Initiate a state transfer operation by changing your clustered cache topology. Procedure Choose one of the following methods: Configure Data Grid to use the REST API to collect metrics. Configure Data Grid to use the JMX API to collect metrics. Additional resources Enabling and configuring Data Grid statistics and JMX monitoring (Data Grid caches) StateTransferManager (Data Grid 14.0 API) 4.4. Monitoring the status of cross-site replication Monitor the site status of your backup locations to detect interruptions in the communication between the sites. When a remote site status changes to offline , Data Grid stops replicating your data to the backup location. Your data become out of sync and you must fix the inconsistencies before bringing the clusters back online. Monitoring cross-site events is necessary for early problem detection. 
Use one of the following monitoring strategies: Monitoring cross-site replication with the REST API Monitoring cross-site replication with the Prometheus metrics or any other monitoring system Monitoring cross-site replication with the REST API Monitor the status of cross-site replication for all caches using the REST endpoint. You can implement a custom script to poll the REST endpoint or use the following example. Prerequisites Enable cross-site replication. Procedure Implement a script to poll the REST endpoint. The following example demonstrates how you can use a Python script to poll the site status every five seconds. #!/usr/bin/python3 import time import requests from requests.auth import HTTPDigestAuth class InfinispanConnection: def __init__(self, server: str = 'http://localhost:11222', cache_manager: str = 'default', auth: tuple = ('admin', 'change_me')) -> None: super().__init__() self.__url = f'{server}/rest/v2/cache-managers/{cache_manager}/x-site/backups/' self.__auth = auth self.__headers = { 'accept': 'application/json' } def get_sites_status(self): try: rsp = requests.get(self.__url, headers=self.__headers, auth=HTTPDigestAuth(self.__auth[0], self.__auth[1])) if rsp.status_code != 200: return None return rsp.json() except: return None # Specify credentials for Data Grid user with permission to access the REST endpoint USERNAME = 'admin' PASSWORD = 'change_me' # Set an interval between cross-site status checks POLL_INTERVAL_SEC = 5 # Provide a list of servers SERVERS = [ InfinispanConnection('http://127.0.0.1:11222', auth=(USERNAME, PASSWORD)), InfinispanConnection('http://127.0.0.1:12222', auth=(USERNAME, PASSWORD)) ] #Specify the names of remote sites REMOTE_SITES = [ 'nyc' ] #Provide a list of caches to monitor CACHES = [ 'work', 'sessions' ] def on_event(site: str, cache: str, old_status: str, new_status: str): # TODO implement your handling code here print(f'site={site} cache={cache} Status changed {old_status} -> {new_status}') def __handle_mixed_state(state: dict, site: str, site_status: dict): if site not in state: state[site] = {c: 'online' if c in site_status['online'] else 'offline' for c in CACHES} return for cache in CACHES: __update_cache_state(state, site, cache, 'online' if cache in site_status['online'] else 'offline') def __handle_online_or_offline_state(state: dict, site: str, new_status: str): if site not in state: state[site] = {c: new_status for c in CACHES} return for cache in CACHES: __update_cache_state(state, site, cache, new_status) def __update_cache_state(state: dict, site: str, cache: str, new_status: str): old_status = state[site].get(cache) if old_status != new_status: on_event(site, cache, old_status, new_status) state[site][cache] = new_status def update_state(state: dict): rsp = None for conn in SERVERS: rsp = conn.get_sites_status() if rsp: break if rsp is None: print('Unable to fetch site status from any server') return for site in REMOTE_SITES: site_status = rsp.get(site, {}) new_status = site_status.get('status') if new_status == 'mixed': __handle_mixed_state(state, site, site_status) else: __handle_online_or_offline_state(state, site, new_status) if __name__ == '__main__': _state = {} while True: update_state(_state) time.sleep(POLL_INTERVAL_SEC) When a site status changes from online to offline or vice-versa, the function on_event is invoked. If you want to use this script, you must specify the following variables: USERNAME and PASSWORD : The username and password of Data Grid user with permission to access the REST endpoint. 
POLL_INTERVAL_SEC : The number of seconds between polls. SERVERS : The list of Data Grid Servers at this site. The script only requires a single valid response but the list is provided to allow failover. REMOTE_SITES : The list of remote sites to monitor on these servers. CACHES : The list of cache names to monitor. Additional resources REST API: Getting status of backup locations Monitoring cross-site replication with the Prometheus metrics Prometheus, and other monitoring systems, let you configure alerts to detect when a site status changes to offline . Tip Monitoring cross-site latency metrics can help you to discover potential issues. Prerequisites Enable cross-site replication. Procedure Configure Data Grid metrics. Configure alerting rules using the Prometheus metrics format. For the site status, use 1 for online and 0 for offline . For the expr field, use the following format: vendor_cache_manager_default_cache_<cache name>_x_site_admin_<site name>_status . In the following example, Prometheus alerts you when the NYC site goes offline for the cache named work or sessions . groups: - name: Cross Site Rules rules: - alert: Cache Work and Site NYC expr: vendor_cache_manager_default_cache_work_x_site_admin_nyc_status == 0 - alert: Cache Sessions and Site NYC expr: vendor_cache_manager_default_cache_sessions_x_site_admin_nyc_status == 0 The following image shows an alert that the NYC site is offline for cache work . Figure 4.1. Prometheus Alert Additional resources Configuring Data Grid metrics Prometheus Alerting Overview Grafana Alerting Documentation Openshift Managing Alerts | [
"<infinispan> <cache-container statistics=\"true\"> <metrics gauges=\"true\" histograms=\"true\" /> </cache-container> </infinispan>",
"{ \"infinispan\" : { \"cache-container\" : { \"statistics\" : \"true\", \"metrics\" : { \"gauges\" : \"true\", \"histograms\" : \"true\" } } } }",
"infinispan: cacheContainer: statistics: \"true\" metrics: gauges: \"true\" histograms: \"true\"",
"<infinispan> <cache-container statistics=\"true\"> <jmx enabled=\"true\" domain=\"example.com\"/> </cache-container> </infinispan>",
"{ \"infinispan\" : { \"cache-container\" : { \"statistics\" : \"true\", \"jmx\" : { \"enabled\" : \"true\", \"domain\" : \"example.com\" } } } }",
"infinispan: cacheContainer: statistics: \"true\" jmx: enabled: \"true\" domain: \"example.com\"",
"bin/server.sh --jmx 9999",
"bin/server.sh -Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false",
"<infinispan> <cache-container statistics=\"true\"> <jmx enabled=\"true\" domain=\"example.com\" mbean-server-lookup=\"com.example.MyMBeanServerLookup\"/> </cache-container> </infinispan>",
"{ \"infinispan\" : { \"cache-container\" : { \"statistics\" : \"true\", \"jmx\" : { \"enabled\" : \"true\", \"domain\" : \"example.com\", \"mbean-server-lookup\" : \"com.example.MyMBeanServerLookup\" } } } }",
"infinispan: cacheContainer: statistics: \"true\" jmx: enabled: \"true\" domain: \"example.com\" mbeanServerLookup: \"com.example.MyMBeanServerLookup\"",
"#!/usr/bin/python3 import time import requests from requests.auth import HTTPDigestAuth class InfinispanConnection: def __init__(self, server: str = 'http://localhost:11222', cache_manager: str = 'default', auth: tuple = ('admin', 'change_me')) -> None: super().__init__() self.__url = f'{server}/rest/v2/cache-managers/{cache_manager}/x-site/backups/' self.__auth = auth self.__headers = { 'accept': 'application/json' } def get_sites_status(self): try: rsp = requests.get(self.__url, headers=self.__headers, auth=HTTPDigestAuth(self.__auth[0], self.__auth[1])) if rsp.status_code != 200: return None return rsp.json() except: return None Specify credentials for Data Grid user with permission to access the REST endpoint USERNAME = 'admin' PASSWORD = 'change_me' Set an interval between cross-site status checks POLL_INTERVAL_SEC = 5 Provide a list of servers SERVERS = [ InfinispanConnection('http://127.0.0.1:11222', auth=(USERNAME, PASSWORD)), InfinispanConnection('http://127.0.0.1:12222', auth=(USERNAME, PASSWORD)) ] #Specify the names of remote sites REMOTE_SITES = [ 'nyc' ] #Provide a list of caches to monitor CACHES = [ 'work', 'sessions' ] def on_event(site: str, cache: str, old_status: str, new_status: str): # TODO implement your handling code here print(f'site={site} cache={cache} Status changed {old_status} -> {new_status}') def __handle_mixed_state(state: dict, site: str, site_status: dict): if site not in state: state[site] = {c: 'online' if c in site_status['online'] else 'offline' for c in CACHES} return for cache in CACHES: __update_cache_state(state, site, cache, 'online' if cache in site_status['online'] else 'offline') def __handle_online_or_offline_state(state: dict, site: str, new_status: str): if site not in state: state[site] = {c: new_status for c in CACHES} return for cache in CACHES: __update_cache_state(state, site, cache, new_status) def __update_cache_state(state: dict, site: str, cache: str, new_status: str): old_status = state[site].get(cache) if old_status != new_status: on_event(site, cache, old_status, new_status) state[site][cache] = new_status def update_state(state: dict): rsp = None for conn in SERVERS: rsp = conn.get_sites_status() if rsp: break if rsp is None: print('Unable to fetch site status from any server') return for site in REMOTE_SITES: site_status = rsp.get(site, {}) new_status = site_status.get('status') if new_status == 'mixed': __handle_mixed_state(state, site, site_status) else: __handle_online_or_offline_state(state, site, new_status) if __name__ == '__main__': _state = {} while True: update_state(_state) time.sleep(POLL_INTERVAL_SEC)",
"groups: - name: Cross Site Rules rules: - alert: Cache Work and Site NYC expr: vendor_cache_manager_default_cache_work_x_site_admin_nyc_status == 0 - alert: Cache Sessions and Site NYC expr: vendor_cache_manager_default_cache_sessions_x_site_admin_nyc_status == 0"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/configuring_data_grid_caches/statistics-jmx |
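As a quick check that the gauges and histograms configured above are actually being produced, you can scrape them directly. The following Python sketch assumes Data Grid Server exposes Prometheus-format metrics at /metrics on its single port 11222 and that the credentials shown are placeholders; the path and the authentication requirements may differ in your deployment.

```python
#!/usr/bin/env python3
# Sketch: pull the vendor-scoped gauges described above from a Data Grid Server.
# Assumptions: metrics are exposed in Prometheus format at /metrics on port 11222
# and the credentials below are placeholders; both may differ in your deployment.
import requests
from requests.auth import HTTPDigestAuth

URL = "http://localhost:11222/metrics"
AUTH = HTTPDigestAuth("admin", "change_me")

response = requests.get(URL, auth=AUTH, timeout=10)
response.raise_for_status()

# Print only the vendor-scoped cache statistics lines, skipping comments.
for line in response.text.splitlines():
    if line.startswith("vendor_") and not line.startswith("#"):
        print(line)
```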
Chapter 18. Backing up and restoring Data Grid clusters | Chapter 18. Backing up and restoring Data Grid clusters Data Grid Operator lets you back up and restore Data Grid cluster state for disaster recovery and to migrate Data Grid resources between clusters. 18.1. Backup and Restore CRs Backup and Restore CRs save in-memory data at runtime so you can easily recreate Data Grid clusters. Applying a Backup or Restore CR creates a new pod that joins the Data Grid cluster as a zero-capacity member, which means it does not require cluster rebalancing or state transfer to join. For backup operations, the pod iterates over cache entries and other resources and creates an archive, a .zip file, in the /opt/infinispan/backups directory on the persistent volume (PV). Note Performing backups does not significantly impact performance because the other pods in the Data Grid cluster only need to respond to the backup pod as it iterates over cache entries. For restore operations, the pod retrieves Data Grid resources from the archive on the PV and applies them to the Data Grid cluster. When either the backup or restore operation completes, the pod leaves the cluster and is terminated. Reconciliation Data Grid Operator does not reconcile Backup and Restore CRs, which means that backup and restore operations are "one-time" events. Modifying an existing Backup or Restore CR instance does not perform an operation or have any effect. If you want to update .spec fields, you must create a new instance of the Backup or Restore CR. 18.2. Backing up Data Grid clusters Create a backup file that stores Data Grid cluster state to a persistent volume. Prerequisites Create an Infinispan CR with spec.service.type: DataGrid . Ensure there are no active client connections to the Data Grid cluster. Data Grid backups do not provide snapshot isolation and data modifications are not written to the archive after the cache is backed up. To archive the exact state of the cluster, you should always disconnect any clients before you back it up. Procedure Name the Backup CR with the metadata.name field. Specify the Data Grid cluster to back up with the spec.cluster field. Configure the persistent volume claim (PVC) that adds the backup archive to the persistent volume (PV) with the spec.volume.storage and spec.volume.storage.storageClassName fields. Optionally include spec.resources fields to specify which Data Grid resources you want to back up. If you do not include any spec.resources fields, the Backup CR creates an archive that contains all Data Grid resources. If you do specify spec.resources fields, the Backup CR creates an archive that contains those resources only. You can also use the * wildcard character as in the following example: Apply your Backup CR. Verification Check that the status.phase field has a status of Succeeded in the Backup CR and that Data Grid logs have the following message: Run the following command to check that the backup is successfully created: 18.3. Restoring Data Grid clusters Restore Data Grid cluster state from a backup archive. Prerequisites Create a Backup CR on a source cluster. Create a target Data Grid cluster of Data Grid service pods. Note If you restore an existing cache, the operation overwrites the data in the cache but not the cache configuration. For example, you back up a distributed cache named mycache on the source cluster. You then restore mycache on a target cluster where it already exists as a replicated cache.
In this case, the data from the source cluster is restored and mycache continues to have a replicated configuration on the target cluster. Ensure there are no active client connections to the target Data Grid cluster you want to restore. Cache entries that you restore from a backup can overwrite more recent cache entries. For example, a client performs a cache.put(k=2) operation and you then restore a backup that contains k=1 . Procedure Name the Restore CR with the metadata.name field. Specify a Backup CR to use with the spec.backup field. Specify the Data Grid cluster to restore with the spec.cluster field. Optionally add the spec.resources field to restore specific resources only. Apply your Restore CR. Verification Check that the status.phase field has a status of Succeeded in the Restore CR and that Data Grid logs have the following message: You should then open the Data Grid Console or establish a CLI connection to verify data and Data Grid resources are restored as expected. 18.4. Backup and restore status Backup and Restore CRs include a status.phase field that provides the status for each phase of the operation. Status Description Initializing The system has accepted the request and the controller is preparing the underlying resources to create the pod. Initialized The controller has prepared all underlying resources successfully. Running The pod is created and the operation is in progress on the Data Grid cluster. Succeeded The operation has completed successfully on the Data Grid cluster and the pod is terminated. Failed The operation did not successfully complete and the pod is terminated. Unknown The controller cannot obtain the status of the pod or determine the state of the operation. This condition typically indicates a temporary communication error with the pod. 18.4.1. Handling failed backup and restore operations If the status.phase field of the Backup or Restore CR is Failed , you should examine pod logs to determine the root cause before you attempt the operation again. Procedure Examine the logs for the pod that performed the failed operation. Pods are terminated but remain available until you delete the Backup or Restore CR. Resolve any error conditions or other causes of failure as indicated by the pod logs. Create a new instance of the Backup or Restore CR and attempt the operation again. | [
"apiVersion: infinispan.org/v2alpha1 kind: Backup metadata: name: my-backup spec: cluster: source-cluster volume: storage: 1Gi storageClassName: my-storage-class",
"spec: resources: templates: - distributed-sync-prod - distributed-sync-dev caches: - cache-one - cache-two counters: - counter-name protoSchemas: - authors.proto - books.proto tasks: - wordStream.js",
"spec: resources: caches: - \"*\" protoSchemas: - \"*\"",
"apply -f my-backup.yaml",
"ISPN005044: Backup file created 'my-backup.zip'",
"describe Backup my-backup",
"apiVersion: infinispan.org/v2alpha1 kind: Restore metadata: name: my-restore spec: backup: my-backup cluster: target-cluster",
"spec: resources: templates: - distributed-sync-prod - distributed-sync-dev caches: - cache-one - cache-two counters: - counter-name protoSchemas: - authors.proto - books.proto tasks: - wordStream.js",
"apply -f my-restore.yaml",
"ISPN005045: Restore 'my-backup' complete",
"logs <backup|restore_pod_name>"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_operator_guide/backing-up-restoring |
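Because the status.phase field is the signal for both Backup and Restore CRs, you can poll it programmatically instead of inspecting the CR by hand. The following Python sketch uses the Kubernetes Python client and assumes the CRD plural is backups and that the namespace and CR name shown are placeholders for your own values.

```python
#!/usr/bin/env python3
# Sketch: poll a Backup CR until its status.phase reaches a terminal state,
# mirroring the verification step above. Assumes the kubernetes Python client,
# a kubeconfig or in-cluster context, and that the CRD plural is "backups".
import time

from kubernetes import client, config

config.load_kube_config()                      # or config.load_incluster_config()
api = client.CustomObjectsApi()

NAMESPACE = "infinispan"                       # hypothetical namespace
BACKUP_NAME = "my-backup"

while True:
    backup = api.get_namespaced_custom_object(
        group="infinispan.org", version="v2alpha1",
        namespace=NAMESPACE, plural="backups", name=BACKUP_NAME,
    )
    phase = backup.get("status", {}).get("phase", "Unknown")
    print(f"Backup {BACKUP_NAME}: {phase}")
    if phase in ("Succeeded", "Failed"):
        break
    time.sleep(5)
```

If the loop ends in Failed, examine the pod logs as described in the handling section above before creating a new CR instance.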
Chapter 64. SFTP Sink | Chapter 64. SFTP Sink Send data to an SFTP Server. The Kamelet expects the following headers to be set: file / ce-file : as the file name to upload If the header is not set, the exchange ID is used as the file name. 64.1. Configuration Options The following table summarizes the configuration options available for the sftp-sink Kamelet: Property Name Description Type Default Example connectionHost * Connection Host Hostname of the SFTP server string connectionPort * Connection Port Port of the SFTP server string 22 directoryName * Directory Name The starting directory string password * Password The password to access the SFTP server string username * Username The username to access the SFTP server string fileExist File Existence How to behave if the file already exists. There are 4 enums and the value can be one of Override, Append, Fail or Ignore string "Override" passiveMode Passive Mode Sets passive mode connection boolean false Note Fields marked with an asterisk (*) are mandatory. 64.2. Dependencies At runtime, the sftp-sink Kamelet relies upon the presence of the following dependencies: camel:ftp camel:core camel:kamelet 64.3. Usage This section describes how you can use the sftp-sink . 64.3.1. Knative Sink You can use the sftp-sink Kamelet as a Knative sink by binding it to a Knative object. sftp-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: sftp-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: sftp-sink properties: connectionHost: "The Connection Host" directoryName: "The Directory Name" password: "The Password" username: "The Username" 64.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 64.3.1.2. Procedure for using the cluster CLI Save the sftp-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f sftp-sink-binding.yaml 64.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel sftp-sink -p "sink.connectionHost=The Connection Host" -p "sink.directoryName=The Directory Name" -p "sink.password=The Password" -p "sink.username=The Username" This command creates the KameletBinding in the current namespace on the cluster. 64.3.2. Kafka Sink You can use the sftp-sink Kamelet as a Kafka sink by binding it to a Kafka topic. sftp-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: sftp-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: sftp-sink properties: connectionHost: "The Connection Host" directoryName: "The Directory Name" password: "The Password" username: "The Username" 64.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 64.3.2.2. Procedure for using the cluster CLI Save the sftp-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f sftp-sink-binding.yaml 64.3.2.3.
Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic sftp-sink -p "sink.connectionHost=The Connection Host" -p "sink.directoryName=The Directory Name" -p "sink.password=The Password" -p "sink.username=The Username" This command creates the KameletBinding in the current namespace on the cluster. 64.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/sftp-sink.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: sftp-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: sftp-sink properties: connectionHost: \"The Connection Host\" directoryName: \"The Directory Name\" password: \"The Password\" username: \"The Username\"",
"apply -f sftp-sink-binding.yaml",
"kamel bind channel:mychannel sftp-sink -p \"sink.connectionHost=The Connection Host\" -p \"sink.directoryName=The Directory Name\" -p \"sink.password=The Password\" -p \"sink.username=The Username\"",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: sftp-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: sftp-sink properties: connectionHost: \"The Connection Host\" directoryName: \"The Directory Name\" password: \"The Password\" username: \"The Username\"",
"apply -f sftp-sink-binding.yaml",
"kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic sftp-sink -p \"sink.connectionHost=The Connection Host\" -p \"sink.directoryName=The Directory Name\" -p \"sink.password=The Password\" -p \"sink.username=The Username\""
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/kamelets_reference/sftp-sink |
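Before creating the KameletBinding, it can be useful to confirm that the connection host, port, credentials, and starting directory are valid. The following Python sketch uses the paramiko library for that check; the host, credentials, and directory are placeholders for the values you would pass to the sftp-sink Kamelet, and the check itself is independent of Camel K.

```python
#!/usr/bin/env python3
# Sketch: verify the SFTP connection details before wiring them into the
# KameletBinding above. Uses paramiko; host, port, credentials, and directory
# are placeholders for the values you would pass to the sftp-sink Kamelet.
import paramiko

HOST, PORT = "sftp.example.com", 22
USERNAME, PASSWORD = "The Username", "The Password"
DIRECTORY = "The Directory Name"

transport = paramiko.Transport((HOST, PORT))
try:
    transport.connect(username=USERNAME, password=PASSWORD)
    sftp = paramiko.SFTPClient.from_transport(transport)
    sftp.chdir(DIRECTORY)            # fails if the starting directory is missing
    print("Directory listing:", sftp.listdir())
finally:
    transport.close()
```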
Chapter 4. Accessing Kafka outside of the OpenShift cluster | Chapter 4. Accessing Kafka outside of the OpenShift cluster Use an external listener to expose your AMQ Streams Kafka cluster to a client outside an OpenShift environment. Specify the connection type to expose Kafka in the external listener configuration. nodeport uses NodePort type Services loadbalancer uses Loadbalancer type Services ingress uses Kubernetes Ingress and the NGINX Ingress Controller for Kubernetes route uses OpenShift Routes and the HAProxy router For more information on listener configuration, see GenericKafkaListener schema reference . If you want to know more about the pros and cons of each connection type, refer to Accessing Apache Kafka in Strimzi . Note route is only supported on OpenShift 4.1. Accessing Kafka using node ports This procedure describes how to access an AMQ Streams Kafka cluster from an external client using node ports. To connect to a broker, you need a hostname and port number for the Kafka bootstrap address , as well as the certificate used for authentication. Prerequisites An OpenShift cluster A running Cluster Operator Procedure Configure a Kafka resource with an external listener set to the nodeport type. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... listeners: - name: external port: 9094 type: nodeport tls: true authentication: type: tls # ... # ... zookeeper: # ... Create or update the resource. oc apply -f <kafka_configuration_file> NodePort type services are created for each Kafka broker, as well as an external bootstrap service . The bootstrap service routes external traffic to the Kafka brokers. Node addresses used for connection are propagated to the status of the Kafka custom resource. The cluster CA certificate to verify the identity of the kafka brokers is also created in the secret <cluster_name> -cluster-ca-cert . Retrieve the bootstrap address you can use to access the Kafka cluster from the status of the Kafka resource. oc get kafka <kafka_cluster_name> -o=jsonpath='{.status.listeners[?(@.name==" <listener_name> ")].bootstrapServers}{"\n"}' For example: oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name=="external")].bootstrapServers}{"\n"}' If TLS encryption is enabled, extract the public certificate of the broker certification authority. oc get secret KAFKA-CLUSTER-NAME -cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt Use the extracted certificate in your Kafka client to configure TLS connection. If you enabled any authentication, you will also need to configure SASL or TLS authentication. 4.2. Accessing Kafka using loadbalancers This procedure describes how to access an AMQ Streams Kafka cluster from an external client using loadbalancers. To connect to a broker, you need the address of the bootstrap loadbalancer , as well as the certificate used for TLS encryption. Prerequisites An OpenShift cluster A running Cluster Operator Procedure Configure a Kafka resource with an external listener set to the loadbalancer type. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... listeners: - name: external port: 9094 type: loadbalancer tls: true # ... # ... zookeeper: # ... Create or update the resource. oc apply -f <kafka_configuration_file> loadbalancer type services and loadbalancers are created for each Kafka broker, as well as an external bootstrap service . The bootstrap service routes external traffic to all Kafka brokers. 
DNS names and IP addresses used for connection are propagated to the status of each service. The cluster CA certificate to verify the identity of the kafka brokers is also created in the secret <cluster_name> -cluster-ca-cert . Retrieve the address of the bootstrap service you can use to access the Kafka cluster from the status of the Kafka resource. oc get kafka <kafka_cluster_name> -o=jsonpath='{.status.listeners[?(@.name==" <listener_name> ")].bootstrapServers}{"\n"}' For example: oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name=="external")].bootstrapServers}{"\n"}' If TLS encryption is enabled, extract the public certificate of the broker certification authority. oc get secret KAFKA-CLUSTER-NAME -cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt Use the extracted certificate in your Kafka client to configure TLS connection. If you enabled any authentication, you will also need to configure SASL or TLS authentication. 4.3. Accessing Kafka using ingress This procedure shows how to access an AMQ Streams Kafka cluster from an external client outside of OpenShift using Nginx Ingress. To connect to a broker, you need a hostname (advertised address) for the Ingress bootstrap address , as well as the certificate used for authentication. For access using Ingress, the port is always 443. TLS passthrough Kafka uses a binary protocol over TCP, but the NGINX Ingress Controller for Kubernetes is designed to work with the HTTP protocol. To be able to pass the Kafka connections through the Ingress, AMQ Streams uses the TLS passthrough feature of the NGINX Ingress Controller for Kubernetes . Ensure TLS passthrough is enabled in your NGINX Ingress Controller for Kubernetes deployment. Because it is using the TLS passthrough functionality, TLS encryption cannot be disabled when exposing Kafka using Ingress . For more information about enabling TLS passthrough, see TLS passthrough documentation . Prerequisites OpenShift cluster Deployed NGINX Ingress Controller for Kubernetes with TLS passthrough enabled A running Cluster Operator Procedure Configure a Kafka resource with an external listener set to the ingress type. Specify the Ingress hosts for the bootstrap service and Kafka brokers. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... listeners: - name: external port: 9094 type: ingress tls: true authentication: type: tls configuration: 1 bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com # ... zookeeper: # ... 1 Ingress hosts for the bootstrap service and Kafka brokers. Create or update the resource. oc apply -f <kafka_configuration_file> ClusterIP type services are created for each Kafka broker, as well as an additional bootstrap service . These services are used by the Ingress controller to route traffic to the Kafka brokers. An Ingress resource is also created for each service to expose them using the Ingress controller. The Ingress hosts are propagated to the status of each service. The cluster CA certificate to verify the identity of the kafka brokers is also created in the secret <cluster_name> -cluster-ca-cert . Use the address for the bootstrap host you specified in the configuration and port 443 ( BOOTSTRAP-HOST:443 ) in your Kafka client as the bootstrap address to connect to the Kafka cluster. Extract the public certificate of the broker certificate authority. 
oc get secret KAFKA-CLUSTER-NAME -cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt Use the extracted certificate in your Kafka client to configure the TLS connection. If you enabled any authentication, you will also need to configure SASL or TLS authentication. 4.4. Accessing Kafka using OpenShift routes This procedure describes how to access an AMQ Streams Kafka cluster from an external client outside of OpenShift using routes. To connect to a broker, you need a hostname for the route bootstrap address , as well as the certificate used for TLS encryption. For access using routes, the port is always 443. Prerequisites An OpenShift cluster A running Cluster Operator Procedure Configure a Kafka resource with an external listener set to the route type. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: kafka: # ... listeners: - name: listener1 port: 9094 type: route tls: true # ... # ... zookeeper: # ... Warning An OpenShift Route address comprises the name of the Kafka cluster, the name of the listener, and the name of the namespace it is created in. For example, my-cluster-kafka-listener1-bootstrap-myproject ( CLUSTER-NAME -kafka- LISTENER-NAME -bootstrap- NAMESPACE ). Be careful that the whole length of the address does not exceed a maximum limit of 63 characters. Create or update the resource. oc apply -f <kafka_configuration_file> ClusterIP type services are created for each Kafka broker, as well as an external bootstrap service . The services route the traffic from the OpenShift Routes to the Kafka brokers. An OpenShift Route resource is also created for each service to expose them using the HAProxy load balancer. DNS addresses used for connection are propagated to the status of each service. The cluster CA certificate to verify the identity of the kafka brokers is also created in the secret <cluster_name> -cluster-ca-cert . Retrieve the address of the bootstrap service you can use to access the Kafka cluster from the status of the Kafka resource. oc get kafka <kafka_cluster_name> -o=jsonpath='{.status.listeners[?(@.name==" <listener_name> ")].bootstrapServers}{"\n"}' For example: oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name=="listener1")].bootstrapServers}{"\n"}' Extract the public certificate of the broker certification authority. oc get secret KAFKA-CLUSTER-NAME -cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt Use the extracted certificate in your Kafka client to configure TLS connection. If you enabled any authentication, you will also need to configure SASL or TLS authentication. | [
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # listeners: - name: external port: 9094 type: nodeport tls: true authentication: type: tls # # zookeeper: #",
"apply -f <kafka_configuration_file>",
"get kafka <kafka_cluster_name> -o=jsonpath='{.status.listeners[?(@.name==\" <listener_name> \")].bootstrapServers}{\"\\n\"}'",
"get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name==\"external\")].bootstrapServers}{\"\\n\"}'",
"get secret KAFKA-CLUSTER-NAME -cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # listeners: - name: external port: 9094 type: loadbalancer tls: true # # zookeeper: #",
"apply -f <kafka_configuration_file>",
"get kafka <kafka_cluster_name> -o=jsonpath='{.status.listeners[?(@.name==\" <listener_name> \")].bootstrapServers}{\"\\n\"}'",
"get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name==\"external\")].bootstrapServers}{\"\\n\"}'",
"get secret KAFKA-CLUSTER-NAME -cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # listeners: - name: external port: 9094 type: ingress tls: true authentication: type: tls configuration: 1 bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com # zookeeper: #",
"apply -f <kafka_configuration_file>",
"get secret KAFKA-CLUSTER-NAME -cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: kafka: # listeners: - name: listener1 port: 9094 type: route tls: true # # zookeeper: #",
"apply -f <kafka_configuration_file>",
"get kafka <kafka_cluster_name> -o=jsonpath='{.status.listeners[?(@.name==\" <listener_name> \")].bootstrapServers}{\"\\n\"}'",
"get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name==\"listener1\")].bootstrapServers}{\"\\n\"}'",
"get secret KAFKA-CLUSTER-NAME -cluster-ca-cert -o jsonpath='{.data.ca\\.crt}' | base64 -d > ca.crt"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/configuring_amq_streams_on_openshift/assembly-accessing-kafka-outside-cluster-str |
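Once you have the bootstrap address and the extracted ca.crt, an external client needs only those two pieces of information to connect over TLS. The following Python sketch uses the kafka-python library as an illustration; the bootstrap address and topic are placeholders, the port is 443 for route and ingress listeners or 9094 for nodeport and loadbalancer listeners, and if the listener also enables TLS client authentication you must additionally supply a client certificate and key (for example, via the ssl_certfile and ssl_keyfile parameters).

```python
#!/usr/bin/env python3
# Sketch: connect an external client through one of the listeners above using
# the bootstrap address and the extracted ca.crt. Uses the kafka-python library;
# the bootstrap address and topic name are placeholders for your cluster.
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="my-cluster-kafka-bootstrap.example.com:443",  # 443 for route/ingress, 9094 for nodeport/loadbalancer
    security_protocol="SSL",
    ssl_cafile="ca.crt",          # cluster CA certificate extracted above
)
producer.send("my-topic", b"hello from outside the cluster")
producer.flush()
producer.close()
```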
Chapter 52. CertificateAuthority schema reference | Chapter 52. CertificateAuthority schema reference Used in: KafkaSpec Configuration of how TLS certificates are used within the cluster. This applies to certificates used for both internal communication within the cluster and to certificates used for client access via Kafka.spec.kafka.listeners.tls . Property Description generateCertificateAuthority If true, then Certificate Authority certificates will be generated automatically. Otherwise the user will need to provide a Secret with the CA certificate. Default is true. boolean generateSecretOwnerReference If true , the Cluster and Client CA Secrets are configured with the ownerReference set to the Kafka resource. If the Kafka resource is deleted when true , the CA Secrets are also deleted. If false , the ownerReference is disabled. If the Kafka resource is deleted when false , the CA Secrets are retained and available for reuse. Default is true . boolean validityDays The number of days generated certificates should be valid for. The default is 365. integer renewalDays The number of days in the certificate renewal period. This is the number of days before a certificate expires during which renewal actions may be performed. When generateCertificateAuthority is true, this will cause the generation of a new certificate. When generateCertificateAuthority is false, this will cause extra logging at WARN level about the pending certificate expiry. Default is 30. integer certificateExpirationPolicy How should CA certificate expiration be handled when generateCertificateAuthority=true . The default is for a new CA certificate to be generated reusing the existing private key. string (one of [replace-key, renew-certificate]) | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-CertificateAuthority-reference
Installing Red Hat Developer Hub on Amazon Elastic Kubernetes Service | Installing Red Hat Developer Hub on Amazon Elastic Kubernetes Service Red Hat Developer Hub 1.3 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/installing_red_hat_developer_hub_on_amazon_elastic_kubernetes_service/index |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_red_hat_openshift_service_on_aws_with_hosted_control_planes/making-open-source-more-inclusive |
Chapter 4. OCI referrers OAuth access token | Chapter 4. OCI referrers OAuth access token In some cases, depending on the features that your Red Hat Quay deployment is configured to use, you might need to leverage an OCI referrers OAuth access token . OCI referrers OAuth access tokens are used to list OCI referrers of a manifest under a repository, and are requested by using a curl command that makes a GET request to the Red Hat Quay v2/auth endpoint. These tokens are obtained via basic HTTP authentication, wherein the user provides a username and password encoded in Base64 to authenticate directly with the v2/auth API endpoint. As such, they are based directly on the user's credentials and do not follow the same detailed authorization flow as OAuth 2, but still allow a user to authorize API requests. OCI referrers OAuth access tokens do not offer scope-based permissions and do not expire. They are solely used to list OCI referrers of a manifest under a repository. Additional resource Attaching referrers to an image tag 4.1. Creating an OCI referrers OAuth access token This OCI referrers OAuth access token is used to list OCI referrers of a manifest under a repository. Procedure Update your config.yaml file to include the FEATURE_REFERRERS_API: true field. For example: # ... FEATURE_REFERRERS_API: true # ... Enter the following command to Base64 encode your credentials: USD echo -n '<username>:<password>' | base64 Example output abcdeWFkbWluOjE5ODlraWROZXQxIQ== Enter the following command to use the base64 encoded string and modify the URL endpoint to your Red Hat Quay server: USD curl --location '<quay-server.example.com>/v2/auth?service=<quay-server.example.com>&scope=repository:quay/listocireferrs:pull,push' --header 'Authorization: Basic <base64_username:password_encode_token>' -k | jq Example output { "token": "<example_secret> }
"FEATURE_REFERRERS_API: true",
"echo -n '<username>:<password>' | base64",
"abcdeWFkbWluOjE5ODlraWROZXQxIQ==",
"curl --location '<quay-server.example.com>/v2/auth?service=<quay-server.example.com>&scope=repository:quay/listocireferrs:pull,push' --header 'Authorization: Basic <base64_username:password_encode_token>' -k | jq",
"{ \"token\": \"<example_secret> }"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/red_hat_quay_api_guide/oci-referrers-oauth-access-token |
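A hedged follow-on sketch, not part of the documented procedure above: once the token is returned, it can typically be passed as a Bearer token when listing the referrers of a manifest. The referrers endpoint path below follows the OCI distribution convention and the digest is a placeholder; confirm the exact endpoint shape against your Red Hat Quay version.

USD curl -k --header 'Authorization: Bearer <token>' '<quay-server.example.com>/v2/quay/listocireferrs/referrers/sha256:<manifest_digest>' | jq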
Chapter 4. Remote health monitoring with connected clusters | Chapter 4. Remote health monitoring with connected clusters 4.1. About remote health monitoring OpenShift Dedicated collects telemetry and configuration data about your cluster and reports it to Red Hat by using the Telemeter Client and the Insights Operator. The data that is provided to Red Hat enables the benefits outlined in this document. A cluster that reports data to Red Hat through Telemetry and the Insights Operator is considered a connected cluster . Telemetry is the term that Red Hat uses to describe the information being sent to Red Hat by the OpenShift Dedicated Telemeter Client. Lightweight attributes are sent from connected clusters to Red Hat to enable subscription management automation, monitor the health of clusters, assist with support, and improve customer experience. The Insights Operator gathers OpenShift Dedicated configuration data and sends it to Red Hat. The data is used to produce insights about potential issues that a cluster might be exposed to. These insights are communicated to cluster administrators on OpenShift Cluster Manager . More information is provided in this document about these two processes. Telemetry and Insights Operator benefits Telemetry and the Insights Operator enable the following benefits for end-users: Enhanced identification and resolution of issues . Events that might seem normal to an end-user can be observed by Red Hat from a broader perspective across a fleet of clusters. Some issues can be more rapidly identified from this point of view and resolved without an end-user needing to open a support case or file a Jira issue . Advanced release management . OpenShift Dedicated offers the candidate , fast , and stable release channels, which enable you to choose an update strategy. The graduation of a release from fast to stable is dependent on the success rate of updates and on the events seen during upgrades. With the information provided by connected clusters, Red Hat can improve the quality of releases to stable channels and react more rapidly to issues found in the fast channels. Targeted prioritization of new features and functionality . The data collected provides insights about which areas of OpenShift Dedicated are used most. With this information, Red Hat can focus on developing the new features and functionality that have the greatest impact for our customers. A streamlined support experience . You can provide a cluster ID for a connected cluster when creating a support ticket on the Red Hat Customer Portal . This enables Red Hat to deliver a streamlined support experience that is specific to your cluster, by using the connected information. This document provides more information about that enhanced support experience. Predictive analytics . The insights displayed for your cluster on OpenShift Cluster Manager are enabled by the information collected from connected clusters. Red Hat is investing in applying deep learning, machine learning, and artificial intelligence automation to help identify issues that OpenShift Dedicated clusters are exposed to. On OpenShift Dedicated, remote health reporting is always enabled. You cannot opt out of it. 4.1.1. About Telemetry Telemetry sends a carefully chosen subset of the cluster monitoring metrics to Red Hat. The Telemeter Client fetches the metrics values every four minutes and thirty seconds and uploads the data to Red Hat. These metrics are described in this document. 
This stream of data is used by Red Hat to monitor the clusters in real-time and to react as necessary to problems that impact our customers. It also allows Red Hat to roll out OpenShift Dedicated upgrades to customers to minimize service impact and continuously improve the upgrade experience. This debugging information is available to Red Hat Support and Engineering teams with the same restrictions as accessing data reported through support cases. All connected cluster information is used by Red Hat to help make OpenShift Dedicated better and more intuitive to use. Additional resources See the OpenShift Dedicated upgrade documentation for more information about upgrading a cluster. 4.1.1.1. Information collected by Telemetry The following information is collected by Telemetry: 4.1.1.1.1. System information Version information, including the OpenShift Dedicated cluster version and installed update details that are used to determine update version availability Update information, including the number of updates available per cluster, the channel and image repository used for an update, update progress information, and the number of errors that occur in an update The unique random identifier that is generated during an installation Configuration details that help Red Hat Support to provide beneficial support for customers, including node configuration at the cloud infrastructure level, hostnames, IP addresses, Kubernetes pod names, namespaces, and services The OpenShift Dedicated framework components installed in a cluster and their condition and status Events for all namespaces listed as "related objects" for a degraded Operator Information about degraded software Information about the validity of certificates The name of the provider platform that OpenShift Dedicated is deployed on and the data center location 4.1.1.1.2. Sizing Information Sizing information about clusters, machine types, and machines, including the number of CPU cores and the amount of RAM used for each The number of etcd members and the number of objects stored in the etcd cluster 4.1.1.1.3. Usage information Usage information about components, features, and extensions Usage details about Technology Previews and unsupported configurations Telemetry does not collect identifying information such as usernames or passwords. Red Hat does not intend to collect personal information. If Red Hat discovers that personal information has been inadvertently received, Red Hat will delete such information. To the extent that any telemetry data constitutes personal data, please refer to the Red Hat Privacy Statement for more information about Red Hat's privacy practices. 4.1.1.2. User Telemetry Red Hat collects anonymized user data from your browser. This anonymized data includes what pages, features, and resource types that the user of all clusters with enabled telemetry uses. Other considerations: User events are grouped as a SHA-1 hash. User's IP address is saved as 0.0.0.0 . User names and IP addresses are never saved as separate values. Additional resources See Showing data collected by Telemetry for details about how to list the attributes that Telemetry gathers from Prometheus in OpenShift Dedicated. See the upstream cluster-monitoring-operator source code for a list of the attributes that Telemetry gathers from Prometheus. 4.1.2. About the Insights Operator The Insights Operator periodically gathers configuration and component failure status and, by default, reports that data every two hours to Red Hat. 
This information enables Red Hat to assess configuration and deeper failure data than is reported through Telemetry. Users of OpenShift Dedicated can display the report of each cluster in the Insights Advisor service on Red Hat Hybrid Cloud Console. If any issues have been identified, Insights provides further details and, if available, steps on how to solve a problem. The Insights Operator does not collect identifying information, such as user names, passwords, or certificates. See Red Hat Insights Data & Application Security for information about Red Hat Insights data collection and controls. Red Hat uses all connected cluster information to: Identify potential cluster issues and provide a solution and preventive actions in the Insights Advisor service on Red Hat Hybrid Cloud Console Improve OpenShift Dedicated by providing aggregated and critical information to product and support teams Make OpenShift Dedicated more intuitive 4.1.2.1. Information collected by the Insights Operator The following information is collected by the Insights Operator: General information about your cluster and its components to identify issues that are specific to your OpenShift Dedicated version and environment. Configuration files, such as the image registry configuration, of your cluster to determine incorrect settings and issues that are specific to parameters you set. Errors that occur in the cluster components. Progress information of running updates, and the status of any component upgrades. Details of the platform that OpenShift Dedicated is deployed on and the region that the cluster is located in If an Operator reports an issue, information is collected about core OpenShift Dedicated pods in the openshift-* and kube-* projects. This includes state, resource, security context, volume information, and more. Additional resources What data is being collected by the Insights Operator in OpenShift? The Insights Operator source code is available for review and contribution. See the Insights Operator upstream project for a list of the items collected by the Insights Operator. 4.1.3. Understanding Telemetry and Insights Operator data flow The Telemeter Client collects selected time series data from the Prometheus API. The time series data is uploaded to api.openshift.com every four minutes and thirty seconds for processing. The Insights Operator gathers selected data from the Kubernetes API and the Prometheus API into an archive. The archive is uploaded to OpenShift Cluster Manager every two hours for processing. The Insights Operator also downloads the latest Insights analysis from OpenShift Cluster Manager . This is used to populate the Insights status pop-up that is included in the Overview page in the OpenShift Dedicated web console. All of the communication with Red Hat occurs over encrypted channels by using Transport Layer Security (TLS) and mutual certificate authentication. All of the data is encrypted in transit and at rest. Access to the systems that handle customer data is controlled through multi-factor authentication and strict authorization controls. Access is granted on a need-to-know basis and is limited to required operations. Telemetry and Insights Operator data flow Additional resources See Monitoring overview 4.1.4. Additional details about how remote health monitoring data is used The information collected to enable remote health monitoring is detailed in Information collected by Telemetry and Information collected by the Insights Operator . 
As further described in the preceding sections of this document, Red Hat collects data about your use of the Red Hat Product(s) for purposes such as providing support and upgrades, optimizing performance or configuration, minimizing service impacts, identifying and remediating threats, troubleshooting, improving the offerings and user experience, responding to issues, and for billing purposes if applicable. Collection safeguards Red Hat employs technical and organizational measures designed to protect the telemetry and configuration data. Sharing Red Hat may share the data collected through Telemetry and the Insights Operator internally within Red Hat to improve your user experience. Red Hat may share telemetry and configuration data with its business partners in an aggregated form that does not identify customers to help the partners better understand their markets and their customers' use of Red Hat offerings or to ensure the successful integration of products jointly supported by those partners. Third parties Red Hat may engage certain third parties to assist in the collection, analysis, and storage of the Telemetry and configuration data. 4.2. Showing data collected by remote health monitoring User control / enabling and disabling telemetry and configuration data collection As an administrator, you can review the metrics collected by Telemetry and the Insights Operator. 4.2.1. Showing data collected by Telemetry You can view the cluster and components time series data captured by Telemetry. Prerequisites You have installed the OpenShift Container Platform CLI ( oc ). You have access to the cluster as a user with the dedicated-admin role. Procedure Log in to a cluster. Run the following command, which queries a cluster's Prometheus service and returns the full set of time series data captured by Telemetry: Note The following example contains some values that are specific to OpenShift Dedicated on AWS. 
USD curl -G -k -H "Authorization: Bearer USD(oc whoami -t)" \ https://USD(oc get route prometheus-k8s-federate -n \ openshift-monitoring -o jsonpath="{.spec.host}")/federate \ --data-urlencode 'match[]={__name__=~"cluster:usage:.*"}' \ --data-urlencode 'match[]={__name__="count:up0"}' \ --data-urlencode 'match[]={__name__="count:up1"}' \ --data-urlencode 'match[]={__name__="cluster_version"}' \ --data-urlencode 'match[]={__name__="cluster_version_available_updates"}' \ --data-urlencode 'match[]={__name__="cluster_version_capability"}' \ --data-urlencode 'match[]={__name__="cluster_operator_up"}' \ --data-urlencode 'match[]={__name__="cluster_operator_conditions"}' \ --data-urlencode 'match[]={__name__="cluster_version_payload"}' \ --data-urlencode 'match[]={__name__="cluster_installer"}' \ --data-urlencode 'match[]={__name__="cluster_infrastructure_provider"}' \ --data-urlencode 'match[]={__name__="cluster_feature_set"}' \ --data-urlencode 'match[]={__name__="instance:etcd_object_counts:sum"}' \ --data-urlencode 'match[]={__name__="ALERTS",alertstate="firing"}' \ --data-urlencode 'match[]={__name__="code:apiserver_request_total:rate:sum"}' \ --data-urlencode 'match[]={__name__="cluster:capacity_cpu_cores:sum"}' \ --data-urlencode 'match[]={__name__="cluster:capacity_memory_bytes:sum"}' \ --data-urlencode 'match[]={__name__="cluster:cpu_usage_cores:sum"}' \ --data-urlencode 'match[]={__name__="cluster:memory_usage_bytes:sum"}' \ --data-urlencode 'match[]={__name__="openshift:cpu_usage_cores:sum"}' \ --data-urlencode 'match[]={__name__="openshift:memory_usage_bytes:sum"}' \ --data-urlencode 'match[]={__name__="workload:cpu_usage_cores:sum"}' \ --data-urlencode 'match[]={__name__="workload:memory_usage_bytes:sum"}' \ --data-urlencode 'match[]={__name__="cluster:virt_platform_nodes:sum"}' \ --data-urlencode 'match[]={__name__="cluster:node_instance_type_count:sum"}' \ --data-urlencode 'match[]={__name__="cnv:vmi_status_running:count"}' \ --data-urlencode 'match[]={__name__="cluster:vmi_request_cpu_cores:sum"}' \ --data-urlencode 'match[]={__name__="node_role_os_version_machine:cpu_capacity_cores:sum"}' \ --data-urlencode 'match[]={__name__="node_role_os_version_machine:cpu_capacity_sockets:sum"}' \ --data-urlencode 'match[]={__name__="subscription_sync_total"}' \ --data-urlencode 'match[]={__name__="olm_resolution_duration_seconds"}' \ --data-urlencode 'match[]={__name__="csv_succeeded"}' \ --data-urlencode 'match[]={__name__="csv_abnormal"}' \ --data-urlencode 'match[]={__name__="cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum"}' \ --data-urlencode 'match[]={__name__="cluster:kubelet_volume_stats_used_bytes:provisioner:sum"}' \ --data-urlencode 'match[]={__name__="ceph_cluster_total_bytes"}' \ --data-urlencode 'match[]={__name__="ceph_cluster_total_used_raw_bytes"}' \ --data-urlencode 'match[]={__name__="ceph_health_status"}' \ --data-urlencode 'match[]={__name__="odf_system_raw_capacity_total_bytes"}' \ --data-urlencode 'match[]={__name__="odf_system_raw_capacity_used_bytes"}' \ --data-urlencode 'match[]={__name__="odf_system_health_status"}' \ --data-urlencode 'match[]={__name__="job:ceph_osd_metadata:count"}' \ --data-urlencode 'match[]={__name__="job:kube_pv:count"}' \ --data-urlencode 'match[]={__name__="job:odf_system_pvs:count"}' \ --data-urlencode 'match[]={__name__="job:ceph_pools_iops:total"}' \ --data-urlencode 'match[]={__name__="job:ceph_pools_iops_bytes:total"}' \ --data-urlencode 'match[]={__name__="job:ceph_versions_running:count"}' \ 
--data-urlencode 'match[]={__name__="job:noobaa_total_unhealthy_buckets:sum"}' \ --data-urlencode 'match[]={__name__="job:noobaa_bucket_count:sum"}' \ --data-urlencode 'match[]={__name__="job:noobaa_total_object_count:sum"}' \ --data-urlencode 'match[]={__name__="odf_system_bucket_count", system_type="OCS", system_vendor="Red Hat"}' \ --data-urlencode 'match[]={__name__="odf_system_objects_total", system_type="OCS", system_vendor="Red Hat"}' \ --data-urlencode 'match[]={__name__="noobaa_accounts_num"}' \ --data-urlencode 'match[]={__name__="noobaa_total_usage"}' \ --data-urlencode 'match[]={__name__="console_url"}' \ --data-urlencode 'match[]={__name__="cluster:ovnkube_master_egress_routing_via_host:max"}' \ --data-urlencode 'match[]={__name__="cluster:network_attachment_definition_instances:max"}' \ --data-urlencode 'match[]={__name__="cluster:network_attachment_definition_enabled_instance_up:max"}' \ --data-urlencode 'match[]={__name__="cluster:ingress_controller_aws_nlb_active:sum"}' \ --data-urlencode 'match[]={__name__="cluster:route_metrics_controller_routes_per_shard:min"}' \ --data-urlencode 'match[]={__name__="cluster:route_metrics_controller_routes_per_shard:max"}' \ --data-urlencode 'match[]={__name__="cluster:route_metrics_controller_routes_per_shard:avg"}' \ --data-urlencode 'match[]={__name__="cluster:route_metrics_controller_routes_per_shard:median"}' \ --data-urlencode 'match[]={__name__="cluster:openshift_route_info:tls_termination:sum"}' \ --data-urlencode 'match[]={__name__="insightsclient_request_send_total"}' \ --data-urlencode 'match[]={__name__="cam_app_workload_migrations"}' \ --data-urlencode 'match[]={__name__="cluster:apiserver_current_inflight_requests:sum:max_over_time:2m"}' \ --data-urlencode 'match[]={__name__="cluster:alertmanager_integrations:max"}' \ --data-urlencode 'match[]={__name__="cluster:telemetry_selected_series:count"}' \ --data-urlencode 'match[]={__name__="openshift:prometheus_tsdb_head_series:sum"}' \ --data-urlencode 'match[]={__name__="openshift:prometheus_tsdb_head_samples_appended_total:sum"}' \ --data-urlencode 'match[]={__name__="monitoring:container_memory_working_set_bytes:sum"}' \ --data-urlencode 'match[]={__name__="namespace_job:scrape_series_added:topk3_sum1h"}' \ --data-urlencode 'match[]={__name__="namespace_job:scrape_samples_post_metric_relabeling:topk3"}' \ --data-urlencode 'match[]={__name__="monitoring:haproxy_server_http_responses_total:sum"}' \ --data-urlencode 'match[]={__name__="rhmi_status"}' \ --data-urlencode 'match[]={__name__="status:upgrading:version:rhoam_state:max"}' \ --data-urlencode 'match[]={__name__="state:rhoam_critical_alerts:max"}' \ --data-urlencode 'match[]={__name__="state:rhoam_warning_alerts:max"}' \ --data-urlencode 'match[]={__name__="rhoam_7d_slo_percentile:max"}' \ --data-urlencode 'match[]={__name__="rhoam_7d_slo_remaining_error_budget:max"}' \ --data-urlencode 'match[]={__name__="cluster_legacy_scheduler_policy"}' \ --data-urlencode 'match[]={__name__="cluster_master_schedulable"}' \ --data-urlencode 'match[]={__name__="che_workspace_status"}' \ --data-urlencode 'match[]={__name__="che_workspace_started_total"}' \ --data-urlencode 'match[]={__name__="che_workspace_failure_total"}' \ --data-urlencode 'match[]={__name__="che_workspace_start_time_seconds_sum"}' \ --data-urlencode 'match[]={__name__="che_workspace_start_time_seconds_count"}' \ --data-urlencode 'match[]={__name__="cco_credentials_mode"}' \ --data-urlencode 'match[]={__name__="cluster:kube_persistentvolume_plugin_type_counts:sum"}' \ 
--data-urlencode 'match[]={__name__="visual_web_terminal_sessions_total"}' \ --data-urlencode 'match[]={__name__="acm_managed_cluster_info"}' \ --data-urlencode 'match[]={__name__="cluster:vsphere_vcenter_info:sum"}' \ --data-urlencode 'match[]={__name__="cluster:vsphere_esxi_version_total:sum"}' \ --data-urlencode 'match[]={__name__="cluster:vsphere_node_hw_version_total:sum"}' \ --data-urlencode 'match[]={__name__="openshift:build_by_strategy:sum"}' \ --data-urlencode 'match[]={__name__="rhods_aggregate_availability"}' \ --data-urlencode 'match[]={__name__="rhods_total_users"}' \ --data-urlencode 'match[]={__name__="instance:etcd_disk_wal_fsync_duration_seconds:histogram_quantile",quantile="0.99"}' \ --data-urlencode 'match[]={__name__="instance:etcd_mvcc_db_total_size_in_bytes:sum"}' \ --data-urlencode 'match[]={__name__="instance:etcd_network_peer_round_trip_time_seconds:histogram_quantile",quantile="0.99"}' \ --data-urlencode 'match[]={__name__="instance:etcd_mvcc_db_total_size_in_use_in_bytes:sum"}' \ --data-urlencode 'match[]={__name__="instance:etcd_disk_backend_commit_duration_seconds:histogram_quantile",quantile="0.99"}' \ --data-urlencode 'match[]={__name__="jaeger_operator_instances_storage_types"}' \ --data-urlencode 'match[]={__name__="jaeger_operator_instances_strategies"}' \ --data-urlencode 'match[]={__name__="jaeger_operator_instances_agent_strategies"}' \ --data-urlencode 'match[]={__name__="appsvcs:cores_by_product:sum"}' \ --data-urlencode 'match[]={__name__="nto_custom_profiles:count"}' \ --data-urlencode 'match[]={__name__="openshift_csi_share_configmap"}' \ --data-urlencode 'match[]={__name__="openshift_csi_share_secret"}' \ --data-urlencode 'match[]={__name__="openshift_csi_share_mount_failures_total"}' \ --data-urlencode 'match[]={__name__="openshift_csi_share_mount_requests_total"}' \ --data-urlencode 'match[]={__name__="cluster:velero_backup_total:max"}' \ --data-urlencode 'match[]={__name__="cluster:velero_restore_total:max"}' \ --data-urlencode 'match[]={__name__="eo_es_storage_info"}' \ --data-urlencode 'match[]={__name__="eo_es_redundancy_policy_info"}' \ --data-urlencode 'match[]={__name__="eo_es_defined_delete_namespaces_total"}' \ --data-urlencode 'match[]={__name__="eo_es_misconfigured_memory_resources_info"}' \ --data-urlencode 'match[]={__name__="cluster:eo_es_data_nodes_total:max"}' \ --data-urlencode 'match[]={__name__="cluster:eo_es_documents_created_total:sum"}' \ --data-urlencode 'match[]={__name__="cluster:eo_es_documents_deleted_total:sum"}' \ --data-urlencode 'match[]={__name__="pod:eo_es_shards_total:max"}' \ --data-urlencode 'match[]={__name__="eo_es_cluster_management_state_info"}' \ --data-urlencode 'match[]={__name__="imageregistry:imagestreamtags_count:sum"}' \ --data-urlencode 'match[]={__name__="imageregistry:operations_count:sum"}' \ --data-urlencode 'match[]={__name__="log_logging_info"}' \ --data-urlencode 'match[]={__name__="log_collector_error_count_total"}' \ --data-urlencode 'match[]={__name__="log_forwarder_pipeline_info"}' \ --data-urlencode 'match[]={__name__="log_forwarder_input_info"}' \ --data-urlencode 'match[]={__name__="log_forwarder_output_info"}' \ --data-urlencode 'match[]={__name__="cluster:log_collected_bytes_total:sum"}' \ --data-urlencode 'match[]={__name__="cluster:log_logged_bytes_total:sum"}' \ --data-urlencode 'match[]={__name__="cluster:kata_monitor_running_shim_count:sum"}' \ --data-urlencode 'match[]={__name__="platform:hypershift_hostedclusters:max"}' \ --data-urlencode 
'match[]={__name__="platform:hypershift_nodepools:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_unhealthy_bucket_claims:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_buckets_claims:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_unhealthy_namespace_resources:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_namespace_resources:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_unhealthy_namespace_buckets:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_namespace_buckets:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_accounts:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_usage:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_system_health_status:max"}' \ --data-urlencode 'match[]={__name__="ocs_advanced_feature_usage"}' \ --data-urlencode 'match[]={__name__="os_image_url_override:sum"}' \ --data-urlencode 'match[]={__name__="openshift:openshift_network_operator_ipsec_state:info"}' 4.3. Using Insights to identify issues with your cluster Insights repeatedly analyzes the data Insights Operator sends. Users of OpenShift Dedicated can display the report in the Insights Advisor service on Red Hat Hybrid Cloud Console. 4.3.1. About Red Hat Insights Advisor for OpenShift Dedicated You can use Insights Advisor to assess and monitor the health of your OpenShift Dedicated clusters. Whether you are concerned about individual clusters, or with your whole infrastructure, it is important to be aware of the exposure of your cluster infrastructure to issues that can affect service availability, fault tolerance, performance, or security. Using cluster data collected by the Insights Operator, Insights repeatedly compares that data against a library of recommendations . Each recommendation is a set of cluster-environment conditions that can leave OpenShift Dedicated clusters at risk. The results of the Insights analysis are available in the Insights Advisor service on Red Hat Hybrid Cloud Console. In the Console, you can perform the following actions: See clusters impacted by a specific recommendation. Use robust filtering capabilities to refine your results to those recommendations. Learn more about individual recommendations, details about the risks they present, and get resolutions tailored to your individual clusters. Share results with other stakeholders. 4.3.2. Understanding Insights Advisor recommendations Insights Advisor bundles information about various cluster states and component configurations that can negatively affect the service availability, fault tolerance, performance, or security of your clusters. This information set is called a recommendation in Insights Advisor and includes the following information: Name: A concise description of the recommendation Added: When the recommendation was published to the Insights Advisor archive Category: Whether the issue has the potential to negatively affect service availability, fault tolerance, performance, or security Total risk: A value derived from the likelihood that the condition will negatively affect your infrastructure, and the impact on operations if that were to happen Clusters: A list of clusters on which a recommendation is detected Description: A brief synopsis of the issue, including how it affects your clusters Link to associated topics: More information from Red Hat about the issue 4.3.3. 
Displaying potential issues with your cluster This section describes how to display the Insights report in Insights Advisor on OpenShift Cluster Manager . Note that Insights repeatedly analyzes your cluster and shows the latest results. These results can change, for example, if you fix an issue or a new issue has been detected. Prerequisites Your cluster is registered on OpenShift Cluster Manager . Remote health reporting is enabled, which is the default. You are logged in to OpenShift Cluster Manager . Procedure Navigate to Advisor Recommendations on OpenShift Cluster Manager . Depending on the result, Insights Advisor displays one of the following: No matching recommendations found , if Insights did not identify any issues. A list of issues Insights has detected, grouped by risk (low, moderate, important, and critical). No clusters yet , if Insights has not yet analyzed the cluster. The analysis starts shortly after the cluster has been installed, registered, and connected to the internet. If any issues are displayed, click the > icon in front of the entry for more details. Depending on the issue, the details can also contain a link to more information from Red Hat about the issue. 4.3.4. Displaying all Insights Advisor recommendations The Recommendations view, by default, only displays the recommendations that are detected on your clusters. However, you can view all of the recommendations in the advisor archive. Prerequisites Remote health reporting is enabled, which is the default. Your cluster is registered on Red Hat Hybrid Cloud Console. You are logged in to OpenShift Cluster Manager . Procedure Navigate to Advisor Recommendations on OpenShift Cluster Manager . Click the X icons to the Clusters Impacted and Status filters. You can now browse through all of the potential recommendations for your cluster. 4.3.5. Advisor recommendation filters The Insights advisor service can return a large number of recommendations. To focus on your most critical recommendations, you can apply filters to the Advisor recommendations list to remove low-priority recommendations. By default, filters are set to only show enabled recommendations that are impacting one or more clusters. To view all or disabled recommendations in the Insights library, you can customize the filters. To apply a filter, select a filter type and then set its value based on the options that are available in the drop-down list. You can apply multiple filters to the list of recommendations. You can set the following filter types: Name: Search for a recommendation by name. Total risk: Select one or more values from Critical , Important , Moderate , and Low indicating the likelihood and the severity of a negative impact on a cluster. Impact: Select one or more values from Critical , High , Medium , and Low indicating the potential impact to the continuity of cluster operations. Likelihood: Select one or more values from Critical , High , Medium , and Low indicating the potential for a negative impact to a cluster if the recommendation comes to fruition. Category: Select one or more categories from Service Availability , Performance , Fault Tolerance , Security , and Best Practice to focus your attention on. Status: Click a radio button to show enabled recommendations (default), disabled recommendations, or all recommendations. Clusters impacted: Set the filter to show recommendations currently impacting one or more clusters, non-impacting recommendations, or all recommendations. 
Risk of change: Select one or more values from High , Moderate , Low , and Very low indicating the risk that the implementation of the resolution could have on cluster operations. 4.3.5.1. Filtering Insights advisor recommendations As an OpenShift Dedicated cluster manager, you can filter the recommendations that are displayed on the recommendations list. By applying filters, you can reduce the number of reported recommendations and concentrate on your highest priority recommendations. The following procedure demonstrates how to set and remove Category filters; however, the procedure is applicable to any of the filter types and respective values. Prerequisites You are logged in to the OpenShift Cluster Manager Hybrid Cloud Console . Procedure Go to Red Hat Hybrid Cloud Console OpenShift Advisor recommendations . In the main, filter-type drop-down list, select the Category filter type. Expand the filter-value drop-down list and select the checkbox to each category of recommendation you want to view. Leave the checkboxes for unnecessary categories clear. Optional: Add additional filters to further refine the list. Only recommendations from the selected categories are shown in the list. Verification After applying filters, you can view the updated recommendations list. The applied filters are added to the default filters. 4.3.5.2. Removing filters from Insights Advisor recommendations You can apply multiple filters to the list of recommendations. When ready, you can remove them individually or completely reset them. Removing filters individually Click the X icon to each filter, including the default filters, to remove them individually. Removing all non-default filters Click Reset filters to remove only the filters that you applied, leaving the default filters in place. 4.3.6. Disabling Insights Advisor recommendations You can disable specific recommendations that affect your clusters, so that they no longer appear in your reports. It is possible to disable a recommendation for a single cluster or all of your clusters. Note Disabling a recommendation for all of your clusters also applies to any future clusters. Prerequisites Remote health reporting is enabled, which is the default. Your cluster is registered on OpenShift Cluster Manager . You are logged in to OpenShift Cluster Manager . Procedure Navigate to Advisor Recommendations on OpenShift Cluster Manager . Optional: Use the Clusters Impacted and Status filters as needed. Disable an alert by using one of the following methods: To disable an alert: Click the Options menu for that alert, and then click Disable recommendation . Enter a justification note and click Save . To view the clusters affected by this alert before disabling the alert: Click the name of the recommendation to disable. You are directed to the single recommendation page. Review the list of clusters in the Affected clusters section. Click Actions Disable recommendation to disable the alert for all of your clusters. Enter a justification note and click Save . 4.3.7. Enabling a previously disabled Insights Advisor recommendation When a recommendation is disabled for all clusters, you no longer see the recommendation in the Insights Advisor. You can change this behavior. Prerequisites Remote health reporting is enabled, which is the default. Your cluster is registered on OpenShift Cluster Manager . You are logged in to OpenShift Cluster Manager . Procedure Navigate to Advisor Recommendations on OpenShift Cluster Manager . 
Filter the recommendations to display on the disabled recommendations: From the Status drop-down menu, select Status . From the Filter by status drop-down menu, select Disabled . Optional: Clear the Clusters impacted filter. Locate the recommendation to enable. Click the Options menu , and then click Enable recommendation . 4.3.8. Displaying the Insights status in the web console Insights repeatedly analyzes your cluster and you can display the status of identified potential issues of your cluster in the OpenShift Dedicated web console. This status shows the number of issues in the different categories and, for further details, links to the reports in OpenShift Cluster Manager . Prerequisites Your cluster is registered in OpenShift Cluster Manager . Remote health reporting is enabled, which is the default. You are logged in to the OpenShift Dedicated web console. Procedure Navigate to Home Overview in the OpenShift Dedicated web console. Click Insights on the Status card. The pop-up window lists potential issues grouped by risk. Click the individual categories or View all recommendations in Insights Advisor to display more details. 4.4. Using the Insights Operator The Insights Operator periodically gathers configuration and component failure status and, by default, reports that data every two hours to Red Hat. This information enables Red Hat to assess configuration and deeper failure data than is reported through Telemetry. Users of OpenShift Dedicated can display the report in the Insights Advisor service on Red Hat Hybrid Cloud Console. Additional resources For more information on using Insights Advisor to identify issues with your cluster, see Using Insights to identify issues with your cluster . 4.4.1. Understanding Insights Operator alerts The Insights Operator declares alerts through the Prometheus monitoring system to the Alertmanager. You can view these alerts in the Alerting UI in the OpenShift Dedicated web console by using one of the following methods: In the Administrator perspective, click Observe Alerting . In the Developer perspective, click Observe <project_name> Alerts tab. Currently, Insights Operator sends the following alerts when the conditions are met: Table 4.1. Insights Operator alerts Alert Description InsightsDisabled Insights Operator is disabled. SimpleContentAccessNotAvailable Simple content access is not enabled in Red Hat Subscription Management. InsightsRecommendationActive Insights has an active recommendation for the cluster. 4.4.2. Obfuscating Deployment Validation Operator data Cluster administrators can configure the Insight Operator to obfuscate data from the Deployment Validation Operator (DVO), if the Operator is installed. When the workload_names value is added to the insights-config ConfigMap object, workload names-rather than UIDs-are displayed in Insights for Openshift, making them more recognizable for cluster administrators. Prerequisites Remote health reporting is enabled, which is the default. You are logged in to the OpenShift Dedicated web console with the "cluster-admin" role. The insights-config ConfigMap object exists in the openshift-insights namespace. The cluster is self managed and the Deployment Validation Operator is installed. Procedure Go to Workloads ConfigMaps and select Project: openshift-insights . Click on the insights-config ConfigMap object to open it. Click Actions and select Edit ConfigMap . Click the YAML view radio button. In the file, set the obfuscation attribute with the workload_names value. 
apiVersion: v1 kind: ConfigMap # ... data: config.yaml: | dataReporting: obfuscation: - workload_names # ... Click Save . The insights-config config-map details page opens. Verify that the value of the config.yaml obfuscation attribute is set to - workload_names . | [
"curl -G -k -H \"Authorization: Bearer USD(oc whoami -t)\" https://USD(oc get route prometheus-k8s-federate -n openshift-monitoring -o jsonpath=\"{.spec.host}\")/federate --data-urlencode 'match[]={__name__=~\"cluster:usage:.*\"}' --data-urlencode 'match[]={__name__=\"count:up0\"}' --data-urlencode 'match[]={__name__=\"count:up1\"}' --data-urlencode 'match[]={__name__=\"cluster_version\"}' --data-urlencode 'match[]={__name__=\"cluster_version_available_updates\"}' --data-urlencode 'match[]={__name__=\"cluster_version_capability\"}' --data-urlencode 'match[]={__name__=\"cluster_operator_up\"}' --data-urlencode 'match[]={__name__=\"cluster_operator_conditions\"}' --data-urlencode 'match[]={__name__=\"cluster_version_payload\"}' --data-urlencode 'match[]={__name__=\"cluster_installer\"}' --data-urlencode 'match[]={__name__=\"cluster_infrastructure_provider\"}' --data-urlencode 'match[]={__name__=\"cluster_feature_set\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_object_counts:sum\"}' --data-urlencode 'match[]={__name__=\"ALERTS\",alertstate=\"firing\"}' --data-urlencode 'match[]={__name__=\"code:apiserver_request_total:rate:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:capacity_cpu_cores:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:capacity_memory_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"workload:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"workload:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:virt_platform_nodes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:node_instance_type_count:sum\"}' --data-urlencode 'match[]={__name__=\"cnv:vmi_status_running:count\"}' --data-urlencode 'match[]={__name__=\"cluster:vmi_request_cpu_cores:sum\"}' --data-urlencode 'match[]={__name__=\"node_role_os_version_machine:cpu_capacity_cores:sum\"}' --data-urlencode 'match[]={__name__=\"node_role_os_version_machine:cpu_capacity_sockets:sum\"}' --data-urlencode 'match[]={__name__=\"subscription_sync_total\"}' --data-urlencode 'match[]={__name__=\"olm_resolution_duration_seconds\"}' --data-urlencode 'match[]={__name__=\"csv_succeeded\"}' --data-urlencode 'match[]={__name__=\"csv_abnormal\"}' --data-urlencode 'match[]={__name__=\"cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:kubelet_volume_stats_used_bytes:provisioner:sum\"}' --data-urlencode 'match[]={__name__=\"ceph_cluster_total_bytes\"}' --data-urlencode 'match[]={__name__=\"ceph_cluster_total_used_raw_bytes\"}' --data-urlencode 'match[]={__name__=\"ceph_health_status\"}' --data-urlencode 'match[]={__name__=\"odf_system_raw_capacity_total_bytes\"}' --data-urlencode 'match[]={__name__=\"odf_system_raw_capacity_used_bytes\"}' --data-urlencode 'match[]={__name__=\"odf_system_health_status\"}' --data-urlencode 'match[]={__name__=\"job:ceph_osd_metadata:count\"}' --data-urlencode 'match[]={__name__=\"job:kube_pv:count\"}' --data-urlencode 'match[]={__name__=\"job:odf_system_pvs:count\"}' --data-urlencode 'match[]={__name__=\"job:ceph_pools_iops:total\"}' --data-urlencode 'match[]={__name__=\"job:ceph_pools_iops_bytes:total\"}' --data-urlencode 'match[]={__name__=\"job:ceph_versions_running:count\"}' 
--data-urlencode 'match[]={__name__=\"job:noobaa_total_unhealthy_buckets:sum\"}' --data-urlencode 'match[]={__name__=\"job:noobaa_bucket_count:sum\"}' --data-urlencode 'match[]={__name__=\"job:noobaa_total_object_count:sum\"}' --data-urlencode 'match[]={__name__=\"odf_system_bucket_count\", system_type=\"OCS\", system_vendor=\"Red Hat\"}' --data-urlencode 'match[]={__name__=\"odf_system_objects_total\", system_type=\"OCS\", system_vendor=\"Red Hat\"}' --data-urlencode 'match[]={__name__=\"noobaa_accounts_num\"}' --data-urlencode 'match[]={__name__=\"noobaa_total_usage\"}' --data-urlencode 'match[]={__name__=\"console_url\"}' --data-urlencode 'match[]={__name__=\"cluster:ovnkube_master_egress_routing_via_host:max\"}' --data-urlencode 'match[]={__name__=\"cluster:network_attachment_definition_instances:max\"}' --data-urlencode 'match[]={__name__=\"cluster:network_attachment_definition_enabled_instance_up:max\"}' --data-urlencode 'match[]={__name__=\"cluster:ingress_controller_aws_nlb_active:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:min\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:max\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:avg\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:median\"}' --data-urlencode 'match[]={__name__=\"cluster:openshift_route_info:tls_termination:sum\"}' --data-urlencode 'match[]={__name__=\"insightsclient_request_send_total\"}' --data-urlencode 'match[]={__name__=\"cam_app_workload_migrations\"}' --data-urlencode 'match[]={__name__=\"cluster:apiserver_current_inflight_requests:sum:max_over_time:2m\"}' --data-urlencode 'match[]={__name__=\"cluster:alertmanager_integrations:max\"}' --data-urlencode 'match[]={__name__=\"cluster:telemetry_selected_series:count\"}' --data-urlencode 'match[]={__name__=\"openshift:prometheus_tsdb_head_series:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:prometheus_tsdb_head_samples_appended_total:sum\"}' --data-urlencode 'match[]={__name__=\"monitoring:container_memory_working_set_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"namespace_job:scrape_series_added:topk3_sum1h\"}' --data-urlencode 'match[]={__name__=\"namespace_job:scrape_samples_post_metric_relabeling:topk3\"}' --data-urlencode 'match[]={__name__=\"monitoring:haproxy_server_http_responses_total:sum\"}' --data-urlencode 'match[]={__name__=\"rhmi_status\"}' --data-urlencode 'match[]={__name__=\"status:upgrading:version:rhoam_state:max\"}' --data-urlencode 'match[]={__name__=\"state:rhoam_critical_alerts:max\"}' --data-urlencode 'match[]={__name__=\"state:rhoam_warning_alerts:max\"}' --data-urlencode 'match[]={__name__=\"rhoam_7d_slo_percentile:max\"}' --data-urlencode 'match[]={__name__=\"rhoam_7d_slo_remaining_error_budget:max\"}' --data-urlencode 'match[]={__name__=\"cluster_legacy_scheduler_policy\"}' --data-urlencode 'match[]={__name__=\"cluster_master_schedulable\"}' --data-urlencode 'match[]={__name__=\"che_workspace_status\"}' --data-urlencode 'match[]={__name__=\"che_workspace_started_total\"}' --data-urlencode 'match[]={__name__=\"che_workspace_failure_total\"}' --data-urlencode 'match[]={__name__=\"che_workspace_start_time_seconds_sum\"}' --data-urlencode 'match[]={__name__=\"che_workspace_start_time_seconds_count\"}' --data-urlencode 'match[]={__name__=\"cco_credentials_mode\"}' --data-urlencode 
'match[]={__name__=\"cluster:kube_persistentvolume_plugin_type_counts:sum\"}' --data-urlencode 'match[]={__name__=\"visual_web_terminal_sessions_total\"}' --data-urlencode 'match[]={__name__=\"acm_managed_cluster_info\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_vcenter_info:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_esxi_version_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_node_hw_version_total:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:build_by_strategy:sum\"}' --data-urlencode 'match[]={__name__=\"rhods_aggregate_availability\"}' --data-urlencode 'match[]={__name__=\"rhods_total_users\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_disk_wal_fsync_duration_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_mvcc_db_total_size_in_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_network_peer_round_trip_time_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_mvcc_db_total_size_in_use_in_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_disk_backend_commit_duration_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_storage_types\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_strategies\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_agent_strategies\"}' --data-urlencode 'match[]={__name__=\"appsvcs:cores_by_product:sum\"}' --data-urlencode 'match[]={__name__=\"nto_custom_profiles:count\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_configmap\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_secret\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_mount_failures_total\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_mount_requests_total\"}' --data-urlencode 'match[]={__name__=\"cluster:velero_backup_total:max\"}' --data-urlencode 'match[]={__name__=\"cluster:velero_restore_total:max\"}' --data-urlencode 'match[]={__name__=\"eo_es_storage_info\"}' --data-urlencode 'match[]={__name__=\"eo_es_redundancy_policy_info\"}' --data-urlencode 'match[]={__name__=\"eo_es_defined_delete_namespaces_total\"}' --data-urlencode 'match[]={__name__=\"eo_es_misconfigured_memory_resources_info\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_data_nodes_total:max\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_documents_created_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_documents_deleted_total:sum\"}' --data-urlencode 'match[]={__name__=\"pod:eo_es_shards_total:max\"}' --data-urlencode 'match[]={__name__=\"eo_es_cluster_management_state_info\"}' --data-urlencode 'match[]={__name__=\"imageregistry:imagestreamtags_count:sum\"}' --data-urlencode 'match[]={__name__=\"imageregistry:operations_count:sum\"}' --data-urlencode 'match[]={__name__=\"log_logging_info\"}' --data-urlencode 'match[]={__name__=\"log_collector_error_count_total\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_pipeline_info\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_input_info\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_output_info\"}' --data-urlencode 'match[]={__name__=\"cluster:log_collected_bytes_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:log_logged_bytes_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:kata_monitor_running_shim_count:sum\"}' --data-urlencode 
'match[]={__name__=\"platform:hypershift_hostedclusters:max\"}' --data-urlencode 'match[]={__name__=\"platform:hypershift_nodepools:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_unhealthy_bucket_claims:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_buckets_claims:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_unhealthy_namespace_resources:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_namespace_resources:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_unhealthy_namespace_buckets:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_namespace_buckets:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_accounts:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_usage:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_system_health_status:max\"}' --data-urlencode 'match[]={__name__=\"ocs_advanced_feature_usage\"}' --data-urlencode 'match[]={__name__=\"os_image_url_override:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:openshift_network_operator_ipsec_state:info\"}'",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | dataReporting: obfuscation: - workload_names"
] | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/support/remote-health-monitoring-with-connected-clusters |
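As a sketch of a CLI alternative to the console steps above, the same dataReporting.obfuscation setting could be applied by updating the insights-config ConfigMap directly. This assumes the ConfigMap holds no other settings that you need to preserve; the console procedure remains the documented path.

apiVersion: v1
kind: ConfigMap
metadata:
  name: insights-config
  namespace: openshift-insights
data:
  config.yaml: |
    dataReporting:
      obfuscation:
        - workload_names

Apply it with USD oc apply -f insights-config.yaml and confirm the result with USD oc get configmap insights-config -n openshift-insights -o yaml.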
function::read_stopwatch_us | function::read_stopwatch_us Name function::read_stopwatch_us - Reads the time in microseconds for a stopwatch Synopsis Arguments name stopwatch name Description Returns time in microseconds for stopwatch name . Creates stopwatch name if it does not currently exist. | [
"read_stopwatch_us:long(name:string)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-read-stopwatch-us |
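A small illustrative script, assuming the companion stopwatch functions from the same tapset (such as start_stopwatch) are available in your SystemTap release:

# report how long the session ran, in microseconds
probe begin { start_stopwatch("session") }
probe end { printf("session ran for %d us\n", read_stopwatch_us("session")) }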
probe::vm.munmap | probe::vm.munmap Name probe::vm.munmap - Fires when an munmap is requested. Synopsis Values length The length of the memory segment name Name of the probe point address The requested address Context The process calling munmap. | [
"vm.munmap"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-vm-munmap |
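A minimal illustrative one-liner that prints the documented values (address and length) for each munmap request; the output format is only an example:

probe vm.munmap { printf("%s (pid %d) munmapped %d bytes at 0x%x\n", execname(), pid(), length, address) }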
Chapter 8. Ingress Node Firewall Operator in OpenShift Container Platform | Chapter 8. Ingress Node Firewall Operator in OpenShift Container Platform The Ingress Node Firewall Operator provides a stateless, eBPF-based firewall for managing node-level ingress traffic in OpenShift Container Platform. 8.1. Ingress Node Firewall Operator The Ingress Node Firewall Operator provides ingress firewall rules at a node level by deploying the daemon set to nodes you specify and manage in the firewall configurations. To deploy the daemon set, you create an IngressNodeFirewallConfig custom resource (CR). The Operator applies the IngressNodeFirewallConfig CR to create the ingress node firewall daemon set, which runs on all nodes that match the nodeSelector . You configure the rules of the IngressNodeFirewall CR and apply them to clusters by using the nodeSelector and setting its label values to "true". Important The Ingress Node Firewall Operator supports only stateless firewall rules. Network interface controllers (NICs) that do not support native XDP drivers will run at a lower performance. For OpenShift Container Platform 4.14 or later, you must run Ingress Node Firewall Operator on RHEL 9.0 or later. Ingress Node Firewall Operator is not supported on Amazon Web Services (AWS) with the default OpenShift installation or on Red Hat OpenShift Service on AWS (ROSA). For more information on Red Hat OpenShift Service on AWS support and ingress, see Ingress Operator in Red Hat OpenShift Service on AWS . 8.2. Installing the Ingress Node Firewall Operator As a cluster administrator, you can install the Ingress Node Firewall Operator by using the OpenShift Container Platform CLI or the web console. 8.2.1. Installing the Ingress Node Firewall Operator using the CLI As a cluster administrator, you can install the Operator using the CLI. Prerequisites You have installed the OpenShift CLI ( oc ). You have an account with administrator privileges. Procedure To create the openshift-ingress-node-firewall namespace, enter the following command: USD cat << EOF| oc create -f - apiVersion: v1 kind: Namespace metadata: labels: pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/enforce-version: v1.24 name: openshift-ingress-node-firewall EOF To create an OperatorGroup CR, enter the following command: USD cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ingress-node-firewall-operators namespace: openshift-ingress-node-firewall EOF Subscribe to the Ingress Node Firewall Operator. To create a Subscription CR for the Ingress Node Firewall Operator, enter the following command: USD cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ingress-node-firewall-sub namespace: openshift-ingress-node-firewall spec: name: ingress-node-firewall channel: stable source: redhat-operators sourceNamespace: openshift-marketplace EOF To verify that the Operator is installed, enter the following command: USD oc get ip -n openshift-ingress-node-firewall Example output NAME CSV APPROVAL APPROVED install-5cvnz ingress-node-firewall.4.14.0-202211122336 Automatic true To verify the version of the Operator, enter the following command: USD oc get csv -n openshift-ingress-node-firewall Example output NAME DISPLAY VERSION REPLACES PHASE ingress-node-firewall.4.14.0-202211122336 Ingress Node Firewall Operator 4.14.0-202211122336 ingress-node-firewall.4.14.0-202211102047 Succeeded 8.2.2.
Installing the Ingress Node Firewall Operator using the web console As a cluster administrator, you can install the Operator using the web console. Prerequisites You have installed the OpenShift CLI ( oc ). You have an account with administrator privileges. Procedure Install the Ingress Node Firewall Operator: In the OpenShift Container Platform web console, click Operators OperatorHub . Select Ingress Node Firewall Operator from the list of available Operators, and then click Install . On the Install Operator page, under Installed Namespace , select Operator recommended Namespace . Click Install . Verify that the Ingress Node Firewall Operator is installed successfully: Navigate to the Operators Installed Operators page. Ensure that Ingress Node Firewall Operator is listed in the openshift-ingress-node-firewall project with a Status of InstallSucceeded . Note During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message. If the Operator does not have a Status of InstallSucceeded , troubleshoot using the following steps: Inspect the Operator Subscriptions and Install Plans tabs for any failures or errors under Status . Navigate to the Workloads Pods page and check the logs for pods in the openshift-ingress-node-firewall project. Check the namespace of the YAML file. If the annotation is missing, you can add the annotation workload.openshift.io/allowed=management to the Operator namespace with the following command: USD oc annotate ns/openshift-ingress-node-firewall workload.openshift.io/allowed=management Note For single-node OpenShift clusters, the openshift-ingress-node-firewall namespace requires the workload.openshift.io/allowed=management annotation. 8.3. Deploying Ingress Node Firewall Operator Prerequisite The Ingress Node Firewall Operator is installed. Procedure To deploy the Ingress Node Firewall Operator, create a IngressNodeFirewallConfig custom resource that will deploy the Operator's daemon set. You can deploy one or multiple IngressNodeFirewall CRDs to nodes by applying firewall rules. Create the IngressNodeFirewallConfig inside the openshift-ingress-node-firewall namespace named ingressnodefirewallconfig . Run the following command to deploy Ingress Node Firewall Operator rules: USD oc apply -f rule.yaml 8.3.1. Ingress Node Firewall configuration object The fields for the Ingress Node Firewall configuration object are described in the following table: Table 8.1. Ingress Node Firewall Configuration object Field Type Description metadata.name string The name of the CR object. The name of the firewall rules object must be ingressnodefirewallconfig . metadata.namespace string Namespace for the Ingress Firewall Operator CR object. The IngressNodeFirewallConfig CR must be created inside the openshift-ingress-node-firewall namespace. spec.nodeSelector string A node selection constraint used to target nodes through specified node labels. For example: spec: nodeSelector: node-role.kubernetes.io/worker: "" Note One label used in nodeSelector must match a label on the nodes in order for the daemon set to start. For example, if the node labels node-role.kubernetes.io/worker and node-type.kubernetes.io/vm are applied to a node, then at least one label must be set using nodeSelector for the daemon set to start. Note The Operator consumes the CR and creates an ingress node firewall daemon set on all the nodes that match the nodeSelector . 
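Both the configuration object and the rules object select nodes through node labels. If the nodes you want to target do not already carry the label referenced by your nodeSelector, you can add one first, as in the sketch below; the label name here is hypothetical, and any label referenced by your nodeSelector works the same way.

USD oc label node <node_name> do-node-ingress-firewall=true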
Ingress Node Firewall Operator example configuration A complete Ingress Node Firewall Configuration is specified in the following example: Example Ingress Node Firewall Configuration object apiVersion: ingressnodefirewall.openshift.io/v1alpha1 kind: IngressNodeFirewallConfig metadata: name: ingressnodefirewallconfig namespace: openshift-ingress-node-firewall spec: nodeSelector: node-role.kubernetes.io/worker: "" Note The Operator consumes the CR and creates an ingress node firewall daemon set on all the nodes that match the nodeSelector . 8.3.2. Ingress Node Firewall rules object The fields for the Ingress Node Firewall rules object are described in the following table: Table 8.2. Ingress Node Firewall rules object Field Type Description metadata.name string The name of the CR object. interfaces array The fields for this object specify the interfaces to apply the firewall rules to. For example, - en0 and - en1 . nodeSelector array You can use nodeSelector to select the nodes to apply the firewall rules to. Set the value of your named nodeselector labels to true to apply the rule. ingress object ingress allows you to configure the rules that allow outside access to the services on your cluster. Ingress object configuration The values for the ingress object are defined in the following table: Table 8.3. ingress object Field Type Description sourceCIDRs array Allows you to set the CIDR block. You can configure multiple CIDRs from different address families. Note Different CIDRs allow you to use the same order rule. In the case that there are multiple IngressNodeFirewall objects for the same nodes and interfaces with overlapping CIDRs, the order field will specify which rule is applied first. Rules are applied in ascending order. rules array Ingress firewall rules.order objects are ordered starting at 1 for each source.CIDR with up to 100 rules per CIDR. Lower order rules are executed first. rules.protocolConfig.protocol supports the following protocols: TCP, UDP, SCTP, ICMP and ICMPv6. ICMP and ICMPv6 rules can match against ICMP and ICMPv6 types or codes. TCP, UDP, and SCTP rules can match against a single destination port or a range of ports using <start : end-1> format. Set rules.action to allow to apply the rule or deny to disallow the rule. Note Ingress firewall rules are verified using a verification webhook that blocks any invalid configuration. The verification webhook prevents you from blocking any critical cluster services such as the API server or SSH. Ingress Node Firewall rules object example A complete Ingress Node Firewall configuration is specified in the following example: Example Ingress Node Firewall configuration apiVersion: ingressnodefirewall.openshift.io/v1alpha1 kind: IngressNodeFirewall metadata: name: ingressnodefirewall spec: interfaces: - eth0 nodeSelector: matchLabels: <ingress_firewall_label_name>: <label_value> 1 ingress: - sourceCIDRs: - 172.16.0.0/12 rules: - order: 10 protocolConfig: protocol: ICMP icmp: icmpType: 8 #ICMP Echo request action: Deny - order: 20 protocolConfig: protocol: TCP tcp: ports: "8000-9000" action: Deny - sourceCIDRs: - fc00:f853:ccd:e793::0/64 rules: - order: 10 protocolConfig: protocol: ICMPv6 icmpv6: icmpType: 128 #ICMPV6 Echo request action: Deny 1 A <label_name> and a <label_value> must exist on the node and must match the nodeselector label and value applied to the nodes you want the ingressfirewallconfig CR to run on. The <label_value> can be true or false . 
By using nodeSelector labels, you can target separate groups of nodes and apply different rules to them with the ingressfirewallconfig CR. Zero trust Ingress Node Firewall rules object example Zero trust Ingress Node Firewall rules can provide additional security to multi-interface clusters. For example, you can use zero trust Ingress Node Firewall rules to drop all traffic on a specific interface except for SSH. A complete configuration of a zero trust Ingress Node Firewall rule set is specified in the following example: Important In this configuration, you must add every port that your applications use to the allowlist to ensure proper functionality. Example zero trust Ingress Node Firewall rules apiVersion: ingressnodefirewall.openshift.io/v1alpha1 kind: IngressNodeFirewall metadata: name: ingressnodefirewall-zero-trust spec: interfaces: - eth1 1 nodeSelector: matchLabels: <ingress_firewall_label_name>: <label_value> 2 ingress: - sourceCIDRs: - 0.0.0.0/0 3 rules: - order: 10 protocolConfig: protocol: TCP tcp: ports: 22 action: Allow - order: 20 action: Deny 4 1 Network-interface cluster 2 The <label_name> and <label_value> need to match the nodeSelector label and value applied to the specific nodes with which you wish to apply the ingressfirewallconfig CR. 3 0.0.0.0/0 set to match any CIDR 4 action set to Deny 8.4. Viewing Ingress Node Firewall Operator rules Procedure Run the following command to view all current rules : USD oc get ingressnodefirewall Choose one of the returned <resource> names and run the following command to view the rules or configs: USD oc get <resource> <name> -o yaml 8.5. Troubleshooting the Ingress Node Firewall Operator Run the following command to list installed Ingress Node Firewall custom resource definitions (CRD): USD oc get crds | grep ingressnodefirewall Example output NAME READY UP-TO-DATE AVAILABLE AGE ingressnodefirewallconfigs.ingressnodefirewall.openshift.io 2022-08-25T10:03:01Z ingressnodefirewallnodestates.ingressnodefirewall.openshift.io 2022-08-25T10:03:00Z ingressnodefirewalls.ingressnodefirewall.openshift.io 2022-08-25T10:03:00Z Run the following command to view the state of the Ingress Node Firewall Operator: USD oc get pods -n openshift-ingress-node-firewall Example output NAME READY STATUS RESTARTS AGE ingress-node-firewall-controller-manager 2/2 Running 0 5d21h ingress-node-firewall-daemon-pqx56 3/3 Running 0 5d21h The following fields provide information about the status of the Operator: READY , STATUS , AGE , and RESTARTS . The STATUS field is Running when the Ingress Node Firewall Operator is deploying a daemon set to the assigned nodes. Run the following command to collect all ingress firewall node pods' logs: USD oc adm must-gather -- gather_ingress_node_firewall The logs are available in the sos node's report containing eBPF bpftool outputs at /sos_commands/ebpf . These reports include lookup tables used or updated as the ingress firewall XDP handles packet processing, updates statistics, and emits events. | [
"cat << EOF| oc create -f - apiVersion: v1 kind: Namespace metadata: labels: pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/enforce-version: v1.24 name: openshift-ingress-node-firewall EOF",
"cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ingress-node-firewall-operators namespace: openshift-ingress-node-firewall EOF",
"cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ingress-node-firewall-sub namespace: openshift-ingress-node-firewall spec: name: ingress-node-firewall channel: stable source: redhat-operators sourceNamespace: openshift-marketplace EOF",
"oc get ip -n openshift-ingress-node-firewall",
"NAME CSV APPROVAL APPROVED install-5cvnz ingress-node-firewall.4.14.0-202211122336 Automatic true",
"oc get csv -n openshift-ingress-node-firewall",
"NAME DISPLAY VERSION REPLACES PHASE ingress-node-firewall.4.14.0-202211122336 Ingress Node Firewall Operator 4.14.0-202211122336 ingress-node-firewall.4.14.0-202211102047 Succeeded",
"oc annotate ns/openshift-ingress-node-firewall workload.openshift.io/allowed=management",
"oc apply -f rule.yaml",
"spec: nodeSelector: node-role.kubernetes.io/worker: \"\"",
"apiVersion: ingressnodefirewall.openshift.io/v1alpha1 kind: IngressNodeFirewallConfig metadata: name: ingressnodefirewallconfig namespace: openshift-ingress-node-firewall spec: nodeSelector: node-role.kubernetes.io/worker: \"\"",
"apiVersion: ingressnodefirewall.openshift.io/v1alpha1 kind: IngressNodeFirewall metadata: name: ingressnodefirewall spec: interfaces: - eth0 nodeSelector: matchLabels: <ingress_firewall_label_name>: <label_value> 1 ingress: - sourceCIDRs: - 172.16.0.0/12 rules: - order: 10 protocolConfig: protocol: ICMP icmp: icmpType: 8 #ICMP Echo request action: Deny - order: 20 protocolConfig: protocol: TCP tcp: ports: \"8000-9000\" action: Deny - sourceCIDRs: - fc00:f853:ccd:e793::0/64 rules: - order: 10 protocolConfig: protocol: ICMPv6 icmpv6: icmpType: 128 #ICMPV6 Echo request action: Deny",
"apiVersion: ingressnodefirewall.openshift.io/v1alpha1 kind: IngressNodeFirewall metadata: name: ingressnodefirewall-zero-trust spec: interfaces: - eth1 1 nodeSelector: matchLabels: <ingress_firewall_label_name>: <label_value> 2 ingress: - sourceCIDRs: - 0.0.0.0/0 3 rules: - order: 10 protocolConfig: protocol: TCP tcp: ports: 22 action: Allow - order: 20 action: Deny 4",
"oc get ingressnodefirewall",
"oc get <resource> <name> -o yaml",
"oc get crds | grep ingressnodefirewall",
"NAME READY UP-TO-DATE AVAILABLE AGE ingressnodefirewallconfigs.ingressnodefirewall.openshift.io 2022-08-25T10:03:01Z ingressnodefirewallnodestates.ingressnodefirewall.openshift.io 2022-08-25T10:03:00Z ingressnodefirewalls.ingressnodefirewall.openshift.io 2022-08-25T10:03:00Z",
"oc get pods -n openshift-ingress-node-firewall",
"NAME READY STATUS RESTARTS AGE ingress-node-firewall-controller-manager 2/2 Running 0 5d21h ingress-node-firewall-daemon-pqx56 3/3 Running 0 5d21h",
"oc adm must-gather - gather_ingress_node_firewall"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/networking/ingress-node-firewall-operator |
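The commands above are collected from the procedures in this chapter. As a supplement, the following hedged sketch shows one possible end-to-end flow: apply the configuration and a rule file, then inspect the rules and the per-node state. The file names are placeholders, and the assumption that the node state objects live in the Operator namespace may not hold in every cluster.
# Deploy the Operator configuration and an example rule set (file names are placeholders)
$ oc apply -f ingressnodefirewallconfig.yaml
$ oc apply -f rule.yaml
# Confirm that the rules were accepted and inspect the state reported for each node
$ oc get ingressnodefirewall
$ oc get ingressnodefirewallnodestates -n openshift-ingress-node-firewall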
Chapter 2. Installing the MTR plugin for IntelliJ IDEA | Chapter 2. Installing the MTR plugin for IntelliJ IDEA You can install the MTR plugin in the Ultimate and the Community Edition releases of IntelliJ IDEA. Prerequisites The following are the prerequisites for the Migration Toolkit for Runtimes (MTR) installation: Java Development Kit (JDK) is installed. MTR supports the following JDKs: OpenJDK 11 OpenJDK 17 Oracle JDK 11 Oracle JDK 17 Eclipse Temurin JDK 11 Eclipse Temurin JDK 17 8 GB RAM macOS installation: the value of maxproc must be 2048 or greater. The latest version of mtr-cli from the MTR download page Procedure In IntelliJ IDEA, click the Plugins tab on the Welcome screen. Enter Migration Toolkit for Runtimes in the Search field on the Marketplace tab. Select the Migration Toolkit for Runtimes (MTR) by Red Hat plugin and click Install . The plugin is listed on the Installed tab. | null | https://docs.redhat.com/en/documentation/migration_toolkit_for_runtimes/1.2/html/intellij_idea_plugin_guide/intellij-idea-plugin-extension_idea-plugin-guide
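Before starting the procedure above, you can check the prerequisites from a terminal. The following is a hedged sketch only: the JDK vendor string varies, the sysctl key shown for macOS is an assumption, and the download path for mtr-cli is a placeholder.
# Confirm that a supported JDK (11 or 17) is on the PATH
$ java -version
# On macOS, confirm that the maximum process limit is 2048 or greater
$ sysctl kern.maxproc
# Confirm that the mtr-cli archive from the MTR download page is present (path and name are placeholders)
$ ls ~/Downloads/mtr-cli*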
Chapter 40. OProfile | Chapter 40. OProfile OProfile is a low overhead, system-wide performance monitoring tool. It uses the performance monitoring hardware on the processor to retrieve information about the kernel and executables on the system, such as when memory is referenced, the number of L2 cache requests, and the number of hardware interrupts received. On a Red Hat Enterprise Linux system, the oprofile RPM package must be installed to use this tool. Many processors include dedicated performance monitoring hardware. This hardware makes it possible to detect when certain events happen (such as the requested data not being in cache). The hardware normally takes the form of one or more counters that are incremented each time an event takes place. When the counter value, essentially rolls over, an interrupt is generated, making it possible to control the amount of detail (and therefore, overhead) produced by performance monitoring. OProfile uses this hardware (or a timer-based substitute in cases where performance monitoring hardware is not present) to collect samples of performance-related data each time a counter generates an interrupt. These samples are periodically written out to disk; later, the data contained in these samples can then be used to generate reports on system-level and application-level performance. OProfile is a useful tool, but be aware of some limitations when using it: Use of shared libraries - Samples for code in shared libraries are not attributed to the particular application unless the --separate=library option is used. Performance monitoring samples are inexact - When a performance monitoring register triggers a sample, the interrupt handling is not precise like a divide by zero exception. Due to the out-of-order execution of instructions by the processor, the sample may be recorded on a nearby instruction. opreport does not associate samples for inline functions' properly - opreport uses a simple address range mechanism to determine which function an address is in. Inline function samples are not attributed to the inline function but rather to the function the inline function was inserted into. OProfile accumulates data from multiple runs - OProfile is a system-wide profiler and expects processes to start up and shut down multiple times. Thus, samples from multiple runs accumulate. Use the command opcontrol --reset to clear out the samples from runs. Non-CPU-limited performance problems - OProfile is oriented to finding problems with CPU-limited processes. OProfile does not identify processes that are asleep because they are waiting on locks or for some other event to occur (for example an I/O device to finish an operation). 40.1. Overview of Tools Table 40.1, "OProfile Commands" provides a brief overview of the tools provided with the oprofile package. Table 40.1. OProfile Commands Command Description op_help Displays available events for the system's processor along with a brief description of each. op_import Converts sample database files from a foreign binary format to the native format for the system. Only use this option when analyzing a sample database from a different architecture. opannotate Creates annotated source for an executable if the application was compiled with debugging symbols. Refer to Section 40.5.3, "Using opannotate " for details. opcontrol Configures what data is collected. Refer to Section 40.2, "Configuring OProfile" for details. opreport Retrieves profile data. Refer to Section 40.5.1, "Using opreport " for details. 
oprofiled Runs as a daemon to periodically write sample data to disk. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/oprofile |
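The table above lists the individual tools. As a rough illustration only, the following sketch strings them together into a typical profiling session; it assumes the oprofile package is installed, profiles without kernel symbols, and uses a placeholder for the workload being measured.
# Configure profiling without kernel symbols (or use --vmlinux=/path/to/vmlinux for kernel profiling)
$ opcontrol --no-vmlinux
# Clear samples from previous runs, then collect samples around the workload
$ opcontrol --reset
$ opcontrol --start
$ ./my-workload    # placeholder for the application being profiled
$ opcontrol --shutdown
# Summarize the collected samples per executable and per symbol
$ opreport
$ opreport --symbols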
8.59. grub | 8.59. grub 8.59.1. RHBA-2013:1649 - grub bug fix and enhancement update Updated grub packages that fix several bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The grub packages provide GRUB (Grand Unified Boot Loader), a boot loader capable of booting a wide variety of operating systems. Bug Fixes BZ#851706 If the title of the GRUB menu entry exceeded the line length of 80 characters, the text showing the remaining time to a boot was inconsistent and thus appeared to be incorrect. The overflowing text was displayed on a new line and the whole text was moved one line down with every passing second. This update splits the text into two lines, and only the second line is rewritten as a boot countdown proceeds so that GRUB behaves correctly for long menu entries. BZ# 854652 When building a new version of grub packages, GRUB did not remove the grub.info file upon the "make clean" command. As a consequence, the grub.info file did not contain the latest changes after applying an update. To fix this problem, the GRUB Makefile has been modified so the grub.info file is now explicitly removed and generated with every package build. BZ#911715 The GRUB code did not comply with the Unified Extensible Firmware Interface (UEFI) specification and did not disable an EFI platform's watchdog timer as is required by the specification. Consequently, the system was rebooted if the watchdog was not disabled within 5-minutes time frame, which is undesirable behavior. A patch has been applied that disables the EFI watchdog immediately after GRUB is initialized so that EFI systems are no longer restarted unexpectedly. BZ# 916016 When booting a system in QEMU KVM with Open Virtual Machine Firmware (OVMF) BIOS, GRUB was not able to recognize virtio block devices, and the booting process exited to the GRUB shell. This happened because GRUB did not correctly tested paths to EFI devices. The GRUB code now verifies EFI device paths against EFI PCI device paths, and recognizes disk devices as expected in this scenario. BZ#918824 GRUB did not comply with the UEFI specification when handling the ExitBootServices() EFI function. If ExitBootServices() failed while retrieving a memory map, GRUB exited immediately instead of repeating the attempt. With this update, GRUB retries to obtain a memory map 5 times before exiting, and boot process continues on success. BZ# 922705 When building a 64-bit version of GRUB from a source package, it fails to link executable during the configure phase, unless a 32-bit version of the glibc-static package is installed. No error message was displayed upon GRUB failure in this situation. This has been fixed by setting the grub packages to depend directly on the /usr/lib/libc.a file, which can be provided in different environments. If the file is missing when building the grub packages, an appropriate error message is displayed. BZ# 928938 When installed on a multipath device, GRUB was unreadable and the system was unable to boot. This happened due to a bug in a regular expression used to match devices, and because the grub-install command could not resolve symbolic links to obtain device statistics. This update fixes these problems so that GRUB now boots as expected when installed on a multipath device. BZ#1008305 When booting in UEFI mode, GRUB previously allocated memory for a pointer to a structure instead allocating memory for the structure. 
This rendered GRUB unable to finish and pass control to the kernel on specific hardware configurations. This update fixes this problem so GRUB now allocates memory for a structure as expected and successfully passes control to the kernel. BZ#1017296 Previously, GRUB could not be installed on Non-Volatile Memory Express (NVMe) devices because it was unable to parse a device name during the installation process. This update adds regular expression support for matching NVMe devices, and GRUB can now be successfully installed on these devices. Enhancements BZ#848628 GRUB now provides a new menu option "macappend". When "macappend" is used either in the grub.conf file or on the GRUB command line, the "BOOTIF=<MAC_address>" parameter is appended to the kernel command line. This allows specifying a network interface for Anaconda to use during a PXE boot. Users of grub are advised to upgrade to these updated packages, which fix these bugs and add this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/grub
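As a hedged illustration of the multipath and NVMe fixes described in BZ#928938 and BZ#1017296 above, the following sketch shows the kind of grub-install invocations that these updates enable; the device names are placeholders for your environment and are not taken from the errata.
# Install GRUB on a multipath device (device name is a placeholder)
$ grub-install /dev/mapper/mpatha
# Install GRUB on an NVMe device (device name is a placeholder)
$ grub-install /dev/nvme0n1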
Chapter 1. Introduction | Chapter 1. Introduction Red Hat Satellite provides a Representational State Transfer (REST) API. The API provides software developers and system administrators with control over their Red Hat Satellite environment outside of the standard web interface. The REST API is useful for developers and administrators who aim to integrate the functionality of Red Hat Satellite with custom scripts or external applications that access the API over HTTP. 1.1. Overview of the Red Hat Satellite API The benefits of using the REST API are: Broad client support - any programming language, framework, or system with support for HTTP protocol can use the API. Self-descriptive - client applications require minimal knowledge of the Red Hat Satellite infrastructure because a user discovers many details at runtime. Resource-based model - the resource-based REST model provides a natural way to manage a virtualization platform. You can use the REST API to perform the following tasks: Integrate with enterprise IT systems. Integrate with third-party applications. Perform automated maintenance or error checking tasks. Automate repetitive tasks with scripts. As you prepare to upgrade Satellite Server, ensure that any scripts you use that contain Satellite API commands are up to date. API commands differ between versions of Satellite. 1.2. Satellite API Compared to Hammer CLI Tool For many tasks, you can use both Hammer and Satellite API. You can use Hammer as a human-friendly interface to Satellite API. For example, to test responses to API calls before applying them in a script, use the --debug option to inspect API calls that Hammer issues: hammer --debug organization list . In the background, each Hammer command first establishes a binding to the API and then sends a request. This can have performance implications when executing a large number of Hammer commands in sequence. In contrast, scripts that use API commands communicate directly with the Satellite API. Note that you must manually update scripts that use API commands, while Hammer automatically reflects changes in the API. For more information, see the Hammer CLI Guide . | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/api_guide/chap-red_hat_satellite-api_guide-the_red_hat_satellite_api |
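As a brief, hedged sketch of the comparison above: the first command asks Hammer to print the API calls that it issues, and the second calls the Satellite API directly. The hostname and credentials are placeholders, and the organizations endpoint is shown only as a representative API path.
# Inspect the API calls that Hammer issues for a command
$ hammer --debug organization list
# Call the Satellite API directly over HTTPS (hostname and credentials are placeholders)
$ curl --user admin:changeme --header "Accept: application/json" https://satellite.example.com/api/v2/organizations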
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_certificates_in_idm/proc_providing-feedback-on-red-hat-documentation_managing-certificates-in-idm
7.4 Release Notes | 7.4 Release Notes Red Hat Enterprise Linux 7 Release Notes for Red Hat Enterprise Linux 7.4 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/index |
Chapter 4. Deploy standalone Multicloud Object Gateway | Chapter 4. Deploy standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with the OpenShift Data Foundation provides the flexibility in deployment and helps to reduce the resource consumption. Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing the Local Storage Operator. Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway 4.1. Installing Local Storage Operator Use this procedure to install the Local Storage Operator from the Operator Hub before creating OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword... box to find the Local Storage Operator from the list of operators and click on it. Set the following options on the Install Operator page: Update Channel as stable . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Approval Strategy as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 4.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. For information about the hardware and software requirements, see Planning your deployment . Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions. You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in command line interface to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in Managing and Allocating Storage Resources guide. Procedure Navigate in the left pane of the OpenShift Web Console to click Operators OperatorHub . Scroll or type a keyword into the Filter by keyword box to search for OpenShift Data Foundation Operator. Click Install on the OpenShift Data Foundation operator page. On the Install Operator page, the following required options are selected by default: Update Channel as stable-4.9 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . 
Verification steps Verify that OpenShift Data Foundation Operator shows a green tick indicating successful installation. After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console, navigate to Storage and verify if OpenShift Data Foundation is available. 4.3. Creating standalone Multicloud Object Gateway on IBM Power Use this section to create only the Multicloud Object Gateway component with OpenShift Data Foundation. Prerequisites Ensure that OpenShift Data Foundation Operator is installed. (For deploying using local storage devices only) Ensure that Local Storage Operator is installed. To identify storage devices on each node, refer to Finding available storage devices . Procedure Log into the OpenShift Web Console. In openshift-local-storage namespace, click Operators Installed Operators to view the installed operators. Click the Local Storage installed operator. On the Operator Details page, click the Local Volume link. Click Create Local Volume . Click on YAML view for configuring Local Volume. Define a LocalVolume custom resource for filesystem PVs using the following YAML. The above definition selects sda local device from the worker-0 , worker-1 and worker-2 nodes. The localblock storage class is created and persistent volumes are provisioned from sda . Important Specify appropriate values of nodeSelector as per your environment. The device name should be same on all the worker nodes. You can also specify more than one devicePaths. Click Create . Make localblock storage class as the default storage class by annotating it. Click Storage StorageClasses from the left pane of the OpenShift Web Console. Click on the localblock storageClass. Edit the Annotations by adding the Key as storageclass.kubernetes.io/is-default-class and Value as true . Click Save . In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, expand Advanced . Select Multicloud Object Gateway for Deployment type . Click . Optional: In the Security page, select Connect to an external key management service . Key Management Service Provider is set to Vault by default. Enter Vault Service Name , host Address of Vault server ('https:// <hostname or ip> '), Port number , and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate , and Client Private Key . Click Save . Click . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage OpenShift Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. 
In the Details card, verify that the MCG information is displayed. Verify the state of the pods Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any worker node) ocs-metrics-exporter-* (1 pod on any worker node) odf-operator-controller-manager-* (1 pod on any worker node) odf-console-* (1 pod on any worker node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any worker node) Multicloud Object Gateway noobaa-operator-* (1 pod on any worker node) noobaa-core-* (1 pod on any worker node) noobaa-db-pg-* (1 pod on any worker node) noobaa-endpoint-* (1 pod on any worker node) noobaa-default-backing-store-noobaa-pod-* (1 pod on any worker node) | [
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"apiVersion: local.storage.openshift.io/v1 kind: LocalVolume metadata: name: localblock namespace: openshift-local-storage spec: logLevel: Normal managementState: Managed nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 - worker-2 storageClassDevices: - devicePaths: - /dev/sda storageClassName: localblock volumeMode: Filesystem"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_using_ibm_power/deploy-standalone-multicloud-object-gateway-ibm-power |
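The following hedged sketch shows command-line equivalents for two steps above: marking the localblock storage class as the default instead of editing the annotation in the console, and checking the Multicloud Object Gateway pods. It assumes the storage class and namespace names used in this chapter.
# Mark the localblock storage class as the default (same effect as the console annotation step)
$ oc patch storageclass localblock -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
# Confirm that the Multicloud Object Gateway pods are running in the openshift-storage namespace
$ oc get pods -n openshift-storage | grep noobaa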
Chapter 8. Configuring KIE Server to connect to Business Central | Chapter 8. Configuring KIE Server to connect to Business Central Warning This section provides a sample setup that you can use for testing purposes. Some of the values are unsuitable for a production environment, and are marked as such. If a KIE Server is not configured in your Red Hat Process Automation Manager environment, or if you require additional KIE Servers in your Red Hat Process Automation Manager environment, you must configure a KIE Server to connect to Business Central. Note If you are deploying KIE Server on Red Hat OpenShift Container Platform, see the Deploying an Red Hat Decision Manager environment on Red Hat OpenShift Container Platform 4 using Operators document for instructions about configuring it to connect to Business Central. KIE Server can be managed or unmanaged. If KIE Server is unmanaged, you must manually create and maintain KIE containers (deployment units). If KIE Server is managed, the Process Automation Manager controller manages the KIE Server configuration and you interact with the Process Automation Manager controller to create and maintain the KIE containers. Note Make the changes described in this section if KIE Server is managed by Business Central and you have installed Red Hat Decision Manager from the ZIP files. If you have installed Business Central, you can use the headless Process Automation Manager controller to manage KIE Server, as described in Chapter 9, Installing and running the headless Process Automation Manager controller . Prerequisites Business Central and KIE Server are installed in the base directory of the Red Hat JBoss EAP installation ( EAP_HOME ). Note You must install Business Central and KIE Server on different servers in production environments. In this sample situation, we use only one user named controllerUser , containing both rest-all and the kie-server roles. However, if you install KIE Server and Business Central on the same server, for example in a development environment, make the changes in the shared standalone-full.xml file as described in this section. Users with the following roles exist: In Business Central, a user with the role rest-all On KIE Server, a user with the role kie-server Procedure In your Red Hat Process Automation Manager installation directory, navigate to the standalone-full.xml file. For example, if you use a Red Hat JBoss EAP installation for Red Hat Process Automation Manager, go to USDEAP_HOME/standalone/configuration/standalone-full.xml . Open the standalone-full.xml file and under the <system-properties> tag, set the following JVM properties: Table 8.1. JVM Properties for the managed KIE Server instance Property Value Note org.kie.server.id default-kie-server The KIE Server ID. org.kie.server.controller http://localhost:8080/business-central/rest/controller The location of Business Central. The URL for connecting to the API of Business Central. org.kie.server.controller.user controllerUser The user name with the role rest-all who can log in to the Business Central. org.kie.server.controller.pwd controllerUser1234; The password of the user who can log in to the Business Central. org.kie.server.location http://localhost:8080/kie-server/services/rest/server The location of KIE Server. The URL for connecting to the API of KIE Server. Table 8.2. JVM Properties for the Business Central instance Property Value Note org.kie.server.user controllerUser The user name with the role kie-server . 
org.kie.server.pwd controllerUser1234; The password of the user. The following example shows how to configure a KIE Server instance: <property name="org.kie.server.id" value="default-kie-server"/> <property name="org.kie.server.controller" value="http://localhost:8080/business-central/rest/controller"/> <property name="org.kie.server.controller.user" value="controllerUser"/> <property name="org.kie.server.controller.pwd" value="controllerUser1234;"/> <property name="org.kie.server.location" value="http://localhost:8080/kie-server/services/rest/server"/> The following example shows how to configure a Business Central instance: <property name="org.kie.server.user" value="controllerUser"/> <property name="org.kie.server.pwd" value="controllerUser1234;"/> To verify that KIE Server starts successfully, send a GET request to http:// SERVER:PORT /kie-server/services/rest/server/ when KIE Server is running. For more information about running Red Hat Process Automation Manager on KIE Server, see Running Red Hat Process Automation Manager . After successful authentication, you receive an XML response similar to the following example: <response type="SUCCESS" msg="Kie Server info"> <kie-server-info> <capabilities>KieServer</capabilities> <capabilities>BRM</capabilities> <capabilities>BPM</capabilities> <capabilities>CaseMgmt</capabilities> <capabilities>BPM-UI</capabilities> <capabilities>BRP</capabilities> <capabilities>DMN</capabilities> <capabilities>Swagger</capabilities> <location>http://localhost:8230/kie-server/services/rest/server</location> <messages> <content>Server KieServerInfo{serverId='first-kie-server', version='7.5.1.Final-redhat-1', location='http://localhost:8230/kie-server/services/rest/server', capabilities=[KieServer, BRM, BPM, CaseMgmt, BPM-UI, BRP, DMN, Swagger]}started successfully at Mon Feb 05 15:44:35 AEST 2018</content> <severity>INFO</severity> <timestamp>2018-02-05T15:44:35.355+10:00</timestamp> </messages> <name>first-kie-server</name> <id>first-kie-server</id> <version>7.5.1.Final-redhat-1</version> </kie-server-info> </response> Verify successful registration: Log in to Business Central. Click Menu Deploy Execution Servers . If registration is successful, you will see the registered server ID. | [
"<property name=\"org.kie.server.id\" value=\"default-kie-server\"/> <property name=\"org.kie.server.controller\" value=\"http://localhost:8080/business-central/rest/controller\"/> <property name=\"org.kie.server.controller.user\" value=\"controllerUser\"/> <property name=\"org.kie.server.controller.pwd\" value=\"controllerUser1234;\"/> <property name=\"org.kie.server.location\" value=\"http://localhost:8080/kie-server/services/rest/server\"/>",
"<property name=\"org.kie.server.user\" value=\"controllerUser\"/> <property name=\"org.kie.server.pwd\" value=\"controllerUser1234;\"/>",
"<response type=\"SUCCESS\" msg=\"Kie Server info\"> <kie-server-info> <capabilities>KieServer</capabilities> <capabilities>BRM</capabilities> <capabilities>BPM</capabilities> <capabilities>CaseMgmt</capabilities> <capabilities>BPM-UI</capabilities> <capabilities>BRP</capabilities> <capabilities>DMN</capabilities> <capabilities>Swagger</capabilities> <location>http://localhost:8230/kie-server/services/rest/server</location> <messages> <content>Server KieServerInfo{serverId='first-kie-server', version='7.5.1.Final-redhat-1', location='http://localhost:8230/kie-server/services/rest/server', capabilities=[KieServer, BRM, BPM, CaseMgmt, BPM-UI, BRP, DMN, Swagger]}started successfully at Mon Feb 05 15:44:35 AEST 2018</content> <severity>INFO</severity> <timestamp>2018-02-05T15:44:35.355+10:00</timestamp> </messages> <name>first-kie-server</name> <id>first-kie-server</id> <version>7.5.1.Final-redhat-1</version> </kie-server-info> </response>"
] | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/managing_red_hat_decision_manager_and_kie_server_settings/kie-server-configure-central-proc_execution-server |
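As a hedged sketch of the verification step above, the following request uses the sample credentials from this chapter; adjust the host, port, and credentials for your environment, and note that the response shown earlier is only an example.
# Query the KIE Server REST API with the sample controller user
$ curl --user "controllerUser:controllerUser1234;" --header "Accept: application/xml" http://localhost:8080/kie-server/services/rest/server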
Chapter 1. Introduction to Apicurio Registry | Chapter 1. Introduction to Apicurio Registry This chapter introduces Apicurio Registry concepts and features and provides details on the supported artifact types that are stored in the registry: Section 1.1, "What is Apicurio Registry?" Section 1.2, "Schema and API artifacts in Apicurio Registry" Section 1.3, "Manage content using the Apicurio Registry web console" Section 1.4, "Apicurio Registry REST API for clients" Section 1.5, "Apicurio Registry storage options" Section 1.6, "Validate Kafka messages using schemas and Java client serializers/deserializers" Section 1.7, "Stream data to external systems with Kafka Connect converters" Section 1.8, "Apicurio Registry demonstration examples" Section 1.9, "Apicurio Registry available distributions" 1.1. What is Apicurio Registry? Apicurio Registry is a datastore for sharing standard event schemas and API designs across event-driven and API architectures. You can use Apicurio Registry to decouple the structure of your data from your client applications, and to share and manage your data types and API descriptions at runtime using a REST interface. Client applications can dynamically push or pull the latest schema updates to or from Apicurio Registry at runtime without needing to redeploy. Developer teams can query Apicurio Registry for existing schemas required for services already deployed in production, and can register new schemas required for new services in development. You can enable client applications to use schemas and API designs stored in Apicurio Registry by specifying the Apicurio Registry URL in your client application code. Apicurio Registry can store schemas used to serialize and deserialize messages, which are referenced from your client applications to ensure that the messages that they send and receive are compatible with those schemas. Using Apicurio Registry to decouple your data structure from your applications reduces costs by decreasing overall message size, and creates efficiencies by increasing consistent reuse of schemas and API designs across your organization. Apicurio Registry provides a web console to make it easy for developers and administrators to manage registry content. You can configure optional rules to govern the evolution of your Apicurio Registry content. These include rules to ensure that uploaded content is valid, or is compatible with other versions. Any configured rules must pass before new versions can be uploaded to Apicurio Registry, which ensures that time is not wasted on invalid or incompatible schemas or API designs. Apicurio Registry is based on the Apicurio Registry open source community project. For details, see https://github.com/apicurio/apicurio-registry . Apicurio Registry capabilities Multiple payload formats for standard event schema and API specifications such as Apache Avro, JSON Schema, Google Protobuf, AsyncAPI, OpenAPI, and more. Pluggable Apicurio Registry storage options in AMQ Streams or PostgreSQL database. Rules for content validation, compatibility, and integrity to govern how Apicurio Registry content evolves over time. Apicurio Registry content management using web console, REST API, command line, Maven plug-in, or Java client. Full Apache Kafka schema registry support, including integration with Kafka Connect for external systems. Kafka client serializers/deserializers (SerDes) to validate message types at runtime. Compatibility with existing Confluent schema registry client applications. 
Cloud-native Quarkus Java runtime for low memory footprint and fast deployment times. Operator-based installation of Apicurio Registry on OpenShift. OpenID Connect (OIDC) authentication using Red Hat Single Sign-On. 1.2. Schema and API artifacts in Apicurio Registry The items stored in Apicurio Registry, such as event schemas and API designs, are known as registry artifacts . The following shows an example of an Apache Avro schema artifact in JSON format for a simple share price application: Example Avro schema { "type": "record", "name": "price", "namespace": "com.example", "fields": [ { "name": "symbol", "type": "string" }, { "name": "price", "type": "string" } ] } When a schema or API design is added as an artifact in Apicurio Registry, client applications can then use that schema or API design to validate that the client messages conform to the correct data structure at runtime. Groups of schemas and APIs An artifact group is an optional named collection of schema or API artifacts. Each group contains a logically related set of schemas or API designs, typically managed by a single entity, belonging to a particular application or organization. You can create optional artifact groups when adding your schemas and API designs to organize them in Apicurio Registry. For example, you could create groups to match your development and production application environments, or your sales and engineering organizations. Schema and API groups can contain multiple artifact types. For example, you could have Protobuf, Avro, JSON Schema, OpenAPI, or AsyncAPI artifacts all in the same group. You can create schema and API artifacts and groups using the Apicurio Registry web console, REST API, command line, Maven plug-in, or Java client application. The following simple example shows using the Core Registry REST API: USD curl -X POST -H "Content-type: application/json; artifactType=AVRO" \ -H "X-Registry-ArtifactId: share-price" \ --data '{"type":"record","name":"price","namespace":"com.example", \ "fields":[{"name":"symbol","type":"string"},{"name":"price","type":"string"}]}' \ https://my-registry.example.com/apis/registry/v2/groups/my-group/artifacts This example creates an artifact group named my-group and adds an Avro schema with an artifact ID of share-price . Note Specifying a group is optional when using the Apicurio Registry web console, and a default group is created automatically. When using the REST API or Maven plug-in, specify the default group in the API path if you do not want to create a unique group. Additional resources For information on supported artifact types, see Chapter 9, Apicurio Registry artifact reference . For information on the Core Registry API, see the Apicurio Registry REST API documentation . References to other schemas and APIs Some Apicurio Registry artifact types can include artifact references from one artifact file to another. You can create efficiencies by defining reusable schema or API components, and then referencing them from multiple locations. For example, you can specify a reference in JSON Schema or OpenAPI using a USDref statement, or in Google Protobuf using an import statement, or in Apache Avro using a nested namespace. 
The following example shows a simple Avro schema named TradeKey that includes a reference to another schema named Exchange using a nested namespace: Tradekey schema with nested Exchange schema { "namespace": "com.kubetrade.schema.trade", "type": "record", "name": "TradeKey", "fields": [ { "name": "exchange", "type": "com.kubetrade.schema.common.Exchange" }, { "name": "key", "type": "string" } ] } Exchange schema { "namespace": "com.kubetrade.schema.common", "type": "enum", "name": "Exchange", "symbols" : ["GEMINI"] } An artifact reference is stored in Apicurio Registry as a collection of artifact metadata that maps from an artifact type-specific reference to an internal Apicurio Registry reference. Each artifact reference in Apicurio Registry is composed of the following: Group ID Artifact ID Artifact version Artifact reference name You can manage artifact references using the Apicurio Registry core REST API, Maven plug-in, and Java serializers/deserializers (SerDes). Apicurio Registry stores the artifact references along with the artifact content. Apicurio Registry also maintains a collection of all artifact references so you can search them or list all references for a specific artifact. Supported artifact types Apicurio Registry currently supports artifact references for the following artifact types only: Avro Protobuf JSON Schema OpenAPI AsyncAPI Additional resources For details on managing artifact references, see: Chapter 4, Managing Apicurio Registry content using the REST API . Chapter 5, Managing Apicurio Registry content using the Maven plug-in . For a Java example, see the Apicurio Registry SerDes with references demonstration . 1.3. Manage content using the Apicurio Registry web console You can use the Apicurio Registry web console to browse and search the schema and API artifacts and optional groups stored in the registry, and to add new schema and API artifacts, groups, and versions. You can search for artifacts by label, name, group, and description. You can view an artifact's content or its available versions, or download an artifact file locally. You can also configure optional rules for registry content, both globally and for each schema and API artifact. These optional rules for content validation and compatibility are applied when new schema and API artifacts or versions are uploaded to the registry. For more details, see Chapter 10, Apicurio Registry content rule reference . Figure 1.1. Apicurio Registry web console The Apicurio Registry web console is available from http://MY_REGISTRY_URL/ui . Additional resources Chapter 3, Managing Apicurio Registry content using the web console 1.4. Apicurio Registry REST API for clients Client applications can use the Core Registry API v2 to manage the schema and API artifacts in Apicurio Registry. This API provides operations for the following features: Admin Export or import Apicurio Registry data in a .zip file, and manage logging levels for the Apicurio Registry instance at runtime. Artifacts Manage schema and API artifacts stored in Apicurio Registry. You can also manage the lifecycle state of an artifact: enabled, disabled, or deprecated. Artifact metadata Manage details about a schema or API artifact. You can edit details such as artifact name, description, or labels. Details such as artifact group, and when the artifact was created or modified are read-only. 
Artifact rules Configure rules to govern the content evolution of a specific schema or API artifact to prevent invalid or incompatible content from being added to Apicurio Registry. Artifact rules override any global rules configured. Artifact versions Manage versions that are created when a schema or API artifact is updated. You can also manage the lifecycle state of an artifact version: enabled, disabled, or deprecated. Global rules Configure rules to govern the content evolution of all schema and API artifacts to prevent invalid or incompatible content from being added to Apicurio Registry. Global rules are applied only if an artifact does not have its own specific artifact rules configured. Search Browse or search for schema and API artifacts and versions, for example, by name, group, description, or label. System Get the Apicurio Registry version and the limits on resources for the Apicurio Registry instance. Users Get the current Apicurio Registry user. Compatibility with other schema registry REST APIs Apicurio Registry also provides compatibility with the following schema registries by including implementations of their respective REST APIs: Apicurio Registry Core Registry API v1 Confluent Schema Registry API v6 Confluent Schema Registry API v7 CNCF CloudEvents Schema Registry API v0 Applications using Confluent client libraries can use Apicurio Registry as a drop-in replacement. For more details, see Replacing Confluent Schema Registry . Additional resources For more information on the Core Registry API v2, see the Apicurio Registry REST API documentation . For API documentation on the Core Registry API v2 and all compatible APIs, browse to the /apis endpoint of your Apicurio Registry instance, for example, http://MY-REGISTRY-URL/apis . 1.5. Apicurio Registry storage options Apicurio Registry provides the following options for the underlying storage of registry data: Table 1.1. Apicurio Registry data storage options Storage option Description PostgreSQL database PostgreSQL is the recommended data storage option for performance, stability, and data management (backup/restore, and so on) in a production environment. AMQ Streams Kafka storage is provided for production environments where database management expertise is not available, or where storage in Kafka is a specific requirement. Additional resources For more details on storage options, see Installing and deploying Red Hat build of Apicurio Registry on OpenShift . 1.6. Validate Kafka messages using schemas and Java client serializers/deserializers Kafka producer applications can use serializers to encode messages that conform to a specific event schema. Kafka consumer applications can then use deserializers to validate that messages have been serialized using the correct schema, based on a specific schema ID. Figure 1.2. Apicurio Registry and Kafka client SerDes architecture Apicurio Registry provides Kafka client serializers/deserializers (SerDes) to validate the following message types at runtime: Apache Avro Google Protobuf JSON Schema The Apicurio Registry Maven repository and source code distributions include the Kafka SerDes implementations for these message types, which Kafka client application developers can use to integrate with Apicurio Registry. These implementations include custom Java classes for each supported message type, for example, io.apicurio.registry.serde.avro , which client applications can use to pull schemas from Apicurio Registry at runtime for validation. 
Additional resources Chapter 7, Validating Kafka messages using serializers/deserializers in Java clients 1.7. Stream data to external systems with Kafka Connect converters You can use Apicurio Registry with Apache Kafka Connect to stream data between Kafka and external systems. Using Kafka Connect, you can define connectors for different systems to move large volumes of data into and out of Kafka-based systems. Figure 1.3. Apicurio Registry and Kafka Connect architecture Apicurio Registry provides the following features for Kafka Connect: Storage for Kafka Connect schemas Kafka Connect converters for Apache Avro and JSON Schema Core Registry API to manage schemas You can use the Avro and JSON Schema converters to map Kafka Connect schemas into Avro or JSON schemas. These schemas can then serialize message keys and values into the compact Avro binary format or human-readable JSON format. The converted JSON is less verbose because the messages do not contain the schema information, only the schema ID. Apicurio Registry can manage and track the Avro and JSON schemas used in the Kafka topics. Because the schemas are stored in Apicurio Registry and decoupled from the message content, each message must only include a tiny schema identifier. For an I/O bound system like Kafka, this means more total throughput for producers and consumers. The Avro and JSON Schema serializers and deserializers (SerDes) provided by Apicurio Registry are used by Kafka producers and consumers in this use case. Kafka consumer applications that you write to consume change events can use the Avro or JSON SerDes to deserialize these events. You can install the Apicurio Registry SerDes in any Kafka-based system and use them along with Kafka Connect, or with a Kafka Connect-based system such as Debezium. Additional resources Configuring Debezium to use Avro serialization and Apicurio Registry Example of using Debezium to monitor the PostgreSQL database used by Apicurio Registry Apache Kafka Connect documentation 1.8. Apicurio Registry demonstration examples Apicurio Registry provides open source example applications that demonstrate how to use Apicurio Registry in different use case scenarios. For example, these include storing schemas used by Kafka serializer and deserializer (SerDes) Java classes. These classes fetch the schema from Apicurio Registry for use when producing or consuming operations to serialize, deserialize, or validate the Kafka message payload. These applications demonstrate use cases such as the following examples: Apache Avro Kafka SerDes Apache Avro Maven plug-in Apache Camel Quarkus and Kafka CloudEvents Confluent Kafka SerDes Custom ID strategy Event-driven architecture with Debezium Google Protobuf Kafka SerDes JSON Schema Kafka SerDes REST clients Additional resources For more details, see https://github.com/Apicurio/apicurio-registry/tree/2.6.x/examples/ 1.9. Apicurio Registry available distributions Apicurio Registry provides the following distribution options. Table 1.2. Apicurio Registry Operator and images Distribution Location Release category Apicurio Registry Operator OpenShift web console under Operators OperatorHub General Availability Container image for Apicurio Registry Operator Red Hat Ecosystem Catalog General Availability Container image for Kafka storage in AMQ Streams Red Hat Ecosystem Catalog General Availability Container image for database storage in PostgreSQL Red Hat Ecosystem Catalog General Availability Table 1.3. 
Apicurio Registry zip downloads Distribution Location Release category Example custom resource definitions for installation Red Hat Software Downloads General Availability Apicurio Registry v1 to v2 migration tool Red Hat Software Downloads General Availability Maven repository Red Hat Software Downloads General Availability Source code Red Hat Software Downloads General Availability Kafka Connect converters Red Hat Software Downloads General Availability Note You must have a subscription for Red Hat Application Foundations and be logged into the Red Hat Customer Portal to access the available Apicurio Registry distributions. | [
"{ \"type\": \"record\", \"name\": \"price\", \"namespace\": \"com.example\", \"fields\": [ { \"name\": \"symbol\", \"type\": \"string\" }, { \"name\": \"price\", \"type\": \"string\" } ] }",
"curl -X POST -H \"Content-type: application/json; artifactType=AVRO\" -H \"X-Registry-ArtifactId: share-price\" --data '{\"type\":\"record\",\"name\":\"price\",\"namespace\":\"com.example\", \"fields\":[{\"name\":\"symbol\",\"type\":\"string\"},{\"name\":\"price\",\"type\":\"string\"}]}' https://my-registry.example.com/apis/registry/v2/groups/my-group/artifacts",
"{ \"namespace\": \"com.kubetrade.schema.trade\", \"type\": \"record\", \"name\": \"TradeKey\", \"fields\": [ { \"name\": \"exchange\", \"type\": \"com.kubetrade.schema.common.Exchange\" }, { \"name\": \"key\", \"type\": \"string\" } ] }",
"{ \"namespace\": \"com.kubetrade.schema.common\", \"type\": \"enum\", \"name\": \"Exchange\", \"symbols\" : [\"GEMINI\"] }"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apicurio_registry/2.6/html/apicurio_registry_user_guide/intro-to-the-registry_registry |
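A quick way to see the decoupling described in the Kafka Connect section above is to query the registry for a schema that a converter or SerDes would otherwise resolve by ID. The following sketch reuses the my-registry.example.com host, my-group group, and share-price artifact ID from the earlier curl example; these are assumptions, so substitute your own registry URL and identifiers, and note that the response layout can vary between Apicurio Registry versions. The calls only read data; nothing is modified.

# Retrieve the latest content of the registered Avro schema (assumed host, group, and artifact ID)
USD curl https://my-registry.example.com/apis/registry/v2/groups/my-group/artifacts/share-price

# List the artifact metadata, including the global ID that serialized Kafka messages reference
USD curl https://my-registry.example.com/apis/registry/v2/groups/my-group/artifacts/share-price/meta

Because producers and consumers resolve schemas through lookups like these at runtime, only the small schema identifier travels inside each Kafka message.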
Chapter 13. Red Hat Quay build enhancements | Chapter 13. Red Hat Quay build enhancements Red Hat Quay builds can be run on virtualized platforms. Backwards compatibility for running build configurations is also available. 13.1. Red Hat Quay enhanced build architecture The following image shows the expected design flow and architecture of the enhanced build features: With this enhancement, the build manager first creates the Job Object . The Job Object then creates a pod using the quay-builder-image . The quay-builder-image contains the quay-builder binary and the Podman service. The created pod runs as unprivileged . The quay-builder binary then builds the image while communicating status and retrieving build information from the Build Manager. 13.2. Red Hat Quay build limitations Running builds in Red Hat Quay in an unprivileged context might cause some commands that worked under the previous build strategy to fail. Attempts to change the build strategy could potentially cause performance and reliability issues with builds. Running builds directly in a container does not have the same isolation as using virtual machines. Changing the build environment might also cause builds that previously worked to fail. 13.3. Creating a Red Hat Quay builders environment with OpenShift Container Platform The procedures in this section explain how to create a Red Hat Quay virtual builders environment with OpenShift Container Platform. 13.3.1. OpenShift Container Platform TLS component The tls component allows you to control TLS configuration. Note Red Hat Quay 3.10 does not support builders when the TLS component is managed by the Operator. If you set tls to unmanaged , you supply your own ssl.cert and ssl.key files. In this instance, if you want your cluster to support builders, you must add both the Quay route and the builder route name to the SAN list in the certificate, or use a wildcard. To add the builder route, use the following format: [quayregistry-cr-name]-quay-builder-[ocp-namespace].[ocp-domain-name]:443 13.3.2. Using OpenShift Container Platform for Red Hat Quay builders Builders require SSL/TLS certificates. For more information about SSL/TLS certificates, see Adding TLS certificates to the Red Hat Quay container . If you are using Amazon Web Services (AWS) S3 storage, you must modify your storage bucket in the AWS console prior to running builders. See "Modifying your AWS S3 storage bucket" in the following section for the required parameters. 13.3.2.1. Preparing OpenShift Container Platform for virtual builders Use the following procedure to prepare OpenShift Container Platform for Red Hat Quay virtual builders. Note This procedure assumes that you already have a cluster provisioned and a Quay Operator running. This procedure is for setting up a virtual namespace on OpenShift Container Platform. Procedure Log in to your Red Hat Quay cluster using a cluster administrator account.
Create a new project where your virtual builders will be run, for example, virtual-builders , by running the following command: USD oc new-project virtual-builders Create a ServiceAccount in the project that will be used to run builds by entering the following command: USD oc create sa -n virtual-builders quay-builder Provide the created service account with editing permissions so that it can run the build: USD oc adm policy -n virtual-builders add-role-to-user edit system:serviceaccount:virtual-builders:quay-builder Grant the Quay builder anyuid scc permissions by entering the following command: USD oc adm policy -n virtual-builders add-scc-to-user anyuid -z quay-builder Note This action requires cluster admin privileges. This is required because builders must run as the Podman user for unprivileged or rootless builds to work. Obtain the token for the Quay builder service account. If using OpenShift Container Platform 4.10 or an earlier version, enter the following command: oc sa get-token -n virtual-builders quay-builder If using OpenShift Container Platform 4.11 or later, enter the following command: USD oc create token quay-builder -n virtual-builders Note When the token expires you will need to request a new token. Optionally, you can also add a custom expiration. For example, specify --duration 20160m to retain the token for two weeks. Example output eyJhbGciOiJSUzI1NiIsImtpZCI6IldfQUJkaDVmb3ltTHZ0dGZMYjhIWnYxZTQzN2dJVEJxcDJscldSdEUtYWsifQ... Determine the builder route by entering the following command: USD oc get route -n quay-enterprise Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD ... example-registry-quay-builder example-registry-quay-builder-quay-enterprise.apps.docs.quayteam.org example-registry-quay-app grpc edge/Redirect None ... Generate a self-signed SSL/TlS certificate with the .crt extension by entering the following command: USD oc extract cm/kube-root-ca.crt -n openshift-apiserver Example output ca.crt Rename the ca.crt file to extra_ca_cert_build_cluster.crt by entering the following command: USD mv ca.crt extra_ca_cert_build_cluster.crt Locate the secret for you configuration bundle in the Console , and select Actions Edit Secret and add the appropriate builder configuration: FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - <superusername> FEATURE_USER_CREATION: false FEATURE_QUOTA_MANAGEMENT: true FEATURE_BUILD_SUPPORT: True BUILDMAN_HOSTNAME: <sample_build_route> 1 BUILD_MANAGER: - ephemeral - ALLOWED_WORKER_COUNT: 1 ORCHESTRATOR_PREFIX: buildman/production/ JOB_REGISTRATION_TIMEOUT: 3600 2 ORCHESTRATOR: REDIS_HOST: <sample_redis_hostname> 3 REDIS_PASSWORD: "" REDIS_SSL: false REDIS_SKIP_KEYSPACE_EVENT_SETUP: false EXECUTORS: - EXECUTOR: kubernetesPodman NAME: openshift BUILDER_NAMESPACE: <sample_builder_namespace> 4 SETUP_TIME: 180 MINIMUM_RETRY_THRESHOLD: 0 BUILDER_CONTAINER_IMAGE: <sample_builder_container_image> 5 # Kubernetes resource options K8S_API_SERVER: <sample_k8s_api_server> 6 K8S_API_TLS_CA: <sample_crt_file> 7 VOLUME_SIZE: 8G KUBERNETES_DISTRIBUTION: openshift CONTAINER_MEMORY_LIMITS: 300m 8 CONTAINER_CPU_LIMITS: 1G 9 CONTAINER_MEMORY_REQUEST: 300m 10 CONTAINER_CPU_REQUEST: 1G 11 NODE_SELECTOR_LABEL_KEY: "" NODE_SELECTOR_LABEL_VALUE: "" SERVICE_ACCOUNT_NAME: <sample_service_account_name> SERVICE_ACCOUNT_TOKEN: <sample_account_token> 12 1 The build route is obtained by running oc get route -n with the name of your OpenShift Operator's namespace. 
A port must be provided at the end of the route, and it should use the following format: [quayregistry-cr-name]-quay-builder-[ocp-namespace].[ocp-domain-name]:443 . 2 If the JOB_REGISTRATION_TIMEOUT parameter is set too low, you might receive the following error: failed to register job to build manager: rpc error: code = Unauthenticated desc = Invalid build token: Signature has expired . It is suggested that this parameter be set to at least 240. 3 If your Redis host has a password or SSL/TLS certificates, you must update accordingly. 4 Set to match the name of your virtual builders namespace, for example, virtual-builders . 5 For early access, the BUILDER_CONTAINER_IMAGE is currently quay.io/projectquay/quay-builder:3.7.0-rc.2 . Note that this might change during the early access window. If this happens, customers are alerted. 6 The K8S_API_SERVER is obtained by running oc cluster-info . 7 You must manually create and add your custom CA cert, for example, K8S_API_TLS_CA: /conf/stack/extra_ca_certs/build_cluster.crt . 8 Defaults to 5120Mi if left unspecified. 9 For virtual builds, you must ensure that there are enough resources in your cluster. Defaults to 1000m if left unspecified. 10 Defaults to 3968Mi if left unspecified. 11 Defaults to 500m if left unspecified. 12 Obtained when running oc create sa . Sample configuration FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - quayadmin FEATURE_USER_CREATION: false FEATURE_QUOTA_MANAGEMENT: true FEATURE_BUILD_SUPPORT: True BUILDMAN_HOSTNAME: example-registry-quay-builder-quay-enterprise.apps.docs.quayteam.org:443 BUILD_MANAGER: - ephemeral - ALLOWED_WORKER_COUNT: 1 ORCHESTRATOR_PREFIX: buildman/production/ JOB_REGISTRATION_TIMEOUT: 3600 ORCHESTRATOR: REDIS_HOST: example-registry-quay-redis REDIS_PASSWORD: "" REDIS_SSL: false REDIS_SKIP_KEYSPACE_EVENT_SETUP: false EXECUTORS: - EXECUTOR: kubernetesPodman NAME: openshift BUILDER_NAMESPACE: virtual-builders SETUP_TIME: 180 MINIMUM_RETRY_THRESHOLD: 0 BUILDER_CONTAINER_IMAGE: quay.io/projectquay/quay-builder:3.7.0-rc.2 # Kubernetes resource options K8S_API_SERVER: api.docs.quayteam.org:6443 K8S_API_TLS_CA: /conf/stack/extra_ca_certs/build_cluster.crt VOLUME_SIZE: 8G KUBERNETES_DISTRIBUTION: openshift CONTAINER_MEMORY_LIMITS: 1G CONTAINER_CPU_LIMITS: 1080m CONTAINER_MEMORY_REQUEST: 1G CONTAINER_CPU_REQUEST: 580m NODE_SELECTOR_LABEL_KEY: "" NODE_SELECTOR_LABEL_VALUE: "" SERVICE_ACCOUNT_NAME: quay-builder SERVICE_ACCOUNT_TOKEN: "eyJhbGciOiJSUzI1NiIsImtpZCI6IldfQUJkaDVmb3ltTHZ0dGZMYjhIWnYxZTQzN2dJVEJxcDJscldSdEUtYWsifQ" 13.3.2.2. Manually adding SSL/TLS certificates Due to a known issue with the configuration tool, you must manually add your custom SSL/TLS certificates to properly run builders. Use the following procedure to manually add custom SSL/TLS certificates. For more information creating SSL/TLS certificates, see Adding TLS certificates to the Red Hat Quay container . 13.3.2.2.1. Creating and signing certificates Use the following procedure to create and sign an SSL/TLS certificate. Procedure Create a certificate authority and sign a certificate. For more information, see Create a Certificate Authority and sign a certificate . 
openssl.cnf [req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = example-registry-quay-quay-enterprise.apps.docs.quayteam.org 1 DNS.2 = example-registry-quay-builder-quay-enterprise.apps.docs.quayteam.org 2 1 An alt_name for the URL of your Red Hat Quay registry must be included. 2 An alt_name for the BUILDMAN_HOSTNAME Sample commands USD openssl genrsa -out rootCA.key 2048 USD openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem USD openssl genrsa -out ssl.key 2048 USD openssl req -new -key ssl.key -out ssl.csr USD openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf 13.3.2.2.2. Setting TLS to unmanaged Use the following procedure to set king:tls to unmanaged. Procedure In your Red Hat Quay Registry YAML, set kind: tls to managed: false : - kind: tls managed: false On the Events page, the change is blocked until you set up the appropriate config.yaml file. For example: - lastTransitionTime: '2022-03-28T12:56:49Z' lastUpdateTime: '2022-03-28T12:56:49Z' message: >- required component `tls` marked as unmanaged, but `configBundleSecret` is missing necessary fields reason: ConfigInvalid status: 'True' 13.3.2.2.3. Creating temporary secrets Use the following procedure to create temporary secrets for the CA certificate. Procedure Create a secret in your default namespace for the CA certificate: Create a secret in your default namespace for the ssl.key and ssl.cert files: 13.3.2.2.4. Copying secret data to the configuration YAML Use the following procedure to copy secret data to your config.yaml file. Procedure Locate the new secrets in the console UI at Workloads Secrets . For each secret, locate the YAML view: kind: Secret apiVersion: v1 metadata: name: temp-crt namespace: quay-enterprise uid: a4818adb-8e21-443a-a8db-f334ace9f6d0 resourceVersion: '9087855' creationTimestamp: '2022-03-28T13:05:30Z' ... data: extra_ca_cert_build_cluster.crt: >- LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURNakNDQWhxZ0F3SUJBZ0l.... type: Opaque kind: Secret apiVersion: v1 metadata: name: quay-config-ssl namespace: quay-enterprise uid: 4f5ae352-17d8-4e2d-89a2-143a3280783c resourceVersion: '9090567' creationTimestamp: '2022-03-28T13:10:34Z' ... data: ssl.cert: >- LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVaakNDQTA2Z0F3SUJBZ0lVT... ssl.key: >- LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBc... type: Opaque Locate the secret for your Red Hat Quay registry configuration bundle in the UI, or through the command line by running a command like the following: USD oc get quayregistries.quay.redhat.com -o jsonpath="{.items[0].spec.configBundleSecret}{'\n'}" -n quay-enterprise In the OpenShift Container Platform console, select the YAML tab for your configuration bundle secret, and add the data from the two secrets you created: kind: Secret apiVersion: v1 metadata: name: init-config-bundle-secret namespace: quay-enterprise uid: 4724aca5-bff0-406a-9162-ccb1972a27c1 resourceVersion: '4383160' creationTimestamp: '2022-03-22T12:35:59Z' ... data: config.yaml: >- RkVBVFVSRV9VU0VSX0lOSVRJQUxJWkU6IHRydWUKQlJ... extra_ca_cert_build_cluster.crt: >- LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURNakNDQWhxZ0F3SUJBZ0ldw.... ssl.cert: >- LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVaakNDQTA2Z0F3SUJBZ0lVT... 
ssl.key: >- LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBc... type: Opaque Click Save . Enter the following command to see if your pods are restarting: USD oc get pods -n quay-enterprise Example output NAME READY STATUS RESTARTS AGE ... example-registry-quay-app-6786987b99-vgg2v 0/1 ContainerCreating 0 2s example-registry-quay-app-7975d4889f-q7tvl 1/1 Running 0 5d21h example-registry-quay-app-7975d4889f-zn8bb 1/1 Running 0 5d21h example-registry-quay-app-upgrade-lswsn 0/1 Completed 0 6d1h example-registry-quay-config-editor-77847fc4f5-nsbbv 0/1 ContainerCreating 0 2s example-registry-quay-config-editor-c6c4d9ccd-2mwg2 1/1 Running 0 5d21h example-registry-quay-database-66969cd859-n2ssm 1/1 Running 0 6d1h example-registry-quay-mirror-764d7b68d9-jmlkk 1/1 Terminating 0 5d21h example-registry-quay-mirror-764d7b68d9-jqzwg 1/1 Terminating 0 5d21h example-registry-quay-redis-7cc5f6c977-956g8 1/1 Running 0 5d21h After your Red Hat Quay registry has reconfigured, enter the following command to check if the Red Hat Quay app pods are running: USD oc get pods -n quay-enterprise Example output example-registry-quay-app-6786987b99-sz6kb 1/1 Running 0 7m45s example-registry-quay-app-6786987b99-vgg2v 1/1 Running 0 9m1s example-registry-quay-app-upgrade-lswsn 0/1 Completed 0 6d1h example-registry-quay-config-editor-77847fc4f5-nsbbv 1/1 Running 0 9m1s example-registry-quay-database-66969cd859-n2ssm 1/1 Running 0 6d1h example-registry-quay-mirror-758fc68ff7-5wxlp 1/1 Running 0 8m29s example-registry-quay-mirror-758fc68ff7-lbl82 1/1 Running 0 8m29s example-registry-quay-redis-7cc5f6c977-956g8 1/1 Running 0 5d21h In your browser, access the registry endpoint and validate that the certificate has been updated appropriately. For example: Common Name (CN) example-registry-quay-quay-enterprise.apps.docs.quayteam.org Organisation (O) DOCS Organisational Unit (OU) QUAY 13.3.2.3. Using the UI to create a build trigger Use the following procedure to use the UI to create a build trigger. Procedure Log in to your Red Hat Quay repository. Click Create New Repository and create a new registry, for example, testrepo . On the Repositories page, click the Builds tab on the navigation pane. Alternatively, use the corresponding URL directly: Important In some cases, the builder might have issues resolving hostnames. This issue might be related to the dnsPolicy being set to default on the job object. Currently, there is no workaround for this issue. It will be resolved in a future version of Red Hat Quay. Click Create Build Trigger Custom Git Repository Push . Enter the HTTPS or SSH style URL used to clone your Git repository, then click Continue . For example: Check Tag manifest with the branch or tag name and then click Continue . Enter the location of the Dockerfile to build when the trigger is invoked, for example, /Dockerfile and click Continue . Enter the location of the context for the Docker build, for example, / , and click Continue . If warranted, create a Robot Account. Otherwise, click Continue . Click Continue to verify the parameters. On the Builds page, click Options icon of your Trigger Name, and then click Run Trigger Now . Enter a commit SHA from the Git repository and click Start Build . You can check the status of your build by clicking the commit in the Build History page, or by running oc get pods -n virtual-builders . 
For example: USD oc get pods -n virtual-builders Example output When the build is finished, you can check the status of the tag under Tags on the navigation pane. Note With early access, full build logs and timestamps of builds are currently unavailable. 13.3.2.4. Modifying your AWS S3 storage bucket Note Currently, modifying your AWS S3 storage bucket is not supported on IBM Power and IBM Z. If you are using AWS S3 storage, you must change your storage bucket in the AWS console prior to running builders. Procedure Log in to your AWS console at s3.console.aws.com . In the search bar, search for S3 and then click S3 . Click the name of your bucket, for example, myawsbucket . Click the Permissions tab. Under Cross-origin resource sharing (CORS) , include the following parameters: [ { "AllowedHeaders": [ "Authorization" ], "AllowedMethods": [ "GET" ], "AllowedOrigins": [ "*" ], "ExposeHeaders": [], "MaxAgeSeconds": 3000 }, { "AllowedHeaders": [ "Content-Type", "x-amz-acl", "origin" ], "AllowedMethods": [ "PUT" ], "AllowedOrigins": [ "*" ], "ExposeHeaders": [], "MaxAgeSeconds": 3000 } ] 13.3.2.5. Modifying your Google Cloud Platform object bucket Note Currently, modifying your Google Cloud Platform object bucket is not supported on IBM Power and IBM Z. Use the following procedure to configure cross-origin resource sharing (CORS) for virtual builders. Note Without CORS configuration, uploading a build Dockerfile fails. Procedure Use the following reference to create a JSON file for your specific CORS needs. For example: USD cat gcp_cors.json Example output [ { "origin": ["*"], "method": ["GET"], "responseHeader": ["Authorization"], "maxAgeSeconds": 3600 }, { "origin": ["*"], "method": ["PUT"], "responseHeader": [ "Content-Type", "x-goog-acl", "origin"], "maxAgeSeconds": 3600 } ] Enter the following command to update your GCP storage bucket: USD gcloud storage buckets update gs://<bucket_name> --cors-file=./gcp_cors.json Example output Updating Completed 1 You can display the updated CORS configuration of your GCP bucket by running the following command: USD gcloud storage buckets describe gs://<bucket_name> --format="default(cors)" Example output cors: - maxAgeSeconds: 3600 method: - GET origin: - '*' responseHeader: - Authorization - maxAgeSeconds: 3600 method: - PUT origin: - '*' responseHeader: - Content-Type - x-goog-acl - origin | [
"[quayregistry-cr-name]-quay-builder-[ocp-namespace].[ocp-domain-name]:443",
"oc new-project virtual-builders",
"oc create sa -n virtual-builders quay-builder",
"oc adm policy -n virtual-builders add-role-to-user edit system:serviceaccount:virtual-builders:quay-builder",
"oc adm policy -n virtual-builders add-scc-to-user anyuid -z quay-builder",
"sa get-token -n virtual-builders quay-builder",
"oc create token quay-builder -n virtual-builders",
"eyJhbGciOiJSUzI1NiIsImtpZCI6IldfQUJkaDVmb3ltTHZ0dGZMYjhIWnYxZTQzN2dJVEJxcDJscldSdEUtYWsifQ",
"oc get route -n quay-enterprise",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD example-registry-quay-builder example-registry-quay-builder-quay-enterprise.apps.docs.quayteam.org example-registry-quay-app grpc edge/Redirect None",
"oc extract cm/kube-root-ca.crt -n openshift-apiserver",
"ca.crt",
"mv ca.crt extra_ca_cert_build_cluster.crt",
"FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - <superusername> FEATURE_USER_CREATION: false FEATURE_QUOTA_MANAGEMENT: true FEATURE_BUILD_SUPPORT: True BUILDMAN_HOSTNAME: <sample_build_route> 1 BUILD_MANAGER: - ephemeral - ALLOWED_WORKER_COUNT: 1 ORCHESTRATOR_PREFIX: buildman/production/ JOB_REGISTRATION_TIMEOUT: 3600 2 ORCHESTRATOR: REDIS_HOST: <sample_redis_hostname> 3 REDIS_PASSWORD: \"\" REDIS_SSL: false REDIS_SKIP_KEYSPACE_EVENT_SETUP: false EXECUTORS: - EXECUTOR: kubernetesPodman NAME: openshift BUILDER_NAMESPACE: <sample_builder_namespace> 4 SETUP_TIME: 180 MINIMUM_RETRY_THRESHOLD: 0 BUILDER_CONTAINER_IMAGE: <sample_builder_container_image> 5 # Kubernetes resource options K8S_API_SERVER: <sample_k8s_api_server> 6 K8S_API_TLS_CA: <sample_crt_file> 7 VOLUME_SIZE: 8G KUBERNETES_DISTRIBUTION: openshift CONTAINER_MEMORY_LIMITS: 300m 8 CONTAINER_CPU_LIMITS: 1G 9 CONTAINER_MEMORY_REQUEST: 300m 10 CONTAINER_CPU_REQUEST: 1G 11 NODE_SELECTOR_LABEL_KEY: \"\" NODE_SELECTOR_LABEL_VALUE: \"\" SERVICE_ACCOUNT_NAME: <sample_service_account_name> SERVICE_ACCOUNT_TOKEN: <sample_account_token> 12",
"FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - quayadmin FEATURE_USER_CREATION: false FEATURE_QUOTA_MANAGEMENT: true FEATURE_BUILD_SUPPORT: True BUILDMAN_HOSTNAME: example-registry-quay-builder-quay-enterprise.apps.docs.quayteam.org:443 BUILD_MANAGER: - ephemeral - ALLOWED_WORKER_COUNT: 1 ORCHESTRATOR_PREFIX: buildman/production/ JOB_REGISTRATION_TIMEOUT: 3600 ORCHESTRATOR: REDIS_HOST: example-registry-quay-redis REDIS_PASSWORD: \"\" REDIS_SSL: false REDIS_SKIP_KEYSPACE_EVENT_SETUP: false EXECUTORS: - EXECUTOR: kubernetesPodman NAME: openshift BUILDER_NAMESPACE: virtual-builders SETUP_TIME: 180 MINIMUM_RETRY_THRESHOLD: 0 BUILDER_CONTAINER_IMAGE: quay.io/projectquay/quay-builder:3.7.0-rc.2 # Kubernetes resource options K8S_API_SERVER: api.docs.quayteam.org:6443 K8S_API_TLS_CA: /conf/stack/extra_ca_certs/build_cluster.crt VOLUME_SIZE: 8G KUBERNETES_DISTRIBUTION: openshift CONTAINER_MEMORY_LIMITS: 1G CONTAINER_CPU_LIMITS: 1080m CONTAINER_MEMORY_REQUEST: 1G CONTAINER_CPU_REQUEST: 580m NODE_SELECTOR_LABEL_KEY: \"\" NODE_SELECTOR_LABEL_VALUE: \"\" SERVICE_ACCOUNT_NAME: quay-builder SERVICE_ACCOUNT_TOKEN: \"eyJhbGciOiJSUzI1NiIsImtpZCI6IldfQUJkaDVmb3ltTHZ0dGZMYjhIWnYxZTQzN2dJVEJxcDJscldSdEUtYWsifQ\"",
"[req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = example-registry-quay-quay-enterprise.apps.docs.quayteam.org 1 DNS.2 = example-registry-quay-builder-quay-enterprise.apps.docs.quayteam.org 2",
"openssl genrsa -out rootCA.key 2048 openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem openssl genrsa -out ssl.key 2048 openssl req -new -key ssl.key -out ssl.csr openssl x509 -req -in ssl.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out ssl.cert -days 356 -extensions v3_req -extfile openssl.cnf",
"- kind: tls managed: false",
"- lastTransitionTime: '2022-03-28T12:56:49Z' lastUpdateTime: '2022-03-28T12:56:49Z' message: >- required component `tls` marked as unmanaged, but `configBundleSecret` is missing necessary fields reason: ConfigInvalid status: 'True'",
"oc create secret generic -n quay-enterprise temp-crt --from-file extra_ca_cert_build_cluster.crt",
"oc create secret generic -n quay-enterprise quay-config-ssl --from-file ssl.cert --from-file ssl.key",
"kind: Secret apiVersion: v1 metadata: name: temp-crt namespace: quay-enterprise uid: a4818adb-8e21-443a-a8db-f334ace9f6d0 resourceVersion: '9087855' creationTimestamp: '2022-03-28T13:05:30Z' data: extra_ca_cert_build_cluster.crt: >- LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURNakNDQWhxZ0F3SUJBZ0l. type: Opaque",
"kind: Secret apiVersion: v1 metadata: name: quay-config-ssl namespace: quay-enterprise uid: 4f5ae352-17d8-4e2d-89a2-143a3280783c resourceVersion: '9090567' creationTimestamp: '2022-03-28T13:10:34Z' data: ssl.cert: >- LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVaakNDQTA2Z0F3SUJBZ0lVT ssl.key: >- LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBc type: Opaque",
"oc get quayregistries.quay.redhat.com -o jsonpath=\"{.items[0].spec.configBundleSecret}{'\\n'}\" -n quay-enterprise",
"kind: Secret apiVersion: v1 metadata: name: init-config-bundle-secret namespace: quay-enterprise uid: 4724aca5-bff0-406a-9162-ccb1972a27c1 resourceVersion: '4383160' creationTimestamp: '2022-03-22T12:35:59Z' data: config.yaml: >- RkVBVFVSRV9VU0VSX0lOSVRJQUxJWkU6IHRydWUKQlJ extra_ca_cert_build_cluster.crt: >- LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURNakNDQWhxZ0F3SUJBZ0ldw. ssl.cert: >- LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVaakNDQTA2Z0F3SUJBZ0lVT ssl.key: >- LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBc type: Opaque",
"oc get pods -n quay-enterprise",
"NAME READY STATUS RESTARTS AGE example-registry-quay-app-6786987b99-vgg2v 0/1 ContainerCreating 0 2s example-registry-quay-app-7975d4889f-q7tvl 1/1 Running 0 5d21h example-registry-quay-app-7975d4889f-zn8bb 1/1 Running 0 5d21h example-registry-quay-app-upgrade-lswsn 0/1 Completed 0 6d1h example-registry-quay-config-editor-77847fc4f5-nsbbv 0/1 ContainerCreating 0 2s example-registry-quay-config-editor-c6c4d9ccd-2mwg2 1/1 Running 0 5d21h example-registry-quay-database-66969cd859-n2ssm 1/1 Running 0 6d1h example-registry-quay-mirror-764d7b68d9-jmlkk 1/1 Terminating 0 5d21h example-registry-quay-mirror-764d7b68d9-jqzwg 1/1 Terminating 0 5d21h example-registry-quay-redis-7cc5f6c977-956g8 1/1 Running 0 5d21h",
"oc get pods -n quay-enterprise",
"example-registry-quay-app-6786987b99-sz6kb 1/1 Running 0 7m45s example-registry-quay-app-6786987b99-vgg2v 1/1 Running 0 9m1s example-registry-quay-app-upgrade-lswsn 0/1 Completed 0 6d1h example-registry-quay-config-editor-77847fc4f5-nsbbv 1/1 Running 0 9m1s example-registry-quay-database-66969cd859-n2ssm 1/1 Running 0 6d1h example-registry-quay-mirror-758fc68ff7-5wxlp 1/1 Running 0 8m29s example-registry-quay-mirror-758fc68ff7-lbl82 1/1 Running 0 8m29s example-registry-quay-redis-7cc5f6c977-956g8 1/1 Running 0 5d21h",
"Common Name (CN) example-registry-quay-quay-enterprise.apps.docs.quayteam.org Organisation (O) DOCS Organisational Unit (OU) QUAY",
"https://example-registry-quay-quay-enterprise.apps.docs.quayteam.org/repository/quayadmin/testrepo?tab=builds",
"https://github.com/gabriel-rh/actions_test.git",
"oc get pods -n virtual-builders",
"NAME READY STATUS RESTARTS AGE f192fe4a-c802-4275-bcce-d2031e635126-9l2b5-25lg2 1/1 Running 0 7s",
"oc get pods -n virtual-builders",
"NAME READY STATUS RESTARTS AGE f192fe4a-c802-4275-bcce-d2031e635126-9l2b5-25lg2 1/1 Terminating 0 9s",
"oc get pods -n virtual-builders",
"No resources found in virtual-builders namespace.",
"[ { \"AllowedHeaders\": [ \"Authorization\" ], \"AllowedMethods\": [ \"GET\" ], \"AllowedOrigins\": [ \"*\" ], \"ExposeHeaders\": [], \"MaxAgeSeconds\": 3000 }, { \"AllowedHeaders\": [ \"Content-Type\", \"x-amz-acl\", \"origin\" ], \"AllowedMethods\": [ \"PUT\" ], \"AllowedOrigins\": [ \"*\" ], \"ExposeHeaders\": [], \"MaxAgeSeconds\": 3000 } ]",
"cat gcp_cors.json",
"[ { \"origin\": [\"*\"], \"method\": [\"GET\"], \"responseHeader\": [\"Authorization\"], \"maxAgeSeconds\": 3600 }, { \"origin\": [\"*\"], \"method\": [\"PUT\"], \"responseHeader\": [ \"Content-Type\", \"x-goog-acl\", \"origin\"], \"maxAgeSeconds\": 3600 } ]",
"gcloud storage buckets update gs://<bucket_name> --cors-file=./gcp_cors.json",
"Updating Completed 1",
"gcloud storage buckets describe gs://<bucket_name> --format=\"default(cors)\"",
"cors: - maxAgeSeconds: 3600 method: - GET origin: - '*' responseHeader: - Authorization - maxAgeSeconds: 3600 method: - PUT origin: - '*' responseHeader: - Content-Type - x-goog-acl - origin"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/use_red_hat_quay/red-hat-quay-builders-enhancement |
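The TLS requirement described in this chapter, that both the Quay route and the builder route appear in the certificate SAN list, can be checked from a workstation once the routes are exposed. The following sketch assumes the example hostnames used throughout the chapter; replace them with your own route names. It only inspects the served certificate and does not change any configuration.

# Print the Subject Alternative Name entries presented on the builder route (assumed example hostname)
USD openssl s_client -connect example-registry-quay-builder-quay-enterprise.apps.docs.quayteam.org:443 </dev/null 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'

# Repeat for the main Quay route to confirm both names are covered by the same certificate or a wildcard
USD openssl s_client -connect example-registry-quay-quay-enterprise.apps.docs.quayteam.org:443 </dev/null 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'

If the builder hostname is missing from the output, builds are likely to fail TLS verification when the quay-builder pod connects back to BUILDMAN_HOSTNAME.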
Chapter 5. The ext4 File System | Chapter 5. The ext4 File System The ext4 file system is a scalable extension of the ext3 file system. With Red Hat Enterprise Linux 7, it can support a maximum individual file size of 16 terabytes, and file systems to a maximum of 50 terabytes, unlike Red Hat Enterprise Linux 6 which only supported file systems up to 16 terabytes. It also supports an unlimited number of sub-directories (the ext3 file system only supports up to 32,000), though once the link count exceeds 65,000 it resets to 1 and is no longer increased. The bigalloc feature is not currently supported. Note As with ext3, an ext4 volume must be umounted in order to perform an fsck . For more information, see Chapter 4, The ext3 File System . Main Features The ext4 file system uses extents (as opposed to the traditional block mapping scheme used by ext2 and ext3), which improves performance when using large files and reduces metadata overhead for large files. In addition, ext4 also labels unallocated block groups and inode table sections accordingly, which allows them to be skipped during a file system check. This makes for quicker file system checks, which becomes more beneficial as the file system grows in size. Allocation Features The ext4 file system features the following allocation schemes: Persistent pre-allocation Delayed allocation Multi-block allocation Stripe-aware allocation Because of delayed allocation and other performance optimizations, ext4's behavior of writing files to disk is different from ext3. In ext4, when a program writes to the file system, it is not guaranteed to be on-disk unless the program issues an fsync() call afterwards. By default, ext3 automatically forces newly created files to disk almost immediately even without fsync() . This behavior hid bugs in programs that did not use fsync() to ensure that written data was on-disk. The ext4 file system, on the other hand, often waits several seconds to write out changes to disk, allowing it to combine and reorder writes for better disk performance than ext3. Warning Unlike ext3, the ext4 file system does not force data to disk on transaction commit. As such, it takes longer for buffered writes to be flushed to disk. As with any file system, use data integrity calls such as fsync() to ensure that data is written to permanent storage. Other ext4 Features The ext4 file system also supports the following: Extended attributes ( xattr ) - This allows the system to associate several additional name and value pairs per file. Quota journaling - This avoids the need for lengthy quota consistency checks after a crash. Note The only supported journaling mode in ext4 is data=ordered (default). Subsecond timestamps - This gives timestamps to the subsecond. 5.1. Creating an ext4 File System To create an ext4 file system, use the following command: Replace block_device with the path to a block device. For example, /dev/sdb1 , /dev/disk/by-uuid/05e99ec8-def1-4a5e-8a9d-5945339ceb2a , or /dev/my-volgroup/my-lv . In general, the default options are optimal for most usage scenarios. Example 5.1. mkfs.ext4 Command Output Below is a sample output of this command, which displays the resulting file system geometry and features: Important It is possible to use tune2fs to enable certain ext4 features on ext3 file systems. However, using tune2fs in this way has not been fully tested and is therefore not supported in Red Hat Enterprise Linux 7. 
As a result, Red Hat cannot guarantee consistent performance and predictable behavior for ext3 file systems converted or mounted by using tune2fs . Striped Block Devices For striped block devices (for example, RAID5 arrays), the stripe geometry can be specified at the time of file system creation. Using proper stripe geometry greatly enhances the performance of an ext4 file system. When creating file systems on LVM or MD volumes, mkfs.ext4 chooses an optimal geometry. This may also be true on some hardware RAIDs which export geometry information to the operating system. To specify stripe geometry, use the -E option of mkfs.ext4 (that is, extended file system options) with the following sub-options: stride= value Specifies the RAID chunk size. stripe-width= value Specifies the number of data disks in a RAID device, or the number of stripe units in the stripe. For both sub-options, value must be specified in file system block units. For example, to create a file system with a 64k stride (that is, 16 x 4096) on a 4k-block file system, use the following command: Configuring UUID It is also possible to set a specific UUID for a file system. To specify a UUID when creating a file system, use the -U option: Replace UUID with the UUID you want to set: for example, 7cd65de3-e0be-41d9-b66d-96d749c02da7 . Replace device with the path to an ext4 file system to have the UUID added to it: for example, /dev/sda8 . To change the UUID of an existing file system, see Section 25.8.3.2, "Modifying Persistent Naming Attributes" Additional Resources For more information about creating ext4 file systems, see: The mkfs.ext4 (8) man page | [
"mkfs.ext4 block_device",
"~]# mkfs.ext4 /dev/sdb1 mke2fs 1.41.12 (17-May-2010) Filesystem label= OS type: Linux Block size=4096 (log=2) Fragment size=4096 (log=2) Stride=0 blocks, Stripe width=0 blocks 245280 inodes, 979456 blocks 48972 blocks (5.00%) reserved for the super user First data block=0 Maximum filesystem blocks=1006632960 30 block groups 32768 blocks per group, 32768 fragments per group 8176 inodes per group Superblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736 Writing inode tables: done Creating journal (16384 blocks): done Writing superblocks and filesystem accounting information: done",
"mkfs.ext4 -E stride=16,stripe-width=64 /dev/ block_device",
"mkfs.ext4 -U UUID device"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/ch-ext4 |
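As a worked example of the stride and stripe-width calculation described above, assume a RAID5 array of four disks (three data disks plus parity) with a 64 KiB chunk size and the default 4 KiB file system block size. These numbers and the /dev/md0 device are illustrative assumptions, not recommendations for any particular system.

# stride = RAID chunk size / file system block size = 65536 / 4096 = 16 blocks
# stripe-width = stride x number of data disks = 16 x 3 = 48 blocks
USD mkfs.ext4 -E stride=16,stripe-width=48 /dev/md0

# Setting a specific UUID at creation time, using the example values shown earlier in this chapter
USD mkfs.ext4 -U 7cd65de3-e0be-41d9-b66d-96d749c02da7 /dev/sda8

If the array geometry changes later, see the tune2fs(8) man page for the equivalent extended options that can be applied to an existing file system.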
User and group APIs | User and group APIs OpenShift Container Platform 4.15 Reference guide for user and group APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/user_and_group_apis/index |
4.11. Fence Virt (Serial/VMChannel Mode) | 4.11. Fence Virt (Serial/VMChannel Mode) Table 4.12, "Fence virt (Serial/VMChannel Mode)" lists the fence device parameters used by fence_virt , the fence agent for virtual machines using VM channel or serial mode . Table 4.12. Fence virt (Serial/VMChannel Mode) luci Field cluster.conf Attribute Description Name name A name for the Fence virt fence device. Serial Device serial_device On the host, the serial device must be mapped in each domain's configuration file. For more information, see the fence_virt man page. If this field is specified, it causes the fence_virt fencing agent to operate in serial mode. Not specifying a value causes the fence_virt fencing agent to operate in VM channel mode. Serial Parameters serial_params The serial parameters. The default is 115200, 8N1. VM Channel IP Address channel_address The channel IP. The default value is 10.0.2.179. Timeout (optional) timeout Fencing timeout, in seconds. The default value is 30. Domain port (formerly domain ) Virtual machine (domain UUID or name) to fence. ipport The channel port. The default value is 1229, which is the value used when configuring this fence device with luci . Delay (optional) delay Fencing delay, in seconds. The fence agent will wait the specified number of seconds before attempting a fencing operation. The default value is 0. The following command creates a fence device instance for virtual machines using serial mode. The following is the cluster.conf entry for the fence_virt device: | [
"ccs -f cluster.conf --addfencedev fencevirt1 agent=fence_virt serial_device=/dev/ttyS1 serial_params=19200, 8N1",
"<fencedevices> <fencedevice agent=\"fence_virt\" name=\"fencevirt1\" serial_device=\"/dev/ttyS1\" serial_params=\"19200, 8N1\"/> </fencedevices>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/s1-software-fence-virt-ca |
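The ccs example above only defines the fence device itself; a node still needs a fence method and a fence instance that points at the device before fencing can be used. The following sketch shows that association with a hypothetical node name (node1.example.com), method name (virtfence), and guest domain (guest1); substitute the names from your own cluster.

# Add a fence method to the node, then attach an instance of the fencevirt1 device to it
USD ccs -f cluster.conf --addmethod virtfence node1.example.com
USD ccs -f cluster.conf --addfenceinst fencevirt1 node1.example.com virtfence port=guest1

The port option carries the domain UUID or name of the virtual machine to fence, matching the Domain parameter in Table 4.12.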
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Provide as much detail as possible so that your request can be addressed. Prerequisites You have a Red Hat account. If you do not have a Red Hat account, you can create one by clicking Register on the Red Hat Customer Portal home page. You are logged in to your Red Hat account. Procedure To provide your feedback, click the following link: Create Issue Describe the issue or enhancement in the Summary text box. Provide more details about the issue or enhancement in the Description text box. If your Red Hat user name does not automatically appear in the Reporter text box, enter it. Scroll to the bottom of the page and then click the Create button. A documentation issue is created and routed to the appropriate documentation team. Thank you for taking the time to provide feedback. | null | https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/configuring_notifications_on_the_red_hat_hybrid_cloud_console_with_fedramp/proc-providing-feedback-on-redhat-documentation |
Chapter 76. ListenerAddress schema reference | Chapter 76. ListenerAddress schema reference Used in: ListenerStatus Property Property type Description host string The DNS name or IP address of the Kafka bootstrap service. port integer The port of the Kafka bootstrap service. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-ListenerAddress-reference |
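Because ListenerAddress only surfaces through the status of a Kafka resource, a quick way to see the host and port values is a jsonpath query against the status.listeners entries. The resource name my-cluster and namespace kafka below are assumptions, and the surrounding ListenerStatus field layout may differ slightly between versions, so verify the path against your own resource.

# Print the bootstrap host and port advertised for each listener of an assumed Kafka resource
USD oc get kafka my-cluster -n kafka -o jsonpath='{range .status.listeners[*]}{.addresses[0].host}{":"}{.addresses[0].port}{"\n"}{end}'

Each printed pair corresponds to one ListenerAddress object, combining the host and port properties described above.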
Chapter 10. VolumeSnapshot [snapshot.storage.k8s.io/v1] | Chapter 10. VolumeSnapshot [snapshot.storage.k8s.io/v1] Description VolumeSnapshot is a user's request for either creating a point-in-time snapshot of a persistent volume, or binding to a pre-existing snapshot. Type object Required spec 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec defines the desired characteristics of a snapshot requested by a user. More info: https://kubernetes.io/docs/concepts/storage/volume-snapshots#volumesnapshots Required. status object status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object. 10.1.1. .spec Description spec defines the desired characteristics of a snapshot requested by a user. More info: https://kubernetes.io/docs/concepts/storage/volume-snapshots#volumesnapshots Required. Type object Required source Property Type Description source object source specifies where a snapshot will be created from. This field is immutable after creation. Required. volumeSnapshotClassName string VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default Volume SnapshotClasses: one default per CSI Driver. If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exist for a given CSI Driver and more than one have been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field. 10.1.2. .spec.source Description source specifies where a snapshot will be created from. This field is immutable after creation. Required. Type object Property Type Description persistentVolumeClaimName string persistentVolumeClaimName specifies the name of the PersistentVolumeClaim object representing the volume from which a snapshot should be created. This PVC is assumed to be in the same namespace as the VolumeSnapshot object. This field should be set if the snapshot does not exists, and needs to be created. This field is immutable. volumeSnapshotContentName string volumeSnapshotContentName specifies the name of a pre-existing VolumeSnapshotContent object representing an existing volume snapshot. This field should be set if the snapshot already exists and only needs a representation in Kubernetes. This field is immutable. 10.1.3. 
.status Description status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object. Type object Property Type Description boundVolumeSnapshotContentName string boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object. creationTime string creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown. error object error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers(i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurs during the snapshot creation. Upon success, this error field will be cleared. readyToUse boolean readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. restoreSize integer-or-string restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. 10.1.4. .status.error Description error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers(i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurs during the snapshot creation. Upon success, this error field will be cleared. 
Type object Property Type Description message string message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information. time string time is the timestamp when the error was encountered. 10.2. API endpoints The following API endpoints are available: /apis/snapshot.storage.k8s.io/v1/volumesnapshots GET : list objects of kind VolumeSnapshot /apis/snapshot.storage.k8s.io/v1/namespaces/{namespace}/volumesnapshots DELETE : delete collection of VolumeSnapshot GET : list objects of kind VolumeSnapshot POST : create a VolumeSnapshot /apis/snapshot.storage.k8s.io/v1/namespaces/{namespace}/volumesnapshots/{name} DELETE : delete a VolumeSnapshot GET : read the specified VolumeSnapshot PATCH : partially update the specified VolumeSnapshot PUT : replace the specified VolumeSnapshot /apis/snapshot.storage.k8s.io/v1/namespaces/{namespace}/volumesnapshots/{name}/status GET : read status of the specified VolumeSnapshot PATCH : partially update status of the specified VolumeSnapshot PUT : replace status of the specified VolumeSnapshot 10.2.1. /apis/snapshot.storage.k8s.io/v1/volumesnapshots Table 10.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind VolumeSnapshot Table 10.2. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotList schema 401 - Unauthorized Empty 10.2.2. /apis/snapshot.storage.k8s.io/v1/namespaces/{namespace}/volumesnapshots Table 10.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 10.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of VolumeSnapshot Table 10.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 10.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind VolumeSnapshot Table 10.7. 
Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 10.8. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotList schema 401 - Unauthorized Empty HTTP method POST Description create a VolumeSnapshot Table 10.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.10. Body parameters Parameter Type Description body VolumeSnapshot schema Table 10.11. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshot schema 201 - Created VolumeSnapshot schema 202 - Accepted VolumeSnapshot schema 401 - Unauthorized Empty 10.2.3. /apis/snapshot.storage.k8s.io/v1/namespaces/{namespace}/volumesnapshots/{name} Table 10.12. Global path parameters Parameter Type Description name string name of the VolumeSnapshot namespace string object name and auth scope, such as for teams and projects Table 10.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a VolumeSnapshot Table 10.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 10.15. Body parameters Parameter Type Description body DeleteOptions schema Table 10.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified VolumeSnapshot Table 10.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 10.18. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshot schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified VolumeSnapshot Table 10.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.20. Body parameters Parameter Type Description body Patch schema Table 10.21. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshot schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified VolumeSnapshot Table 10.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.23. Body parameters Parameter Type Description body VolumeSnapshot schema Table 10.24. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshot schema 201 - Created VolumeSnapshot schema 401 - Unauthorized Empty 10.2.4. /apis/snapshot.storage.k8s.io/v1/namespaces/{namespace}/volumesnapshots/{name}/status Table 10.25. Global path parameters Parameter Type Description name string name of the VolumeSnapshot namespace string object name and auth scope, such as for teams and projects Table 10.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified VolumeSnapshot Table 10.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 10.28. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshot schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified VolumeSnapshot Table 10.29. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.30. Body parameters Parameter Type Description body Patch schema Table 10.31. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshot schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified VolumeSnapshot Table 10.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.33. Body parameters Parameter Type Description body VolumeSnapshot schema Table 10.34. HTTP responses HTTP code Response body 200 - OK VolumeSnapshot schema 201 - Created VolumeSnapshot schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/storage_apis/volumesnapshot-snapshot-storage-k8s-io-v1 |
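The limit and continue parameters described in this reference can be exercised directly against the VolumeSnapshot list endpoint. The following two commands are an illustrative sketch rather than part of the original API reference: the namespace name, the page size of 2, and the use of the jq utility are assumptions, and oc get --raw can be replaced with an equivalent kubectl or curl call.
USD oc get --raw "/apis/snapshot.storage.k8s.io/v1/namespaces/<namespace>/volumesnapshots?limit=2" | jq -r '.metadata.continue'
USD oc get --raw "/apis/snapshot.storage.k8s.io/v1/namespaces/<namespace>/volumesnapshots?limit=2&continue=<token_from_previous_call>"
The first call returns up to two items plus a metadata.continue token when more results exist; passing that token back with identical query parameters retrieves the next chunk from the same consistent snapshot. If the token has expired, the server responds with the 410 ResourceExpired error described above, and the list must be restarted without the continue parameter.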
E.3. Directories within /proc/ | E.3. Directories within /proc/ Common groups of information concerning the kernel are grouped into directories and subdirectories within the /proc/ directory. E.3.1. Process Directories Every /proc/ directory contains a number of directories with numerical names. A listing of them may be similar to the following: These directories are called process directories , as they are named after a program's process ID and contain information specific to that process. The owner and group of each process directory is set to the user running the process. When the process is terminated, its /proc/ process directory vanishes. Each process directory contains the following files: cmdline - Contains the command issued when starting the process. cwd - A symbolic link to the current working directory for the process. environ - A list of the environment variables for the process. The environment variable is given in all upper-case characters, and the value is in lower-case characters. exe - A symbolic link to the executable of this process. fd - A directory containing all of the file descriptors for a particular process. These are given in numbered links: maps - A list of memory maps to the various executables and library files associated with this process. This file can be rather long, depending upon the complexity of the process, but sample output from the sshd process begins like the following: mem - The memory held by the process. This file cannot be read by the user. root - A link to the root directory of the process. stat - The status of the process. statm - The status of the memory in use by the process. Below is a sample /proc/statm file: The seven columns relate to different memory statistics for the process. From left to right, they report the following aspects of the memory used: Total program size, in kilobytes. Size of memory portions, in kilobytes. Number of pages that are shared. Number of pages that are code. Number of pages of data/stack. Number of library pages. Number of dirty pages. status - The status of the process in a more readable form than stat or statm . Sample output for sshd looks similar to the following: The information in this output includes the process name and ID, the state (such as S (sleeping) or R (running) ), user/group ID running the process, and detailed data regarding memory usage. E.3.1.1. /proc/self/ The /proc/self/ directory is a link to the currently running process. This allows a process to look at itself without having to know its process ID. Within a shell environment, a listing of the /proc/self/ directory produces the same contents as listing the process directory for that process. | [
"dr-xr-xr-x 3 root root 0 Feb 13 01:28 1 dr-xr-xr-x 3 root root 0 Feb 13 01:28 1010 dr-xr-xr-x 3 xfs xfs 0 Feb 13 01:28 1087 dr-xr-xr-x 3 daemon daemon 0 Feb 13 01:28 1123 dr-xr-xr-x 3 root root 0 Feb 13 01:28 11307 dr-xr-xr-x 3 apache apache 0 Feb 13 01:28 13660 dr-xr-xr-x 3 rpc rpc 0 Feb 13 01:28 637 dr-xr-xr-x 3 rpcuser rpcuser 0 Feb 13 01:28 666",
"total 0 lrwx------ 1 root root 64 May 8 11:31 0 -> /dev/null lrwx------ 1 root root 64 May 8 11:31 1 -> /dev/null lrwx------ 1 root root 64 May 8 11:31 2 -> /dev/null lrwx------ 1 root root 64 May 8 11:31 3 -> /dev/ptmx lrwx------ 1 root root 64 May 8 11:31 4 -> socket:[7774817] lrwx------ 1 root root 64 May 8 11:31 5 -> /dev/ptmx lrwx------ 1 root root 64 May 8 11:31 6 -> socket:[7774829] lrwx------ 1 root root 64 May 8 11:31 7 -> /dev/ptmx",
"08048000-08086000 r-xp 00000000 03:03 391479 /usr/sbin/sshd 08086000-08088000 rw-p 0003e000 03:03 391479 /usr/sbin/sshd 08088000-08095000 rwxp 00000000 00:00 0 40000000-40013000 r-xp 0000000 03:03 293205 /lib/ld-2.2.5.so 40013000-40014000 rw-p 00013000 03:03 293205 /lib/ld-2.2.5.so 40031000-40038000 r-xp 00000000 03:03 293282 /lib/libpam.so.0.75 40038000-40039000 rw-p 00006000 03:03 293282 /lib/libpam.so.0.75 40039000-4003a000 rw-p 00000000 00:00 0 4003a000-4003c000 r-xp 00000000 03:03 293218 /lib/libdl-2.2.5.so 4003c000-4003d000 rw-p 00001000 03:03 293218 /lib/libdl-2.2.5.so",
"263 210 210 5 0 205 0",
"Name: sshd State: S (sleeping) Tgid: 797 Pid: 797 PPid: 1 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 32 Groups: VmSize: 3072 kB VmLck: 0 kB VmRSS: 840 kB VmData: 104 kB VmStk: 12 kB VmExe: 300 kB VmLib: 2528 kB SigPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 8000000000001000 SigCgt: 0000000000014005 CapInh: 0000000000000000 CapPrm: 00000000fffffeff CapEff: 00000000fffffeff"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-proc-directories |
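The process files described above can be inspected for the current shell by using the /proc/self/ link. The following commands are a usage sketch added for illustration; they are not taken from the original guide, and the field names passed to grep are only a sample of what /proc/self/status contains.
USD tr '\0' ' ' < /proc/self/cmdline; echo
USD grep -E 'Name|State|VmSize|VmRSS' /proc/self/status
USD ls -l /proc/self/cwd /proc/self/exe
The first command prints the NUL-separated cmdline entries on a single line, the second extracts a few of the status fields shown in the sample output above, and the third resolves the cwd and exe symbolic links for the running shell.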
Chapter 12. Keyless authentication with robot accounts | Chapter 12. Keyless authentication with robot accounts In previous versions of Red Hat Quay, robot account tokens were valid for the lifetime of the token unless deleted or regenerated. Tokens that do not expire have security implications for users who do not want to store long-term passwords or manage the deletion or regeneration of authentication tokens. With Red Hat Quay 3, Red Hat Quay administrators are provided the ability to exchange external OIDC tokens for short-lived, or ephemeral, robot account tokens with either Red Hat Single Sign-On (based on the Keycloak project) or Microsoft Entra ID. This allows robot accounts to leverage tokens that last one hour, which are refreshed regularly and can be used to authenticate individual transactions. This feature greatly enhances the security of your Red Hat Quay registry by mitigating the possibility of robot token exposure by removing the tokens after one hour. Configuring keyless authentication with robot accounts is a multi-step procedure that requires setting a robot federation, generating an OAuth2 token from your OIDC provider, and exchanging the OAuth2 token for a robot account access token. 12.1. Generating an OAuth2 token with Red Hat Single Sign-On The following procedure shows you how to generate an OAuth2 token using Red Hat Single Sign-On. Depending on your OIDC provider, these steps will vary. Procedure On the Red Hat Single Sign-On UI: Click Clients and then the name of the application or service that can request authentication of a user. On the Settings page of your client, ensure that the following options are set or enabled: Client ID Valid redirect URI Client authentication Authorization Standard flow Direct access grants Note Settings can differ depending on your setup. On the Credentials page, store the Client Secret for future use. On the Users page, click Add user and enter a username, for example, service-account-quaydev . Then, click Create . Click the name of the user, for example service-account-quaydev on the Users page. Click the Credentials tab Set password and provide a password for the user. If warranted, you can make this password temporary by selecting the Temporary option. Click the Realm settings tab OpenID Endpoint Configuration . Store the /protocol/openid-connect/token endpoint. For example: http://localhost:8080/realms/master/protocol/openid-connect/token On a web browser, navigate to the following URL: http://<keycloak_url>/realms/<realm_name>/protocol/openid-connect/auth?response_type=code&client_id=<client_id> When prompted, log in with the service-account-quaydev user and the temporary password you set. Complete the login by providing the required information and setting a permanent password if necessary. You are redirected to the URI address provided for your client. For example: https://localhost:3000/cb?session_state=5c9bce22-6b85-4654-b716-e9bbb3e755bc&iss=http%3A%2F%2Flocalhost%3A8080%2Frealms%2Fmaster&code=ea5b76eb-47a5-4e5d-8f71-0892178250db.5c9bce22-6b85-4654-b716-e9bbb3e755bc.cdffafbc-20fb-42b9-b254-866017057f43 Take note of the code provided in the address. For example: code=ea5b76eb-47a5-4e5d-8f71-0892178250db.5c9bce22-6b85-4654-b716-e9bbb3e755bc.cdffafbc-20fb-42b9-b254-866017057f43 Note This is a temporary code that can only be used one time. If necessary, you can refresh the page or revisit the URL to obtain another code.
On your terminal, use the following curl -X POST command to generate a temporary OAuth2 access token: USD curl -X POST "http://localhost:8080/realms/master/protocol/openid-connect/token" 1 -H "Content-Type: application/x-www-form-urlencoded" \ -d "client_id=quaydev" 2 -d "client_secret=g8gPsBLxVrLo2PjmZkYBdKvcB9C7fmBz" 3 -d "grant_type=authorization_code" -d "code=ea5b76eb-47a5-4e5d-8f71-0892178250db.5c9bce22-6b85-4654-b716-e9bbb3e755bc.cdffafbc-20fb-42b9-b254-866017057f43" 4 1 The protocol/openid-connect/token endpoint found on the Realm settings page of the Red Hat Single Sign-On UI. 2 The Client ID used for this procedure. 3 The Client Secret for the Client ID. 4 The code returned from the redirect URI. Example output {"access_token":"eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJTVmExVHZ6eDd2cHVmc1dkZmc1SHdua1ZDcVlOM01DN1N5T016R0QwVGhVIn0...", "expires_in":60,"refresh_expires_in":1800,"refresh_token":"eyJhbGciOiJIUzUxMiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJiNTBlZTVkMS05OTc1LTQwMzUtYjNkNy1lMWQ5ZTJmMjg0MTEifQ.oBDx6B3pUkXQO8m-M3hYE7v-w25ak6y70CQd5J8f5EuldhvTwpWrC1K7yOglvs09dQxtq8ont12rKIoCIi4WXw","token_type":"Bearer","not-before-policy":0,"session_state":"5c9bce22-6b85-4654-b716-e9bbb3e755bc","scope":"profile email"} Store the access_token from the previously step, as it will be exchanged for a Red Hat Quay robot account token in the following procedure. 12.2. Setting up a robot account federation by using the Red Hat Quay v2 UI The following procedure shows you how to set up a robot account federation by using the Red Hat Quay v2 UI. This procedure uses Red Hat Single Sign-On, which is based on the Keycloak project. These steps, and the information used to set up a robot account federation, will vary depending on your OIDC provider. Prerequisites You have created an organization. The following example uses fed_test . You have created a robot account. The following example uses fest_test+robot1 . You have configured a OIDC for your Red Hat Quay deployment. The following example uses Red Hat Single Sign-On. Procedure On the Red Hat Single Sign-On main page: Select the appropriate realm that is authenticated for use with Red Hat Quay. Store the issuer URL, for example, https://keycloak-auth-realm.quayadmin.org/realms/quayrealm . Click Users the name of the user to be linked with the robot account for authentication. You must use the same user account that you used when generating the OAuth2 access token. On the Details page, store the ID of the user, for example, 449e14f8-9eb5-4d59-a63e-b7a77c75f770 . Note The information collected in this step will vary depending on your OIDC provider. For example, with Red Hat Single Sign-On, the ID of a user is used as the Subject to set up the robot account federation in a subsequent step. For a different OIDC provider, like Microsoft Entra ID, this information is stored as the Subject . On your Red Hat Quay registry: Navigate to Organizations and click the name of your organization, for example, fed_test . Click Robot Accounts . Click the menu kebab Set robot federation . Click the + symbol. In the popup window, include the following information: Issuer URL : https://keycloak-auth-realm.quayadmin.org/realms/quayrealm . For Red Hat Single Sign-On, this is the the URL of your Red Hat Single Sign-On realm. This might vary depending on your OIDC provider. Subject : 449e14f8-9eb5-4d59-a63e-b7a77c75f770 . For Red Hat Single Sign-On, the Subject is the ID of your Red Hat Single Sign-On user. This varies depending on your OIDC provider. 
For example, if you are using Microsoft Entra ID, the Subject will be the Subject or your Entra ID user. Click Save . 12.3. Exchanging an OAuth2 access token for a Red Hat Quay robot account token The following procedure leverages the access token generated in the procedure to create a new Red Hat Quay robot account token. The new Red Hat Quay robot account token is used for authentication between your OIDC provider and Red Hat Quay. Note The following example uses a Python script to exchange the OAuth2 access token for a Red Hat Quay robot account token. Prerequisites You have the python3 CLI tool installed. Procedure Save the following Python script in a .py file, for example, robot_fed_token_auth.py import requests import os TOKEN=os.environ.get('TOKEN') robot_user = "fed-test+robot1" def get_quay_robot_token(fed_token): URL = "https://<quay-server.example.com>/oauth2/federation/robot/token" response = requests.get(URL, auth=(robot_user,fed_token)) 1 print(response) print(response.text) if __name__ == "__main__": get_quay_robot_token(TOKEN) 1 If your Red Hat Quay deployment is using custom SSL/TLS certificates, the response must be response = requests.get(URL,auth=(robot_user,fed_token),verify=False) , which includes the verify=False flag. Export the OAuth2 access token as TOKEN . For example: USD export TOKEN = eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJTVmExVHZ6eDd2cHVmc1dkZmc1SHdua1ZDcVlOM01DN1N5T016R0QwVGhVIn0... Run the robot_fed_token_auth.py script by entering the following command: USD python3 robot_fed_token_auth.py Example output <Response [200]> {"token": "291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6InByb2ZpbGUgZW1haWwiLCJlbWFpbF92ZXJpZ..."} Important This token expires after one hour. After one hour, a new token must be generated. Export the robot account access token as QUAY_TOKEN . For example: USD export QUAY_TOKEN=291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6InByb2ZpbGUgZW1haWwiLCJlbWFpbF92ZXJpZ 12.4. Pushing and pulling images After you have generated a new robot account access token and exported it, you can log in and the robot account using the access token and push and pull images. Prerequisites You have exported the OAuth2 access token into a new robot account access token. Procedure Log in to your Red Hat Quay registry using the fest_test+robot1 robot account and the QUAY_TOKEN access token. For example: USD podman login <quay-server.example.com> -u fed_test+robot1 -p USDQUAY_TOKEN Pull an image from a Red Hat Quay repository for which the robot account has the proper permissions. For example: USD podman pull <quay-server.example.com/<repository_name>/<image_name>> Example output Getting image source signatures Copying blob 900e6061671b done Copying config 8135583d97 done Writing manifest to image destination Storing signatures 8135583d97feb82398909c9c97607159e6db2c4ca2c885c0b8f590ee0f9fe90d 0.57user 0.11system 0:00.99elapsed 68%CPU (0avgtext+0avgdata 78716maxresident)k 800inputs+15424outputs (18major+6528minor)pagefaults 0swaps Attempt to pull an image from a Red Hat Quay repository for which the robot account does not have the proper permissions. 
For example: USD podman pull <quay-server.example.com/<different_repository_name>/<image_name>> Example output Error: initializing source docker://quay-server.example.com/example_repository/busybox:latest: reading manifest in quay-server.example.com/example_repository/busybox: unauthorized: access to the requested resource is not authorized After one hour, the credentials for this robot account are set to expire. Afterwards, you must generate a new access token for this robot account. | [
"http://localhost:8080/realms/master/protocol/openid-connect/token",
"http://<keycloak_url>/realms/<realm_name>/protocol/openid-connect/auth?response_type=code&client_id=<client_id>",
"https://localhost:3000/cb?session_state=5c9bce22-6b85-4654-b716-e9bbb3e755bc&iss=http%3A%2F%2Flocalhost%3A8080%2Frealms%2Fmaster&code=ea5b76eb-47a5-4e5d-8f71-0892178250db.5c9bce22-6b85-4654-b716-e9bbb3e755bc.cdffafbc-20fb-42b9-b254-866017057f43",
"code=ea5b76eb-47a5-4e5d-8f71-0892178250db.5c9bce22-6b85-4654-b716-e9bbb3e755bc.cdffafbc-20fb-42b9-b254-866017057f43",
"curl -X POST \"http://localhost:8080/realms/master/protocol/openid-connect/token\" 1 -H \"Content-Type: application/x-www-form-urlencoded\" -d \"client_id=quaydev\" 2 -d \"client_secret=g8gPsBLxVrLo2PjmZkYBdKvcB9C7fmBz\" 3 -d \"grant_type=authorization_code\" -d \"code=ea5b76eb-47a5-4e5d-8f71-0892178250db.5c9bce22-6b85-4654-b716-e9bbb3e755bc.cdffafbc-20fb-42b9-b254-866017057f43\" 4",
"{\"access_token\":\"eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJTVmExVHZ6eDd2cHVmc1dkZmc1SHdua1ZDcVlOM01DN1N5T016R0QwVGhVIn0...\", \"expires_in\":60,\"refresh_expires_in\":1800,\"refresh_token\":\"eyJhbGciOiJIUzUxMiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJiNTBlZTVkMS05OTc1LTQwMzUtYjNkNy1lMWQ5ZTJmMjg0MTEifQ.oBDx6B3pUkXQO8m-M3hYE7v-w25ak6y70CQd5J8f5EuldhvTwpWrC1K7yOglvs09dQxtq8ont12rKIoCIi4WXw\",\"token_type\":\"Bearer\",\"not-before-policy\":0,\"session_state\":\"5c9bce22-6b85-4654-b716-e9bbb3e755bc\",\"scope\":\"profile email\"}",
"import requests import os TOKEN=os.environ.get('TOKEN') robot_user = \"fed-test+robot1\" def get_quay_robot_token(fed_token): URL = \"https://<quay-server.example.com>/oauth2/federation/robot/token\" response = requests.get(URL, auth=(robot_user,fed_token)) 1 print(response) print(response.text) if __name__ == \"__main__\": get_quay_robot_token(TOKEN)",
"export TOKEN = eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJTVmExVHZ6eDd2cHVmc1dkZmc1SHdua1ZDcVlOM01DN1N5T016R0QwVGhVIn0",
"python3 robot_fed_token_auth.py",
"<Response [200]> {\"token\": \"291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6InByb2ZpbGUgZW1haWwiLCJlbWFpbF92ZXJpZ...\"}",
"export QUAY_TOKEN=291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6InByb2ZpbGUgZW1haWwiLCJlbWFpbF92ZXJpZ",
"podman login <quay-server.example.com> -u fed_test+robot1 -p USDQUAY_TOKEN",
"podman pull <quay-server.example.com/<repository_name>/<image_name>>",
"Getting image source signatures Copying blob 900e6061671b done Copying config 8135583d97 done Writing manifest to image destination Storing signatures 8135583d97feb82398909c9c97607159e6db2c4ca2c885c0b8f590ee0f9fe90d 0.57user 0.11system 0:00.99elapsed 68%CPU (0avgtext+0avgdata 78716maxresident)k 800inputs+15424outputs (18major+6528minor)pagefaults 0swaps",
"podman pull <quay-server.example.com/<different_repository_name>/<image_name>>",
"Error: initializing source docker://quay-server.example.com/example_repository/busybox:latest: reading manifest in quay-server.example.com/example_repository/busybox: unauthorized: access to the requested resource is not authorized"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/manage_red_hat_quay/keyless-authentication-robot-accounts |
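Because the robot_fed_token_auth.py script performs a plain HTTP GET with basic authentication, the token exchange can also be scripted without Python. The following curl sketch is an illustration based on the endpoint shown in that script; the registry hostname is a placeholder, the jq utility is assumed to be installed, and the robot account name must match the account that was federated. Add -k only if your deployment uses custom SSL/TLS certificates, mirroring the verify=False case noted above.
USD export QUAY_TOKEN=USD(curl -s -u "fed_test+robot1:USDTOKEN" "https://<quay-server.example.com>/oauth2/federation/robot/token" | jq -r '.token')
USD podman login <quay-server.example.com> -u fed_test+robot1 -p USDQUAY_TOKEN
Because the exchanged token expires after one hour, these two commands can be re-run, for example from a scheduled job, whenever a fresh credential is needed.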
17.3. Delegating Host or Service Management in the Web UI | 17.3. Delegating Host or Service Management in the Web UI Each host and service entry in the IdM web UI has a configuration tab that indicates what hosts have been delegated management control over that host or service. Open the Identity tab, and select the Hosts or Services subtab. Click the name of the host or service that you are going to grant delegated management to . Click the Hosts subtab on the far right of the host or service entry. This is the tab which lists hosts that can manage the selected host or service. Figure 17.2. Host Subtab Click the Add link at the top of the list. Click the check box by the names of the hosts to which to delegate management for the host or service. Click the right arrow button, > , to move the hosts to the selection box. Figure 17.3. Host/Service Delegation Management Click the Add button to close the selection box and to save the delegation settings. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/delegating-management-ui |
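The same delegation can typically be configured from the command line with the ipa utility instead of the web UI. The following commands are an illustrative sketch; the host and service names are examples, and the exact option names can vary between Identity Management versions.
USD ipa host-add-managedby client.example.com --hosts=manager.example.com
USD ipa service-add-host HTTP/client.example.com --hosts=manager.example.com
The first command allows manager.example.com to manage the host entry for client.example.com, and the second grants the same management rights over the HTTP service on that host.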
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_red_hat_ceph_storage_and_red_hat_openstack_platform_together_with_director/making-open-source-more-inclusive |
Upgrading Data Grid | Upgrading Data Grid Red Hat Data Grid 8.5 Upgrade Data Grid to 8.5 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/upgrading_data_grid/index |
Chapter 1. New and enhanced features | Chapter 1. New and enhanced features This section provides an overview of features that have been added to or significantly enhanced in this release of Red Hat OpenStack Services on OpenShift (RHOSO). RHOSO improves substantially over earlier versions of Red Hat OpenStack Platform (RHOSP). The RHOSO control plane is natively hosted on the Red Hat OpenShift Container Platform (RHOCP) and the external RHEL-based data plane and workloads are managed with Ansible. This shift in architecture aligns with Red Hat's platform infrastructure strategy. You can future-proof your existing investments by using RHOCP as a hosting platform for all of your infrastructure services. RHOSP 17.1 is the last version of the product to use the director-based OpenStack on OpenStack form-factor for the control plane. 1.1. Control plane new and enhanced features Control plane deployed on Red Hat OpenShift Container Platform (RHOCP) The director-based undercloud is replaced by a control plane that is natively hosted on an RHOCP cluster and managed with the OpenStack Operator. The Red Hat OpenStack Services on OpenShift (RHOSO) control plane features include: Deployed in pods and governed by Kubernetes Operators. Deploys in minutes, consuming only a fraction of the CPU and RAM footprint required by earlier RHOSP releases. Takes advantage of native Kubernetes mechanisms for high availability. Features built-in monitoring based on RHOCP Observability. 1.2. Data plane new and enhanced features Ansible-managed data plane The director-deployed overcloud is replaced by a data plane driven by the OpenStack Operator and executed by Ansible. RHOSO data plane features include: The OpenStackDataPlaneNodeSet custom resource definition (CRD), which provides a highly parallel deployment model. Micro failure domains based on the OpenStackDataPlaneNodeSet CRD. If one or more node sets fail, the other node sets run to completion because there is no interdependency between node sets. Faster deployment times compared to earlier RHOSP versions. Highly configurable data plane setup based on the OpenStackDataPlaneNodeSet and OpenStackDataPlaneService CRDs. 1.3. Distributed Compute nodes (DCN) DCN with Red Hat Ceph storage RHOSO 18.0.3 (Feature Release 1) introduces support for Distributed Compute Nodes (DCN) with persistent storage backed by Red Hat Ceph Storage. 1.4. Networking new and enhanced features Dynamic routing on data plane with FRR and BGP RHOSO 18.0.3 (Feature Release 1) introduces support for Free Range Routing (FRR) border gateway protocol (BGP) to provide dynamic routing capabilities on the RHOSO data plane. Limitations: If you use dynamic routing, you must also use distributed virtual routing (DVR). If you use dynamic routing, you must also use dedicated networker nodes. You cannot use dynamic routing in an IPv6 deployment or a deployment that uses the Load-balancing service (octavia). Custom ML2 mechanism driver and SDN backend (Technology Preview) RHOSO 18.0.3 (Feature Release 1) allows you to test integration of the Networking service (neutron) with a custom ML2 mechanism driver and software defined networking (SDN) back end components, instead of the default OVN mechanism driver and back end components. Do not use this feature in a production environment. IPv6 metadata RHOSO 18.0.3 (Feature Release 1) introduces support for the IPv6 metadata service.
NMstate provider for os-net-config (Development Preview) RHOSO 18.0.3 (Feature Release 1) allows you to test a Development Preview of the NMstate provider for os-net-config . To test the NMstate provider, set edpm_network_config_nmstate: true . Do NOT use this Development Preview setting in a production environment. Forwarding database (FDB) learning and aging controls RHOSO 18.0.3 (Feature Release 1) introduces FDB learning and related FDB aging parameters. You can use FDB learning to prevent traffic flooding on ports that have port security disabled. Set localnet_learn_fdb to true . Use the fdb_age_threshold parameter to set the maximum time (seconds) that the learned MACs stay in the FDB table. Use the fdb_removal_limit parameter to prevent OVN from removing a large number of FDB table entries at the same time. Example configuration Egress QoS support at NIC level using DCB (Technology Preview) Egress quality of service (QoS) at the network interface controller (NIC) level uses the Data Center Bridging Capability Exchange (DCBX) protocol to configure egress QoS at the NIC level in the host. It triggers the configuration and provides the information directly from the top of rack (ToR) switch that peers with the host NIC. This capability, combined with egress QoS for OVS/OVN, enables end-to-end egress QoS. This is a Technology Preview feature. A Technology Preview feature might not be fully implemented and tested. Some features might be absent, incomplete, or not work as expected. For more information on this feature, see Feature Integration document - DCB for E2E QoS . Configuring and deploying networking with Kubernetes NMState Operator and the RHEL NetworkManager service (Technology preview) The RHOSO bare-metal network deployment uses os-net-config with a Kubernetes NMState Operator and NetworkManager back end. Therefore, administrators can use the Kubernetes NMState Operator, nmstate , and the RHEL NetworkManager CLI tool nmcli to configure and deploy networks on the data plane, instead of legacy ifcfg files and network-init-scripts . 1.5. Storage new and enhanced features Integration with external Red Hat Ceph Storage (RHCS) 7 clusters You can integrate RHOSO with external RHCS 7 clusters to include RHCS capabilities with your deployment. Distributed image import RHOSO 18.0 introduces distributed image import for the Image service (glance). With this feature, you do not need to configure a shared staging area for different API workers to access images that are imported to the Image service. Now the API worker that owns the image data is the same API worker that performs the image import. Block Storage service (cinder) backup and restore for thin volumes The backup service for the Block Storage service service now preserves sparseness when restoring a backup to a new volume. This feature ensures that restored volumes use the same amount of storage as the backed up volume. It does not apply to RBD backups, which use a different mechanism to preserve sparseness. Support for RHCS RBD deferred deletion RHOSO 18.0 introduces Block Storage service and Image service RBD deferred deletion, which improves flexibility in the way RBD snapshot dependencies are managed. With deferred deletion, you can delete a resource such as an image, volume, or snapshot even if there are active dependencies. 
Shared File Systems service (manila) CephFS NFS driver with Ganesha Active/Active The CephFS-NFS driver for the Shared File Systems service now consumes an active/active Ganesha cluster by default, improving both the scalability and high availability of the Ceph NFS service. Unified OpenStack client parity with native Shared File Systems service client The Shared File Systems service now fully supports the openstack client command line interface. 1.6. Security new and enhanced features This section outlines the top new and enhanced features for RHOSO security services. FIPS enabled by default Federal Information Processing Standard (FIPS) is enabled by default when RHOSO is installed on a FIPS-enabled RHOCP cluster in new deployments. You do not enable or disable FIPS in your RHOSO configuration. You control the FIPS state in the underlying RHOCP cluster. TLS-everywhere enabled by default After deployment, you can configure public services with your own certificates. You can deploy without TLS-everywhere and enable it later. You cannot disable TLS-everywhere after you enable it. Secure RBAC enabled by default The Secure Role-Based Access Control (RBAC) policy framework is enabled by default in RHOSO deployments. Key Manager (barbican) enabled by default The Key Manager is enabled by default in RHOSO deployments. 1.7. High availability new and enhanced features High availability managed natively in RHOCP RHOSO high availability (HA) uses RHOCP primitives instead of RHOSP services to manage failover and recovery deployment. 1.8. Upgrades new and enhanced features Adoption from RHOSP 17.1 RHOSO 18.0.3 (Feature Release 1) introduces the ability to use the adoption mechanism to upgrade from RHOSP 17.1 to RHOSO 18.0 while minimizing impacts to your workloads. 1.9. Observability new and enhanced features Power consumption monitoring (Technology Preview) RHOSO 18.0.3 (Feature Release 1) introduces technology previews of power consumption monitoring capability for VM instances and virtual networking functions (VNFs). See Jira Issue OSPRH-10006: Kepler Power Monitoring Metrics Visualization in RHOSO (Tech Preview) and Jira Issue OSPRH-46549: As a service provider I need a comprehensive dashboard that provides a power consumption matrix per VNF (Tech Preview) . RabbitMQ metrics dashboard Starting in RHOSO 18.0.3 (Feature Release 1), RabbitMQ metrics are collected and stored in Prometheus. A new dashboard for displaying these metrics was added. Enhanced OpenStack Observability Enhanced dashboards provide unified observability with visualizations that are natively integrated into the RHOCP Observability UI. These include the node_exporter agent that exposes metrics to the Prometheus monitoring system. In RHOSO 18.0, the node_exporter agent replaces the collectd daemon, and Prometheus replaces the Time series database (Gnocchi). Logging The OpenStack logging capability is significantly enhanced. You can now collect logs from the control plane and Compute nodes, and use RHOCP Logging to store them in-cluster via Loki log store or forward them off-cluster to an external log store. Logs that are stored in-cluster with Loki can be visualized in the RHOCP Observability UI console. Service Telemetry Framework deprecation The Observability product for earlier versions of RHOSP is Service Telemetry Framework (STF). With the release of RHOSO 18.0, STF is Deprecated and in maintenance mode.
There are no feature enhancements for STF after STF 1.5.4, and STF status reaches end of life at the end of the RHOSP 17.1 lifecycle. Maintenance versions of STF will be released on new EUS versions of RHOCP until the end of the RHOSP 17.1 lifecycle. 1.10. Dashboard new and enhanced features Pinned CPUs The OpenStack Dashboard service (horizon) now shows how many pinned CPUs (pCPUs) are used and available to use in your environment. 1.11. Documentation new and enhanced features The documentation library has been restructured to align with the user lifecycle of RHOSO. Each guide incorporates content from one or more product areas that work together to cover end-to-end tasks. The titles are organized in categories for each stage in the user lifecycle of RHOSO. 1.11.1. Documentation categories The following categories are published with RHOSO 18.0: Plan Information about the release, requirements, and how to get started before deployment. This category includes the following guides: Release notes Planning your deployment Integrating partner content Prepare, deploy, configure, test Procedures for deploying an initial RHOSO environment, customizing the control plane and data plane, configuring validated architectures, storage, and testing the deployed environment. This category includes the following guides: Deploying Red Hat OpenStack Services on OpenShift Customizing the Red Hat OpenStack Services on OpenShift deployment Deploying a Network Functions Virtualization environment Deploying a hyper-converged infrastructure environment Configuring persistent storage Validating and troubleshooting the deployed cloud Adopt and update Information about performing minor updates to the latest maintenance release of RHOSO, and procedures for adopting a Red Hat OpenStack Platform 17.1 cloud. This category includes the following guides: Adopting a Red Hat OpenStack Platform 17.1 overcloud to a Red Hat OpenStack Services on OpenShift 18.0 data plane Updating your environment to the latest maintenance release Customize and scale Procedures for configuring and customizing specific components of the deployed environment. These procedures must be done before you start to operate the deployment. This category includes the following guides: Configuring the Compute service for instance creation Configuring data plane networking Configuring load balancing as a service Customizing persistent storage Configuring security services Auto-scaling for instances Manage resources and maintain the cloud Procedures that you can perform during ongoing operation of the RHOSO environment. This category includes the following guides: Maintaining the Red Hat OpenStack Services on OpenShift deployment Creating and managing instances Performing storage operations Performing security operations Managing networking resources Managing cloud resources with the Dashboard Monitoring high availability services 1.11.2. Documentation in progress The following titles are being reviewed and will be published asynchronously: Configuring the Bare Metal Provisioning service Configuring load balancing as a service (Technology Preview) 1.11.3. RHOCP feature documentation Features that are supported and managed natively in RHOCP are documented in the RHOCP documentation library. The RHOSO documentation includes links to relevant RHOCP documentation where needed. 1.11.4. Earlier documentation versions The RHOSO documentation page shows documentation for version 18.0 and later. 
For earlier supported versions of RHOSP, see Product Documentation for Red Hat OpenStack Platform 17.1 . | [
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: unused spec: neutron: template: customServiceConfig: | [ovn] localnet_learn_fdb = true fdb_age_threshold = 300 fdb_removal_limit = 50"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/release_notes/chap-top-new-features_release-notes |
Maintaining Red Hat Hyperconverged Infrastructure for Virtualization | Maintaining Red Hat Hyperconverged Infrastructure for Virtualization Red Hat Hyperconverged Infrastructure for Virtualization 1.8 Common maintenance tasks for Red Hat Hyperconverged Infrastructure for Virtualization Laura Bailey [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/maintaining_red_hat_hyperconverged_infrastructure_for_virtualization/index |
Chapter 9. Scheduling NUMA-aware workloads | Chapter 9. Scheduling NUMA-aware workloads Learn about NUMA-aware scheduling and how you can use it to deploy high performance workloads in an OpenShift Container Platform cluster. The NUMA Resources Operator allows you to schedule high-performance workloads in the same NUMA zone. It deploys a node resources exporting agent that reports on available cluster node NUMA resources, and a secondary scheduler that manages the workloads. 9.1. About NUMA-aware scheduling Introduction to NUMA Non-Uniform Memory Access (NUMA) is a compute platform architecture that allows different CPUs to access different regions of memory at different speeds. NUMA resource topology refers to the locations of CPUs, memory, and PCI devices relative to each other in the compute node. Colocated resources are said to be in the same NUMA zone . For high-performance applications, the cluster needs to process pod workloads in a single NUMA zone. Performance considerations NUMA architecture allows a CPU with multiple memory controllers to use any available memory across CPU complexes, regardless of where the memory is located. This allows for increased flexibility at the expense of performance. A CPU processing a workload using memory that is outside its NUMA zone is slower than a workload processed in a single NUMA zone. Also, for I/O-constrained workloads, the network interface on a distant NUMA zone slows down how quickly information can reach the application. High-performance workloads, such as telecommunications workloads, cannot operate to specification under these conditions. NUMA-aware scheduling NUMA-aware scheduling aligns the requested cluster compute resources (CPUs, memory, devices) in the same NUMA zone to process latency-sensitive or high-performance workloads efficiently. NUMA-aware scheduling also improves pod density per compute node for greater resource efficiency. Integration with Node Tuning Operator By integrating the Node Tuning Operator's performance profile with NUMA-aware scheduling, you can further configure CPU affinity to optimize performance for latency-sensitive workloads. Default scheduling logic The default OpenShift Container Platform pod scheduler scheduling logic considers the available resources of the entire compute node, not individual NUMA zones. If the most restrictive resource alignment is requested in the kubelet topology manager, error conditions can occur when admitting the pod to a node. Conversely, if the most restrictive resource alignment is not requested, the pod can be admitted to the node without proper resource alignment, leading to worse or unpredictable performance. For example, runaway pod creation with Topology Affinity Error statuses can occur when the pod scheduler makes suboptimal scheduling decisions for guaranteed pod workloads without knowing if the pod's requested resources are available. Scheduling mismatch decisions can cause indefinite pod startup delays. Also, depending on the cluster state and resource allocation, poor pod scheduling decisions can cause extra load on the cluster because of failed startup attempts. NUMA-aware pod scheduling diagram The NUMA Resources Operator deploys a custom NUMA resources secondary scheduler and other resources to mitigate against the shortcomings of the default OpenShift Container Platform pod scheduler. The following diagram provides a high-level overview of NUMA-aware pod scheduling. Figure 9.1. 
NUMA-aware scheduling overview NodeResourceTopology API The NodeResourceTopology API describes the available NUMA zone resources in each compute node. NUMA-aware scheduler The NUMA-aware secondary scheduler receives information about the available NUMA zones from the NodeResourceTopology API and schedules high-performance workloads on a node where it can be optimally processed. Node topology exporter The node topology exporter exposes the available NUMA zone resources for each compute node to the NodeResourceTopology API. The node topology exporter daemon tracks the resource allocation from the kubelet by using the PodResources API. PodResources API The PodResources API is local to each node and exposes the resource topology and available resources to the kubelet. Note The List endpoint of the PodResources API exposes exclusive CPUs allocated to a particular container. The API does not expose CPUs that belong to a shared pool. The GetAllocatableResources endpoint exposes allocatable resources available on a node. Additional resources For more information about running secondary pod schedulers in your cluster and how to deploy pods with a secondary pod scheduler, see Scheduling pods using a secondary scheduler . 9.2. Installing the NUMA Resources Operator NUMA Resources Operator deploys resources that allow you to schedule NUMA-aware workloads and deployments. You can install the NUMA Resources Operator using the OpenShift Container Platform CLI or the web console. 9.2.1. Installing the NUMA Resources Operator using the CLI As a cluster administrator, you can install the Operator using the CLI. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a namespace for the NUMA Resources Operator: Save the following YAML in the nro-namespace.yaml file: apiVersion: v1 kind: Namespace metadata: name: openshift-numaresources Create the Namespace CR by running the following command: USD oc create -f nro-namespace.yaml Create the Operator group for the NUMA Resources Operator: Save the following YAML in the nro-operatorgroup.yaml file: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: numaresources-operator namespace: openshift-numaresources spec: targetNamespaces: - openshift-numaresources Create the OperatorGroup CR by running the following command: USD oc create -f nro-operatorgroup.yaml Create the subscription for the NUMA Resources Operator: Save the following YAML in the nro-sub.yaml file: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: numaresources-operator namespace: openshift-numaresources spec: channel: "4.14" name: numaresources-operator source: redhat-operators sourceNamespace: openshift-marketplace Create the Subscription CR by running the following command: USD oc create -f nro-sub.yaml Verification Verify that the installation succeeded by inspecting the CSV resource in the openshift-numaresources namespace. Run the following command: USD oc get csv -n openshift-numaresources Example output NAME DISPLAY VERSION REPLACES PHASE numaresources-operator.v4.14.2 numaresources-operator 4.14.2 Succeeded 9.2.2. Installing the NUMA Resources Operator using the web console As a cluster administrator, you can install the NUMA Resources Operator using the web console. Procedure Create a namespace for the NUMA Resources Operator: In the OpenShift Container Platform web console, click Administration Namespaces . 
Click Create Namespace , enter openshift-numaresources in the Name field, and then click Create . Install the NUMA Resources Operator: In the OpenShift Container Platform web console, click Operators OperatorHub . Choose numaresources-operator from the list of available Operators, and then click Install . In the Installed Namespaces field, select the openshift-numaresources namespace, and then click Install . Optional: Verify that the NUMA Resources Operator installed successfully: Switch to the Operators Installed Operators page. Ensure that NUMA Resources Operator is listed in the openshift-numaresources namespace with a Status of InstallSucceeded . Note During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message. If the Operator does not appear as installed, to troubleshoot further: Go to the Operators Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status . Go to the Workloads Pods page and check the logs for pods in the default project. 9.3. Scheduling NUMA-aware workloads Clusters running latency-sensitive workloads typically feature performance profiles that help to minimize workload latency and optimize performance. The NUMA-aware scheduler deploys workloads based on available node NUMA resources and with respect to any performance profile settings applied to the node. The combination of NUMA-aware deployments, and the performance profile of the workload, ensures that workloads are scheduled in a way that maximizes performance. For the NUMA Resources Operator to be fully operational, you must deploy the NUMAResourcesOperator custom resource and the NUMA-aware secondary pod scheduler. 9.3.1. Creating the NUMAResourcesOperator custom resource When you have installed the NUMA Resources Operator, then create the NUMAResourcesOperator custom resource (CR) that instructs the NUMA Resources Operator to install all the cluster infrastructure needed to support the NUMA-aware scheduler, including daemon sets and APIs. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the NUMA Resources Operator. Procedure Create the NUMAResourcesOperator custom resource: Save the following minimal required YAML file example as nrop.yaml : apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: nodeGroups: - machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 1 This must match the MachineConfigPool resource that you want to configure the NUMA Resources Operator on. For example, you might have created a MachineConfigPool resource named worker-cnf that designates a set of nodes expected to run telecommunications workloads. Each NodeGroup must match exactly one MachineConfigPool . Configurations where NodeGroup matches more than one MachineConfigPool are not supported. Create the NUMAResourcesOperator CR by running the following command: USD oc create -f nrop.yaml Note Creating the NUMAResourcesOperator triggers a reboot on the corresponding machine config pool and therefore the affected node. Optional: To enable NUMA-aware scheduling for multiple machine config pools (MCPs), define a separate NodeGroup for each pool. 
For example, define three NodeGroups for worker-cnf , worker-ht , and worker-other , in the NUMAResourcesOperator CR as shown in the following example: Example YAML definition for a NUMAResourcesOperator CR with multiple NodeGroups apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: logLevel: Normal nodeGroups: - machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker-ht - machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker-cnf - machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker-other Verification Verify that the NUMA Resources Operator deployed successfully by running the following command: USD oc get numaresourcesoperators.nodetopology.openshift.io Example output NAME AGE numaresourcesoperator 27s After a few minutes, run the following command to verify that the required resources deployed successfully: USD oc get all -n openshift-numaresources Example output NAME READY STATUS RESTARTS AGE pod/numaresources-controller-manager-7d9d84c58d-qk2mr 1/1 Running 0 12m pod/numaresourcesoperator-worker-7d96r 2/2 Running 0 97s pod/numaresourcesoperator-worker-crsht 2/2 Running 0 97s pod/numaresourcesoperator-worker-jp9mw 2/2 Running 0 97s 9.3.2. Deploying the NUMA-aware secondary pod scheduler After installing the NUMA Resources Operator, deploy the NUMA-aware secondary pod scheduler to optimize pod placement for improved performance and reduced latency in NUMA-based systems. Procedure Create the NUMAResourcesScheduler custom resource that deploys the NUMA-aware custom pod scheduler: Save the following minimal required YAML in the nro-scheduler.yaml file: apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: "registry.redhat.io/openshift4/noderesourcetopology-scheduler-rhel9:v4.14" 1 1 In a disconnected environment, make sure to configure the resolution of this image by completing one of the following actions: Creating an ImageTagMirrorSet custom resource (CR). For more information, see "Configuring image registry repository mirroring" in the "Additional resources" section. Setting the URL to the disconnected registry. Create the NUMAResourcesScheduler CR by running the following command: USD oc create -f nro-scheduler.yaml After a few seconds, run the following command to confirm the successful deployment of the required resources: USD oc get all -n openshift-numaresources Example output NAME READY STATUS RESTARTS AGE pod/numaresources-controller-manager-7d9d84c58d-qk2mr 1/1 Running 0 12m pod/numaresourcesoperator-worker-7d96r 2/2 Running 0 97s pod/numaresourcesoperator-worker-crsht 2/2 Running 0 97s pod/numaresourcesoperator-worker-jp9mw 2/2 Running 0 97s pod/secondary-scheduler-847cb74f84-9whlm 1/1 Running 0 10m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/numaresourcesoperator-worker 3 3 3 3 3 node-role.kubernetes.io/worker= 98s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/numaresources-controller-manager 1/1 1 1 12m deployment.apps/secondary-scheduler 1/1 1 1 10m NAME DESIRED CURRENT READY AGE replicaset.apps/numaresources-controller-manager-7d9d84c58d 1 1 1 12m replicaset.apps/secondary-scheduler-847cb74f84 1 1 1 10m Additional resources Configuring image registry repository mirroring 9.3.3. 
Configuring a single NUMA node policy The NUMA Resources Operator requires a single NUMA node policy to be configured on the cluster. This can be achieved in two ways: by creating and applying a performance profile, or by configuring a KubeletConfig. Note The preferred way to configure a single NUMA node policy is to apply a performance profile. You can use the Performance Profile Creator (PPC) tool to create the performance profile. If a performance profile is created on the cluster, it automatically creates other tuning components like KubeletConfig and the tuned profile. For more information about creating a performance profile, see "About the Performance Profile Creator" in the "Additional resources" section. Additional resources About the Performance Profile Creator 9.3.4. Sample performance profile This example YAML shows a performance profile created by using the performance profile creator (PPC) tool: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: "3" reserved: 0-2 machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/worker: "" 1 nodeSelector: node-role.kubernetes.io/worker: "" numa: topologyPolicy: single-numa-node 2 realTimeKernel: enabled: true workloadHints: highPowerConsumption: true perPodPowerManagement: false realTime: true 1 This should match the MachineConfigPool that you want to configure the NUMA Resources Operator on. For example, you might have created a MachineConfigPool named worker-cnf that designates a set of nodes that run telecommunications workloads. 2 The topologyPolicy must be set to single-numa-node . Ensure that this is the case by setting the topology-manager-policy argument to single-numa-node when running the PPC tool. 9.3.5. Creating a KubeletConfig CRD The recommended way to configure a single NUMA node policy is to apply a performance profile. Another way is by creating and applying a KubeletConfig custom resource (CR), as shown in the following procedure. Procedure Create the KubeletConfig custom resource (CR) that configures the pod admittance policy for the machine profile: Save the following YAML in the nro-kubeletconfig.yaml file: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-tuning spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 kubeletConfig: cpuManagerPolicy: "static" 2 cpuManagerReconcilePeriod: "5s" reservedSystemCPUs: "0,1" 3 memoryManagerPolicy: "Static" 4 evictionHard: memory.available: "100Mi" kubeReserved: memory: "512Mi" reservedMemory: - numaNode: 0 limits: memory: "1124Mi" systemReserved: memory: "512Mi" topologyManagerPolicy: "single-numa-node" 5 1 Adjust this label to match the machineConfigPoolSelector in the NUMAResourcesOperator CR. 2 For cpuManagerPolicy , static must use a lowercase s . 3 Adjust this based on the CPU on your nodes. 4 For memoryManagerPolicy , Static must use an uppercase S . 5 topologyManagerPolicy must be set to single-numa-node . Create the KubeletConfig CR by running the following command: USD oc create -f nro-kubeletconfig.yaml Note Applying performance profile or KubeletConfig automatically triggers rebooting of the nodes. If no reboot is triggered, you can troubleshoot the issue by looking at the labels in KubeletConfig that address the node group. 9.3.6. 
Scheduling workloads with the NUMA-aware scheduler Now that topo-aware-scheduler is installed, the NUMAResourcesOperator and NUMAResourcesScheduler CRs are applied and your cluster has a matching performance profile or kubeletconfig , you can schedule workloads with the NUMA-aware scheduler using deployment CRs that specify the minimum required resources to process the workload. The following example deployment uses NUMA-aware scheduling for a sample workload. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Get the name of the NUMA-aware scheduler that is deployed in the cluster by running the following command: USD oc get numaresourcesschedulers.nodetopology.openshift.io numaresourcesscheduler -o json | jq '.status.schedulerName' Example output "topo-aware-scheduler" Create a Deployment CR that uses scheduler named topo-aware-scheduler , for example: Save the following YAML in the nro-deployment.yaml file: apiVersion: apps/v1 kind: Deployment metadata: name: numa-deployment-1 namespace: openshift-numaresources spec: replicas: 1 selector: matchLabels: app: test template: metadata: labels: app: test spec: schedulerName: topo-aware-scheduler 1 containers: - name: ctnr image: quay.io/openshifttest/hello-openshift:openshift imagePullPolicy: IfNotPresent resources: limits: memory: "100Mi" cpu: "10" requests: memory: "100Mi" cpu: "10" - name: ctnr2 image: registry.access.redhat.com/rhel:latest imagePullPolicy: IfNotPresent command: ["/bin/sh", "-c"] args: [ "while true; do sleep 1h; done;" ] resources: limits: memory: "100Mi" cpu: "8" requests: memory: "100Mi" cpu: "8" 1 schedulerName must match the name of the NUMA-aware scheduler that is deployed in your cluster, for example topo-aware-scheduler . Create the Deployment CR by running the following command: USD oc create -f nro-deployment.yaml Verification Verify that the deployment was successful: USD oc get pods -n openshift-numaresources Example output NAME READY STATUS RESTARTS AGE numa-deployment-1-6c4f5bdb84-wgn6g 2/2 Running 0 5m2s numaresources-controller-manager-7d9d84c58d-4v65j 1/1 Running 0 18m numaresourcesoperator-worker-7d96r 2/2 Running 4 43m numaresourcesoperator-worker-crsht 2/2 Running 2 43m numaresourcesoperator-worker-jp9mw 2/2 Running 2 43m secondary-scheduler-847cb74f84-fpncj 1/1 Running 0 18m Verify that the topo-aware-scheduler is scheduling the deployed pod by running the following command: USD oc describe pod numa-deployment-1-6c4f5bdb84-wgn6g -n openshift-numaresources Example output Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 4m45s topo-aware-scheduler Successfully assigned openshift-numaresources/numa-deployment-1-6c4f5bdb84-wgn6g to worker-1 Note Deployments that request more resources than is available for scheduling will fail with a MinimumReplicasUnavailable error. The deployment succeeds when the required resources become available. Pods remain in the Pending state until the required resources are available. Verify that the expected allocated resources are listed for the node. Identify the node that is running the deployment pod by running the following command: USD oc get pods -n openshift-numaresources -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES numa-deployment-1-6c4f5bdb84-wgn6g 0/2 Running 0 82m 10.128.2.50 worker-1 <none> <none> Run the following command with the name of that node that is running the deployment pod. 
USD oc describe noderesourcetopologies.topology.node.k8s.io worker-1 Example output ... Zones: Costs: Name: node-0 Value: 10 Name: node-1 Value: 21 Name: node-0 Resources: Allocatable: 39 Available: 21 1 Capacity: 40 Name: cpu Allocatable: 6442450944 Available: 6442450944 Capacity: 6442450944 Name: hugepages-1Gi Allocatable: 134217728 Available: 134217728 Capacity: 134217728 Name: hugepages-2Mi Allocatable: 262415904768 Available: 262206189568 Capacity: 270146007040 Name: memory Type: Node 1 The Available capacity is reduced because of the resources that have been allocated to the guaranteed pod. Resources consumed by guaranteed pods are subtracted from the available node resources listed under noderesourcetopologies.topology.node.k8s.io . Resource allocations for pods with a Best-effort or Burstable quality of service ( qosClass ) are not reflected in the NUMA node resources under noderesourcetopologies.topology.node.k8s.io . If a pod's consumed resources are not reflected in the node resource calculation, verify that the pod has qosClass of Guaranteed and the CPU request is an integer value, not a decimal value. You can verify that the pod has a qosClass of Guaranteed by running the following command: USD oc get pod numa-deployment-1-6c4f5bdb84-wgn6g -n openshift-numaresources -o jsonpath="{ .status.qosClass }" Example output Guaranteed 9.4. Optional: Configuring polling operations for NUMA resources updates The daemons controlled by the NUMA Resources Operator in their nodeGroup poll resources to retrieve updates about available NUMA resources. You can fine-tune polling operations for these daemons by configuring the spec.nodeGroups specification in the NUMAResourcesOperator custom resource (CR). This provides advanced control of polling operations. Configure these specifications to improve scheduling behavior and troubleshoot suboptimal scheduling decisions. The configuration options are the following: infoRefreshMode : Determines the trigger condition for polling the kubelet. The NUMA Resources Operator reports the resulting information to the API server. infoRefreshPeriod : Determines the duration between polling updates. podsFingerprinting : Determines if point-in-time information for the current set of pods running on a node is exposed in polling updates. Note podsFingerprinting is enabled by default. podsFingerprinting is a requirement for the cacheResyncPeriod specification in the NUMAResourcesScheduler CR. The cacheResyncPeriod specification helps to report more exact resource availability by monitoring pending resources on nodes. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the NUMA Resources Operator. Procedure Configure the spec.nodeGroups specification in your NUMAResourcesOperator CR: apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: nodeGroups: - config: infoRefreshMode: Periodic 1 infoRefreshPeriod: 10s 2 podsFingerprinting: Enabled 3 name: worker 1 Valid values are Periodic , Events , PeriodicAndEvents . Use Periodic to poll the kubelet at intervals that you define in infoRefreshPeriod . Use Events to poll the kubelet at every pod lifecycle event. Use PeriodicAndEvents to enable both methods. 2 Define the polling interval for Periodic or PeriodicAndEvents refresh modes. The field is ignored if the refresh mode is Events . 3 Valid values are Enabled , Disabled , and EnabledExclusiveResources .
Setting to Enabled is a requirement for the cacheResyncPeriod specification in the NUMAResourcesScheduler . Verification After you deploy the NUMA Resources Operator, verify that the node group configurations were applied by running the following command: USD oc get numaresop numaresourcesoperator -o json | jq '.status' Example output ... "config": { "infoRefreshMode": "Periodic", "infoRefreshPeriod": "10s", "podsFingerprinting": "Enabled" }, "name": "worker" ... 9.5. Troubleshooting NUMA-aware scheduling To troubleshoot common problems with NUMA-aware pod scheduling, perform the following steps. Prerequisites Install the OpenShift Container Platform CLI ( oc ). Log in as a user with cluster-admin privileges. Install the NUMA Resources Operator and deploy the NUMA-aware secondary scheduler. Procedure Verify that the noderesourcetopologies CRD is deployed in the cluster by running the following command: USD oc get crd | grep noderesourcetopologies Example output NAME CREATED AT noderesourcetopologies.topology.node.k8s.io 2022-01-18T08:28:06Z Check that the NUMA-aware scheduler name matches the name specified in your NUMA-aware workloads by running the following command: USD oc get numaresourcesschedulers.nodetopology.openshift.io numaresourcesscheduler -o json | jq '.status.schedulerName' Example output topo-aware-scheduler Verify that NUMA-aware schedulable nodes have the noderesourcetopologies CR applied to them. Run the following command: USD oc get noderesourcetopologies.topology.node.k8s.io Example output NAME AGE compute-0.example.com 17h compute-1.example.com 17h Note The number of nodes should equal the number of worker nodes that are configured by the machine config pool ( mcp ) worker definition. Verify the NUMA zone granularity for all schedulable nodes by running the following command: USD oc get noderesourcetopologies.topology.node.k8s.io -o yaml Example output apiVersion: v1 items: - apiVersion: topology.node.k8s.io/v1 kind: NodeResourceTopology metadata: annotations: k8stopoawareschedwg/rte-update: periodic creationTimestamp: "2022-06-16T08:55:38Z" generation: 63760 name: worker-0 resourceVersion: "8450223" uid: 8b77be46-08c0-4074-927b-d49361471590 topologyPolicies: - SingleNUMANodeContainerLevel zones: - costs: - name: node-0 value: 10 - name: node-1 value: 21 name: node-0 resources: - allocatable: "38" available: "38" capacity: "40" name: cpu - allocatable: "134217728" available: "134217728" capacity: "134217728" name: hugepages-2Mi - allocatable: "262352048128" available: "262352048128" capacity: "270107316224" name: memory - allocatable: "6442450944" available: "6442450944" capacity: "6442450944" name: hugepages-1Gi type: Node - costs: - name: node-0 value: 21 - name: node-1 value: 10 name: node-1 resources: - allocatable: "268435456" available: "268435456" capacity: "268435456" name: hugepages-2Mi - allocatable: "269231067136" available: "269231067136" capacity: "270573244416" name: memory - allocatable: "40" available: "40" capacity: "40" name: cpu - allocatable: "1073741824" available: "1073741824" capacity: "1073741824" name: hugepages-1Gi type: Node - apiVersion: topology.node.k8s.io/v1 kind: NodeResourceTopology metadata: annotations: k8stopoawareschedwg/rte-update: periodic creationTimestamp: "2022-06-16T08:55:37Z" generation: 62061 name: worker-1 resourceVersion: "8450129" uid: e8659390-6f8d-4e67-9a51-1ea34bba1cc3 topologyPolicies: - SingleNUMANodeContainerLevel zones: 1 - costs: - name: node-0 value: 10 - name: node-1 value: 21 name: node-0 resources: 2 - 
allocatable: "38" available: "38" capacity: "40" name: cpu - allocatable: "6442450944" available: "6442450944" capacity: "6442450944" name: hugepages-1Gi - allocatable: "134217728" available: "134217728" capacity: "134217728" name: hugepages-2Mi - allocatable: "262391033856" available: "262391033856" capacity: "270146301952" name: memory type: Node - costs: - name: node-0 value: 21 - name: node-1 value: 10 name: node-1 resources: - allocatable: "40" available: "40" capacity: "40" name: cpu - allocatable: "1073741824" available: "1073741824" capacity: "1073741824" name: hugepages-1Gi - allocatable: "268435456" available: "268435456" capacity: "268435456" name: hugepages-2Mi - allocatable: "269192085504" available: "269192085504" capacity: "270534262784" name: memory type: Node kind: List metadata: resourceVersion: "" selfLink: "" 1 Each stanza under zones describes the resources for a single NUMA zone. 2 resources describes the current state of the NUMA zone resources. Check that resources listed under items.zones.resources.available correspond to the exclusive NUMA zone resources allocated to each guaranteed pod. 9.5.1. Reporting more exact resource availability Enable the cacheResyncPeriod specification to help the NUMA Resources Operator report more exact resource availability by monitoring pending resources on nodes and synchronizing this information in the scheduler cache at a defined interval. This also helps to minimize Topology Affinity Error errors that result from suboptimal scheduling decisions. The lower the interval, the greater the network load. The cacheResyncPeriod specification is disabled by default. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Delete the currently running NUMAResourcesScheduler resource: Get the active NUMAResourcesScheduler by running the following command: USD oc get NUMAResourcesScheduler Example output NAME AGE numaresourcesscheduler 92m Delete the secondary scheduler resource by running the following command: USD oc delete NUMAResourcesScheduler numaresourcesscheduler Example output numaresourcesscheduler.nodetopology.openshift.io "numaresourcesscheduler" deleted Save the following YAML in the file nro-scheduler-cacheresync.yaml . This example sets the cacheResyncPeriod specification to 5s : apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: "registry.redhat.io/openshift4/noderesourcetopology-scheduler-container-rhel8:v4.14" cacheResyncPeriod: "5s" 1 1 Enter an interval value in seconds for synchronization of the scheduler cache. A value of 5s is typical for most implementations.
Create the updated NUMAResourcesScheduler resource by running the following command: USD oc create -f nro-scheduler-cacheresync.yaml Example output numaresourcesscheduler.nodetopology.openshift.io/numaresourcesscheduler created Verification steps Check that the NUMA-aware scheduler was successfully deployed: Run the following command to check that the CRD is created successfully: USD oc get crd | grep numaresourcesschedulers Example output NAME CREATED AT numaresourcesschedulers.nodetopology.openshift.io 2022-02-25T11:57:03Z Check that the new custom scheduler is available by running the following command: USD oc get numaresourcesschedulers.nodetopology.openshift.io Example output NAME AGE numaresourcesscheduler 3h26m Check that the logs for the scheduler show the increased log level: Get the list of pods running in the openshift-numaresources namespace by running the following command: USD oc get pods -n openshift-numaresources Example output NAME READY STATUS RESTARTS AGE numaresources-controller-manager-d87d79587-76mrm 1/1 Running 0 46h numaresourcesoperator-worker-5wm2k 2/2 Running 0 45h numaresourcesoperator-worker-pb75c 2/2 Running 0 45h secondary-scheduler-7976c4d466-qm4sc 1/1 Running 0 21m Get the logs for the secondary scheduler pod by running the following command: USD oc logs secondary-scheduler-7976c4d466-qm4sc -n openshift-numaresources Example output ... I0223 11:04:55.614788 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 11 items received I0223 11:04:56.609114 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 10 items received I0223 11:05:22.626818 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 7 items received I0223 11:05:31.610356 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodDisruptionBudget total 7 items received I0223 11:05:31.713032 1 eventhandlers.go:186] "Add event for scheduled pod" pod="openshift-marketplace/certified-operators-thtvq" I0223 11:05:53.461016 1 eventhandlers.go:244] "Delete event for scheduled pod" pod="openshift-marketplace/certified-operators-thtvq" 9.5.2. Checking the NUMA-aware scheduler logs Troubleshoot problems with the NUMA-aware scheduler by reviewing the logs. If required, you can increase the scheduler log level by modifying the spec.logLevel field of the NUMAResourcesScheduler resource. Acceptable values are Normal , Debug , and Trace , with Trace being the most verbose option. Note To change the log level of the secondary scheduler, delete the running scheduler resource and re-deploy it with the changed log level. The scheduler is unavailable for scheduling new workloads during this downtime. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Delete the currently running NUMAResourcesScheduler resource: Get the active NUMAResourcesScheduler by running the following command: USD oc get NUMAResourcesScheduler Example output NAME AGE numaresourcesscheduler 90m Delete the secondary scheduler resource by running the following command: USD oc delete NUMAResourcesScheduler numaresourcesscheduler Example output numaresourcesscheduler.nodetopology.openshift.io "numaresourcesscheduler" deleted Save the following YAML in the file nro-scheduler-debug.yaml . 
This example changes the log level to Debug : apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: "registry.redhat.io/openshift4/noderesourcetopology-scheduler-container-rhel8:v4.14" logLevel: Debug Create the updated Debug logging NUMAResourcesScheduler resource by running the following command: USD oc create -f nro-scheduler-debug.yaml Example output numaresourcesscheduler.nodetopology.openshift.io/numaresourcesscheduler created Verification steps Check that the NUMA-aware scheduler was successfully deployed: Run the following command to check that the CRD is created successfully: USD oc get crd | grep numaresourcesschedulers Example output NAME CREATED AT numaresourcesschedulers.nodetopology.openshift.io 2022-02-25T11:57:03Z Check that the new custom scheduler is available by running the following command: USD oc get numaresourcesschedulers.nodetopology.openshift.io Example output NAME AGE numaresourcesscheduler 3h26m Check that the logs for the scheduler show the increased log level: Get the list of pods running in the openshift-numaresources namespace by running the following command: USD oc get pods -n openshift-numaresources Example output NAME READY STATUS RESTARTS AGE numaresources-controller-manager-d87d79587-76mrm 1/1 Running 0 46h numaresourcesoperator-worker-5wm2k 2/2 Running 0 45h numaresourcesoperator-worker-pb75c 2/2 Running 0 45h secondary-scheduler-7976c4d466-qm4sc 1/1 Running 0 21m Get the logs for the secondary scheduler pod by running the following command: USD oc logs secondary-scheduler-7976c4d466-qm4sc -n openshift-numaresources Example output ... I0223 11:04:55.614788 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 11 items received I0223 11:04:56.609114 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 10 items received I0223 11:05:22.626818 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 7 items received I0223 11:05:31.610356 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodDisruptionBudget total 7 items received I0223 11:05:31.713032 1 eventhandlers.go:186] "Add event for scheduled pod" pod="openshift-marketplace/certified-operators-thtvq" I0223 11:05:53.461016 1 eventhandlers.go:244] "Delete event for scheduled pod" pod="openshift-marketplace/certified-operators-thtvq" 9.5.3. Troubleshooting the resource topology exporter Troubleshoot noderesourcetopologies objects where unexpected results are occurring by inspecting the corresponding resource-topology-exporter logs. Note It is recommended that NUMA resource topology exporter instances in the cluster are named for nodes they refer to. For example, a worker node with the name worker should have a corresponding noderesourcetopologies object called worker . Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Get the daemonsets managed by the NUMA Resources Operator. Each daemonset has a corresponding nodeGroup in the NUMAResourcesOperator CR.
Run the following command: USD oc get numaresourcesoperators.nodetopology.openshift.io numaresourcesoperator -o jsonpath="{.status.daemonsets[0]}" Example output {"name":"numaresourcesoperator-worker","namespace":"openshift-numaresources"} Get the label for the daemonset of interest using the value for name from the step: USD oc get ds -n openshift-numaresources numaresourcesoperator-worker -o jsonpath="{.spec.selector.matchLabels}" Example output {"name":"resource-topology"} Get the pods using the resource-topology label by running the following command: USD oc get pods -n openshift-numaresources -l name=resource-topology -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE numaresourcesoperator-worker-5wm2k 2/2 Running 0 2d1h 10.135.0.64 compute-0.example.com numaresourcesoperator-worker-pb75c 2/2 Running 0 2d1h 10.132.2.33 compute-1.example.com Examine the logs of the resource-topology-exporter container running on the worker pod that corresponds to the node you are troubleshooting. Run the following command: USD oc logs -n openshift-numaresources -c resource-topology-exporter numaresourcesoperator-worker-pb75c Example output I0221 13:38:18.334140 1 main.go:206] using sysinfo: reservedCpus: 0,1 reservedMemory: "0": 1178599424 I0221 13:38:18.334370 1 main.go:67] === System information === I0221 13:38:18.334381 1 sysinfo.go:231] cpus: reserved "0-1" I0221 13:38:18.334493 1 sysinfo.go:237] cpus: online "0-103" I0221 13:38:18.546750 1 main.go:72] cpus: allocatable "2-103" hugepages-1Gi: numa cell 0 -> 6 numa cell 1 -> 1 hugepages-2Mi: numa cell 0 -> 64 numa cell 1 -> 128 memory: numa cell 0 -> 45758Mi numa cell 1 -> 48372Mi 9.5.4. Correcting a missing resource topology exporter config map If you install the NUMA Resources Operator in a cluster with misconfigured cluster settings, in some circumstances, the Operator is shown as active but the logs of the resource topology exporter (RTE) daemon set pods show that the configuration for the RTE is missing, for example: Info: couldn't find configuration in "/etc/resource-topology-exporter/config.yaml" This log message indicates that the kubeletconfig with the required configuration was not properly applied in the cluster, resulting in a missing RTE configmap . For example, the following cluster is missing a numaresourcesoperator-worker configmap custom resource (CR): USD oc get configmap Example output NAME DATA AGE 0e2a6bd3.openshift-kni.io 0 6d21h kube-root-ca.crt 1 6d21h openshift-service-ca.crt 1 6d21h topo-aware-scheduler-config 1 6d18h In a correctly configured cluster, oc get configmap also returns a numaresourcesoperator-worker configmap CR. Prerequisites Install the OpenShift Container Platform CLI ( oc ). Log in as a user with cluster-admin privileges. Install the NUMA Resources Operator and deploy the NUMA-aware secondary scheduler. Procedure Compare the values for spec.machineConfigPoolSelector.matchLabels in kubeletconfig and metadata.labels in the MachineConfigPool ( mcp ) worker CR using the following commands: Check the kubeletconfig labels by running the following command: USD oc get kubeletconfig -o yaml Example output machineConfigPoolSelector: matchLabels: cnf-worker-tuning: enabled Check the mcp labels by running the following command: USD oc get mcp worker -o yaml Example output labels: machineconfiguration.openshift.io/mco-built-in: "" pools.operator.machineconfiguration.openshift.io/worker: "" The cnf-worker-tuning: enabled label is not present in the MachineConfigPool object. 
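One quick way to add the missing label is with the oc label command. This is a minimal sketch, not taken from the documented procedure; it assumes the pool is named worker and uses the cnf-worker-tuning: enabled label from this example: USD oc label machineconfigpool worker cnf-worker-tuning=enabled Alternatively, edit the CR directly as shown in the next step.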
Edit the MachineConfigPool CR to include the missing label, for example: USD oc edit mcp worker -o yaml Example output labels: machineconfiguration.openshift.io/mco-built-in: "" pools.operator.machineconfiguration.openshift.io/worker: "" cnf-worker-tuning: enabled Apply the label changes and wait for the cluster to apply the updated configuration. Run the following command: Verification Check that the missing numaresourcesoperator-worker configmap CR is applied: USD oc get configmap Example output NAME DATA AGE 0e2a6bd3.openshift-kni.io 0 6d21h kube-root-ca.crt 1 6d21h numaresourcesoperator-worker 1 5m openshift-service-ca.crt 1 6d21h topo-aware-scheduler-config 1 6d18h 9.5.5. Collecting NUMA Resources Operator data You can use the oc adm must-gather CLI command to collect information about your cluster, including features and objects associated with the NUMA Resources Operator. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure To collect NUMA Resources Operator data with must-gather , you must specify the NUMA Resources Operator must-gather image. USD oc adm must-gather --image=registry.redhat.io/numaresources-must-gather/numaresources-must-gather-rhel9:v4.14 | [
"apiVersion: v1 kind: Namespace metadata: name: openshift-numaresources",
"oc create -f nro-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: numaresources-operator namespace: openshift-numaresources spec: targetNamespaces: - openshift-numaresources",
"oc create -f nro-operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: numaresources-operator namespace: openshift-numaresources spec: channel: \"4.14\" name: numaresources-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f nro-sub.yaml",
"oc get csv -n openshift-numaresources",
"NAME DISPLAY VERSION REPLACES PHASE numaresources-operator.v4.14.2 numaresources-operator 4.14.2 Succeeded",
"apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: nodeGroups: - machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1",
"oc create -f nrop.yaml",
"apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: logLevel: Normal nodeGroups: - machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker-ht - machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker-cnf - machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker-other",
"oc get numaresourcesoperators.nodetopology.openshift.io",
"NAME AGE numaresourcesoperator 27s",
"oc get all -n openshift-numaresources",
"NAME READY STATUS RESTARTS AGE pod/numaresources-controller-manager-7d9d84c58d-qk2mr 1/1 Running 0 12m pod/numaresourcesoperator-worker-7d96r 2/2 Running 0 97s pod/numaresourcesoperator-worker-crsht 2/2 Running 0 97s pod/numaresourcesoperator-worker-jp9mw 2/2 Running 0 97s",
"apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: \"registry.redhat.io/openshift4/noderesourcetopology-scheduler-rhel9:v4.14\" 1",
"oc create -f nro-scheduler.yaml",
"oc get all -n openshift-numaresources",
"NAME READY STATUS RESTARTS AGE pod/numaresources-controller-manager-7d9d84c58d-qk2mr 1/1 Running 0 12m pod/numaresourcesoperator-worker-7d96r 2/2 Running 0 97s pod/numaresourcesoperator-worker-crsht 2/2 Running 0 97s pod/numaresourcesoperator-worker-jp9mw 2/2 Running 0 97s pod/secondary-scheduler-847cb74f84-9whlm 1/1 Running 0 10m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/numaresourcesoperator-worker 3 3 3 3 3 node-role.kubernetes.io/worker= 98s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/numaresources-controller-manager 1/1 1 1 12m deployment.apps/secondary-scheduler 1/1 1 1 10m NAME DESIRED CURRENT READY AGE replicaset.apps/numaresources-controller-manager-7d9d84c58d 1 1 1 12m replicaset.apps/secondary-scheduler-847cb74f84 1 1 1 10m",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: \"3\" reserved: 0-2 machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 nodeSelector: node-role.kubernetes.io/worker: \"\" numa: topologyPolicy: single-numa-node 2 realTimeKernel: enabled: true workloadHints: highPowerConsumption: true perPodPowerManagement: false realTime: true",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-tuning spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 kubeletConfig: cpuManagerPolicy: \"static\" 2 cpuManagerReconcilePeriod: \"5s\" reservedSystemCPUs: \"0,1\" 3 memoryManagerPolicy: \"Static\" 4 evictionHard: memory.available: \"100Mi\" kubeReserved: memory: \"512Mi\" reservedMemory: - numaNode: 0 limits: memory: \"1124Mi\" systemReserved: memory: \"512Mi\" topologyManagerPolicy: \"single-numa-node\" 5",
"oc create -f nro-kubeletconfig.yaml",
"oc get numaresourcesschedulers.nodetopology.openshift.io numaresourcesscheduler -o json | jq '.status.schedulerName'",
"\"topo-aware-scheduler\"",
"apiVersion: apps/v1 kind: Deployment metadata: name: numa-deployment-1 namespace: openshift-numaresources spec: replicas: 1 selector: matchLabels: app: test template: metadata: labels: app: test spec: schedulerName: topo-aware-scheduler 1 containers: - name: ctnr image: quay.io/openshifttest/hello-openshift:openshift imagePullPolicy: IfNotPresent resources: limits: memory: \"100Mi\" cpu: \"10\" requests: memory: \"100Mi\" cpu: \"10\" - name: ctnr2 image: registry.access.redhat.com/rhel:latest imagePullPolicy: IfNotPresent command: [\"/bin/sh\", \"-c\"] args: [ \"while true; do sleep 1h; done;\" ] resources: limits: memory: \"100Mi\" cpu: \"8\" requests: memory: \"100Mi\" cpu: \"8\"",
"oc create -f nro-deployment.yaml",
"oc get pods -n openshift-numaresources",
"NAME READY STATUS RESTARTS AGE numa-deployment-1-6c4f5bdb84-wgn6g 2/2 Running 0 5m2s numaresources-controller-manager-7d9d84c58d-4v65j 1/1 Running 0 18m numaresourcesoperator-worker-7d96r 2/2 Running 4 43m numaresourcesoperator-worker-crsht 2/2 Running 2 43m numaresourcesoperator-worker-jp9mw 2/2 Running 2 43m secondary-scheduler-847cb74f84-fpncj 1/1 Running 0 18m",
"oc describe pod numa-deployment-1-6c4f5bdb84-wgn6g -n openshift-numaresources",
"Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 4m45s topo-aware-scheduler Successfully assigned openshift-numaresources/numa-deployment-1-6c4f5bdb84-wgn6g to worker-1",
"oc get pods -n openshift-numaresources -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES numa-deployment-1-6c4f5bdb84-wgn6g 0/2 Running 0 82m 10.128.2.50 worker-1 <none> <none>",
"oc describe noderesourcetopologies.topology.node.k8s.io worker-1",
"Zones: Costs: Name: node-0 Value: 10 Name: node-1 Value: 21 Name: node-0 Resources: Allocatable: 39 Available: 21 1 Capacity: 40 Name: cpu Allocatable: 6442450944 Available: 6442450944 Capacity: 6442450944 Name: hugepages-1Gi Allocatable: 134217728 Available: 134217728 Capacity: 134217728 Name: hugepages-2Mi Allocatable: 262415904768 Available: 262206189568 Capacity: 270146007040 Name: memory Type: Node",
"oc get pod numa-deployment-1-6c4f5bdb84-wgn6g -n openshift-numaresources -o jsonpath=\"{ .status.qosClass }\"",
"Guaranteed",
"apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: nodeGroups: - config: infoRefreshMode: Periodic 1 infoRefreshPeriod: 10s 2 podsFingerprinting: Enabled 3 name: worker",
"oc get numaresop numaresourcesoperator -o json | jq '.status'",
"\"config\": { \"infoRefreshMode\": \"Periodic\", \"infoRefreshPeriod\": \"10s\", \"podsFingerprinting\": \"Enabled\" }, \"name\": \"worker\"",
"oc get crd | grep noderesourcetopologies",
"NAME CREATED AT noderesourcetopologies.topology.node.k8s.io 2022-01-18T08:28:06Z",
"oc get numaresourcesschedulers.nodetopology.openshift.io numaresourcesscheduler -o json | jq '.status.schedulerName'",
"topo-aware-scheduler",
"oc get noderesourcetopologies.topology.node.k8s.io",
"NAME AGE compute-0.example.com 17h compute-1.example.com 17h",
"oc get noderesourcetopologies.topology.node.k8s.io -o yaml",
"apiVersion: v1 items: - apiVersion: topology.node.k8s.io/v1 kind: NodeResourceTopology metadata: annotations: k8stopoawareschedwg/rte-update: periodic creationTimestamp: \"2022-06-16T08:55:38Z\" generation: 63760 name: worker-0 resourceVersion: \"8450223\" uid: 8b77be46-08c0-4074-927b-d49361471590 topologyPolicies: - SingleNUMANodeContainerLevel zones: - costs: - name: node-0 value: 10 - name: node-1 value: 21 name: node-0 resources: - allocatable: \"38\" available: \"38\" capacity: \"40\" name: cpu - allocatable: \"134217728\" available: \"134217728\" capacity: \"134217728\" name: hugepages-2Mi - allocatable: \"262352048128\" available: \"262352048128\" capacity: \"270107316224\" name: memory - allocatable: \"6442450944\" available: \"6442450944\" capacity: \"6442450944\" name: hugepages-1Gi type: Node - costs: - name: node-0 value: 21 - name: node-1 value: 10 name: node-1 resources: - allocatable: \"268435456\" available: \"268435456\" capacity: \"268435456\" name: hugepages-2Mi - allocatable: \"269231067136\" available: \"269231067136\" capacity: \"270573244416\" name: memory - allocatable: \"40\" available: \"40\" capacity: \"40\" name: cpu - allocatable: \"1073741824\" available: \"1073741824\" capacity: \"1073741824\" name: hugepages-1Gi type: Node - apiVersion: topology.node.k8s.io/v1 kind: NodeResourceTopology metadata: annotations: k8stopoawareschedwg/rte-update: periodic creationTimestamp: \"2022-06-16T08:55:37Z\" generation: 62061 name: worker-1 resourceVersion: \"8450129\" uid: e8659390-6f8d-4e67-9a51-1ea34bba1cc3 topologyPolicies: - SingleNUMANodeContainerLevel zones: 1 - costs: - name: node-0 value: 10 - name: node-1 value: 21 name: node-0 resources: 2 - allocatable: \"38\" available: \"38\" capacity: \"40\" name: cpu - allocatable: \"6442450944\" available: \"6442450944\" capacity: \"6442450944\" name: hugepages-1Gi - allocatable: \"134217728\" available: \"134217728\" capacity: \"134217728\" name: hugepages-2Mi - allocatable: \"262391033856\" available: \"262391033856\" capacity: \"270146301952\" name: memory type: Node - costs: - name: node-0 value: 21 - name: node-1 value: 10 name: node-1 resources: - allocatable: \"40\" available: \"40\" capacity: \"40\" name: cpu - allocatable: \"1073741824\" available: \"1073741824\" capacity: \"1073741824\" name: hugepages-1Gi - allocatable: \"268435456\" available: \"268435456\" capacity: \"268435456\" name: hugepages-2Mi - allocatable: \"269192085504\" available: \"269192085504\" capacity: \"270534262784\" name: memory type: Node kind: List metadata: resourceVersion: \"\" selfLink: \"\"",
"oc get NUMAResourcesScheduler",
"NAME AGE numaresourcesscheduler 92m",
"oc delete NUMAResourcesScheduler numaresourcesscheduler",
"numaresourcesscheduler.nodetopology.openshift.io \"numaresourcesscheduler\" deleted",
"apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: \"registry.redhat.io/openshift4/noderesourcetopology-scheduler-container-rhel8:v4.14\" cacheResyncPeriod: \"5s\" 1",
"oc create -f nro-scheduler-cacheresync.yaml",
"numaresourcesscheduler.nodetopology.openshift.io/numaresourcesscheduler created",
"oc get crd | grep numaresourcesschedulers",
"NAME CREATED AT numaresourcesschedulers.nodetopology.openshift.io 2022-02-25T11:57:03Z",
"oc get numaresourcesschedulers.nodetopology.openshift.io",
"NAME AGE numaresourcesscheduler 3h26m",
"oc get pods -n openshift-numaresources",
"NAME READY STATUS RESTARTS AGE numaresources-controller-manager-d87d79587-76mrm 1/1 Running 0 46h numaresourcesoperator-worker-5wm2k 2/2 Running 0 45h numaresourcesoperator-worker-pb75c 2/2 Running 0 45h secondary-scheduler-7976c4d466-qm4sc 1/1 Running 0 21m",
"oc logs secondary-scheduler-7976c4d466-qm4sc -n openshift-numaresources",
"I0223 11:04:55.614788 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 11 items received I0223 11:04:56.609114 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 10 items received I0223 11:05:22.626818 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 7 items received I0223 11:05:31.610356 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodDisruptionBudget total 7 items received I0223 11:05:31.713032 1 eventhandlers.go:186] \"Add event for scheduled pod\" pod=\"openshift-marketplace/certified-operators-thtvq\" I0223 11:05:53.461016 1 eventhandlers.go:244] \"Delete event for scheduled pod\" pod=\"openshift-marketplace/certified-operators-thtvq\"",
"oc get NUMAResourcesScheduler",
"NAME AGE numaresourcesscheduler 90m",
"oc delete NUMAResourcesScheduler numaresourcesscheduler",
"numaresourcesscheduler.nodetopology.openshift.io \"numaresourcesscheduler\" deleted",
"apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: \"registry.redhat.io/openshift4/noderesourcetopology-scheduler-container-rhel8:v4.14\" logLevel: Debug",
"oc create -f nro-scheduler-debug.yaml",
"numaresourcesscheduler.nodetopology.openshift.io/numaresourcesscheduler created",
"oc get crd | grep numaresourcesschedulers",
"NAME CREATED AT numaresourcesschedulers.nodetopology.openshift.io 2022-02-25T11:57:03Z",
"oc get numaresourcesschedulers.nodetopology.openshift.io",
"NAME AGE numaresourcesscheduler 3h26m",
"oc get pods -n openshift-numaresources",
"NAME READY STATUS RESTARTS AGE numaresources-controller-manager-d87d79587-76mrm 1/1 Running 0 46h numaresourcesoperator-worker-5wm2k 2/2 Running 0 45h numaresourcesoperator-worker-pb75c 2/2 Running 0 45h secondary-scheduler-7976c4d466-qm4sc 1/1 Running 0 21m",
"oc logs secondary-scheduler-7976c4d466-qm4sc -n openshift-numaresources",
"I0223 11:04:55.614788 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 11 items received I0223 11:04:56.609114 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 10 items received I0223 11:05:22.626818 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 7 items received I0223 11:05:31.610356 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodDisruptionBudget total 7 items received I0223 11:05:31.713032 1 eventhandlers.go:186] \"Add event for scheduled pod\" pod=\"openshift-marketplace/certified-operators-thtvq\" I0223 11:05:53.461016 1 eventhandlers.go:244] \"Delete event for scheduled pod\" pod=\"openshift-marketplace/certified-operators-thtvq\"",
"oc get numaresourcesoperators.nodetopology.openshift.io numaresourcesoperator -o jsonpath=\"{.status.daemonsets[0]}\"",
"{\"name\":\"numaresourcesoperator-worker\",\"namespace\":\"openshift-numaresources\"}",
"oc get ds -n openshift-numaresources numaresourcesoperator-worker -o jsonpath=\"{.spec.selector.matchLabels}\"",
"{\"name\":\"resource-topology\"}",
"oc get pods -n openshift-numaresources -l name=resource-topology -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE numaresourcesoperator-worker-5wm2k 2/2 Running 0 2d1h 10.135.0.64 compute-0.example.com numaresourcesoperator-worker-pb75c 2/2 Running 0 2d1h 10.132.2.33 compute-1.example.com",
"oc logs -n openshift-numaresources -c resource-topology-exporter numaresourcesoperator-worker-pb75c",
"I0221 13:38:18.334140 1 main.go:206] using sysinfo: reservedCpus: 0,1 reservedMemory: \"0\": 1178599424 I0221 13:38:18.334370 1 main.go:67] === System information === I0221 13:38:18.334381 1 sysinfo.go:231] cpus: reserved \"0-1\" I0221 13:38:18.334493 1 sysinfo.go:237] cpus: online \"0-103\" I0221 13:38:18.546750 1 main.go:72] cpus: allocatable \"2-103\" hugepages-1Gi: numa cell 0 -> 6 numa cell 1 -> 1 hugepages-2Mi: numa cell 0 -> 64 numa cell 1 -> 128 memory: numa cell 0 -> 45758Mi numa cell 1 -> 48372Mi",
"Info: couldn't find configuration in \"/etc/resource-topology-exporter/config.yaml\"",
"oc get configmap",
"NAME DATA AGE 0e2a6bd3.openshift-kni.io 0 6d21h kube-root-ca.crt 1 6d21h openshift-service-ca.crt 1 6d21h topo-aware-scheduler-config 1 6d18h",
"oc get kubeletconfig -o yaml",
"machineConfigPoolSelector: matchLabels: cnf-worker-tuning: enabled",
"oc get mcp worker -o yaml",
"labels: machineconfiguration.openshift.io/mco-built-in: \"\" pools.operator.machineconfiguration.openshift.io/worker: \"\"",
"oc edit mcp worker -o yaml",
"labels: machineconfiguration.openshift.io/mco-built-in: \"\" pools.operator.machineconfiguration.openshift.io/worker: \"\" cnf-worker-tuning: enabled",
"oc get configmap",
"NAME DATA AGE 0e2a6bd3.openshift-kni.io 0 6d21h kube-root-ca.crt 1 6d21h numaresourcesoperator-worker 1 5m openshift-service-ca.crt 1 6d21h topo-aware-scheduler-config 1 6d18h",
"oc adm must-gather --image=registry.redhat.io/numaresources-must-gather/numaresources-must-gather-rhel9:v4.14"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/scalability_and_performance/cnf-numa-aware-scheduling |
Chapter 7. Red Hat build of Keycloak SAML Galleon feature pack detailed configuration | Chapter 7. Red Hat build of Keycloak SAML Galleon feature pack detailed configuration This chapter contains the detailed list of elements for the keycloak-saml.xml configuration file used by the Red Hat build of Keycloak SAML Galleon feature pack. 7.1. SP element Here is the explanation of the SP element attributes: <SP entityID="sp" sslPolicy="ssl" nameIDPolicyFormat="format" forceAuthentication="true" isPassive="false" keepDOMAssertion="true" autodetectBearerOnly="false"> ... </SP> entityID This is the identifier for this client. The IdP needs this value to determine who the client is that is communicating with it. This setting is REQUIRED . sslPolicy This is the SSL policy the adapter will enforce. Valid values are: ALL , EXTERNAL , and NONE . For ALL , all requests must come in via HTTPS. For EXTERNAL , only non-private IP addresses must come over the wire via HTTPS. For NONE , no requests are required to come over via HTTPS. This setting is OPTIONAL . Default value is EXTERNAL . nameIDPolicyFormat SAML clients can request a specific NameID Subject format. Fill in this value if you want a specific format. It must be a standard SAML format identifier: urn:oasis:names:tc:SAML:2.0:nameid-transient . This setting is OPTIONAL . By default, no special format is requested. forceAuthentication SAML clients can request that a user is re-authenticated even if they are already logged in at the IdP. Set this to true to enable. This setting is OPTIONAL . Default value is false . isPassive SAML clients can request that a user is never asked to authenticate even if they are not logged in at the IdP. Set this to true if you want this. Do not use together with forceAuthentication as they are opposite. This setting is OPTIONAL . Default value is false . turnOffChangeSessionIdOnLogin The session ID is changed by default on a successful login on some platforms to plug a security attack vector. Change this to true to disable this. It is recommended you do not turn it off. Default value is false . autodetectBearerOnly This should be set to true if your application serves both a web application and web services (for example SOAP or REST). It allows you to redirect unauthenticated users of the web application to the Red Hat build of Keycloak login page, but send an HTTP 401 status code to unauthenticated SOAP or REST clients instead as they would not understand a redirect to the login page. Red Hat build of Keycloak auto-detects SOAP or REST clients based on typical headers like X-Requested-With , SOAPAction or Accept . The default value is false . logoutPage This sets the page to display after logout. If the page is a full URL, such as http://web.example.com/logout.html , the user is redirected after logout to that page using the HTTP 302 status code. If a link without scheme part is specified, such as /logout.jsp , the page is displayed after logout, regardless of whether it lies in a protected area according to security-constraint declarations in web.xml , and the page is resolved relative to the deployment context root. keepDOMAssertion This attribute should be set to true to make the adapter store the DOM representation of the assertion in its original form inside the SamlPrincipal associated to the request. The assertion document can be retrieved using the method getAssertionDocument inside the principal. This is specially useful when re-playing a signed assertion. 
The returned document is the one that was generated parsing the SAML response received by the Red Hat build of Keycloak server. This setting is OPTIONAL and its default value is false (the document is not saved inside the principal). 7.2. Service Provider keys and key elements If the IdP requires that the client application (or SP) sign all of its requests and/or if the IdP will encrypt assertions, you must define the keys used to do this. For client-signed documents you must define both the private and public key or certificate that is used to sign documents. For encryption, you only have to define the private key that is used to decrypt it. There are two ways to describe your keys. They can be stored within a Java KeyStore or you can copy/paste the keys directly within keycloak-saml.xml in the PEM format. <Keys> <Key signing="true" > ... </Key> </Keys> The Key element has two optional attributes signing and encryption . When set to true these tell the adapter what the key will be used for. If both attributes are set to true, then the key will be used for both signing documents and decrypting encrypted assertions. You must set at least one of these attributes to true. 7.2.1. KeyStore element Within the Key element you can load your keys and certificates from a Java Keystore. This is declared within a KeyStore element. <Keys> <Key signing="true" > <KeyStore resource="/WEB-INF/keystore.jks" password="store123"> <PrivateKey alias="myPrivate" password="test123"/> <Certificate alias="myCertAlias"/> </KeyStore> </Key> </Keys> Here are the XML config attributes that are defined with the KeyStore element. file File path to the key store. This option is OPTIONAL . The file or resource attribute must be set. resource WAR resource path to the KeyStore. This is a path used in method call to ServletContext.getResourceAsStream(). This option is OPTIONAL . The file or resource attribute must be set. password The password of the KeyStore. This option is REQUIRED . If you are defining keys that the SP will use to sign document, you must also specify references to your private keys and certificates within the Java KeyStore. The PrivateKey and Certificate elements in the above example define an alias that points to the key or cert within the keystore. Keystores require an additional password to access private keys. In the PrivateKey element you must define this password within a password attribute. 7.2.2. Key PEMS Within the Key element you declare your keys and certificates directly using the sub elements PrivateKeyPem , PublicKeyPem , and CertificatePem . The values contained in these elements must conform to the PEM key format. You usually use this option if you are generating keys using openssl or similar command line tool. <Keys> <Key signing="true"> <PrivateKeyPem> 2341251234AB31234==231BB998311222423522334 </PrivateKeyPem> <CertificatePem> 211111341251234AB31234==231BB998311222423522334 </CertificatePem> </Key> </Keys> 7.3. SP PrincipalNameMapping element This element is optional. When creating a Java Principal object that you obtain from methods such as HttpServletRequest.getUserPrincipal() , you can define what name is returned by the Principal.getName() method. <SP ...> <PrincipalNameMapping policy="FROM_NAME_ID"/> </SP> <SP ...> <PrincipalNameMapping policy="FROM_ATTRIBUTE" attribute="email" /> </SP> The policy attribute defines the policy used to populate this value. The possible values for this attribute are: FROM_NAME_ID This policy just uses whatever the SAML subject value is. 
This is the default setting FROM_ATTRIBUTE This will pull the value from one of the attributes declared in the SAML assertion received from the server. You'll need to specify the name of the SAML assertion attribute to use within the attribute XML attribute. 7.4. RoleIdentifiers element The RoleIdentifiers element defines what SAML attributes within the assertion received from the user should be used as role identifiers within the Jakarta EE Security Context for the user. <RoleIdentifiers> <Attribute name="Role"/> <Attribute name="member"/> <Attribute name="memberOf"/> </RoleIdentifiers> By default Role attribute values are converted to Jakarta EE roles. Some IdPs send roles using a member or memberOf attribute assertion. You can define one or more Attribute elements to specify which SAML attributes must be converted into roles. 7.5. RoleMappingsProvider element The RoleMappingsProvider is an optional element that allows for the specification of the id and configuration of the org.keycloak.adapters.saml.RoleMappingsProvider SPI implementation that is to be used by the SAML adapter. When Red Hat build of Keycloak is used as the IDP, it is possible to use the built-in role mappers to map any roles before adding them to the SAML assertion. However, the SAML adapters can be used to send SAML requests to third party IDPs and in this case it might be necessary to map the roles extracted from the assertion into a different set of roles as required by the SP. The RoleMappingsProvider SPI allows for the configuration of pluggable role mappers that can be used to perform the necessary mappings. The configuration of the provider looks as follows: ... <RoleIdentifiers> ... </RoleIdentifiers> <RoleMappingsProvider id="properties-based-role-mapper"> <Property name="properties.resource.location" value="/WEB-INF/role-mappings.properties"/> </RoleMappingsProvider> <IDP> ... </IDP> The id attribute identifies which of the installed providers is to be used. The Property sub-element can be used multiple times to specify configuration properties for the provider. 7.5.1. Properties Based role mappings provider Red Hat build of Keycloak includes a RoleMappingsProvider implementation that performs the role mappings using a properties file. This provider is identified by the id properties-based-role-mapper and is implemented by the org.keycloak.adapters.saml.PropertiesBasedRoleMapper class. This provider relies on two configuration properties that can be used to specify the location of the properties file that will be used. First, it checks if the properties.file.location property has been specified, using the configured value to locate the properties file in the filesystem. If the configured file is not located, the provider throws a RuntimeException . The following snippet shows an example of a provider using the properties.file.location option to load the roles.properties file from the /opt/mappers/ directory in the filesystem: <RoleMappingsProvider id="properties-based-role-mapper"> <Property name="properties.file.location" value="/opt/mappers/roles.properties"/> </RoleMappingsProvider> If the properties.file.location configuration has not been set, the provider checks the properties.resource.location property, using the configured value to load the properties file from the WAR resource. If this configuration property is also not present, the provider attempts to load the file from /WEB-INF/role-mappings.properties by default.
Failure to load the file from the resource will result in the provider throwing a RuntimeException . The following snippet shows an example of provider using the properties.resource.location to load the roles.properties file from the application's /WEB-INF/conf/ directory: <RoleMappingsProvider id="properties-based-role-mapper"> <Property name="properties.resource.location" value="/WEB-INF/conf/roles.properties"/> </RoleMappingsProvider> The properties file can contain both roles and principals as keys, and a list of zero or more roles separated by comma as values. When invoked, the implementation iterates through the set of roles that were extracted from the assertion and checks, for each role, if a mapping exists. If the role maps to an empty role, it is discarded. If it maps to a set of one or more different roles, then these roles are set in the result set. If no mapping is found for the role then it is included as is in the result set. Once the roles have been processed, the implementation checks if the principal extracted from the assertion contains an entry properties file. If a mapping for the principal exists, any roles listed as value are added to the result set. This allows the assignment of extra roles to a principal. As an example, let's assume the provider has been configured with the following properties file: If the principal kc_user is extracted from the assertion with roles roleA , roleB and roleC , the final set of roles assigned to the principal will be roleC , roleX , roleY and roleZ because roleA is being mapped into both roleX and roleY , roleB was mapped into an empty role - thus being discarded, roleC is used as is and finally an additional role was added to the kc_user principal ( roleZ ). Note: to use spaces in role names for mappings, use unicode replacements for space. For example, incoming 'role A' would appear as: 7.6. IDP Element Everything in the IDP element describes the settings for the identity provider (authentication server) the SP is communicating with. <IDP entityID="idp" signaturesRequired="true" signatureAlgorithm="RSA_SHA1" signatureCanonicalizationMethod="http://www.w3.org/2001/10/xml-exc-c14n#"> ... </IDP> Here are the attribute config options you can specify within the IDP element declaration. entityID This is the issuer ID of the IDP. This setting is REQUIRED . signaturesRequired If set to true , the client adapter will sign every document it sends to the IDP. Also, the client will expect that the IDP will be signing any documents sent to it. This switch sets the default for all request and response types, but you will see later that you have some fine grain control over this. This setting is OPTIONAL and will default to false . signatureAlgorithm This is the signature algorithm that the IDP expects signed documents to use. Allowed values are: RSA_SHA1 , RSA_SHA256 , RSA_SHA512 , and DSA_SHA1 . This setting is OPTIONAL and defaults to RSA_SHA256 . Note that SHA1 based algorithms are deprecated and can be removed in the future. We recommend the use of some more secure algorithm instead of *_SHA1 . Also, with *_SHA1 algorithms, verifying signatures do not work if the SAML server (usually Red Hat build of Keycloak) runs on Java 17 or higher. signatureCanonicalizationMethod This is the signature canonicalization method that the IDP expects signed documents to use. This setting is OPTIONAL . The default value is http://www.w3.org/2001/10/xml-exc-c14n# and should be good for most IDPs. 
metadataUrl The URL used to retrieve the IDP metadata, currently this is only used to pick up signing and encryption keys periodically which allow cycling of these keys on the IDP without manual changes on the SP side. 7.7. IDP AllowedClockSkew sub element The AllowedClockSkew optional sub element defines the allowed clock skew between IDP and SP. The default value is 0. <AllowedClockSkew unit="MILLISECONDS">3500</AllowedClockSkew> unit It is possible to define the time unit attached to the value for this element. Allowed values are MICROSECONDS, MILLISECONDS, MINUTES, NANOSECONDS and SECONDS. This is OPTIONAL . The default value is SECONDS . 7.8. IDP SingleSignOnService sub element The SingleSignOnService sub element defines the login SAML endpoint of the IDP. The client adapter will send requests to the IDP formatted via the settings within this element when it wants to log in. <SingleSignOnService signRequest="true" validateResponseSignature="true" requestBinding="post" bindingUrl="url"/> Here are the config attributes you can define on this element: signRequest Should the client sign authn requests? This setting is OPTIONAL . Defaults to whatever the IDP signaturesRequired element value is. validateResponseSignature Should the client expect the IDP to sign the assertion response document sent back from an authn request? This setting OPTIONAL . Defaults to whatever the IDP signaturesRequired element value is. requestBinding This is the SAML binding type used for communicating with the IDP. This setting is OPTIONAL . The default value is POST , but you can set it to REDIRECT as well. responseBinding SAML allows the client to request what binding type it wants authn responses to use. The values of this can be POST or REDIRECT . This setting is OPTIONAL . The default is that the client will not request a specific binding type for responses. assertionConsumerServiceUrl URL of the assertion consumer service (ACS) where the IDP login service should send responses to. This setting is OPTIONAL . By default it is unset, relying on the configuration in the IdP. When set, it must end in /saml , for example http://sp.domain.com/my/endpoint/for/saml . The value of this property is sent in AssertionConsumerServiceURL attribute of SAML AuthnRequest message. This property is typically accompanied by the responseBinding attribute. bindingUrl This is the URL for the IDP login service that the client will send requests to. This setting is REQUIRED . 7.9. IDP SingleLogoutService sub element The SingleLogoutService sub element defines the logout SAML endpoint of the IDP. The client adapter will send requests to the IDP formatted via the settings within this element when it wants to log out. <SingleLogoutService validateRequestSignature="true" validateResponseSignature="true" signRequest="true" signResponse="true" requestBinding="redirect" responseBinding="post" postBindingUrl="posturl" redirectBindingUrl="redirecturl"> signRequest Should the client sign logout requests it makes to the IDP? This setting is OPTIONAL . Defaults to whatever the IDP signaturesRequired element value is. signResponse Should the client sign logout responses it sends to the IDP requests? This setting is OPTIONAL . Defaults to whatever the IDP signaturesRequired element value is. validateRequestSignature Should the client expect signed logout request documents from the IDP? This setting is OPTIONAL . Defaults to whatever the IDP signaturesRequired element value is. 
validateResponseSignature Should the client expect signed logout response documents from the IDP? This setting is OPTIONAL . Defaults to whatever the IDP signaturesRequired element value is. requestBinding This is the SAML binding type used for communicating SAML requests to the IDP. This setting is OPTIONAL . The default value is POST , but you can set it to REDIRECT as well. responseBinding This is the SAML binding type used for communicating SAML responses to the IDP. The values of this can be POST or REDIRECT . This setting is OPTIONAL . The default value is POST , but you can set it to REDIRECT as well. postBindingUrl This is the URL for the IDP's logout service when using the POST binding. This setting is REQUIRED if using the POST binding. redirectBindingUrl This is the URL for the IDP's logout service when using the REDIRECT binding. This setting is REQUIRED if using the REDIRECT binding. 7.10. IDP Keys sub element The Keys sub element of IDP is only used to define the certificate or public key to use to verify documents signed by the IDP. It is defined in the same way as the SP's Keys element . But again, you only have to define one certificate or public key reference. Note that, if both IDP and SP are realized by Red Hat build of Keycloak server and adapter, respectively, there is no need to specify the keys for signature validation, see below. It is possible to configure SP to obtain public keys for IDP signature validation from published certificates automatically, provided both SP and IDP are implemented by Red Hat build of Keycloak. This is done by removing all declarations of signature validation keys in Keys sub element. If the Keys sub element would then remain empty, it can be omitted completely. The keys are then automatically obtained by SP from SAML descriptor, location of which is derived from SAML endpoint URL specified in the IDP SingleSignOnService sub element . Settings of the HTTP client that is used for SAML descriptor retrieval usually needs no additional configuration, however it can be configured in the IDP HttpClient sub element . It is also possible to specify multiple keys for signature verification. This is done by declaring multiple Key elements within Keys sub element that have signing attribute set to true . This is useful for example in situation when the IDP signing keys are rotated: There is usually a transition period when new SAML protocol messages and assertions are signed with the new key but those signed by key should still be accepted. It is not possible to configure Red Hat build of Keycloak to both obtain the keys for signature verification automatically and define additional static signature verification keys. <IDP entityID="idp"> ... <Keys> <Key signing="true"> <KeyStore resource="/WEB-INF/keystore.jks" password="store123"> <Certificate alias="demo"/> </KeyStore> </Key> </Keys> </IDP> 7.11. IDP HttpClient sub element The HttpClient optional sub element defines the properties of HTTP client used for automatic obtaining of certificates containing public keys for IDP signature verification via SAML descriptor of the IDP when enabled . 
<HttpClient connectionPoolSize="10" disableTrustManager="false" allowAnyHostname="false" clientKeystore="classpath:keystore.jks" clientKeystorePassword="pwd" truststore="classpath:truststore.jks" truststorePassword="pwd" proxyUrl="http://proxy/" socketTimeout="5000" connectionTimeout="6000" connectionTtl="500" /> connectionPoolSize This config option defines how many connections to the Red Hat build of Keycloak server should be pooled. This is OPTIONAL . The default value is 10 . disableTrustManager If the Red Hat build of Keycloak server requires HTTPS and this config option is set to true you do not have to specify a truststore. This setting should only be used during development and never in production as it will disable verification of SSL certificates. This is OPTIONAL . The default value is false . allowAnyHostname If the Red Hat build of Keycloak server requires HTTPS and this config option is set to true the Red Hat build of Keycloak server's certificate is validated via the truststore, but host name validation is not done. This setting should only be used during development and never in production as it will partly disable verification of SSL certificates. This setting may be useful in test environments. This is OPTIONAL . The default value is false . truststore The value is the file path to a truststore file. If you prefix the path with classpath: , then the truststore will be obtained from the deployment's classpath instead. Used for outgoing HTTPS communications to the Red Hat build of Keycloak server. Client making HTTPS requests need a way to verify the host of the server they are talking to. This is what the truststore does. The keystore contains one or more trusted host certificates or certificate authorities. You can create this truststore by extracting the public certificate of the Red Hat build of Keycloak server's SSL keystore. This is REQUIRED unless disableTrustManager is true . truststorePassword Password for the truststore. This is REQUIRED if truststore is set and the truststore requires a password. clientKeystore This is the file path to a keystore file. This keystore contains client certificate for two-way SSL when the adapter makes HTTPS requests to the Red Hat build of Keycloak server. This is OPTIONAL . clientKeystorePassword Password for the client keystore and for the client's key. This is REQUIRED if clientKeystore is set. proxyUrl URL to HTTP proxy to use for HTTP connections. This is OPTIONAL . socketTimeout Timeout for socket waiting for data after establishing the connection in milliseconds. Maximum time of inactivity between two data packets. A timeout value of zero is interpreted as an infinite timeout. A negative value is interpreted as undefined (system default if applicable). The default value is -1 . This is OPTIONAL . connectionTimeout Timeout for establishing the connection with the remote host in milliseconds. A timeout value of zero is interpreted as an infinite timeout. A negative value is interpreted as undefined (system default if applicable). The default value is -1 . This is OPTIONAL . connectionTtl Connection time-to-live for client in milliseconds. A value less than or equal to zero is interpreted as an infinite value. The default value is -1 . This is OPTIONAL . | [
"<SP entityID=\"sp\" sslPolicy=\"ssl\" nameIDPolicyFormat=\"format\" forceAuthentication=\"true\" isPassive=\"false\" keepDOMAssertion=\"true\" autodetectBearerOnly=\"false\"> </SP>",
"<Keys> <Key signing=\"true\" > </Key> </Keys>",
"<Keys> <Key signing=\"true\" > <KeyStore resource=\"/WEB-INF/keystore.jks\" password=\"store123\"> <PrivateKey alias=\"myPrivate\" password=\"test123\"/> <Certificate alias=\"myCertAlias\"/> </KeyStore> </Key> </Keys>",
"<Keys> <Key signing=\"true\"> <PrivateKeyPem> 2341251234AB31234==231BB998311222423522334 </PrivateKeyPem> <CertificatePem> 211111341251234AB31234==231BB998311222423522334 </CertificatePem> </Key> </Keys>",
"<SP ...> <PrincipalNameMapping policy=\"FROM_NAME_ID\"/> </SP> <SP ...> <PrincipalNameMapping policy=\"FROM_ATTRIBUTE\" attribute=\"email\" /> </SP>",
"<RoleIdentifiers> <Attribute name=\"Role\"/> <Attribute name=\"member\"/> <Attribute name=\"memberOf\"/> </RoleIdentifiers>",
"<RoleIdentifiers> </RoleIdentifiers> <RoleMappingsProvider id=\"properties-based-role-mapper\"> <Property name=\"properties.resource.location\" value=\"/WEB-INF/role-mappings.properties\"/> </RoleMappingsProvider> <IDP> </IDP>",
"<RoleMappingsProvider id=\"properties-based-role-mapper\"> <Property name=\"properties.file.location\" value=\"/opt/mappers/roles.properties\"/> </RoleMappingsProvider>",
"<RoleMappingsProvider id=\"properties-based-role-mapper\"> <Property name=\"properties.resource.location\" value=\"/WEB-INF/conf/roles.properties\"/> </RoleMappingsProvider>",
"roleA=roleX,roleY roleB= kc_user=roleZ",
"role\\u0020A=roleX,roleY",
"<IDP entityID=\"idp\" signaturesRequired=\"true\" signatureAlgorithm=\"RSA_SHA1\" signatureCanonicalizationMethod=\"http://www.w3.org/2001/10/xml-exc-c14n#\"> </IDP>",
"<AllowedClockSkew unit=\"MILLISECONDS\">3500</AllowedClockSkew>",
"<SingleSignOnService signRequest=\"true\" validateResponseSignature=\"true\" requestBinding=\"post\" bindingUrl=\"url\"/>",
"<SingleLogoutService validateRequestSignature=\"true\" validateResponseSignature=\"true\" signRequest=\"true\" signResponse=\"true\" requestBinding=\"redirect\" responseBinding=\"post\" postBindingUrl=\"posturl\" redirectBindingUrl=\"redirecturl\">",
"<IDP entityID=\"idp\"> <Keys> <Key signing=\"true\"> <KeyStore resource=\"/WEB-INF/keystore.jks\" password=\"store123\"> <Certificate alias=\"demo\"/> </KeyStore> </Key> </Keys> </IDP>",
"<HttpClient connectionPoolSize=\"10\" disableTrustManager=\"false\" allowAnyHostname=\"false\" clientKeystore=\"classpath:keystore.jks\" clientKeystorePassword=\"pwd\" truststore=\"classpath:truststore.jks\" truststorePassword=\"pwd\" proxyUrl=\"http://proxy/\" socketTimeout=\"5000\" connectionTimeout=\"6000\" connectionTtl=\"500\" />"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/securing_applications_and_services_guide/saml-galleon-layers-detailed-config- |
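The preceding sections describe the SP and IDP settings individually. For orientation, the sketch below shows how those pieces might fit together in a single keycloak-saml.xml descriptor. It is an illustrative assembly only, not a configuration shipped with the product: the entity IDs, URLs, keystore names, passwords, and aliases are placeholder assumptions that you would replace with your own values.

<keycloak-saml-adapter>
    <SP entityID="sp" sslPolicy="EXTERNAL">
        <Keys>
            <Key signing="true">
                <KeyStore resource="/WEB-INF/keystore.jks" password="store123">
                    <PrivateKey alias="myPrivate" password="test123"/>
                    <Certificate alias="myCertAlias"/>
                </KeyStore>
            </Key>
        </Keys>
        <PrincipalNameMapping policy="FROM_NAME_ID"/>
        <RoleIdentifiers>
            <Attribute name="Role"/>
        </RoleIdentifiers>
        <IDP entityID="idp" signaturesRequired="true">
            <SingleSignOnService requestBinding="POST"
                bindingUrl="https://idp.example.com/realms/demo/protocol/saml"/>
            <SingleLogoutService requestBinding="POST" responseBinding="POST"
                postBindingUrl="https://idp.example.com/realms/demo/protocol/saml"
                redirectBindingUrl="https://idp.example.com/realms/demo/protocol/saml"/>
        </IDP>
    </SP>
</keycloak-saml-adapter>

Because signaturesRequired is set to true in this sketch, the adapter signs the documents it sends with the key declared under Keys and expects documents from the IDP to be signed. When both SP and IDP are Red Hat build of Keycloak, the IDP verification keys can be obtained automatically from the SAML descriptor, as described in the IDP Keys sub element section.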
Chapter 91. Additional resources | Chapter 91. Additional resources Designing your decision management architecture for Red Hat Process Automation Manager Getting started with decision services Designing a decision service using DRL rules Packaging and deploying an Red Hat Process Automation Manager project | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/additional_resources_3 |
Chapter 1. Release notes | Chapter 1. Release notes Release notes contain information about new and deprecated features, breaking changes, and known issues. The following release notes apply for the most recent Builds releases on OpenShift Container Platform. Builds is an extensible build framework based on the Shipwright project , which you can use to build container images on an OpenShift Container Platform cluster. You can build container images from source code and Dockerfiles by using image build tools, such as Source-to-Image (S2I) and Buildah . You can create and apply build resources, view logs of build runs, and manage builds in your OpenShift Container Platform namespaces. Builds includes the following capabilities: Standard Kubernetes-native API for building container images from source code and Dockerfiles Support for Source-to-Image (S2I) and Buildah build strategies Extensibility with your own custom build strategies Execution of builds from source code in a local directory Shipwright CLI for creating and viewing logs, and managing builds on the cluster Integrated user experience with the Developer perspective of the OpenShift Container Platform web console For more information about Builds, see Overview of Builds . 1.1. Compatibility and support matrix In the table, components are marked with the following statuses: TP Technology Preview GA General Availability The Technology Preview features are experimental features and are not intended for production use. Table 1.1. Compatibility and support matrix Builds Version Component Version Compatible Openshift Pipelines Version OpenShift Version Support Operator Builds (Shipwright) CLI 1.1 0.13.0 (GA) 0.13.0 (GA) 1.13, 1.14, and 1.15 4.12, 4.13, 4.14, 4.15, and 4.16 GA 1.0 0.12.0 (GA) 0.12.0 (GA) 1.12, 1.13, 1.14, and 1.15 4.12, 4.13, 4.14, 4.15, and 4.16 GA 1.2. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . 1.3. Release notes for Builds General Availability 1.1.1 With this update, Builds 1.1.1 is now Generally Available (GA) on OpenShift Container Platform 4.12, 4.13, 4.14, 4.15, and 4.16. 1.3.1. Fixed issues The following section highlights fixed issues in Builds 1.1.1. With this update, the Shared Resource Container Storage Interface (CSI) Driver has the following permissions: Create subject access reviews Get, list, and watch Shared Resource objects 1.4. Release notes for Builds General Availability 1.1 With this update, Builds 1.1 is now Generally Available (GA) on OpenShift Container Platform 4.12, 4.13, 4.14, 4.15, and 4.16. 1.4.1. New features The following sections highlight what is new in Builds 1.1. 1.4.1.1. Builds The builds controllers now use Tekton's V1 API to create and access the TaskRun that backs a BuildRun . With this release, you can now define a build without any source. This is useful if you want to run the build using only the local source. The output image section now supports an optional timestamp field to change the image creation timestamp. Use the SourceTimestamp string to set it to match the source timestamp. With this release, the .source.type field is now required in both build and buildRun . 
With this release, the Operator now installs Shipwright by default and introduces several user experience enhancements for Builds. 1.4.1.2. Shared Resource CSI Driver The Shared Resource CSI Driver is now generally available. With this release, the CSI Shared Driver enables sharing of ConfigMaps and secrets across different namespaces in Kubernetes clusters. The driver has permission to read all Kubernetes secrets. This feature improves resource efficiency by reducing duplication and simplifies configuration management in multi-tenant environments. Administrators can define access policies to control which namespaces can read or modify these shared resources. 1.4.2. Known issues The following section highlights known issues in Builds 1.1. 1.4.2.1. Builds With this release, direct upgrades from Builds 1.0 to Builds 1.1 are not supported. To upgrade to Builds 1.1, you must delete the ShipwrightBuild object, uninstall the Builds 1.0.z operator, and then install the Builds 1.1.0 operator from OperatorHub. | null | https://docs.redhat.com/en/documentation/builds_for_red_hat_openshift/1.1/html/about_builds/ob-release-notes |
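For context, the following is a minimal sketch of a Build resource that reflects these notes, written against the Shipwright v1beta1 API. It is not an official sample: the repository URL, image reference, namespace, and resource name are placeholder assumptions.

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: sample-go-build
spec:
  source:
    type: Git                    # .source.type is now required
    git:
      url: https://github.com/shipwright-io/sample-go
    contextDir: docker-build
  strategy:
    name: buildah
    kind: ClusterBuildStrategy
  output:
    image: image-registry.openshift-image-registry.svc:5000/builds-demo/sample-go:latest
    timestamp: SourceTimestamp   # optional; sets the image creation time to match the source timestamp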
Chapter 16. DRL (Drools Rule Language) rules | Chapter 16. DRL (Drools Rule Language) rules DRL (Drools Rule Language) rules are business rules that you define directly in .drl text files. These DRL files are the source in which all other rule assets in Business Central are ultimately rendered. You can create and manage DRL files within the Business Central interface, or create them externally as part of a Maven or Java project using Red Hat CodeReady Studio or another integrated development environment (IDE). A DRL file can contain one or more rules that define at a minimum the rule conditions ( when ) and actions ( then ). The DRL designer in Business Central provides syntax highlighting for Java, DRL, and XML. DRL files consist of the following components: Components in a DRL file The following example DRL rule determines the age limit in a loan application decision service: Example rule for loan application age limit A DRL file can contain single or multiple rules, queries, and functions, and can define resource declarations such as imports, globals, and attributes that are assigned and used by your rules and queries. The DRL package must be listed at the top of a DRL file and the rules are typically listed last. All other DRL components can follow any order. Each rule must have a unique name within the rule package. If you use the same rule name more than once in any DRL file in the package, the rules fail to compile. Always enclose rule names with double quotation marks ( rule "rule name" ) to prevent possible compilation errors, especially if you use spaces in rule names. All data objects related to a DRL rule must be in the same project package as the DRL file in Business Central. Assets in the same package are imported by default. Existing assets in other packages can be imported with the DRL rule. 16.1. Packages in DRL A package is a folder of related assets in Red Hat Process Automation Manager, such as data objects, DRL files, decision tables, and other asset types. A package also serves as a unique namespace for each group of rules. A single rule base can contain multiple packages. You typically store all the rules for a package in the same file as the package declaration so that the package is self-contained. However, you can import objects from other packages that you want to use in the rules. The following example is a package name and namespace for a DRL file in a mortgage application decision service: Example package definition in a DRL file 16.2. Import statements in DRL Similar to import statements in Java, imports in DRL files identify the fully qualified paths and type names for any objects that you want to use in the rules. You specify the package and data object in the format packageName.objectName , with multiple imports on separate lines. The decision engine automatically imports classes from the Java package with the same name as the DRL package and from the package java.lang . The following example is an import statement for a loan application object in a mortgage application decision service: Example import statement in a DRL file 16.3. Functions in DRL Functions in DRL files put semantic code in your rule source file instead of in Java classes. Functions are especially useful if an action ( then ) part of a rule is used repeatedly and only the parameters differ for each rule. 
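As a quick illustration of the idea before the declaration details below, a declared function and a rule consequence that calls it could look like the following sketch. The Applicant fact type, its name field, and the greeting text are assumptions made for this example, not part of any shipped model:

function String greet(String name) {
    return "Hello " + name + "!";
}

rule "Greet applicant"
  when
    $a : Applicant()
  then
    // Call the declared function from the rule consequence
    System.out.println( greet( $a.getName() ) );
end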
Above the rules in the DRL file, you can declare the function or import a static method from a helper class as a function, and then use the function by name in an action ( then ) part of the rule. The following examples illustrate a function that is either declared or imported in a DRL file: Example function declaration with a rule (option 1) Example function import with a rule (option 2) 16.4. Queries in DRL Queries in DRL files search the working memory of the decision engine for facts related to the rules in the DRL file. You add the query definitions in DRL files and then obtain the matching results in your application code. Queries search for a set of defined conditions and do not require when or then specifications. Query names are global to the KIE base and therefore must be unique among all other rule queries in the project. To return the results of a query, you construct a QueryResults definition using ksession.getQueryResults("name") , where "name" is the query name. This returns a list of query results, which enable you to retrieve the objects that matched the query. You define the query and query results parameters above the rules in the DRL file. The following example is a query definition in a DRL file for underage applicants in a mortgage application decision service, with the accompanying application code: Example query definition in a DRL file Example application code to obtain query results QueryResults results = ksession.getQueryResults( "people under the age of 21" ); System.out.println( "we have " + results.size() + " people under the age of 21" ); You can also iterate over the returned QueryResults using a standard for loop. Each element is a QueryResultsRow that you can use to access each of the columns in the tuple. Example application code to obtain and iterate over query results QueryResults results = ksession.getQueryResults( "people under the age of 21" ); System.out.println( "we have " + results.size() + " people under the age of 21" ); System.out.println( "These people are under the age of 21:" ); for ( QueryResultsRow row : results ) { Person person = ( Person ) row.get( "person" ); System.out.println( person.getName() + "\n" ); } 16.5. Type declarations and metadata in DRL Declarations in DRL files define new fact types or metadata for fact types to be used by rules in the DRL file: New fact types: The default fact type in the java.lang package of Red Hat Process Automation Manager is Object , but you can declare other types in DRL files as needed. Declaring fact types in DRL files enables you to define a new fact model directly in the decision engine, without creating models in a lower-level language like Java. You can also declare a new type when a domain model is already built and you want to complement this model with additional entities that are used mainly during the reasoning process. Metadata for fact types: You can associate metadata in the format @key(value) with new or existing facts. Metadata can be any kind of data that is not represented by the fact attributes and is consistent among all instances of that fact type. The metadata can be queried at run time by the decision engine and used in the reasoning process. 16.5.1. Type declarations without metadata in DRL A declaration of a new fact does not require any metadata, but must include a list of attributes or fields. If a type declaration does not include identifying attributes, the decision engine searches for an existing fact class in the classpath and raises an error if the class is missing. 
The following example is a declaration of a new fact type Person with no metadata in a DRL file: Example declaration of a new fact type with a rule In this example, the new fact type Person has the three attributes name , dateOfBirth , and address . Each attribute has a type that can be any valid Java type, including another class that you create or a fact type that you previously declared. The dateOfBirth attribute has the type java.util.Date , from the Java API, and the address attribute has the previously defined fact type Address . To avoid writing the fully qualified name of a class every time you declare it, you can define the full class name as part of the import clause: Example type declaration with the fully qualified class name in the import When you declare a new fact type, the decision engine generates at compile time a Java class representing the fact type. The generated Java class is a one-to-one JavaBeans mapping of the type definition. For example, the following Java class is generated from the example Person type declaration: Generated Java class for the Person fact type declaration public class Person implements Serializable { private String name; private java.util.Date dateOfBirth; private Address address; // Empty constructor public Person() {...} // Constructor with all fields public Person( String name, Date dateOfBirth, Address address ) {...} // If keys are defined, constructor with keys public Person( ...keys... ) {...} // Getters and setters // `equals` and `hashCode` // `toString` } You can then use the generated class in your rules like any other fact, as illustrated in the rule example with the Person type declaration: Example rule that uses the declared Person fact type 16.5.2. Enumerative type declarations in DRL DRL supports the declaration of enumerative types in the format declare enum <factType> , followed by a comma-separated list of values ending with a semicolon. You can then use the enumerative list in the rules in the DRL file. For example, the following enumerative type declaration defines days of the week for an employee scheduling rule: Example enumerative type declaration with a scheduling rule 16.5.3. Extended type declarations in DRL DRL supports type declaration inheritance in the format declare <factType1> extends <factType2> . To extend a type declared in Java by a subtype declared in DRL, you repeat the parent type in a declaration statement without any fields. For example, the following type declarations extend a Student type from a top-level Person type, and a LongTermStudent type from the Student subtype: Example extended type declarations 16.5.4. Type declarations with metadata in DRL You can associate metadata in the format @key(value) (the value is optional) with fact types or fact attributes. Metadata can be any kind of data that is not represented by the fact attributes and is consistent among all instances of that fact type. The metadata can be queried at run time by the decision engine and used in the reasoning process. Any metadata that you declare before the attributes of a fact type are assigned to the fact type, while metadata that you declare after an attribute are assigned to that particular attribute. In the following example, the two metadata attributes @author and @dateOfCreation are declared for the Person fact type, and the two metadata items @key and @maxLength are declared for the name attribute. The @key metadata attribute has no required value, so the parentheses and the value are omitted. 
Example metadata declaration for fact types and attributes For declarations of metadata attributes for existing types, you can identify the fully qualified class name as part of the import clause for all declarations or as part of the individual declare clause: Example metadata declaration for an imported type Example metadata declaration for a declared type 16.5.5. Metadata tags for fact type and attribute declarations in DRL Although you can define custom metadata attributes in DRL declarations, the decision engine also supports the following predefined metadata tags for declarations of fact types or fact type attributes. Note The examples in this section that refer to the VoiceCall class assume that the sample application domain model includes the following class details: VoiceCall fact class in an example Telecom domain model public class VoiceCall { private String originNumber; private String destinationNumber; private Date callDateTime; private long callDuration; // in milliseconds // Constructors, getters, and setters } @role This tag determines whether a given fact type is handled as a regular fact or an event in the decision engine during complex event processing. Default parameter: fact Supported parameters: fact , event Example: Declare VoiceCall as event type @timestamp This tag is automatically assigned to every event in the decision engine. By default, the time is provided by the session clock and assigned to the event when it is inserted into the working memory of the decision engine. You can specify a custom time stamp attribute instead of the default time stamp added by the session clock. Default parameter: The time added by the decision engine session clock Supported parameters: Session clock time or custom time stamp attribute Example: Declare VoiceCall timestamp attribute @duration This tag determines the duration time for events in the decision engine. Events can be interval-based events or point-in-time events. Interval-based events have a duration time and persist in the working memory of the decision engine until their duration time has lapsed. Point-in-time events have no duration and are essentially interval-based events with a duration of zero. By default, every event in the decision engine has a duration of zero. You can specify a custom duration attribute instead of the default. Default parameter: Null (zero) Supported parameters: Custom duration attribute Example: Declare VoiceCall duration attribute @expires This tag determines the time duration before an event expires in the working memory of the decision engine. By default, an event expires when the event can no longer match and activate any of the current rules. You can define an amount of time after which an event should expire. This tag definition also overrides the implicit expiration offset calculated from temporal constraints and sliding windows in the KIE base. This tag is available only when the decision engine is running in stream mode. Default parameter: Null (event expires after event can no longer match and activate rules) Supported parameters: Custom timeOffset attribute in the format [ #d][#h][#m][#s][ [ms]] Example: Declare expiration offset for VoiceCall events @typesafe This tab determines whether a given fact type is compiled with or without type safety. By default, all type declarations are compiled with type safety enabled. You can override this behavior to type-unsafe evaluation, where all constraints are generated as MVEL constraints and executed dynamically. 
This is useful when dealing with collections that do not have any generics or mixed type collections. Default parameter: true Supported parameters: true , false Example: Declare VoiceCall for type-unsafe evaluation @serialVersionUID This tag defines an identifying serialVersionUID value for a serializable class in a fact declaration. If a serializable class does not explicitly declare a serialVersionUID , the serialization run time calculates a default serialVersionUID value for that class based on various aspects of the class, as described in the Java Object Serialization Specification . However, for optimal deserialization results and for greater compatibility with serialized KIE sessions, set the serialVersionUID as needed in the relevant class or in your DRL declarations. Default parameter: Null Supported parameters: Custom serialVersionUID integer Example: Declare serialVersionUID for a VoiceCall class @key This tag enables a fact type attribute to be used as a key identifier for the fact type. The generated class can then implement the equals() and hashCode() methods to determine if two instances of the type are equal to each other. The decision engine can also generate a constructor using all the key attributes as parameters. Default parameter: None Supported parameters: None Example: Declare Person type attributes as keys For this example, the decision engine checks the firstName and lastName attributes to determine if two instances of Person are equal to each other, but it does not check the age attribute. The decision engine also implicitly generates three constructors: one without parameters, one with the @key fields, and one with all fields: Example constructors from the key declarations You can then create instances of the type based on the key constructors, as shown in the following example: Example instance using the key constructor Person person = new Person( "John", "Doe" ); @position This tag determines the position of a declared fact type attribute or field in a positional argument, overriding the default declared order of attributes. You can use this tag to modify positional constraints in patterns while maintaining a consistent format in your type declarations and positional arguments. You can use this tag only for fields in classes on the classpath. If some fields in a single class use this tag and some do not, the attributes without this tag are positioned last, in the declared order. Inheritance of classes is supported, but not interfaces of methods. Default parameter: None Supported parameters: Any integer Example: Declare a fact type and override declared order In this example, the attributes are prioritized in positional arguments in the following order: lastName firstName age occupation In positional arguments, you do not need to specify the field name because the position maps to a known named field. For example, the argument Person( lastName == "Doe" ) is the same as Person( "Doe"; ) , where the lastName field has the highest position annotation in the DRL declaration. The semicolon ; indicates that everything before it is a positional argument. You can mix positional and named arguments on a pattern by using the semicolon to separate them. Any variables in a positional argument that have not yet been bound are bound to the field that maps to that position. The following example patterns illustrate different ways of constructing positional and named arguments. 
The patterns have two constraints and a binding, and the semicolon differentiates the positional section from the named argument section. Variables and literals and expressions using only literals are supported in positional arguments, but not variables alone. Example patterns with positional and named arguments Positional arguments can be classified as input arguments or output arguments . Input arguments contain a previously declared binding and constrain against that binding using unification. Output arguments generate the declaration and bind it to the field represented by the positional argument when the binding does not yet exist. In extended type declarations, use caution when defining @position annotations because the attribute positions are inherited in subtypes. This inheritance can result in a mixed attribute order that can be confusing in some cases. Two fields can have the same @position value and consecutive values do not need to be declared. If a position is repeated, the conflict is solved using inheritance, where position values in the parent type have precedence, and then using the declaration order from the first to last declaration. For example, the following extended type declarations result in mixed positional priorities: Example extended fact type with mixed position annotations In this example, the attributes are prioritized in positional arguments in the following order: lastName (position 0 in the parent type) school (position 0 in the subtype) firstName (position 1 in the parent type) degree (position 1 in the subtype) age (position 2 in the parent type) occupation (first field with no position annotation) graduationDate (second field with no position annotation) 16.5.6. Property-change settings and listeners for fact types By default, the decision engine does not re-evaluate all fact patterns for fact types each time a rule is triggered, but instead reacts only to modified properties that are constrained or bound inside a given pattern. For example, if a rule calls modify() as part of the rule actions but the action does not generate new data in the KIE base, the decision engine does not automatically re-evaluate all fact patterns because no data was modified. This property reactivity behavior prevents unwanted recursions in the KIE base and results in more efficient rule evaluation. This behavior also means that you do not always need to use the no-loop rule attribute to avoid infinite recursion. You can modify or disable this property reactivity behavior with the following KnowledgeBuilderConfiguration options, and then use a property-change setting in your Java class or DRL files to fine-tune property reactivity as needed: ALWAYS : (Default) All types are property reactive, but you can disable property reactivity for a specific type by using the @classReactive property-change setting. ALLOWED : No types are property reactive, but you can enable property reactivity for a specific type by using the @propertyReactive property-change setting. DISABLED : No types are property reactive. All property-change listeners are ignored. Example property reactivity setting in KnowledgeBuilderConfiguration Alternatively, you can update the drools.propertySpecific system property in the standalone.xml file of your Red Hat Process Automation Manager distribution: Example property reactivity setting in system properties <system-properties> ... <property name="drools.propertySpecific" value="ALLOWED"/> ... 
</system-properties> The decision engine supports the following property-change settings and listeners for fact classes or declared DRL fact types: @classReactive If property reactivity is set to ALWAYS in the decision engine (all types are property reactive), this tag disables the default property reactivity behavior for a specific Java class or a declared DRL fact type. You can use this tag if you want the decision engine to re-evaluate all fact patterns for the specified fact type each time the rule is triggered, instead of reacting only to modified properties that are constrained or bound inside a given pattern. Example: Disable default property reactivity in a DRL type declaration Example: Disable default property reactivity in a Java class @classReactive public static class Person { private String firstName; private String lastName; } @propertyReactive If property reactivity is set to ALLOWED in the decision engine (no types are property reactive unless specified), this tag enables property reactivity for a specific Java class or a declared DRL fact type. You can use this tag if you want the decision engine to react only to modified properties that are constrained or bound inside a given pattern for the specified fact type, instead of re-evaluating all fact patterns for the fact each time the rule is triggered. Example: Enable property reactivity in a DRL type declaration (when reactivity is disabled globally) Example: Enable property reactivity in a Java class (when reactivity is disabled globally) @propertyReactive public static class Person { private String firstName; private String lastName; } @watch This tag enables property reactivity for additional properties that you specify in-line in fact patterns in DRL rules. This tag is supported only if property reactivity is set to ALWAYS in the decision engine, or if property reactivity is set to ALLOWED and the relevant fact type uses the @propertyReactive tag. You can use this tag in DRL rules to add or exclude specific properties in fact property reactivity logic. Default parameter: None Supported parameters: Property name, * (all), ! (not), !* (no properties) Example: Enable or disable property reactivity in fact patterns The decision engine generates a compilation error if you use the @watch tag for properties in a fact type that uses the @classReactive tag (disables property reactivity) or when property reactivity is set to ALLOWED in the decision engine and the relevant fact type does not use the @propertyReactive tag. Compilation errors also arise if you duplicate properties in listener annotations, such as @watch( firstName, ! firstName ) . @propertyChangeSupport For facts that implement support for property changes as defined in the JavaBeans Specification , this tag enables the decision engine to monitor changes in the fact properties. Example: Declare property change support in JavaBeans object 16.5.7. Access to DRL declared types in application code Declared types in DRL are typically used within the DRL files while Java models are typically used when the model is shared between rules and applications. Because declared types are generated at KIE base compile time, an application cannot access them until application run time. In some cases, an application needs to access and handle facts directly from the declared types, especially when the application wraps the decision engine and provides higher-level, domain-specific user interfaces for rules management. 
To handle declared types directly from the application code, you can use the org.drools.definition.type.FactType API in Red Hat Process Automation Manager. Through this API, you can instantiate, read, and write fields in the declared fact types. The following example code modifies a Person fact type directly from an application: Example application code to handle a declared fact type through the FactType API import java.util.Date; import org.kie.api.definition.type.FactType; import org.kie.api.KieBase; import org.kie.api.runtime.KieSession; ... // Get a reference to a KIE base with the declared type: KieBase kbase = ... // Get the declared fact type: FactType personType = kbase.getFactType("org.drools.examples", "Person"); // Create instances: Object bob = personType.newInstance(); // Set attribute values: personType.set(bob, "name", "Bob" ); personType.set(bob, "dateOfBirth", new Date()); personType.set(bob, "address", new Address("King's Road","London","404")); // Insert the fact into a KIE session: KieSession ksession = ... ksession.insert(bob); ksession.fireAllRules(); // Read attributes: String name = (String) personType.get(bob, "name"); Date date = (Date) personType.get(bob, "dateOfBirth"); The API also includes other helpful methods, such as setting all the attributes at once, reading values from a Map collection, or reading all attributes at once into a Map collection. Although the API behavior is similar to Java reflection, the API does not use reflection and relies on more performant accessors that are implemented with generated bytecode. 16.6. Global variables in DRL Global variables in DRL files typically provide data or services for the rules, such as application services used in rule consequences, and return data from rules, such as logs or values added in rule consequences. You set the global value in the working memory of the decision engine through a KIE session configuration or REST operation, declare the global variable above the rules in the DRL file, and then use it in an action ( then ) part of the rule. For multiple global variables, use separate lines in the DRL file. The following example illustrates a global variable list configuration for the decision engine and the corresponding global variable definition in the DRL file: Example global list configuration for the decision engine Example global variable definition with a rule Warning Do not use global variables to establish conditions in rules unless a global variable has a constant immutable value. Global variables are not inserted into the working memory of the decision engine, so the decision engine cannot track value changes of variables. Do not use global variables to share data between rules. Rules always reason and react to the working memory state, so if you want to pass data from rule to rule, assert the data as facts into the working memory of the decision engine. A use case for a global variable might be an instance of an email service. In your integration code that is calling the decision engine, you obtain your emailService object and then set it in the working memory of the decision engine. In the DRL file, you declare that you have a global of type emailService and give it the name "email" , and then in your rule consequences, you can use actions such as email.sendSMS(number, message) . If you declare global variables with the same identifier in multiple packages, then you must set all the packages with the same type so that they all reference the same global value. 16.7. 
Rule attributes in DRL Rule attributes are additional specifications that you can add to business rules to modify rule behavior. In DRL files, you typically define rule attributes above the rule conditions and actions, with multiple attributes on separate lines, in the following format: The following table lists the names and supported values of the attributes that you can assign to rules: Table 16.1. Rule attributes Attribute Value salience An integer defining the priority of the rule. Rules with a higher salience value are given higher priority when ordered in the activation queue. Example: salience 10 enabled A Boolean value. When the option is selected, the rule is enabled. When the option is not selected, the rule is disabled. Example: enabled true date-effective A string containing a date and time definition. The rule can be activated only if the current date and time is after a date-effective attribute. Example: date-effective "4-Sep-2018" date-expires A string containing a date and time definition. The rule cannot be activated if the current date and time is after the date-expires attribute. Example: date-expires "4-Oct-2018" no-loop A Boolean value. When the option is selected, the rule cannot be reactivated (looped) if a consequence of the rule re-triggers a previously met condition. When the condition is not selected, the rule can be looped in these circumstances. Example: no-loop true agenda-group A string identifying an agenda group to which you want to assign the rule. Agenda groups allow you to partition the agenda to provide more execution control over groups of rules. Only rules in an agenda group that has acquired a focus are able to be activated. Example: agenda-group "GroupName" activation-group A string identifying an activation (or XOR) group to which you want to assign the rule. In activation groups, only one rule can be activated. The first rule to fire will cancel all pending activations of all rules in the activation group. Example: activation-group "GroupName" duration A long integer value defining the duration of time in milliseconds after which the rule can be activated, if the rule conditions are still met. Example: duration 10000 timer A string identifying either int (interval) or cron timer definitions for scheduling the rule. Example: timer ( cron:* 0/15 * * * ? ) (every 15 minutes) calendar A Quartz calendar definition for scheduling the rule. Example: calendars "* * 0-7,18-23 ? * *" (exclude non-business hours) auto-focus A Boolean value, applicable only to rules within agenda groups. When the option is selected, the time the rule is activated, a focus is automatically given to the agenda group to which the rule is assigned. Example: auto-focus true lock-on-active A Boolean value, applicable only to rules within rule flow groups or agenda groups. When the option is selected, the time the ruleflow group for the rule becomes active or the agenda group for the rule receives a focus, the rule cannot be activated again until the ruleflow group is no longer active or the agenda group loses the focus. This is a stronger version of the no-loop attribute, because the activation of a matching rule is discarded regardless of the origin of the update (not only by the rule itself). This attribute is ideal for calculation rules where you have a number of rules that modify a fact and you do not want any rule re-matching and firing again. Example: lock-on-active true ruleflow-group A string identifying a rule flow group. 
In rule flow groups, rules can fire only when the group is activated by the associated rule flow. Example: ruleflow-group "GroupName" dialect A string identifying either JAVA or MVEL as the language to be used for code expressions in the rule. By default, the rule uses the dialect specified at the package level. Any dialect specified here overrides the package dialect setting for the rule. Example: dialect "JAVA" Note When you use Red Hat Process Automation Manager without the executable model, the dialect "JAVA" rule consequences support only Java 5 syntax. For more information about executable models, see Packaging and deploying an Red Hat Process Automation Manager project . 16.7.1. Timer and calendar rule attributes in DRL Timers and calendars are DRL rule attributes that enable you to apply scheduling and timing constraints to your DRL rules. These attributes require additional configurations depending on the use case. The timer attribute in DRL rules is a string identifying either int (interval) or cron timer definitions for scheduling a rule and supports the following formats: Timer attribute formats Example interval timer attributes Example cron timer attribute Interval timers follow the semantics of java.util.Timer objects, with an initial delay and an optional repeat interval. Cron timers follow standard Unix cron expressions. The following example DRL rule uses a cron timer to send an SMS text message every 15 minutes: Example DRL rule with a cron timer Generally, a rule that is controlled by a timer becomes active when the rule is triggered and the rule consequence is executed repeatedly, according to the timer settings. The execution stops when the rule condition no longer matches incoming facts. However, the way the decision engine handles rules with timers depends on whether the decision engine is in active mode or in passive mode . By default, the decision engine runs in passive mode and evaluates rules, according to the defined timer settings, when a user or an application explicitly calls fireAllRules() . Conversely, if a user or application calls fireUntilHalt() , the decision engine starts in active mode and evaluates rules continually until the user or application explicitly calls halt() . When the decision engine is in active mode, rule consequences are executed even after control returns from a call to fireUntilHalt() and the decision engine remains reactive to any changes made to the working memory. For example, removing a fact that was involved in triggering the timer rule execution causes the repeated execution to terminate, and inserting a fact so that some rule matches causes that rule to be executed. However, the decision engine is not continually active , but is active only after a rule is executed. Therefore, the decision engine does not react to asynchronous fact insertions until the execution of a timer-controlled rule. Disposing a KIE session terminates all timer activity. When the decision engine is in passive mode, rule consequences of timed rules are evaluated only when fireAllRules() is invoked again. 
However, you can change the default timer-execution behavior in passive mode by configuring the KIE session with a TimedRuleExecutionOption option, as shown in the following example: KIE session configuration to automatically execute timed rules in passive mode KieSessionConfiguration ksconf = KieServices.Factory.get().newKieSessionConfiguration(); ksconf.setOption( TimedRuleExecutionOption.YES ); KSession ksession = kbase.newKieSession(ksconf, null); You can additionally set a FILTERED specification on the TimedRuleExecutionOption option that enables you to define a callback to filter those rules, as shown in the following example: KIE session configuration to filter which timed rules are automatically executed KieSessionConfiguration ksconf = KieServices.Factory.get().newKieSessionConfiguration(); conf.setOption( new TimedRuleExecutionOption.FILTERED(new TimedRuleExecutionFilter() { public boolean accept(Rule[] rules) { return rules[0].getName().equals("MyRule"); } }) ); For interval timers, you can also use an expression timer with expr instead of int to define both the delay and interval as an expression instead of a fixed value. The following example DRL file declares a fact type with a delay and period that are then used in the subsequent rule with an expression timer: Example rule with an expression timer The expressions, such as USDd and USDp in this example, can use any variable defined in the pattern-matching part of the rule. The variable can be any String value that can be parsed into a time duration or any numeric value that is internally converted in a long value for a duration in milliseconds. Both interval and expression timers can use the following optional parameters: start and end : A Date or a String representing a Date or a long value. The value can also be a Number that is transformed into a Java Date in the format new Date( ((Number) n).longValue() ) . repeat-limit : An integer that defines the maximum number of repetitions allowed by the timer. If both the end and the repeat-limit parameters are set, the timer stops when the first of the two is reached. Example timer attribute with optional start , end , and repeat-limit parameters timer (int: 30s 1h; start=3-JAN-2020, end=4-JAN-2020, repeat-limit=50) In this example, the rule is scheduled for every hour, after a delay of 30 seconds each hour, beginning on 3 January 2020 and ending either on 4 January 2020 or when the cycle repeats 50 times. If the system is paused (for example, the session is serialized and then later deserialized), the rule is scheduled only one time to recover from missing activations regardless of how many activations were missed during the pause, and then the rule is subsequently scheduled again to continue in sync with the timer setting. The calendar attribute in DRL rules is a Quartz calendar definition for scheduling a rule and supports the following format: Calendar attribute format Example calendar attributes You can adapt a Quartz calendar based on the Quartz calendar API and then register the calendar in the KIE session, as shown in the following example: Adapting a Quartz Calendar Calendar weekDayCal = QuartzHelper.quartzCalendarAdapter(org.quartz.Calendar quartzCal) Registering the calendar in the KIE session ksession.getCalendars().set( "weekday", weekDayCal ); You can use calendars with standard rules and with rules that use timers. The calendar attribute can contain one or more comma-separated calendar names written as String literals. 
The following example rules use both calendars and timers to schedule the rules: Example rules with calendars and timers 16.8. Rule conditions in DRL (WHEN) The when part of a DRL rule (also known as the Left Hand Side (LHS) of the rule) contains the conditions that must be met to execute an action. Conditions consist of a series of stated patterns and constraints , with optional bindings and supported rule condition elements (keywords), based on the available data objects in the package. For example, if a bank requires loan applicants to have over 21 years of age, then the when condition of an "Underage" rule would be Applicant( age < 21 ) . Note DRL uses when instead of if because if is typically part of a procedural execution flow during which a condition is checked at a specific point in time. In contrast, when indicates that the condition evaluation is not limited to a specific evaluation sequence or point in time, but instead occurs continually at any time. Whenever the condition is met, the actions are executed. If the when section is empty, then the conditions are considered to be true and the actions in the then section are executed the first time a fireAllRules() call is made in the decision engine. This is useful if you want to use rules to set up the decision engine state. The following example rule uses empty conditions to insert a fact every time the rule is executed: Example rule without conditions If rule conditions use multiple patterns with no defined keyword conjunctions (such as and , or , or not ), the default conjunction is and : Example rule without keyword conjunctions 16.8.1. Patterns and constraints A pattern in a DRL rule condition is the segment to be matched by the decision engine. A pattern can potentially match each fact that is inserted into the working memory of the decision engine. A pattern can also contain constraints to further define the facts to be matched. In the simplest form, with no constraints, a pattern matches a fact of the given type. In the following example, the type is Person , so the pattern will match against all Person objects in the working memory of the decision engine: Example pattern for a single fact type The type does not need to be the actual class of some fact object. Patterns can refer to superclasses or even interfaces, potentially matching facts from many different classes. For example, the following pattern matches all objects in the working memory of the decision engine: Example pattern for all objects The parentheses of a pattern enclose the constraints, such as the following constraint on the person's age: Example pattern with a constraint A constraint is an expression that returns true or false . Pattern constraints in DRL are essentially Java expressions with some enhancements, such as property access, and some differences, such as equals() and !equals() semantics for == and != (instead of the usual same and not same semantics). Any JavaBeans property can be accessed directly from pattern constraints. A bean property is exposed internally using a standard JavaBeans getter that takes no arguments and returns something. For example, the age property is written as age in DRL instead of the getter getAge() : DRL constraint syntax with JavaBeans properties Red Hat Process Automation Manager uses the standard JDK Introspector class to achieve this mapping, so it follows the standard JavaBeans specification. 
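As a hedged illustration of that mapping, the following minimal JavaBean (a hypothetical model class, not one shipped with the product) exposes the age and address properties that the earlier constraints read; the Address type is assumed to be another JavaBean in the same model.

public class Person implements java.io.Serializable {

    private int age;
    private Address address;  // Address is an assumed companion JavaBean

    // DRL resolves the `age` property to this getter through the JDK Introspector:
    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }

    // DRL resolves `address.houseNumber` to getAddress().getHouseNumber():
    public Address getAddress() { return address; }
    public void setAddress(Address address) { this.address = address; }
}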
For optimal decision engine performance, use the property access format, such as age , instead of using getters explicitly, such as getAge() . Warning Do not use property accessors to change the state of the object in a way that might affect the rules because the decision engine caches the results of the match between invocations for higher efficiency. For example, do not use property accessors in the following ways: public int getAge() { age++; // Do not do this. return age; } public int getAge() { Date now = DateUtil.now(); // Do not do this. return DateUtil.differenceInYears(now, birthday); } Instead of following the second example, insert a fact that wraps the current date in the working memory and update that fact between fireAllRules() as needed. However, if the getter of a property cannot be found, the compiler uses the property name as a fallback method name, without arguments: Fallback method if object is not found You can also nest access properties in patterns, as shown in the following example. Nested properties are indexed by the decision engine. Example pattern with nested property access Warning In stateful KIE sessions, use nested accessors carefully because the working memory of the decision engine is not aware of any of the nested values and does not detect when they change. Either consider the nested values immutable while any of their parent references are inserted into the working memory, or, if you want to modify a nested value, mark all of the outer facts as updated. In the example, when the houseNumber property changes, any Person with that Address must be marked as updated. You can use any Java expression that returns a boolean value as a constraint inside the parentheses of a pattern. Java expressions can be mixed with other expression enhancements, such as property access: Example pattern with a constraint using property access and Java expression You can change the evaluation priority by using parentheses, as in any logical or mathematical expression: Example evaluation order of constraints You can also reuse Java methods in constraints, as shown in the following example: Example constraints with reused Java methods Warning Do not use constraints to change the state of the object in a way that might affect the rules because the decision engine caches the results of the match between invocations for higher efficiency. Any method that is executed on a fact in the rule conditions must be a read-only method. Also, the state of a fact should not change between rule invocations unless those facts are marked as updated in the working memory on every change. For example, do not use a pattern constraint in the following ways: Standard Java operator precedence applies to constraint operators in DRL, and DRL operators follow standard Java semantics except for the == and != operators. The == operator uses null-safe equals() semantics instead of the usual same semantics. For example, the pattern Person( firstName == "John" ) is similar to java.util.Objects.equals(person.getFirstName(), "John") , and because "John" is not null, the pattern is also similar to "John".equals(person.getFirstName()) . The != operator uses null-safe !equals() semantics instead of the usual not same semantics. For example, the pattern Person( firstName != "John" ) is similar to !java.util.Objects.equals(person.getFirstName(), "John") . If the field and the value of a constraint are of different types, the decision engine uses type coercion to resolve the conflict and reduce compilation errors. 
For instance, if "ten" is provided as a string in a numeric evaluator, a compilation error occurs, whereas "10" is coerced to a numeric 10. In coercion, the field type always takes precedence over the value type: Example constraint with a value that is coerced For groups of constraints, you can use a delimiting comma , to use implicit and connective semantics: Example patterns with multiple constraints Note Although the && and , operators have the same semantics, they are resolved with different priorities. The && operator precedes the || operator, and both the && and || operators together precede the , operator. Use the comma operator at the top-level constraint for optimal decision engine performance and human readability. You cannot embed a comma operator in a composite constraint expression, such as in parentheses: Example of misused comma in composite constraint expression 16.8.2. Bound variables in patterns and constraints You can bind variables to patterns and constraints to refer to matched objects in other portions of a rule. Bound variables can help you define rules more efficiently or more consistently with how you annotate facts in your data model. To differentiate more easily between variables and fields in a rule, use the standard format USDvariable for variables, especially in complex rules. This convention is helpful but not required in DRL. For example, the following DRL rule uses the variable USDp for a pattern with the Person fact: Pattern with a bound variable Similarly, you can also bind variables to properties in pattern constraints, as shown in the following example: Note Constraint binding considers only the first atomic expression that follows it. In the following example the pattern only binds the age of the person to the variable USDa : For clearer and more efficient rule definitions, separate constraint bindings and constraint expressions. Although mixed bindings and expressions are supported, which can complicate patterns and affect evaluation efficiency. In the preceding example, if you want to bind to the variable USDa the double of the person's age, you must make it an atomic expression by wrapping it in parentheses as shown in the following example: The decision engine does not support bindings to the same declaration, but does support unification of arguments across several properties. While positional arguments are always processed with unification, the unification symbol := exists for named arguments. The following example patterns unify the age property across two Person facts: Example pattern with unification Unification declares a binding for the first occurrence and constrains to the same value of the bound field for sequence occurrences. 16.8.3. Nested constraints and inline casts In some cases, you might need to access multiple properties of a nested object, as shown in the following example: Example pattern to access multiple properties You can group these property accessors to nested objects with the syntax .( <constraints> ) for more readable rules, as shown in the following example: Example pattern with grouped constraints Note The period prefix . differentiates the nested object constraints from a method call. When you work with nested objects in patterns, you can use the syntax <type>#<subtype> to cast to a subtype and make the getters from the parent type available to the subtype. 
You can use either the object name or fully qualified class name, and you can cast to one or multiple subtypes, as shown in the following examples: Example patterns with inline casting to a subtype These example patterns cast Address to LongAddress , and additionally to DetailedCountry in the last example, making the parent getters available to the subtypes in each case. You can use the instanceof operator to infer the results of the specified type in subsequent uses of that field with the pattern, as shown in the following example: If an inline cast is not possible (for example, if instanceof returns false ), the evaluation is considered false . 16.8.4. Date literal in constraints By default, the decision engine supports the date format dd-mmm-yyyy . You can customize the date format, including a time format mask if needed, by providing an alternative format mask with the system property drools.dateformat="dd-mmm-yyyy hh:mm" . You can also customize the date format by changing the language locale with the drools.defaultlanguage and drools.defaultcountry system properties (for example, the locale of Thailand is set as drools.defaultlanguage=th and drools.defaultcountry=TH ). Example pattern with a date literal restriction 16.8.5. Supported operators in DRL pattern constraints DRL supports standard Java semantics for operators in pattern constraints, with some exceptions and with some additional operators that are unique in DRL. The following list summarizes the operators that are handled differently in DRL constraints than in standard Java semantics or that are unique in DRL constraints. .() , # Use the .() operator to group property accessors to nested objects, and use the # operator to cast to a subtype in nested objects. Casting to a subtype makes the getters from the parent type available to the subtype. You can use either the object name or fully qualified class name, and you can cast to one or multiple subtypes. Example patterns with nested objects Note The period prefix . differentiates the nested object constraints from a method call. Example patterns with inline casting to a subtype !. Use this operator to dereference a property in a null-safe way. The value to the left of the !. operator must be not null (interpreted as != null ) in order to give a positive result for pattern matching. Example constraint with null-safe dereferencing [] Use this operator to access a List value by index or a Map value by key. Example constraints with List and Map access < , <= , > , >= Use these operators on properties with natural ordering. For example, for Date fields, the < operator means before , and for String fields, the operator means alphabetically before . These properties apply only to comparable properties. Example constraints with before operator == , != Use these operators as equals() and !equals() methods in constraints, instead of the usual same and not same semantics. Example constraint with null-safe equality Example constraint with null-safe not equality && , || Use these operators to create an abbreviated combined relation condition that adds more than one restriction on a field. You can group constraints with parentheses () to create a recursive syntax pattern. Example constraints with abbreviated combined relation matches , not matches Use these operators to indicate that a field matches or does not match a specified Java regular expression. Typically, the regular expression is a String literal, but variables that resolve to a valid regular expression are also supported. 
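For instance, the regular expression can come from another matched fact, as in this hedged sketch; the ValidationConfig fact, its phonePattern field, and the Person phoneNumber field are hypothetical.

rule "Flag phone numbers that do not match the configured pattern"
when
  ValidationConfig( regex : phonePattern )   // for example "(\\d){3}-(\\d){4}"
  p : Person( phoneNumber != null, phoneNumber not matches regex )
then
  System.out.println( "Invalid phone number: " + p.getPhoneNumber() );
end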
These operators apply only to String properties. If you use matches against a null value, the resulting evaluation is always false . If you use not matches against a null value, the resulting evaluation is always true . As in Java, regular expressions that you write as String literals must use a double backslash \\ to escape. Example constraint to match or not match a regular expression contains , not contains Use these operators to verify whether a field that is an Array or a Collection contains or does not contain a specified value. These operators apply to Array or Collection properties, but you can also use these operators in place of String.contains() and !String.contains() constraints checks. Example constraints with contains and not contains for a Collection Example constraints with contains and not contains for a String literal Note For backward compatibility, the excludes operator is a supported synonym for not contains . memberOf , not memberOf Use these operators to verify whether a field is a member of or is not a member of an Array or a Collection that is defined as a variable. The Array or Collection must be a variable. Example constraints with memberOf and not memberOf with a Collection soundslike Use this operator to verify whether a word has almost the same sound, using English pronunciation, as the given value (similar to the matches operator). This operator uses the Soundex algorithm. Example constraint with soundslike str Use this operator to verify whether a field that is a String starts with or ends with a specified value. You can also use this operator to verify the length of the String . Example constraints with str in , notin Use these operators to specify more than one possible value to match in a constraint (compound value restriction). This functionality of compound value restriction is supported only in the in and not in operators. The second operand of these operators must be a comma-separated list of values enclosed in parentheses. You can provide values as variables, literals, return values, or qualified identifiers. These operators are internally rewritten as a list of multiple restrictions using the operators == or != . Example constraints with in and notin 16.8.6. Operator precedence in DRL pattern constraints DRL supports standard Java operator precedence for applicable constraint operators, with some exceptions and with some additional operators that are unique in DRL. The following table lists DRL operator precedence where applicable, from highest to lowest precedence: Table 16.2. Operator precedence in DRL pattern constraints Operator type Operators Notes Nested or null-safe property access . , .() , !. Not standard Java semantics List or Map access [] Not standard Java semantics Constraint binding : Not standard Java semantics Multiplicative * , /% Additive + , - Shift >> , >>> , << Relational < , <= , > , >= , instanceof Equality == != Uses equals() and !equals() semantics, not standard Java same and not same semantics Non-short-circuiting AND & Non-short-circuiting exclusive OR ^ Non-short-circuiting inclusive OR | Logical AND && Logical OR || Ternary ? : Comma-separated AND , Not standard Java semantics 16.8.7. Supported rule condition elements in DRL (keywords) DRL supports the following rule condition elements (keywords) that you can use with the patterns that you define in DRL rule conditions: and Use this to group conditional components into a logical conjunction. Infix and prefix and are supported. 
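A small sketch of the two forms, using hypothetical Color and Person facts and field names; both condition fragments are equivalent.

// Infix form:
Color( colorName : name ) and Person( favoriteColor == colorName )

// Prefix form, equivalent to the infix form:
(and Color( colorName : name )
     Person( favoriteColor == colorName ) )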
You can group patterns explicitly with parentheses () . By default, all listed patterns are combined with and when no conjunction is specified. Example patterns with and Note Do not use a leading declaration binding with the and keyword (as you can with or , for example). A declaration can only reference a single fact at a time, and if you use a declaration binding with and , then when and is satisfied, it matches both facts and results in an error. Example misuse of and or Use this to group conditional components into a logical disjunction. Infix and prefix or are supported. You can group patterns explicitly with parentheses () . You can also use pattern binding with or , but each pattern must be bound separately. Example patterns with or Example patterns with or and pattern binding The decision engine does not directly interpret the or element but uses logical transformations to rewrite a rule with or as a number of sub-rules. This process ultimately results in a rule that has a single or as the root node and one sub-rule for each of its condition elements. Each sub-rule is activated and executed like any normal rule, with no special behavior or interaction between the sub-rules. Therefore, consider the or condition element a shortcut for generating two or more similar rules that, in turn, can create multiple activations when two or more terms of the disjunction are true. exists Use this to specify facts and constraints that must exist. This option is triggered on only the first match, not subsequent matches. If you use this element with multiple patterns, enclose the patterns with parentheses () . Example patterns with exists not Use this to specify facts and constraints that must not exist. If you use this element with multiple patterns, enclose the patterns with parentheses () . Example patterns with not forall Use this to verify whether all facts that match the first pattern match all the remaining patterns. When a forall construct is satisfied, the rule evaluates to true . This element is a scope delimiter, so it can use any previously bound variable, but no variable bound inside of it is available for use outside of it. Example rule with forall In this example, the rule selects all Employee objects whose type is "fulltime" . For each fact that matches this pattern, the rule evaluates the patterns that follow (badge color) and if they match, the rule evaluates to true . To state that all facts of a given type in the working memory of the decision engine must match a set of constraints, you can use forall with a single pattern for simplicity. Example rule with forall and a single pattern You can use forall constructs with multiple patterns or nest them with other condition elements, such as inside a not element construct. Example rule with forall and multiple patterns Example rule with forall and not Note The format forall( p1 p2 p3 ... ) is equivalent to not( p1 and not( and p2 p3 ... ) ) . from Use this to specify a data source for a pattern. This enables the decision engine to reason over data that is not in the working memory. The data source can be a sub-field on a bound variable or the result of a method call. The expression used to define the object source is any expression that follows regular MVEL syntax. Therefore, the from element enables you to easily use object property navigation, execute method calls, and access maps and collection elements. 
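As a minimal hedged sketch before the product examples that follow, this rule reasons over an Address that is reached through a bound Order fact rather than inserted into working memory; the Order and Address types and their fields are illustrative assumptions.

rule "Review orders shipped to a restricted postal code"
when
  o : Order( customer != null )
  Address( zipCode == "23920W" ) from o.customer.address
then
  System.out.println( "Order " + o.getId() + " requires manual review" );
end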
Example rule with from and pattern binding Example rule with from and a graph notation Example rule with from to iterate over all objects Note For large collections of objects, instead of adding an object with a large graph that the decision engine must iterate over frequently, add the collection directly to the KIE session and then join the collection in the condition, as shown in the following example: Example rule with from and lock-on-active rule attribute Important Using from with lock-on-active rule attribute can result in rules not being executed. You can address this issue in one of the following ways: Avoid using the from element when you can insert all facts into the working memory of the decision engine or use nested object references in your constraint expressions. Place the variable used in the modify() block as the last sentence in your rule condition. Avoid using the lock-on-active rule attribute when you can explicitly manage how rules within the same ruleflow group place activations on one another. The pattern that contains a from clause cannot be followed by another pattern starting with a parenthesis. The reason for this restriction is that the DRL parser reads the from expression as "from USDl (String() or Number())" and it cannot differentiate this expression from a function call. The simplest workaround to this is to wrap the from clause in parentheses, as shown in the following example: Example rules with from used incorrectly and correctly entry-point Use this to define an entry point, or event stream , corresponding to a data source for the pattern. This element is typically used with the from condition element. You can declare an entry point for events so that the decision engine uses data from only that entry point to evaluate the rules. You can declare an entry point either implicitly by referencing it in DRL rules or explicitly in your Java application. Example rule with from entry-point Example Java application code with EntryPoint object and inserted facts import org.kie.api.runtime.KieSession; import org.kie.api.runtime.rule.EntryPoint; // Create your KIE base and KIE session as usual: KieSession session = ... // Create a reference to the entry point: EntryPoint atmStream = session.getEntryPoint("ATM Stream"); // Start inserting your facts into the entry point: atmStream.insert(aWithdrawRequest); collect Use this to define a collection of objects that the rule can use as part of the condition. The rule obtains the collection either from a specified source or from the working memory of the decision engine. The result pattern of the collect element can be any concrete class that implements the java.util.Collection interface and provides a default no-arg public constructor. You can use Java collections like List , LinkedList , and HashSet , or your own class. If variables are bound before the collect element in a condition, you can use the variables to constrain both your source and result patterns. However, any binding made inside the collect element is not available for use outside of it. Example rule with collect In this example, the rule assesses all pending alarms in the working memory of the decision engine for each given system and groups them in a List . If three or more alarms are found for a given system, the rule is executed. 
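A hedged sketch in the spirit of the rule just described; the MonitoredSystem, Alarm, and PriorityEscalation classes, their fields, and the 'pending' status value are assumptions about the domain model rather than the product example itself.

// Assumes `import java.util.List;` plus the hypothetical MonitoredSystem, Alarm, and PriorityEscalation classes
rule "Raise priority when a system has three or more pending alarms"
when
  sys : MonitoredSystem()
  alarms : List( size >= 3 )
      from collect( Alarm( system == sys, status == 'pending' ) )
then
  // Downstream rules can react to this hypothetical escalation fact
  insert( new PriorityEscalation( sys, alarms ) );
end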
You can also use the collect element with nested from elements, as shown in the following example: Example rule with collect and nested from accumulate Use this to iterate over a collection of objects, execute custom actions for each of the elements, and return one or more result objects (if the constraints evaluate to true ). This element is a more flexible and powerful form of the collect condition element. You can use predefined functions in your accumulate conditions or implement custom functions as needed. You can also use the abbreviation acc for accumulate in rule conditions. Use the following format to define accumulate conditions in rules: Preferred format for accumulate Note Although the decision engine supports alternate formats for the accumulate element for backward compatibility, this format is preferred for optimal performance in rules and applications. The decision engine supports the following predefined accumulate functions. These functions accept any expression as input. average min max count sum collectList collectSet In the following example rule, min , max , and average are accumulate functions that calculate the minimum, maximum, and average temperature values over all the readings for each sensor: Example rule with accumulate to calculate temperature values The following example rule uses the average function with accumulate to calculate the average profit for all items in an order: Example rule with accumulate to calculate average profit To use custom, domain-specific functions in accumulate conditions, create a Java class that implements the org.kie.api.runtime.rule.AccumulateFunction interface. For example, the following Java class defines a custom implementation of an AverageData function: Example Java class with custom implementation of average function // An implementation of an accumulator capable of calculating average values public class AverageAccumulateFunction implements org.kie.api.runtime.rule.AccumulateFunction<AverageAccumulateFunction.AverageData> { public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException { } public void writeExternal(ObjectOutput out) throws IOException { } public static class AverageData implements Externalizable { public int count = 0; public double total = 0; public AverageData() {} public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException { count = in.readInt(); total = in.readDouble(); } public void writeExternal(ObjectOutput out) throws IOException { out.writeInt(count); out.writeDouble(total); } } /* (non-Javadoc) * @see org.kie.api.runtime.rule.AccumulateFunction#createContext() */ public AverageData createContext() { return new AverageData(); } /* (non-Javadoc) * @see org.kie.api.runtime.rule.AccumulateFunction#init(java.io.Serializable) */ public void init(AverageData context) { context.count = 0; context.total = 0; } /* (non-Javadoc) * @see org.kie.api.runtime.rule.AccumulateFunction#accumulate(java.io.Serializable, java.lang.Object) */ public void accumulate(AverageData context, Object value) { context.count++; context.total += ((Number) value).doubleValue(); } /* (non-Javadoc) * @see org.kie.api.runtime.rule.AccumulateFunction#reverse(java.io.Serializable, java.lang.Object) */ public void reverse(AverageData context, Object value) { context.count--; context.total -= ((Number) value).doubleValue(); } /* (non-Javadoc) * @see org.kie.api.runtime.rule.AccumulateFunction#getResult(java.io.Serializable) */ public Object getResult(AverageData context) { return new Double( 
context.count == 0 ? 0 : context.total / context.count ); } /* (non-Javadoc) * @see org.kie.api.runtime.rule.AccumulateFunction#supportsReverse() */ public boolean supportsReverse() { return true; } /* (non-Javadoc) * @see org.kie.api.runtime.rule.AccumulateFunction#getResultType() */ public Class< ? > getResultType() { return Number.class; } } To use the custom function in a DRL rule, import the function using the import accumulate statement: Format to import a custom function Example rule with the imported average function 16.8.8. OOPath syntax with graphs of objects in DRL rule conditions OOPath is an object-oriented syntax extension of XPath that is designed for browsing graphs of objects in DRL rule condition constraints. OOPath uses the compact notation from XPath for navigating through related elements while handling collections and filtering constraints, and is specifically useful for graphs of objects. When the field of a fact is a collection, you can use the from condition element (keyword) to bind and reason over all the items in that collection one by one. If you need to browse a graph of objects in the rule condition constraints, the extensive use of the from condition element results in a verbose and repetitive syntax, as shown in the following example: Example rule that browses a graph of objects with from In this example, the domain model contains a Student object with a Plan of study. The Plan can have zero or more Exam instances and an Exam can have zero or more Grade instances. Only the root object of the graph, the Student in this case, needs to be in the working memory of the decision engine for this rule setup to function. As a more efficient alternative to using extensive from statements, you can use the abbreviated OOPath syntax, as shown in the following example: Example rule that browses a graph of objects with OOPath syntax Formally, the core grammar of an OOPath expression is defined in extended Backus-Naur form (EBNF) notation in the following way: EBNF notation for OOPath expressions In practice, an OOPath expression has the following features and capabilities: Starts with a forward slash / or with a question mark and forward slash ?/ if it is a non-reactive OOPath expression (described later in this section). Can dereference a single property of an object with the period . operator. Can dereference multiple properties of an object with the forward slash / operator. If a collection is returned, the expression iterates over the values in the collection. Can filter out traversed objects that do not satisfy one or more constraints. The constraints are written as predicate expressions between square brackets, as shown in the following example: Constraints as a predicate expression Can downcast a traversed object to a subclass of the class declared in the generic collection. Subsequent constraints can also safely access the properties declared only in that subclass, as shown in the following example. Objects that are not instances of the class specified in this inline cast are automatically filtered out. Constraints with downcast objects Can backreference an object of the graph that was traversed before the currently iterated graph. 
For example, the following OOPath expression matches only the grades that are above the average for the passed exam: Constraints with backreferenced object Can recursively be another OOPath expression, as shown in the following example: Recursive constraint expression Can access objects by their index between square brackets [] , as shown in the following example. To adhere to Java convention, OOPath indexes are 0-based, while XPath indexes are 1-based. Constraints with access to objects by index OOPath expressions can be reactive or non-reactive. The decision engine does not react to updates involving a deeply nested object that is traversed during the evaluation of an OOPath expression. To make these objects reactive to changes, modify the objects to extend the class org.drools.core.phreak.ReactiveObject . After you modify an object to extend the ReactiveObject class, the domain object invokes the inherited method notifyModification to notify the decision engine when one of the fields has been updated, as shown in the following example: Example object method to notify the decision engine that an exam has been moved to a different course public void setCourse(String course) { this.course = course; notifyModification(this); } With the following corresponding OOPath expression, when an exam is moved to a different course, the rule is re-executed and the list of grades matching the rule is recomputed: Example OOPath expression from "Big Data" rule You can also use the ?/ separator instead of the / separator to disable reactivity in only one sub-portion of an OOPath expression, as shown in the following example: Example OOPath expression that is partially non-reactive With this example, the decision engine reacts to a change made to an exam or if an exam is added to the plan, but not if a new grade is added to an existing exam. If an OOPath portion is non-reactive, all remaining portions of the OOPath expression also become non-reactive. For example, the following OOPath expression is completely non-reactive: Example OOPath expression that is completely non-reactive For this reason, you cannot use the ?/ separator more than once in the same OOPath expression. For example, the following expression causes a compilation error: Example OOPath expression with duplicate non-reactivity markers Another alternative for enabling OOPath expression reactivity is to use the dedicated implementations for List and Set interfaces in Red Hat Process Automation Manager. These implementations are the ReactiveList and ReactiveSet classes. A ReactiveCollection class is also available. The implementations also provide reactive support for performing mutable operations through the Iterator and ListIterator classes. The following example class uses these classes to configure OOPath expression reactivity: Example Java class to configure OOPath expression reactivity public class School extends AbstractReactiveObject { private String name; private final List<Child> children = new ReactiveList<Child>(); 1 public void setName(String name) { this.name = name; notifyModification(); 2 } public void addChild(Child child) { children.add(child); 3 // No need to call `notifyModification()` here } } 1 Uses the ReactiveList instance for reactive support over the standard Java List instance. 2 Uses the required notifyModification() method for when a field is changed in reactive support. 3 The children field is a ReactiveList instance, so the notifyModification() method call is not required. 
The notification is handled automatically, like all other mutating operations performed over the children field. 16.9. Rule actions in DRL (THEN) The then part of the rule (also known as the Right Hand Side (RHS) of the rule) contains the actions to be performed when the conditional part of the rule has been met. Actions consist of one or more methods that execute consequences based on the rule conditions and on available data objects in the package. For example, if a bank requires loan applicants to be over 21 years of age (with a rule condition Applicant( age < 21 ) ) and a loan applicant is under 21 years old, the then action of an "Underage" rule would be setApproved( false ) , declining the loan because the applicant is under age. The main purpose of rule actions is to insert, delete, or modify data in the working memory of the decision engine. Effective rule actions are small, declarative, and readable. If you need to use imperative or conditional code in rule actions, then divide the rule into multiple smaller and more declarative rules. Example rule for loan application age limit 16.9.1. Supported rule action methods in DRL DRL supports the following rule action methods that you can use in DRL rule actions. You can use these methods to modify the working memory of the decision engine without having to first reference a working memory instance. These methods act as shortcuts to the methods provided by the RuleContext class in your Red Hat Process Automation Manager distribution. For all rule action methods, download the Red Hat Process Automation Manager 7.13.5 Source Distribution ZIP file from the Red Hat Customer Portal and navigate to ~/rhpam-7.13.5-sources/src/kie-api-parent-USDVERSION/kie-api/src/main/java/org/kie/api/runtime/rule/RuleContext.java . set Use this to set the value of a field. Example rule action to set the values of a loan application approval modify Use this to specify fields to be modified for a fact and to notify the decision engine of the change. This method provides a structured approach to fact updates. It combines the update operation with setter calls to change object fields. Example rule action to modify a loan application amount and approval update Use this to specify fields and the entire related fact to be updated and to notify the decision engine of the change. After a fact has changed, you must call update before changing another fact that might be affected by the updated values. To avoid this added step, use the modify method instead. Example rule action to update a loan application amount and approval Note If you provide property-change listeners, you do not need to call this method when an object changes. For more information about property-change listeners, see Decision engine in Red Hat Process Automation Manager . insert Use this to insert a new fact into the working memory of the decision engine and to define resulting fields and values as needed for the fact. Example rule action to insert a new loan applicant object insertLogical Use this to insert a new fact logically into the decision engine. The decision engine is responsible for logical decisions on insertions and retractions of facts. After regular or stated insertions, facts must be retracted explicitly. After logical insertions, the facts that were inserted are automatically retracted when the conditions in the rules that inserted the facts are no longer true. 
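A minimal hedged sketch of that behavior, reusing the loan domain from earlier examples; the Eligibility fact is a hypothetical addition to the model.

rule "Applicant remains eligible while the application is approved"
when
  a : LoanApplication( approved == true )
then
  // Logically inserted: the Eligibility fact is retracted automatically
  // as soon as this application stops matching (for example, approved becomes false).
  insertLogical( new Eligibility( a ) );
end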
Example rule action to logically insert a new loan applicant object delete Use this to remove an object from the decision engine. The keyword retract is also supported in DRL and executes the same action, but delete is typically preferred in DRL code for consistency with the keyword insert . Example rule action to delete a loan applicant object 16.9.2. Other rule action methods from drools variable In addition to the standard rule action methods, the decision engine supports methods in conjunction with the predefined drools variable that you can also use in rule actions. You can use the drools variable to call methods from the org.kie.api.runtime.rule.RuleContext class in your Red Hat Process Automation Manager distribution, which is also the class that the standard rule action methods are based on. For all drools rule action options, download the Red Hat Process Automation Manager 7.13.5 Source Distribution ZIP file from the Red Hat Customer Portal and navigate to ~/rhpam-7.13.5-sources/src/kie-api-parent-USDVERSION/kie-api/src/main/java/org/kie/api/runtime/rule/RuleContext.java . The drools variable contains methods that provide information about the firing rule and the set of facts that activated the firing rule: drools.getRule().getName() : Returns the name of the currently firing rule. drools.getMatch() : Returns the Match that activated the currently firing rule. It contains information that is useful for logging and debugging purposes, for instance drools.getMatch().getObjects() returns the list of objects, enabling rule to fire in the proper tuple order. From the drools variable, you can also obtain a reference to the KieRuntime providing useful methods to interact with the running session, for example: drools.getKieRuntime().halt() : Terminates rule execution if a user or application previously called fireUntilHalt() . When a user or application calls fireUntilHalt() method, the decision engine starts in active mode and evaluates rules until the user or application explicitly calls halt() method. Otherwise, by default, the decision engine runs in passive mode and evaluates rules only when a user or an application explicitly calls fireAllRules() method. drools.getKieRuntime().getAgenda() : Returns a reference to the KIE session Agenda , and in turn provides access to rule activation groups, rule agenda groups, and ruleflow groups. Example call to access agenda group "CleanUp" and set the focus drools.getKieRuntime().getAgenda().getAgendaGroup( "CleanUp" ).setFocus(); + This example sets the focus to a specified agenda group to which the rule belongs. drools.getKieRuntime().setGlobal() , ~.getGlobal() , ~.getGlobals() : Sets or retrieves global variables. drools.getKieRuntime().getEnvironment() : Returns the runtime Environment , similar to your operating system environment. drools.getKieRuntime().getQueryResults(<string> query) : Runs a query and returns the results. 16.9.3. Advanced rule actions with conditional and named consequences In general, effective rule actions are small, declarative, and readable. 
However, in some cases, the limitation of having a single consequence for each rule can be challenging and lead to verbose and repetitive rule syntax, as shown in the following example rules: Example rules with verbose and repetitive syntax A partial solution to the repetition is to make the second rule extend the first rule, as shown in the following modified example: Partially enhanced example rules with an extended condition As a more efficient alternative, you can consolidate the two rules into a single rule with modified conditions and labelled corresponding rule actions, as shown in the following consolidated example: Consolidated example rule with conditional and named consequences This example rule uses two actions: the usual default action and another action named giveDiscount . The giveDiscount action is activated in the condition with the keyword do when a customer older than 60 years old is found in the KIE base, regardless of whether or not the customer owns a car. You can configure the activation of a named consequence with an additional condition, such as the if statement in the following example. The condition in the if statement is always evaluated on the pattern that immediately precedes it. Consolidated example rule with an additional condition You can also evaluate different rule conditions using a nested if and else if construct, as shown in the following more complex example: Consolidated example rule with more complex conditions This example rule gives a 10% discount and free parking to Golden customers over 60, but only a 5% discount without free parking to Silver customers. The rule activates the consequence named giveDiscount5 with the keyword break instead of do . The keyword do schedules a consequence in the decision engine agenda, enabling the remaining part of the rule conditions to continue being evaluated, while break blocks any further condition evaluation. If a named consequence does not correspond to any condition with do but is activated with break , the rule fails to compile because the conditional part of the rule is never reached. 16.10. Comments in DRL files DRL supports single-line comments prefixed with a double forward slash // and multi-line comments enclosed with a forward slash and asterisk /* ... */ . You can use DRL comments to annotate rules or any related components in DRL files. DRL comments are ignored by the decision engine when the DRL file is processed. Example rule with comments Important The hash symbol # is not supported for DRL comments. 16.11. Error messages for DRL troubleshooting Red Hat Process Automation Manager provides standardized messages for DRL errors to help you troubleshoot and resolve problems in your DRL files. The error messages use the following format: Figure 16.1. Error message format for DRL file problems 1st Block: Error code 2nd Block: Line and column in the DRL source where the error occurred 3rd Block: Description of the problem 4th Block: Component in the DRL source (rule, function, query) where the error occurred 5th Block: Pattern in the DRL source where the error occurred (if applicable) Red Hat Process Automation Manager supports the following standardized error messages: 101: no viable alternative Indicates that the parser reached a decision point but could not identify an alternative. 
Example rule with incorrect spelling Error message Example rule without a rule name Error message In this example, the parser encountered the keyword when but expected the rule name, so it flags when as the incorrect expected token. Example rule with incorrect syntax Error message Note A line and column value of 0:-1 means the parser reached the end of the source file ( <eof> ) but encountered incomplete constructs, usually due to missing quotation marks "... " , apostrophes '... ' , or parentheses (... ) . 102: mismatched input Indicates that the parser expected a particular symbol that is missing at the current input position. Example rule with an incomplete rule statement Error message Note A line and column value of 0:-1 means the parser reached the end of the source file ( <eof> ) but encountered incomplete constructs, usually due to missing quotation marks "... " , apostrophes '... ' , or parentheses (... ) . Example rule with incorrect syntax Error messages In this example, the syntactic problem results in multiple error messages related to each other. The single solution of replacing the commas , with && operators resolves all errors. If you encounter multiple errors, resolve one at a time in case errors are consequences of errors. 103: failed predicate Indicates that a validating semantic predicate evaluated to false . These semantic predicates are typically used to identify component keywords in DRL files, such as declare , rule , exists , not , and others. Example rule with an invalid keyword Error message The Some text line is invalid because it does not begin with or is not a part of a DRL keyword construct, so the parser fails to validate the rest of the DRL file. Note This error is similar to 102: mismatched input , but usually involves DRL keywords. 104: trailing semi-colon not allowed Indicates that an eval() clause in a rule condition uses a semicolon ; but must not use one. Example rule with eval() and trailing semicolon Error message 105: did not match anything Indicates that the parser reached a sub-rule in the grammar that must match an alternative at least once, but the sub-rule did not match anything. The parser has entered a branch with no way out. Example rule with invalid text in an empty condition Error message In this example, the condition is intended to be empty but the word None is used. This error is resolved by removing None , which is not a valid DRL keyword, data type, or pattern construct. Note If you encounter other DRL error messages that you cannot resolve, contact your Red Hat Technical Account Manager. | [
"package import function // Optional query // Optional declare // Optional global // Optional rule \"rule name\" // Attributes when // Conditions then // Actions end rule \"rule2 name\"",
"rule \"Underage\" salience 15 agenda-group \"applicationGroup\" when USDapplication : LoanApplication() Applicant( age < 21 ) then USDapplication.setApproved( false ); USDapplication.setExplanation( \"Underage\" ); end",
"package org.mortgages;",
"import org.mortgages.LoanApplication;",
"function String hello(String applicantName) { return \"Hello \" + applicantName + \"!\"; } rule \"Using a function\" when // Empty then System.out.println( hello( \"James\" ) ); end",
"import function my.package.applicant.hello; rule \"Using a function\" when // Empty then System.out.println( hello( \"James\" ) ); end",
"query \"people under the age of 21\" USDperson : Person( age < 21 ) end",
"QueryResults results = ksession.getQueryResults( \"people under the age of 21\" ); System.out.println( \"we have \" + results.size() + \" people under the age of 21\" );",
"QueryResults results = ksession.getQueryResults( \"people under the age of 21\" ); System.out.println( \"we have \" + results.size() + \" people under the age of 21\" ); System.out.println( \"These people are under the age of 21:\" ); for ( QueryResultsRow row : results ) { Person person = ( Person ) row.get( \"person\" ); System.out.println( person.getName() + \"\\n\" ); }",
"declare Person name : String dateOfBirth : java.util.Date address : Address end rule \"Using a declared type\" when USDp : Person( name == \"James\" ) then // Insert Mark, who is a customer of James. Person mark = new Person(); mark.setName( \"Mark\" ); insert( mark ); end",
"import java.util.Date declare Person name : String dateOfBirth : Date address : Address end",
"public class Person implements Serializable { private String name; private java.util.Date dateOfBirth; private Address address; // Empty constructor public Person() {...} // Constructor with all fields public Person( String name, Date dateOfBirth, Address address ) {...} // If keys are defined, constructor with keys public Person( ...keys... ) {...} // Getters and setters // `equals` and `hashCode` // `toString` }",
"rule \"Using a declared type\" when USDp : Person( name == \"James\" ) then // Insert Mark, who is a customer of James. Person mark = new Person(); mark.setName( \"Mark\" ); insert( mark ); end",
"declare enum DaysOfWeek SUN(\"Sunday\"),MON(\"Monday\"),TUE(\"Tuesday\"),WED(\"Wednesday\"),THU(\"Thursday\"),FRI(\"Friday\"),SAT(\"Saturday\"); fullName : String end rule \"Using a declared Enum\" when USDemp : Employee( dayOff == DaysOfWeek.MONDAY ) then end",
"import org.people.Person declare Person end declare Student extends Person school : String end declare LongTermStudent extends Student years : int course : String end",
"import java.util.Date declare Person @author( Bob ) @dateOfCreation( 01-Feb-2009 ) name : String @key @maxLength( 30 ) dateOfBirth : Date address : Address end",
"import org.drools.examples.Person declare Person @author( Bob ) @dateOfCreation( 01-Feb-2009 ) end",
"declare org.drools.examples.Person @author( Bob ) @dateOfCreation( 01-Feb-2009 ) end",
"public class VoiceCall { private String originNumber; private String destinationNumber; private Date callDateTime; private long callDuration; // in milliseconds // Constructors, getters, and setters }",
"@role( fact | event )",
"declare VoiceCall @role( event ) end",
"@timestamp( <attributeName> )",
"declare VoiceCall @role( event ) @timestamp( callDateTime ) end",
"@duration( <attributeName> )",
"declare VoiceCall @role( event ) @timestamp( callDateTime ) @duration( callDuration ) end",
"@expires( <timeOffset> )",
"declare VoiceCall @role( event ) @timestamp( callDateTime ) @duration( callDuration ) @expires( 1h35m ) end",
"@typesafe( <boolean> )",
"declare VoiceCall @role( fact ) @typesafe( false ) end",
"@serialVersionUID( <integer> )",
"declare VoiceCall @serialVersionUID( 42 ) end",
"<attributeDefinition> @key",
"declare Person firstName : String @key lastName : String @key age : int end",
"Person() // Empty constructor Person( String firstName, String lastName ) Person( String firstName, String lastName, int age )",
"Person person = new Person( \"John\", \"Doe\" );",
"<attributeDefinition> @position ( <integer> )",
"declare Person firstName : String @position( 1 ) lastName : String @position( 0 ) age : int @position( 2 ) occupation: String end",
"Person( \"Doe\", \"John\", USDa; ) Person( \"Doe\", \"John\"; USDa : age ) Person( \"Doe\"; firstName == \"John\", USDa : age ) Person( lastName == \"Doe\"; firstName == \"John\", USDa : age )",
"declare Person firstName : String @position( 1 ) lastName : String @position( 0 ) age : int @position( 2 ) occupation: String end declare Student extends Person degree : String @position( 1 ) school : String @position( 0 ) graduationDate : Date end",
"KnowledgeBuilderConfiguration config = KnowledgeBuilderFactory.newKnowledgeBuilderConfiguration(); config.setOption(PropertySpecificOption.ALLOWED); KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder(config);",
"<system-properties> <property name=\"drools.propertySpecific\" value=\"ALLOWED\"/> </system-properties>",
"declare Person @classReactive firstName : String lastName : String end",
"@classReactive public static class Person { private String firstName; private String lastName; }",
"declare Person @propertyReactive firstName : String lastName : String end",
"@propertyReactive public static class Person { private String firstName; private String lastName; }",
"<factPattern> @watch ( <property> )",
"// Listens for changes in both `firstName` (inferred) and `lastName`: Person(firstName == USDexpectedFirstName) @watch( lastName ) // Listens for changes in all properties of the `Person` fact: Person(firstName == USDexpectedFirstName) @watch( * ) // Listens for changes in `lastName` and explicitly excludes changes in `firstName`: Person(firstName == USDexpectedFirstName) @watch( lastName, !firstName ) // Listens for changes in all properties of the `Person` fact except `age`: Person(firstName == USDexpectedFirstName) @watch( *, !age ) // Excludes changes in all properties of the `Person` fact (equivalent to using `@classReactivity` tag): Person(firstName == USDexpectedFirstName) @watch( !* )",
"declare Person @propertyChangeSupport end",
"import java.util.Date; import org.kie.api.definition.type.FactType; import org.kie.api.KieBase; import org.kie.api.runtime.KieSession; // Get a reference to a KIE base with the declared type: KieBase kbase = // Get the declared fact type: FactType personType = kbase.getFactType(\"org.drools.examples\", \"Person\"); // Create instances: Object bob = personType.newInstance(); // Set attribute values: personType.set(bob, \"name\", \"Bob\" ); personType.set(bob, \"dateOfBirth\", new Date()); personType.set(bob, \"address\", new Address(\"King's Road\",\"London\",\"404\")); // Insert the fact into a KIE session: KieSession ksession = ksession.insert(bob); ksession.fireAllRules(); // Read attributes: String name = (String) personType.get(bob, \"name\"); Date date = (Date) personType.get(bob, \"dateOfBirth\");",
"List<String> list = new ArrayList<>(); KieSession kieSession = kiebase.newKieSession(); kieSession.setGlobal( \"myGlobalList\", list );",
"global java.util.List myGlobalList; rule \"Using a global\" when // Empty then myGlobalList.add( \"My global list\" ); end",
"rule \"rule_name\" // Attribute // Attribute when // Conditions then // Actions end",
"timer ( int: <initial delay> <repeat interval> ) timer ( cron: <cron expression> )",
"// Run after a 30-second delay timer ( int: 30s ) // Run every 5 minutes after a 30-second delay each time timer ( int: 30s 5m )",
"// Run every 15 minutes timer ( cron:* 0/15 * * * ? )",
"rule \"Send SMS message every 15 minutes\" timer ( cron:* 0/15 * * * ? ) when USDa : Alarm( on == true ) then channels[ \"sms\" ].insert( new Sms( USDa.mobileNumber, \"The alarm is still on.\" ); end",
"KieSessionConfiguration ksconf = KieServices.Factory.get().newKieSessionConfiguration(); ksconf.setOption( TimedRuleExecutionOption.YES ); KSession ksession = kbase.newKieSession(ksconf, null);",
"KieSessionConfiguration ksconf = KieServices.Factory.get().newKieSessionConfiguration(); conf.setOption( new TimedRuleExecutionOption.FILTERED(new TimedRuleExecutionFilter() { public boolean accept(Rule[] rules) { return rules[0].getName().equals(\"MyRule\"); } }) );",
"declare Bean delay : String = \"30s\" period : long = 60000 end rule \"Expression timer\" timer ( expr: USDd, USDp ) when Bean( USDd : delay, USDp : period ) then // Actions end",
"timer (int: 30s 1h; start=3-JAN-2020, end=4-JAN-2020, repeat-limit=50)",
"calendars \"<definition or registered name>\"",
"// Exclude non-business hours calendars \"* * 0-7,18-23 ? * *\" // Weekdays only, as registered in the KIE session calendars \"weekday\"",
"Calendar weekDayCal = QuartzHelper.quartzCalendarAdapter(org.quartz.Calendar quartzCal)",
"ksession.getCalendars().set( \"weekday\", weekDayCal );",
"rule \"Weekdays are high priority\" calendars \"weekday\" timer ( int:0 1h ) when Alarm() then send( \"priority high - we have an alarm\" ); end rule \"Weekends are low priority\" calendars \"weekend\" timer ( int:0 4h ) when Alarm() then send( \"priority low - we have an alarm\" ); end",
"rule \"Always insert applicant\" when // Empty then // Actions to be executed once insert( new Applicant() ); end // The rule is internally rewritten in the following way: rule \"Always insert applicant\" when eval( true ) then insert( new Applicant() ); end",
"rule \"Underage\" when application : LoanApplication() Applicant( age < 21 ) then // Actions end // The rule is internally rewritten in the following way: rule \"Underage\" when application : LoanApplication() and Applicant( age < 21 ) then // Actions end",
"Person()",
"Object() // Matches all objects in the working memory",
"Person( age == 50 )",
"Person( age == 50 ) // This is the same as the following getter format: Person( getAge() == 50 )",
"public int getAge() { age++; // Do not do this. return age; }",
"public int getAge() { Date now = DateUtil.now(); // Do not do this. return DateUtil.differenceInYears(now, birthday); }",
"Person( age == 50 ) // If `Person.getAge()` does not exist, the compiler uses the following syntax: Person( age() == 50 )",
"Person( address.houseNumber == 50 ) // This is the same as the following format: Person( getAddress().getHouseNumber() == 50 )",
"Person( age == 50 )",
"Person( age > 100 && ( age % 10 == 0 ) )",
"Person( Math.round( weight / ( height * height ) ) < 25.0 )",
"Person( incrementAndGetAge() == 10 ) // Do not do this.",
"Person( System.currentTimeMillis() % 1000 == 0 ) // Do not do this.",
"Person( age == \"10\" ) // \"10\" is coerced to 10",
"// Person is at least 50 years old and weighs at least 80 kilograms: Person( age > 50, weight > 80 ) // Person is at least 50 years old, weighs at least 80 kilograms, and is taller than 2 meters: Person( age > 50, weight > 80, height > 2 )",
"// Do not use the following format: Person( ( age > 50, weight > 80 ) || height > 2 ) // Use the following format instead: Person( ( age > 50 && weight > 80 ) || height > 2 )",
"rule \"simple rule\" when USDp : Person() then System.out.println( \"Person \" + USDp ); end",
"// Two persons of the same age: Person( USDfirstAge : age ) // Binding Person( age == USDfirstAge ) // Constraint expression",
"Person( USDa : age * 2 < 100 )",
"// Do not use the following format: Person( USDa : age * 2 < 100 ) // Use the following format instead: Person( age * 2 < 100, USDa : age )",
"Person( USDa : (age * 2) )",
"Person( USDage := age ) Person( USDage := age )",
"Person( name == \"mark\", address.city == \"london\", address.country == \"uk\" )",
"Person( name == \"mark\", address.( city == \"london\", country == \"uk\") )",
"// Inline casting with subtype name: Person( name == \"mark\", address#LongAddress.country == \"uk\" ) // Inline casting with fully qualified class name: Person( name == \"mark\", address#org.domain.LongAddress.country == \"uk\" ) // Multiple inline casts: Person( name == \"mark\", address#LongAddress.country#DetailedCountry.population > 10000000 )",
"Person( name == \"mark\", address instanceof LongAddress, address.country == \"uk\" )",
"Person( bornBefore < \"27-Oct-2009\" )",
"// Ungrouped property accessors: Person( name == \"mark\", address.city == \"london\", address.country == \"uk\" ) // Grouped property accessors: Person( name == \"mark\", address.( city == \"london\", country == \"uk\") )",
"// Inline casting with subtype name: Person( name == \"mark\", address#LongAddress.country == \"uk\" ) // Inline casting with fully qualified class name: Person( name == \"mark\", address#org.domain.LongAddress.country == \"uk\" ) // Multiple inline casts: Person( name == \"mark\", address#LongAddress.country#DetailedCountry.population > 10000000 )",
"Person( USDstreetName : address!.street ) // This is internally rewritten in the following way: Person( address != null, USDstreetName : address.street )",
"// The following format is the same as `childList(0).getAge() == 18`: Person(childList[0].age == 18) // The following format is the same as `credentialMap.get(\"jdoe\").isValid()`: Person(credentialMap[\"jdoe\"].valid)",
"Person( birthDate < USDotherBirthDate ) Person( firstName < USDotherFirstName )",
"Person( firstName == \"John\" ) // This is similar to the following formats: java.util.Objects.equals(person.getFirstName(), \"John\") \"John\".equals(person.getFirstName())",
"Person( firstName != \"John\" ) // This is similar to the following format: !java.util.Objects.equals(person.getFirstName(), \"John\")",
"// Simple abbreviated combined relation condition using a single `&&`: Person(age > 30 && < 40) // Complex abbreviated combined relation using groupings: Person(age ((> 30 && < 40) || (> 20 && < 25))) // Mixing abbreviated combined relation with constraint connectives: Person(age > 30 && < 40 || location == \"london\")",
"Person( country matches \"(USA)?\\\\S*UK\" ) Person( country not matches \"(USA)?\\\\S*UK\" )",
"// Collection with a specified field: FamilyTree( countries contains \"UK\" ) FamilyTree( countries not contains \"UK\" ) // Collection with a variable: FamilyTree( countries contains USDvar ) FamilyTree( countries not contains USDvar )",
"// Sting literal with a specified field: Person( fullName contains \"Jr\" ) Person( fullName not contains \"Jr\" ) // String literal with a variable: Person( fullName contains USDvar ) Person( fullName not contains USDvar )",
"FamilyTree( person memberOf USDeuropeanDescendants ) FamilyTree( person not memberOf USDeuropeanDescendants )",
"// Match firstName \"Jon\" or \"John\": Person( firstName soundslike \"John\" )",
"// Verify what the String starts with: Message( routingValue str[startsWith] \"R1\" ) // Verify what the String ends with: Message( routingValue str[endsWith] \"R2\" ) // Verify the length of the String: Message( routingValue str[length] 17 )",
"Person( USDcolor : favoriteColor ) Color( type in ( \"red\", \"blue\", USDcolor ) ) Person( USDcolor : favoriteColor ) Color( type notin ( \"red\", \"blue\", USDcolor ) )",
"//Infix `and`: Color( colorType : type ) and Person( favoriteColor == colorType ) //Infix `and` with grouping: (Color( colorType : type ) and (Person( favoriteColor == colorType ) or Person( favoriteColor == colorType )) // Prefix `and`: (and Color( colorType : type ) Person( favoriteColor == colorType )) // Default implicit `and`: Color( colorType : type ) Person( favoriteColor == colorType )",
"// Causes compile error: USDperson : (Person( name == \"Romeo\" ) and Person( name == \"Juliet\"))",
"//Infix `or`: Color( colorType : type ) or Person( favoriteColor == colorType ) //Infix `or` with grouping: (Color( colorType : type ) or (Person( favoriteColor == colorType ) and Person( favoriteColor == colorType )) // Prefix `or`: (or Color( colorType : type ) Person( favoriteColor == colorType ))",
"pensioner : (Person( sex == \"f\", age > 60 ) or Person( sex == \"m\", age > 65 )) (or pensioner : Person( sex == \"f\", age > 60 ) pensioner : Person( sex == \"m\", age > 65 ))",
"exists Person( firstName == \"John\") exists (Person( firstName == \"John\", age == 42 )) exists (Person( firstName == \"John\" ) and Person( lastName == \"Doe\" ))",
"not Person( firstName == \"John\") not (Person( firstName == \"John\", age == 42 )) not (Person( firstName == \"John\" ) and Person( lastName == \"Doe\" ))",
"rule \"All full-time employees have red ID badges\" when forall( USDemp : Employee( type == \"fulltime\" ) Employee( this == USDemp, badgeColor = \"red\" ) ) then // True, all full-time employees have red ID badges. end",
"rule \"All full-time employees have red ID badges\" when forall( Employee( badgeColor = \"red\" ) ) then // True, all full-time employees have red ID badges. end",
"rule \"All employees have health and dental care programs\" when forall( USDemp : Employee() HealthCare( employee == USDemp ) DentalCare( employee == USDemp ) ) then // True, all employees have health and dental care. end",
"rule \"Not all employees have health and dental care\" when not ( forall( USDemp : Employee() HealthCare( employee == USDemp ) DentalCare( employee == USDemp ) ) ) then // True, not all employees have health and dental care. end",
"rule \"Validate zipcode\" when Person( USDpersonAddress : address ) Address( zipcode == \"23920W\" ) from USDpersonAddress then // Zip code is okay. end",
"rule \"Validate zipcode\" when USDp : Person() USDa : Address( zipcode == \"23920W\" ) from USDp.address then // Zip code is okay. end",
"rule \"Apply 10% discount to all items over USUSD 100 in an order\" when USDorder : Order() USDitem : OrderItem( value > 100 ) from USDorder.items then // Apply discount to `USDitem`. end",
"when USDorder : Order() OrderItem( value > 100, order == USDorder )",
"rule \"Assign people in North Carolina (NC) to sales region 1\" ruleflow-group \"test\" lock-on-active true when USDp : Person() USDa : Address( state == \"NC\" ) from USDp.address then modify (USDp) {} // Assign the person to sales region 1. end rule \"Apply a discount to people in the city of Raleigh\" ruleflow-group \"test\" lock-on-active true when USDp : Person() USDa : Address( city == \"Raleigh\" ) from USDp.address then modify (USDp) {} // Apply discount to the person. end",
"// Do not use `from` in this way: rule R when USDl : List() String() from USDl (String() or Number()) then // Actions end // Use `from` in this way instead: rule R when USDl : List() (String() from USDl) (String() or Number()) then // Actions end",
"rule \"Authorize withdrawal\" when WithdrawRequest( USDai : accountId, USDam : amount ) from entry-point \"ATM Stream\" CheckingAccount( accountId == USDai, balance > USDam ) then // Authorize withdrawal. end",
"import org.kie.api.runtime.KieSession; import org.kie.api.runtime.rule.EntryPoint; // Create your KIE base and KIE session as usual: KieSession session = // Create a reference to the entry point: EntryPoint atmStream = session.getEntryPoint(\"ATM Stream\"); // Start inserting your facts into the entry point: atmStream.insert(aWithdrawRequest);",
"import java.util.List rule \"Raise priority when system has more than three pending alarms\" when USDsystem : System() USDalarms : List( size >= 3 ) from collect( Alarm( system == USDsystem, status == 'pending' ) ) then // Raise priority because `USDsystem` has three or more `USDalarms` pending. end",
"import java.util.LinkedList; rule \"Send a message to all parents\" when USDtown : Town( name == 'Paris' ) USDmothers : LinkedList() from collect( Person( children > 0 ) from USDtown.getPeople() ) then // Send a message to all parents. end",
"accumulate( <source pattern>; <functions> [;<constraints>] )",
"rule \"Raise alarm\" when USDs : Sensor() accumulate( Reading( sensor == USDs, USDtemp : temperature ); USDmin : min( USDtemp ), USDmax : max( USDtemp ), USDavg : average( USDtemp ); USDmin < 20, USDavg > 70 ) then // Raise the alarm. end",
"rule \"Average profit\" when USDorder : Order() accumulate( OrderItem( order == USDorder, USDcost : cost, USDprice : price ); USDavgProfit : average( 1 - USDcost / USDprice ) ) then // Average profit for `USDorder` is `USDavgProfit`. end",
"// An implementation of an accumulator capable of calculating average values public class AverageAccumulateFunction implements org.kie.api.runtime.rule.AccumulateFunction<AverageAccumulateFunction.AverageData> { public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException { } public void writeExternal(ObjectOutput out) throws IOException { } public static class AverageData implements Externalizable { public int count = 0; public double total = 0; public AverageData() {} public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException { count = in.readInt(); total = in.readDouble(); } public void writeExternal(ObjectOutput out) throws IOException { out.writeInt(count); out.writeDouble(total); } } /* (non-Javadoc) * @see org.kie.api.runtime.rule.AccumulateFunction#createContext() */ public AverageData createContext() { return new AverageData(); } /* (non-Javadoc) * @see org.kie.api.runtime.rule.AccumulateFunction#init(java.io.Serializable) */ public void init(AverageData context) { context.count = 0; context.total = 0; } /* (non-Javadoc) * @see org.kie.api.runtime.rule.AccumulateFunction#accumulate(java.io.Serializable, java.lang.Object) */ public void accumulate(AverageData context, Object value) { context.count++; context.total += ((Number) value).doubleValue(); } /* (non-Javadoc) * @see org.kie.api.runtime.rule.AccumulateFunction#reverse(java.io.Serializable, java.lang.Object) */ public void reverse(AverageData context, Object value) { context.count--; context.total -= ((Number) value).doubleValue(); } /* (non-Javadoc) * @see org.kie.api.runtime.rule.AccumulateFunction#getResult(java.io.Serializable) */ public Object getResult(AverageData context) { return new Double( context.count == 0 ? 0 : context.total / context.count ); } /* (non-Javadoc) * @see org.kie.api.runtime.rule.AccumulateFunction#supportsReverse() */ public boolean supportsReverse() { return true; } /* (non-Javadoc) * @see org.kie.api.runtime.rule.AccumulateFunction#getResultType() */ public Class< ? > getResultType() { return Number.class; } }",
"import accumulate <class_name> <function_name>",
"import accumulate AverageAccumulateFunction.AverageData average rule \"Average profit\" when USDorder : Order() accumulate( OrderItem( order == USDorder, USDcost : cost, USDprice : price ); USDavgProfit : average( 1 - USDcost / USDprice ) ) then // Average profit for `USDorder` is `USDavgProfit`. end",
"rule \"Find all grades for Big Data exam\" when USDstudent: Student( USDplan: plan ) USDexam: Exam( course == \"Big Data\" ) from USDplan.exams USDgrade: Grade() from USDexam.grades then // Actions end",
"rule \"Find all grades for Big Data exam\" when Student( USDgrade: /plan/exams[course == \"Big Data\"]/grades ) then // Actions end",
"OOPExpr = [ID ( \":\" | \":=\" )] ( \"/\" | \"?/\" ) OOPSegment { ( \"/\" | \"?/\" | \".\" ) OOPSegment } ; OOPSegment = ID [\"#\" ID] [\"[\" ( Number | Constraints ) \"]\"]",
"Student( USDgrade: /plan/exams[ course == \"Big Data\" ]/grades )",
"Student( USDgrade: /plan/exams#AdvancedExam[ course == \"Big Data\", level > 3 ]/grades )",
"Student( USDgrade: /plan/exams/grades[ result > ../averageResult ] )",
"Student( USDexam: /plan/exams[ /grades[ result > 20 ] ] )",
"Student( USDgrade: /plan/exams[0]/grades )",
"public void setCourse(String course) { this.course = course; notifyModification(this); }",
"Student( USDgrade: /plan/exams[ course == \"Big Data\" ]/grades )",
"Student( USDgrade: /plan/exams[ course == \"Big Data\" ]?/grades )",
"Student( USDgrade: ?/plan/exams[ course == \"Big Data\" ]/grades )",
"Student( USDgrade: /plan?/exams[ course == \"Big Data\" ]?/grades )",
"public class School extends AbstractReactiveObject { private String name; private final List<Child> children = new ReactiveList<Child>(); 1 public void setName(String name) { this.name = name; notifyModification(); 2 } public void addChild(Child child) { children.add(child); 3 // No need to call `notifyModification()` here } }",
"rule \"Underage\" when application : LoanApplication() Applicant( age < 21 ) then application.setApproved( false ); application.setExplanation( \"Underage\" ); end",
"set<field> ( <value> )",
"USDapplication.setApproved ( false ); USDapplication.setExplanation( \"has been bankrupt\" );",
"modify ( <fact-expression> ) { <expression>, <expression>, }",
"modify( LoanApplication ) { setAmount( 100 ), setApproved ( true ) }",
"update ( <object, <handle> ) // Informs the decision engine that an object has changed update ( <object> ) // Causes `KieSession` to search for a fact handle of the object",
"LoanApplication.setAmount( 100 ); update( LoanApplication );",
"insert( new <object> );",
"insert( new Applicant() );",
"insertLogical( new <object> );",
"insertLogical( new Applicant() );",
"delete( <object> );",
"delete( Applicant );",
"drools.getKieRuntime().getAgenda().getAgendaGroup( \"CleanUp\" ).setFocus();",
"rule \"Give 10% discount to customers older than 60\" when USDcustomer : Customer( age > 60 ) then modify(USDcustomer) { setDiscount( 0.1 ) }; end rule \"Give free parking to customers older than 60\" when USDcustomer : Customer( age > 60 ) USDcar : Car( owner == USDcustomer ) then modify(USDcar) { setFreeParking( true ) }; end",
"rule \"Give 10% discount to customers older than 60\" when USDcustomer : Customer( age > 60 ) then modify(USDcustomer) { setDiscount( 0.1 ) }; end rule \"Give free parking to customers older than 60\" extends \"Give 10% discount to customers older than 60\" when USDcar : Car( owner == USDcustomer ) then modify(USDcar) { setFreeParking( true ) }; end",
"rule \"Give 10% discount and free parking to customers older than 60\" when USDcustomer : Customer( age > 60 ) do[giveDiscount] USDcar : Car( owner == USDcustomer ) then modify(USDcar) { setFreeParking( true ) }; then[giveDiscount] modify(USDcustomer) { setDiscount( 0.1 ) }; end",
"rule \"Give free parking to customers older than 60 and 10% discount to golden ones among them\" when USDcustomer : Customer( age > 60 ) if ( type == \"Golden\" ) do[giveDiscount] USDcar : Car( owner == USDcustomer ) then modify(USDcar) { setFreeParking( true ) }; then[giveDiscount] modify(USDcustomer) { setDiscount( 0.1 ) }; end",
"rule \"Give free parking and 10% discount to over 60 Golden customer and 5% to Silver ones\" when USDcustomer : Customer( age > 60 ) if ( type == \"Golden\" ) do[giveDiscount10] else if ( type == \"Silver\" ) break[giveDiscount5] USDcar : Car( owner == USDcustomer ) then modify(USDcar) { setFreeParking( true ) }; then[giveDiscount10] modify(USDcustomer) { setDiscount( 0.1 ) }; then[giveDiscount5] modify(USDcustomer) { setDiscount( 0.05 ) }; end",
"rule \"Underage\" // This is a single-line comment. when USDapplication : LoanApplication() // This is an in-line comment. Applicant( age < 21 ) then /* This is a multi-line comment in the rule actions. */ USDapplication.setApproved( false ); USDapplication.setExplanation( \"Underage\" ); end",
"1: rule \"simple rule\" 2: when 3: exists Person() 4: exits Student() // Must be `exists` 5: then 6: end",
"[ERR 101] Line 4:4 no viable alternative at input 'exits' in rule \"simple rule\"",
"1: package org.drools.examples; 2: rule // Must be `rule \"rule name\"` (or `rule rule_name` if no spacing) 3: when 4: Object() 5: then 6: System.out.println(\"A RHS\"); 7: end",
"[ERR 101] Line 3:2 no viable alternative at input 'when'",
"1: rule \"simple rule\" 2: when 3: Student( name == \"Andy ) // Must be `\"Andy\"` 4: then 5: end",
"[ERR 101] Line 0:-1 no viable alternative at input '<eof>' in rule \"simple rule\" in pattern Student",
"1: rule simple_rule 2: when 3: USDp : Person( // Must be a complete rule statement",
"[ERR 102] Line 0:-1 mismatched input '<eof>' expecting ')' in rule \"simple rule\" in pattern Person",
"1: package org.drools.examples; 2: 3: rule \"Wrong syntax\" 4: when 5: not( Car( ( type == \"tesla\", price == 10000 ) || ( type == \"kia\", price == 1000 ) ) from USDcarList ) // Must use `&&` operators instead of commas `,` 6: then 7: System.out.println(\"OK\"); 8: end",
"[ERR 102] Line 5:36 mismatched input ',' expecting ')' in rule \"Wrong syntax\" in pattern Car [ERR 101] Line 5:57 no viable alternative at input 'type' in rule \"Wrong syntax\" [ERR 102] Line 5:106 mismatched input ')' expecting 'then' in rule \"Wrong syntax\"",
"1: package nesting; 2: 3: import org.drools.compiler.Person 4: import org.drools.compiler.Address 5: 6: Some text // Must be a valid DRL keyword 7: 8: rule \"test something\" 9: when 10: USDp: Person( name==\"Michael\" ) 11: then 12: USDp.name = \"other\"; 13: System.out.println(p.name); 14: end",
"[ERR 103] Line 6:0 rule 'rule_key' failed predicate: {(validateIdentifierKey(DroolsSoftKeywords.RULE))}? in rule",
"1: rule \"simple rule\" 2: when 3: eval( abc(); ) // Must not use semicolon `;` 4: then 5: end",
"[ERR 104] Line 3:4 trailing semi-colon not allowed in rule \"simple rule\"",
"1: rule \"empty condition\" 2: when 3: None // Must remove `None` if condition is empty 4: then 5: insert( new Person() ); 6: end",
"[ERR 105] Line 2:2 required (...)+ loop did not match anything at input 'WHEN' in rule \"empty condition\""
] | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/drl-rules-con_drl-rules |
Chapter 1. Overview of machine management | Chapter 1. Overview of machine management You can use machine management to flexibly work with underlying infrastructure like Amazon Web Services (AWS), Azure, Google Cloud Platform (GCP), OpenStack, Red Hat Virtualization (RHV), and vSphere to manage the OpenShift Container Platform cluster. You can control the cluster and perform auto-scaling, such as scaling up and down the cluster based on specific workload policies. The OpenShift Container Platform cluster can horizontally scale up and down when the load increases or decreases. It is important to have a cluster that adapts to changing workloads. Machine management is implemented as a Custom Resource Definition (CRD). A CRD object defines a new unique object Kind in the cluster and enables the Kubernetes API server to handle the object's entire lifecycle. The Machine API Operator provisions the following resources: MachineSet Machine Cluster Autoscaler Machine Autoscaler Machine Health Checks 1.1. Machine API overview The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OpenShift Container Platform resources. For OpenShift Container Platform 4.9 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OpenShift Container Platform 4.9 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure. The two primary resources are: Machines A fundamental unit that describes the host for a node. A machine has a providerSpec specification, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a worker node on Amazon Web Services (AWS) might define a specific machine type and required metadata. Machine sets MachineSet resources are groups of machines. Machine sets are to machines as replica sets are to pods. If you need more machines or must scale them down, you change the replicas field on the machine set to meet your compute need. Warning Control plane machines cannot be managed by machine sets. The following custom resources add more capabilities to your cluster: Machine autoscaler The MachineAutoscaler resource automatically scales compute machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified compute machine set, and the machine autoscaler maintains that range of nodes. The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator object. Cluster autoscaler This resource is based on the upstream cluster autoscaler project. In the OpenShift Container Platform implementation, it is integrated with the Machine API by extending the machine set API. You can set cluster-wide scaling limits for resources such as cores, nodes, memory, GPU, and so on. You can set the priority so that the cluster prioritizes pods so that new nodes are not brought online for less important pods. You can also set the scaling policy so that you can scale up nodes but not scale them down. Machine health check The MachineHealthCheck resource detects when a machine is unhealthy, deletes it, and, on supported platforms, makes a new machine. In OpenShift Container Platform version 3.11, you could not roll out a multi-zone architecture easily because the cluster did not manage machine provisioning. 
Beginning with OpenShift Container Platform version 4.1, this process is easier. Each machine set is scoped to a single zone, so the installation program distributes machine sets across availability zones on your behalf. Because your compute is dynamic, in the event of a zone failure you always have a zone to which you can rebalance your machines. The autoscaler provides best-effort balancing over the life of a cluster. 1.2. Managing compute machines As a cluster administrator you can: Create a machine set on: AWS Azure GCP OpenStack RHV vSphere Manually scale a machine set by adding or removing a machine from the machine set. Modify a machine set through the MachineSet YAML configuration file. Delete a machine. Create infrastructure machine sets . Configure and deploy a machine health check to automatically fix damaged machines in a machine pool. 1.3. Applying autoscaling to an OpenShift Container Platform cluster You can automatically scale your OpenShift Container Platform cluster to ensure flexibility for changing workloads. To autoscale your cluster, you must first deploy a cluster autoscaler, and then deploy a machine autoscaler for each compute machine set. The cluster autoscaler increases and decreases the size of the cluster based on deployment needs. The machine autoscaler adjusts the number of machines in the compute machine sets that you deploy in your OpenShift Container Platform cluster. 1.4. Adding compute machines on user-provisioned infrastructure User-provisioned infrastructure is an environment where you can deploy infrastructure such as compute, network, and storage resources that host the OpenShift Container Platform. You can add compute machines to a cluster on user-provisioned infrastructure during or after the installation process. 1.5. Adding RHEL compute machines to your cluster As a cluster administrator, you can perform the following actions: Add Red Hat Enterprise Linux (RHEL) compute machines , also known as worker machines, to a user-provisioned infrastructure cluster or an installation-provisioned infrastructure cluster. Add more Red Hat Enterprise Linux (RHEL) compute machines to an existing cluster. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/machine_management/overview-of-machine-management |
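For a concrete illustration of the autoscaling workflow described in this chapter, each compute machine set is paired with its own MachineAutoscaler resource. The following is a minimal sketch of such a resource; the machine set name and the replica bounds are placeholder assumptions rather than values from this chapter, so substitute the names reported by oc get machinesets -n openshift-machine-api in your cluster.

    apiVersion: autoscaling.openshift.io/v1beta1
    kind: MachineAutoscaler
    metadata:
      name: worker-us-east-1a             # assumed machine set name, replace with your own
      namespace: openshift-machine-api
    spec:
      minReplicas: 1                       # lower bound maintained by the autoscaler
      maxReplicas: 6                       # upper bound for this zone
      scaleTargetRef:
        apiVersion: machine.openshift.io/v1beta1
        kind: MachineSet
        name: worker-us-east-1a            # must match the machine set being scaled

Manual scaling, by contrast, is a single command such as oc scale machineset worker-us-east-1a --replicas=2 -n openshift-machine-api, again with the machine set name adjusted to your environment.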
Chapter 5. Managing user-owned OAuth access tokens | Chapter 5. Managing user-owned OAuth access tokens Users can review their own OAuth access tokens and delete any that are no longer needed. 5.1. Listing user-owned OAuth access tokens You can list your user-owned OAuth access tokens. Token names are not sensitive and cannot be used to log in. Procedure List all user-owned OAuth access tokens: USD oc get useroauthaccesstokens Example output NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token1> openshift-challenging-client 2021-01-11T19:25:35Z 2021-01-12 19:25:35 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/implicit user:full <token2> openshift-browser-client 2021-01-11T19:27:06Z 2021-01-12 19:27:06 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/display user:full <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full List user-owned OAuth access tokens for a particular OAuth client: USD oc get useroauthaccesstokens --field-selector=clientName="console" Example output NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full 5.2. Viewing the details of a user-owned OAuth access token You can view the details of a user-owned OAuth access token. Procedure Describe the details of a user-owned OAuth access token: USD oc describe useroauthaccesstokens <token_name> Example output Name: <token_name> 1 Namespace: Labels: <none> Annotations: <none> API Version: oauth.openshift.io/v1 Authorize Token: sha256~Ksckkug-9Fg_RWn_AUysPoIg-_HqmFI9zUL_CgD8wr8 Client Name: openshift-browser-client 2 Expires In: 86400 3 Inactivity Timeout Seconds: 317 4 Kind: UserOAuthAccessToken Metadata: Creation Timestamp: 2021-01-11T19:27:06Z Managed Fields: API Version: oauth.openshift.io/v1 Fields Type: FieldsV1 fieldsV1: f:authorizeToken: f:clientName: f:expiresIn: f:redirectURI: f:scopes: f:userName: f:userUID: Manager: oauth-server Operation: Update Time: 2021-01-11T19:27:06Z Resource Version: 30535 Self Link: /apis/oauth.openshift.io/v1/useroauthaccesstokens/<token_name> UID: f9d00b67-ab65-489b-8080-e427fa3c6181 Redirect URI: https://oauth-openshift.apps.example.com/oauth/token/display Scopes: user:full 5 User Name: <user_name> 6 User UID: 82356ab0-95f9-4fb3-9bc0-10f1d6a6a345 Events: <none> 1 The token name, which is the sha256 hash of the token. Token names are not sensitive and cannot be used to log in. 2 The client name, which describes where the token originated from. 3 The value in seconds from the creation time before this token expires. 4 If there is a token inactivity timeout set for the OAuth server, this is the value in seconds from the creation time before this token can no longer be used. 5 The scopes for this token. 6 The user name associated with this token. 5.3. Deleting user-owned OAuth access tokens The oc logout command only invalidates the OAuth token for the active session. You can use the following procedure to delete any user-owned OAuth tokens that are no longer needed. Deleting an OAuth access token logs out the user from all sessions that use the token. Procedure Delete the user-owned OAuth access token: USD oc delete useroauthaccesstokens <token_name> Example output useroauthaccesstoken.oauth.openshift.io "<token_name>" deleted 5.4. 
Adding unauthenticated groups to cluster roles As a cluster administrator, you can add unauthenticated users to the following cluster roles in OpenShift Container Platform by creating a cluster role binding. Unauthenticated users do not have access to non-public cluster roles. This should only be done in specific use cases when necessary. You can add unauthenticated users to the following cluster roles: system:scope-impersonation system:webhook system:oauth-token-deleter self-access-reviewer Important Always verify compliance with your organization's security standards when modifying unauthenticated access. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Create a YAML file named add-<cluster_role>-unauth.yaml and add the following content: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" name: <cluster_role>access-unauthenticated roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <cluster_role> subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:unauthenticated Apply the configuration by running the following command: USD oc apply -f add-<cluster_role>.yaml | [
"oc get useroauthaccesstokens",
"NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token1> openshift-challenging-client 2021-01-11T19:25:35Z 2021-01-12 19:25:35 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/implicit user:full <token2> openshift-browser-client 2021-01-11T19:27:06Z 2021-01-12 19:27:06 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/display user:full <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full",
"oc get useroauthaccesstokens --field-selector=clientName=\"console\"",
"NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full",
"oc describe useroauthaccesstokens <token_name>",
"Name: <token_name> 1 Namespace: Labels: <none> Annotations: <none> API Version: oauth.openshift.io/v1 Authorize Token: sha256~Ksckkug-9Fg_RWn_AUysPoIg-_HqmFI9zUL_CgD8wr8 Client Name: openshift-browser-client 2 Expires In: 86400 3 Inactivity Timeout Seconds: 317 4 Kind: UserOAuthAccessToken Metadata: Creation Timestamp: 2021-01-11T19:27:06Z Managed Fields: API Version: oauth.openshift.io/v1 Fields Type: FieldsV1 fieldsV1: f:authorizeToken: f:clientName: f:expiresIn: f:redirectURI: f:scopes: f:userName: f:userUID: Manager: oauth-server Operation: Update Time: 2021-01-11T19:27:06Z Resource Version: 30535 Self Link: /apis/oauth.openshift.io/v1/useroauthaccesstokens/<token_name> UID: f9d00b67-ab65-489b-8080-e427fa3c6181 Redirect URI: https://oauth-openshift.apps.example.com/oauth/token/display Scopes: user:full 5 User Name: <user_name> 6 User UID: 82356ab0-95f9-4fb3-9bc0-10f1d6a6a345 Events: <none>",
"oc delete useroauthaccesstokens <token_name>",
"useroauthaccesstoken.oauth.openshift.io \"<token_name>\" deleted",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"true\" name: <cluster_role>access-unauthenticated roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <cluster_role> subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:unauthenticated",
"oc apply -f add-<cluster_role>.yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/authentication_and_authorization/managing-oauth-access-tokens |
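As a follow-on to the listing and deletion commands in this chapter, the field selector shown for oc get can also be combined with oc delete to revoke every token that a single OAuth client issued to you. The console client name is carried over from the earlier examples, and the availability of --field-selector on oc delete for this resource is an assumption to verify before relying on it:

    oc delete useroauthaccesstokens --field-selector=clientName="console"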
Chapter 2. Importing a custom workbench image | Chapter 2. Importing a custom workbench image In addition to workbench images provided and supported by Red Hat and independent software vendors (ISVs), you can import custom workbench images that cater to your project's specific requirements. You must import it so that your OpenShift AI users (data scientists) can access it when they create a project workbench. Red Hat supports adding custom workbench images to your deployment of OpenShift AI, ensuring that they are available for selection when creating a workbench. However, Red Hat does not support the contents of your custom workbench image. That is, if your custom workbench image is available for selection during workbench creation, but does not create a usable workbench, Red Hat does not provide support to fix your custom workbench image. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. Your custom image exists in an image registry that is accessible to OpenShift AI. The Settings Notebook images dashboard navigation menu option is enabled, as described in Enabling custom workbench images in OpenShift AI . If you want to associate an accelerator with the custom image that you want to import, you know the accelerator's identifier - the unique string that identifies the hardware accelerator. You must also have enabled GPU support in OpenShift AI. This includes installing the Node Feature Discovery operator and NVIDIA GPU Operators. For more information, see Installing the Node Feature Discovery operator and Enabling NVIDIA GPUs . Procedure From the OpenShift AI dashboard, click Settings Notebook images . The Notebook images page appears. Previously imported images are displayed. To enable or disable a previously imported image, on the row containing the relevant image, click the toggle in the Enable column. Optional: If you want to associate an accelerator and you have not already created an accelerator profile, click Create profile on the row containing the image and complete the relevant fields. If the image does not contain an accelerator identifier, you must manually configure one before creating an associated accelerator profile. Click Import new image . Alternatively, if no previously imported images were found, click Import image . The Import Notebook images dialog appears. In the Image location field, enter the URL of the repository containing the image. For example: quay.io/my-repo/my-image:tag , quay.io/my-repo/my-image@sha256:xxxxxxxxxxxxx , or docker.io/my-repo/my-image:tag . In the Name field, enter an appropriate name for the image. Optional: In the Description field, enter a description for the image. Optional: From the Accelerator identifier list, select an identifier to set its accelerator as recommended with the image. If the image contains only one accelerator identifier, the identifier name displays by default. Optional: Add software to the image. After the import has completed, the software is added to the image's meta-data and displayed on the Jupyter server creation page. Click the Software tab. Click the Add software button. Click Edit ( ). Enter the Software name. Enter the software Version . Click Confirm ( ) to confirm your entry. To add additional software, click Add software , complete the relevant fields, and confirm your entry. Optional: Add packages to the notebook images. After the import has completed, the packages are added to the image's meta-data and displayed on the Jupyter server creation page. 
Click the Packages tab. Click the Add package button. Click Edit ( ). Enter the Package name. For example, if you want to include data science pipeline V2 automatically, as a runtime configuration, type odh-elyra . Enter the package Version . For example, type 3.16.7 . Click Confirm ( ) to confirm your entry. To add an additional package, click Add package , complete the relevant fields, and confirm your entry. Click Import . Verification The image that you imported is displayed in the table on the Notebook images page. Your custom image is available for selection when a user creates a workbench. Additional resources Managing image streams Understanding build configurations | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/managing_resources/importing-a-custom-workbench-image_resource-mgmt |
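If you also want to confirm the import from the command line rather than only from the dashboard table, a custom workbench image is typically represented by an image stream in the OpenShift AI applications project. The namespace in this sketch is an assumption based on a default deployment and may differ in your installation:

    oc get imagestreams -n redhat-ods-applications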
32.3.2. Displaying the Message Buffer To display the kernel message buffer, type the log command at the interactive prompt. Example 32.3. Displaying the kernel message buffer Type help log for more information on the command usage. Note The kernel message buffer includes the most essential information about the system crash and, as such, it is always dumped first into the vmcore-dmesg.txt file. This is useful when an attempt to get the full vmcore file failed, for example because of a lack of space on the target location. By default, vmcore-dmesg.txt is located in the /var/crash/ directory. | [
"crash> log ... several lines omitted EIP: 0060:[<c068124f>] EFLAGS: 00010096 CPU: 2 EIP is at sysrq_handle_crash+0xf/0x20 EAX: 00000063 EBX: 00000063 ECX: c09e1c8c EDX: 00000000 ESI: c0a09ca0 EDI: 00000286 EBP: 00000000 ESP: ef4dbf24 DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068 Process bash (pid: 5591, ti=ef4da000 task=f196d560 task.ti=ef4da000) Stack: c068146b c0960891 c0968653 00000003 00000000 00000002 efade5c0 c06814d0 <0> fffffffb c068150f b7776000 f2600c40 c0569ec4 ef4dbf9c 00000002 b7776000 <0> efade5c0 00000002 b7776000 c0569e60 c051de50 ef4dbf9c f196d560 ef4dbfb4 Call Trace: [<c068146b>] ? __handle_sysrq+0xfb/0x160 [<c06814d0>] ? write_sysrq_trigger+0x0/0x50 [<c068150f>] ? write_sysrq_trigger+0x3f/0x50 [<c0569ec4>] ? proc_reg_write+0x64/0xa0 [<c0569e60>] ? proc_reg_write+0x0/0xa0 [<c051de50>] ? vfs_write+0xa0/0x190 [<c051e8d1>] ? sys_write+0x41/0x70 [<c0409adc>] ? syscall_call+0x7/0xb Code: a0 c0 01 0f b6 41 03 19 d2 f7 d2 83 e2 03 83 e0 cf c1 e2 04 09 d0 88 41 03 f3 c3 90 c7 05 c8 1b 9e c0 01 00 00 00 0f ae f8 89 f6 <c6> 05 00 00 00 00 01 c3 89 f6 8d bc 27 00 00 00 00 8d 50 d0 83 EIP: [<c068124f>] sysrq_handle_crash+0xf/0x20 SS:ESP 0068:ef4dbf24 CR2: 0000000000000000"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-kdump-crash-log |
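Because the note in this section explains that the kernel message buffer is also saved to vmcore-dmesg.txt, you can read the same oops output without starting the crash utility at all. The timestamped directory name in this sketch is an assumption; kdump names the directory after the crashed host address and the time of the crash:

    # Read the dumped kernel message buffer directly
    less /var/crash/127.0.0.1-2017-06-06-12:00:01/vmcore-dmesg.txt

    # Or list every saved dump that contains the panic signature
    grep -l sysrq_handle_crash /var/crash/*/vmcore-dmesg.txt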
12.2. Setting and Removing Cluster Properties To set the value of a cluster property, use the following pcs command. For example, to set the value of symmetric-cluster to false , use the following command. You can remove a cluster property from the configuration with the following command. Alternatively, you can remove a cluster property from a configuration by leaving the value field of the pcs property set command blank. This restores that property to its default value. For example, if you have previously set the symmetric-cluster property to false , the following command removes the value you have set from the configuration and restores the value of symmetric-cluster to true , which is its default value. | [
"pcs property set property = value",
"pcs property set symmetric-cluster=false",
"pcs property unset property",
"pcs property set symmetic-cluster="
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-setremoveclusterprops-HAAR |
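To confirm the effect of a set or unset operation, print the property back out. The grep filter is only a convenience; pcs property list --all also shows the defaults of properties that you have never set explicitly:

    pcs property list --all | grep symmetric-cluster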
Appendix B. The cephadm commands | Appendix B. The cephadm commands The cephadm is a command line tool to manage the local host for the Cephadm Orchestrator. It provides commands to investigate and modify the state of the current host. Some of the commands are generally used for debugging. Note cephadm is not required on all hosts, however, it is useful when investigating a particular daemon. The cephadm-ansible-preflight playbook installs cephadm on all hosts and the cephadm-ansible purge playbook requires cephadm be installed on all hosts to work properly. adopt Description Convert an upgraded storage cluster daemon to run cephadm . Syntax Example ceph-volume Description This command is used to list all the devices on the particular host. Run the ceph-volume command inside a container Deploys OSDs with different device technologies like lvm or physical disks using pluggable tools and follows a predictable, and robust way of preparing, activating, and starting OSDs. Syntax Example check-host Description Check the host configuration that is suitable for a Ceph cluster. Syntax Example deploy Description Deploys a daemon on the local host. Syntax Example enter Description Run an interactive shell inside a running daemon container. Syntax Example help Description View all the commands supported by cephadm . Syntax Example install Description Install the packages. Syntax Example inspect-image Description Inspect the local Ceph container image. Syntax Example list-networks Description List the IP networks. Syntax Example ls Description List daemon instances known to cephadm on the hosts. You can use --no-detail for the command to run faster, which gives details of the daemon name, fsid, style, and systemd unit per daemon. You can use --legacy-dir option to specify a legacy base directory to search for daemons. Syntax Example logs Description Print journald logs for a daemon container. This is similar to the journalctl command. Syntax Example prepare-host Description Prepare a host for cephadm . Syntax Example pull Description Pull the Ceph image. Syntax Example registry-login Description Give cephadm login information for an authenticated registry. Cephadm attempts to log the calling host into that registry. Syntax Example You can also use a JSON registry file containing the login info formatted as: Syntax Example rm-daemon Description Remove a specific daemon instance. If you run the cephadm rm-daemon command on the host directly, although the command removes the daemon, the cephadm mgr module notices that the daemon is missing and redeploys it. This command is problematic and should be used only for experimental purposes and debugging. Syntax Example rm-cluster Description Remove all the daemons from a storage cluster on that specific host where it is run. Similar to rm-daemon , if you remove a few daemons this way and the Ceph Orchestrator is not paused and some of those daemons belong to services that are not unmanaged, the cephadm orchestrator just redeploys them there. Syntax Example Important To better clean up the node as part of performing the cluster removal, cluster logs under /var/log/ceph directory are deleted when cephadm rm-cluster command is run. The cluster logs are removed as long as --keep-logs is not passed to the rm-cluster command. 
Note If the cephadm rm-cluster command is run on a host that is part of an existing cluster where the host is managed by Cephadm and the Cephadm Manager module is still enabled and running, then Cephadm might immediately start deploying new daemons, and more logs could appear. To avoid this, disable the cephadm mgr module before purging the cluster. rm-repo Description Remove a package repository configuration. This is mainly used for the disconnected installation of Red Hat Ceph Storage. Syntax Example run Description Run a Ceph daemon, in a container, in the foreground. Syntax Example shell Description Run an interactive shell with access to Ceph commands over the inferred or specified Ceph cluster. You can enter the shell using the cephadm shell command and run all the orchestrator commands within the shell. Syntax Example unit Description Start, stop, restart, enable, and disable the daemons with this operation. This operates on the daemon's systemd unit. Syntax Example version Description Provides the version of the storage cluster. Syntax Example | [
"cephadm adopt [-h] --name DAEMON_NAME --style STYLE [--cluster CLUSTER ] --legacy-dir [ LEGACY_DIR ] --config-json CONFIG_JSON ] [--skip-firewalld] [--skip-pull]",
"cephadm adopt --style=legacy --name prometheus.host02",
"cephadm ceph-volume inventory/simple/raw/lvm [-h] [--fsid FSID ] [--config-json CONFIG_JSON ] [--config CONFIG , -c CONFIG ] [--keyring KEYRING , -k KEYRING ]",
"cephadm ceph-volume inventory --fsid f64f341c-655d-11eb-8778-fa163e914bcc",
"cephadm check-host [--expect-hostname HOSTNAME ]",
"cephadm check-host --expect-hostname host02",
"cephadm shell deploy DAEMON_TYPE [-h] [--name DAEMON_NAME ] [--fsid FSID ] [--config CONFIG , -c CONFIG ] [--config-json CONFIG_JSON ] [--keyring KEYRING ] [--key KEY ] [--osd-fsid OSD_FSID ] [--skip-firewalld] [--tcp-ports TCP_PORTS ] [--reconfig] [--allow-ptrace] [--memory-request MEMORY_REQUEST ] [--memory-limit MEMORY_LIMIT ] [--meta-json META_JSON ]",
"cephadm shell deploy mon --fsid f64f341c-655d-11eb-8778-fa163e914bcc",
"cephadm enter [-h] [--fsid FSID ] --name NAME [command [command ...]]",
"cephadm enter --name 52c611f2b1d9",
"cephadm help",
"cephadm help",
"cephadm install PACKAGES",
"cephadm install ceph-common ceph-osd",
"cephadm --image IMAGE_ID inspect-image",
"cephadm --image 13ea90216d0be03003d12d7869f72ad9de5cec9e54a27fd308e01e467c0d4a0a inspect-image",
"cephadm list-networks",
"cephadm list-networks",
"cephadm ls [--no-detail] [--legacy-dir LEGACY_DIR ]",
"cephadm ls --no-detail",
"cephadm logs [--fsid FSID ] --name DAEMON_NAME cephadm logs [--fsid FSID ] --name DAEMON_NAME -- -n NUMBER # Last N lines cephadm logs [--fsid FSID ] --name DAEMON_NAME -- -f # Follow the logs",
"cephadm logs --fsid 57bddb48-ee04-11eb-9962-001a4a000672 --name osd.8 cephadm logs --fsid 57bddb48-ee04-11eb-9962-001a4a000672 --name osd.8 -- -n 20 cephadm logs --fsid 57bddb48-ee04-11eb-9962-001a4a000672 --name osd.8 -- -f",
"cephadm prepare-host [--expect-hostname HOSTNAME ]",
"cephadm prepare-host cephadm prepare-host --expect-hostname host01",
"cephadm [-h] [--image IMAGE_ID ] pull",
"cephadm --image 13ea90216d0be03003d12d7869f72ad9de5cec9e54a27fd308e01e467c0d4a0a pull",
"cephadm registry-login --registry-url [ REGISTRY_URL ] --registry-username [ USERNAME ] --registry-password [ PASSWORD ] [--fsid FSID ] [--registry-json JSON_FILE ]",
"cephadm registry-login --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1",
"cat REGISTRY_FILE { \"url\":\" REGISTRY_URL \", \"username\":\" REGISTRY_USERNAME \", \"password\":\" REGISTRY_PASSWORD \" }",
"cat registry_file { \"url\":\"registry.redhat.io\", \"username\":\"myuser\", \"password\":\"mypass\" } cephadm registry-login -i registry_file",
"cephadm rm-daemon [--fsid FSID ] [--name DAEMON_NAME ] [--force ] [--force-delete-data]",
"cephadm rm-daemon --fsid f64f341c-655d-11eb-8778-fa163e914bcc --name osd.8",
"cephadm rm-cluster [--fsid FSID ] [--force]",
"cephadm rm-cluster --fsid f64f341c-655d-11eb-8778-fa163e914bcc",
"ceph mgr module disable cephadm",
"cephadm rm-repo [-h]",
"cephadm rm-repo",
"cephadm run [--fsid FSID ] --name DAEMON_NAME",
"cephadm run --fsid f64f341c-655d-11eb-8778-fa163e914bcc --name osd.8",
"cephadm shell [--fsid FSID ] [--name DAEMON_NAME , -n DAEMON_NAME ] [--config CONFIG , -c CONFIG ] [--mount MOUNT , -m MOUNT ] [--keyring KEYRING , -k KEYRING ] [--env ENV , -e ENV ]",
"cephadm shell -- ceph orch ls cephadm shell",
"cephadm unit [--fsid FSID ] --name DAEMON_NAME start/stop/restart/enable/disable",
"cephadm unit --fsid f64f341c-655d-11eb-8778-fa163e914bcc --name osd.8 start",
"cephadm version",
"cephadm version"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/installation_guide/the-cephadm-commands_install |
Chapter 71. KafkaConnectSpec schema reference | Chapter 71. KafkaConnectSpec schema reference Used in: KafkaConnect Full list of KafkaConnectSpec schema properties Configures a Kafka Connect cluster. 71.1. config Use the config properties to configure Kafka Connect options as keys. The values can be one of the following JSON types: String Number Boolean Certain options have default values: group.id with default value connect-cluster offset.storage.topic with default value connect-cluster-offsets config.storage.topic with default value connect-cluster-configs status.storage.topic with default value connect-cluster-status key.converter with default value org.apache.kafka.connect.json.JsonConverter value.converter with default value org.apache.kafka.connect.json.JsonConverter These options are automatically configured in case they are not present in the KafkaConnect.spec.config properties. Exceptions You can specify and configure the options listed in the Apache Kafka documentation . However, Streams for Apache Kafka takes care of configuring and managing options related to the following, which cannot be changed: Kafka cluster bootstrap address Security (encryption, authentication, and authorization) Listener and REST interface configuration Plugin path configuration Properties with the following prefixes cannot be set: bootstrap.servers consumer.interceptor.classes listeners. plugin.path producer.interceptor.classes rest. sasl. security. ssl. If the config property contains an option that cannot be changed, it is disregarded, and a warning message is logged to the Cluster Operator log file. All other supported options are forwarded to Kafka Connect, including the following exceptions to the options configured by Streams for Apache Kafka: Any ssl configuration for supported TLS versions and cipher suites Example Kafka Connect configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # ... config: group.id: my-connect-cluster offset.storage.topic: my-connect-cluster-offsets config.storage.topic: my-connect-cluster-configs status.storage.topic: my-connect-cluster-status key.converter: org.apache.kafka.connect.json.JsonConverter value.converter: org.apache.kafka.connect.json.JsonConverter key.converter.schemas.enable: true value.converter.schemas.enable: true config.storage.replication.factor: 3 offset.storage.replication.factor: 3 status.storage.replication.factor: 3 # ... Important The Cluster Operator does not validate keys or values in the config object provided. If an invalid configuration is provided, the Kafka Connect cluster might not start or might become unstable. In this case, fix the configuration so that the Cluster Operator can roll out the new configuration to all Kafka Connect nodes. 71.2. logging Kafka Connect has its own configurable loggers: connect.root.logger.level log4j.logger.org.reflections Further loggers are added depending on the Kafka Connect plugins running. Use a curl request to get a complete list of Kafka Connect loggers running from any Kafka broker pod: curl -s http://<connect-cluster-name>-connect-api:8083/admin/loggers/ Kafka Connect uses the Apache log4j logger implementation. Use the logging property to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. 
Inside the ConfigMap, the logging configuration is described using log4j.properties . Both logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. For more information about log levels, see Apache logging services . Here we see examples of inline and external logging. The inline logging specifies the root logger level. You can also set log levels for specific classes or loggers by adding them to the loggers property. Inline logging apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect spec: # ... logging: type: inline loggers: connect.root.logger.level: INFO log4j.logger.org.apache.kafka.connect.runtime.WorkerSourceTask: TRACE log4j.logger.org.apache.kafka.connect.runtime.WorkerSinkTask: DEBUG # ... Note Setting a log level to DEBUG may result in a large amount of log output and may have performance implications. External logging apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect spec: # ... logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: connect-logging.log4j # ... Any available loggers that are not configured have their level set to OFF . If Kafka Connect was deployed using the Cluster Operator, changes to Kafka Connect logging levels are applied dynamically. If you use external logging, a rolling update is triggered when logging appenders are changed. Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . 71.3. KafkaConnectSpec schema properties Property Property type Description version string The Kafka Connect version. Defaults to the latest version. Consult the user documentation to understand the process required to upgrade or downgrade the version. replicas integer The number of pods in the Kafka Connect group. Defaults to 3 . image string The container image used for Kafka Connect pods. If no image name is explicitly specified, it is determined based on the spec.version configuration. The image names are specifically mapped to corresponding versions in the Cluster Operator configuration. bootstrapServers string Bootstrap servers to connect to. This should be given as a comma separated list of <hostname> :_<port>_ pairs. tls ClientTls TLS configuration. authentication KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha256 , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth Authentication configuration for Kafka Connect. config map The Kafka Connect configuration. Properties with the following prefixes cannot be set: ssl., sasl., security., listeners, plugin.path, rest., bootstrap.servers, consumer.interceptor.classes, producer.interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). resources ResourceRequirements The maximum limits for CPU and memory resources and the requested initial resources. livenessProbe Probe Pod liveness checking. readinessProbe Probe Pod readiness checking. jvmOptions JvmOptions JVM Options for pods. jmxOptions KafkaJmxOptions JMX Options. logging InlineLogging , ExternalLogging Logging configuration for Kafka Connect. 
clientRackInitImage string The image of the init container used for initializing the client.rack . rack Rack Configuration of the node label which will be used as the client.rack consumer configuration. tracing JaegerTracing , OpenTelemetryTracing The configuration of tracing in Kafka Connect. template KafkaConnectTemplate Template for Kafka Connect and Kafka Mirror Maker 2 resources. The template allows users to specify how the Pods , Service , and other services are generated. externalConfiguration ExternalConfiguration Pass data from Secrets or ConfigMaps to the Kafka Connect pods and use them to configure connectors. build Build Configures how the Connect container image should be built. Optional. metricsConfig JmxPrometheusExporterMetrics Metrics configuration. | [
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # config: group.id: my-connect-cluster offset.storage.topic: my-connect-cluster-offsets config.storage.topic: my-connect-cluster-configs status.storage.topic: my-connect-cluster-status key.converter: org.apache.kafka.connect.json.JsonConverter value.converter: org.apache.kafka.connect.json.JsonConverter key.converter.schemas.enable: true value.converter.schemas.enable: true config.storage.replication.factor: 3 offset.storage.replication.factor: 3 status.storage.replication.factor: 3 #",
"curl -s http://<connect-cluster-name>-connect-api:8083/admin/loggers/",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect spec: # logging: type: inline loggers: connect.root.logger.level: INFO log4j.logger.org.apache.kafka.connect.runtime.WorkerSourceTask: TRACE log4j.logger.org.apache.kafka.connect.runtime.WorkerSinkTask: DEBUG #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect spec: # logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: connect-logging.log4j #"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaConnectSpec-reference |
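The external logging example in the Kafka Connect section above points logging.valueFrom.configMapKeyRef at a ConfigMap key holding a log4j.properties file, but it does not show how that ConfigMap is created. The sketch below is one way to do it with oc; the ConfigMap name, namespace, and logger levels are hypothetical (Kubernetes object names must be lowercase, so custom-connect-logging is used instead of the customConfigMap placeholder), and the log4j layout is a minimal example rather than the distribution default.

# Minimal sketch: write an example log4j.properties and store it under the key
# that the KafkaConnect resource references. Names and levels are examples only.
cat <<'EOF' > connect-logging.log4j
log4j.rootLogger=INFO, CONSOLE
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} %p %m (%c) [%t]%n
log4j.logger.org.apache.kafka.connect.runtime.WorkerSinkTask=DEBUG
EOF
oc create configmap custom-connect-logging \
  --from-file=connect-logging.log4j -n my-kafka-namespace

The KafkaConnect resource's configMapKeyRef.name and key fields would then reference custom-connect-logging and connect-logging.log4j respectively.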
Appendix A. Revision History | Appendix A. Revision History Revision History Revision 6.4.0-20 Thu Jun 06 2017 David Le Sage Updated for 6.4. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_5_caching_guide/appe-revision_history |
Chapter 8. Configuring OpenShift Serverless Functions | Chapter 8. Configuring OpenShift Serverless Functions To improve the process of deployment of your application code, you can use OpenShift Serverless to deploy stateless, event-driven functions as a Knative service on OpenShift Container Platform. If you want to develop functions, you must complete the set up steps. 8.1. Prerequisites To enable the use of OpenShift Serverless Functions on your cluster, you must complete the following steps: The OpenShift Serverless Operator and Knative Serving are installed on your cluster. Note Functions are deployed as a Knative service. If you want to use event-driven architecture with your functions, you must also install Knative Eventing. You have the oc CLI installed. You have the Knative ( kn ) CLI installed. Installing the Knative CLI enables the use of kn func commands which you can use to create and manage functions. You have installed Docker Container Engine or Podman version 3.4.7 or higher. You have access to an available image registry, such as the OpenShift Container Registry. If you are using Quay.io as the image registry, you must ensure that either the repository is not private, or that you have followed the OpenShift Container Platform documentation on Allowing pods to reference images from other secured registries . If you are using the OpenShift Container Registry, a cluster administrator must expose the registry . 8.2. Setting up Podman To use advanced container management features, you might want to use Podman with OpenShift Serverless Functions. To do so, you need to start the Podman service and configure the Knative ( kn ) CLI to connect to it. Procedure Start the Podman service that serves the Docker API on a UNIX socket at USD{XDG_RUNTIME_DIR}/podman/podman.sock : USD systemctl start --user podman.socket Note On most systems, this socket is located at /run/user/USD(id -u)/podman/podman.sock . Establish the environment variable that is used to build a function: USD export DOCKER_HOST="unix://USD{XDG_RUNTIME_DIR}/podman/podman.sock" Run the build command inside your function project directory with the -v flag to see verbose output. You should see a connection to your local UNIX socket: USD kn func build -v 8.3. Setting up Podman on macOS To use advanced container management features, you might want to use Podman with OpenShift Serverless Functions. To do so on macOS, you need to start the Podman machine and configure the Knative ( kn ) CLI to connect to it. Procedure Create the Podman machine: USD podman machine init --memory=8192 --cpus=2 --disk-size=20 Start the Podman machine, which serves the Docker API on a UNIX socket: USD podman machine start Starting machine "podman-machine-default" Waiting for VM ... Mounting volume... /Users/myuser:/Users/user [...truncated output...] You can still connect Docker API clients by setting DOCKER_HOST using the following command in your terminal session: export DOCKER_HOST='unix:///Users/myuser/.local/share/containers/podman/machine/podman-machine-default/podman.sock' Machine "podman-machine-default" started successfully Note On most macOS systems, this socket is located at /Users/myuser/.local/share/containers/podman/machine/podman-machine-default/podman.sock . 
Establish the environment variable that is used to build a function: USD export DOCKER_HOST='unix:///Users/myuser/.local/share/containers/podman/machine/podman-machine-default/podman.sock' Run the build command inside your function project directory with the -v flag to see verbose output. You should see a connection to your local UNIX socket: USD kn func build -v 8.4. Next steps For more information about Docker Container Engine or Podman, see Container build tool options . See Getting started with functions . | [
"systemctl start --user podman.socket",
"export DOCKER_HOST=\"unix://USD{XDG_RUNTIME_DIR}/podman/podman.sock\"",
"kn func build -v",
"podman machine init --memory=8192 --cpus=2 --disk-size=20",
"podman machine start Starting machine \"podman-machine-default\" Waiting for VM Mounting volume... /Users/myuser:/Users/user [...truncated output...] You can still connect Docker API clients by setting DOCKER_HOST using the following command in your terminal session: export DOCKER_HOST='unix:///Users/myuser/.local/share/containers/podman/machine/podman-machine-default/podman.sock' Machine \"podman-machine-default\" started successfully",
"export DOCKER_HOST='unix:///Users/myuser/.local/share/containers/podman/machine/podman-machine-default/podman.sock'",
"kn func build -v"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/installing_openshift_serverless/configuring-serverless-functions |
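The Podman sections above assume that the socket is serving the Docker-compatible API before kn func build is run. A quick optional check is sketched below; it assumes the standard Docker-compatible _ping endpoint is exposed by the Podman service, which answers OK when healthy.

# Confirm the user socket unit is active, then probe the Docker-compatible API
# through the socket. The hostname after http:// is ignored when --unix-socket is used.
systemctl --user status podman.socket
curl --silent --unix-socket "${XDG_RUNTIME_DIR}/podman/podman.sock" http://d/_ping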
Managing configurations by using Ansible integration | Managing configurations by using Ansible integration Red Hat Satellite 6.16 Configure Ansible integration in Satellite and use Ansible roles and playbooks to configure your hosts Red Hat Satellite Documentation Team [email protected] | [
"satellite-installer --enable-foreman-proxy-plugin-ansible",
"subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms",
"subscription-manager repos --enable=rhel-7-server-extras-rpms",
"satellite-maintain packages install rhel-system-roles",
"--- collections: - name: my_namespace.my_collection version: 1.2.3",
"ansible-vault encrypt /etc/ansible/roles/ Role_Name /vars/main.yml",
"chgrp foreman-proxy /etc/ansible/roles/ Role_Name /vars/main.yml chmod 0640 /etc/ansible/roles/ Role_Name /vars/main.yml",
"chown foreman-proxy:foreman-proxy /usr/share/foreman-proxy/.ansible_vault_password chmod 0400 /usr/share/foreman-proxy/.ansible_vault_password",
"[defaults] vault_password_file = /usr/share/foreman-proxy/.ansible_vault_password",
"name = Reboot and host.name = staging.example.com name = Reboot and host.name ~ *.staging.example.com name = \"Restart service\" and host_group.name = webservers",
"satellite-installer --foreman-proxy-plugin-remote-execution-script-mode=ssh",
"dnf install katello-pull-transport-migrate",
"yum install katello-pull-transport-migrate",
"systemctl status yggdrasild",
"hammer job-template create --file \" Path_to_My_Template_File \" --job-category \" My_Category_Name \" --name \" My_Template_Name \" --provider-type SSH",
"curl --header 'Content-Type: application/json' --request GET https:// satellite.example.com /ansible/api/v2/ansible_playbooks/fetch?proxy_id= My_capsule_ID",
"curl --data '{ \"playbook_names\": [\" My_Playbook_Name \"] }' --header 'Content-Type: application/json' --request PUT https:// satellite.example.com /ansible/api/v2/ansible_playbooks/sync?proxy_id= My_capsule_ID",
"curl -X PUT -H 'Content-Type: application/json' https:// satellite.example.com /ansible/api/v2/ansible_playbooks/sync?proxy_id= My_capsule_ID",
"hammer settings set --name=remote_execution_fallback_proxy --value=true",
"hammer settings set --name=remote_execution_global_proxy --value=true",
"mkdir /My_Remote_Working_Directory",
"chcon --reference=/tmp /My_Remote_Working_Directory",
"satellite-installer --foreman-proxy-plugin-ansible-working-dir /My_Remote_Working_Directory",
"ssh-copy-id -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy.pub [email protected]",
"ssh -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy [email protected]",
"ssh-keygen -p -f ~foreman-proxy/.ssh/id_rsa_foreman_proxy",
"mkdir ~/.ssh",
"curl https:// capsule.example.com :9090/ssh/pubkey >> ~/.ssh/authorized_keys",
"chmod 700 ~/.ssh",
"chmod 600 ~/.ssh/authorized_keys",
"<%= snippet 'remote_execution_ssh_keys' %>",
"id -u foreman-proxy",
"umask 077",
"mkdir -p \"/var/kerberos/krb5/user/ My_User_ID \"",
"cp My_Client.keytab /var/kerberos/krb5/user/ My_User_ID /client.keytab",
"chown -R foreman-proxy:foreman-proxy \"/var/kerberos/krb5/user/ My_User_ID \"",
"chmod -wx \"/var/kerberos/krb5/user/ My_User_ID /client.keytab\"",
"restorecon -RvF /var/kerberos/krb5",
"satellite-installer --foreman-proxy-plugin-remote-execution-script-ssh-kerberos-auth true",
"hostgroup_fullname ~ \" My_Host_Group *\"",
"hammer settings set --name=remote_execution_global_proxy --value=false",
"hammer job-template list",
"hammer job-template info --id My_Template_ID",
"hammer job-invocation create --inputs My_Key_1 =\" My_Value_1 \", My_Key_2 =\" My_Value_2 \",... --job-template \" My_Template_Name \" --search-query \" My_Search_Query \"",
"satellite-installer --foreman-proxy-plugin-remote-execution-script-mqtt-rate-limit MAX_JOBS_NUMBER",
"satellite-installer --foreman-proxy-plugin-remote-execution-script-mqtt-rate-limit 200",
"systemctl start ansible-callback",
"systemctl status ansible-callback",
"satellite.example.com systemd[1]: Started Provisioning callback to Ansible Automation Controller",
"curl --data curl --data host_config_key= My_Host_Config_Key --insecure --show-error --silent https:// controller.example.com /api/v2/job_templates/ 8 /callback/",
"<%= render_template 'Package Action - SSH Default', :action => 'install', :package => 'nginx' %> <%= render_template 'Service Action - SSH Default', :action => 'start', :service_name => 'nginx' %>",
"<%= render_template 'Package Action - SSH Default', :action => 'install', :package => input(\"package\") %>",
"restorecon -RvF <%= input(\"directory\") %>",
"<%= render_template(\"Run Command - restorecon\", :directory => \"/home\") %>",
"<%= render_template(\"Power Action - SSH Default\", :action => \"restart\") %>"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html-single/managing_configurations_by_using_ansible_integration/index |
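Several of the commands listed above (ansible-vault encrypt and the chgrp/chmod adjustments) assume that a role variables file already exists. A minimal end-to-end sketch follows; the role name and the variable it stores are hypothetical examples, not values from the guide.

# Create an example variables file for a role, encrypt it, and keep it readable
# by the Capsule's foreman-proxy user. Role and variable names are placeholders.
mkdir -p /etc/ansible/roles/my_role/vars
cat <<'EOF' > /etc/ansible/roles/my_role/vars/main.yml
my_service_password: "changeme"
EOF
ansible-vault encrypt /etc/ansible/roles/my_role/vars/main.yml
chgrp foreman-proxy /etc/ansible/roles/my_role/vars/main.yml
chmod 0640 /etc/ansible/roles/my_role/vars/main.yml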
Preface | Preface Red Hat Quay is an enterprise-quality container registry. Use Red Hat Quay to build and store container images, then make them available to deploy across your enterprise. The Red Hat Quay Operator provides a simple method to deploy and manage Red Hat Quay on an OpenShift cluster. With the release of Red Hat Quay 3.4.0, the Red Hat Quay Operator was re-written to offer an enhanced experience and to add more support for Day 2 operations. As a result, the Red Hat Quay Operator is now simpler to use and is more opinionated. The key differences from versions prior to Red Hat Quay 3.4.0 include the following: The QuayEcosystem custom resource has been replaced with the QuayRegistry custom resource. The default installation options produce a fully supported Red Hat Quay environment, with all managed dependencies, such as database, caches, object storage, and so on, supported for production use. Note Some components might not be highly available. A new validation library for Red Hat Quay's configuration. Object storage can now be managed by the Red Hat Quay Operator using the ObjectBucketClaim Kubernetes API. Note Red Hat OpenShift Data Foundation can be used to provide a supported implementation of this API on OpenShift Container Platform. Customization of the container images used by deployed pods for testing and development scenarios. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/deploying_the_red_hat_quay_operator_on_openshift_container_platform/pr01 |
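Because the preface above notes that the QuayEcosystem resource was replaced by QuayRegistry and that the default options produce a fully managed deployment, a nearly empty custom resource is enough to request a registry. The sketch below is an assumption-laden illustration: the API group/version, name, and namespace should be checked against the CRDs installed by your Operator version.

# Hypothetical minimal QuayRegistry; with no components listed, the Operator
# manages all supported components (database, caches, object storage, and so on).
oc apply -f - <<'EOF'
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
EOF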
7.2. Starting and Stopping a Cluster | 7.2. Starting and Stopping a Cluster To stop a cluster, use the following ccs command, which stops cluster services on all nodes in the cluster: To start a cluster that is not running, use the following ccs command, which starts cluster services on all nodes in the cluster: When you use the --startall option of the ccs command to start a cluster, the command automatically enables the cluster resources. For some configurations, such as when services have been intentionally disabled on one node to prevent fence loops, you may not want to enable the services on that node. As of the Red Hat Enterprise Linux 6.6 release, you can use the --noenable option of the ccs --startall command to prevent the services from being enabled: | [
"ccs -h host --stopall",
"ccs -h host --startall",
"ccs -h host --startall --noenable"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-admin-start-ccs-ca |
Chapter 2. Get started using the Insights for RHEL malware detection service | Chapter 2. Get started using the Insights for RHEL malware detection service To begin using the malware detection service, you must perform the following actions. Procedures for each action follow in this chapter. Note Some procedures require sudo access on the system and others require that the administrator performing the actions be a member of a User Access group with the Malware detection administrator role. Table 2.1. Procedure and access requirements to set up malware detection service. Action Description Required privileges Install YARA and configure the Insights client Install the YARA application and configure the Insights client to use the malware detection service Sudo access Configure User Access on the Red Hat Hybrid Cloud Console In Red Hat Hybrid Cloud Console > the Settings icon (β) > Identity & Access Management > User Access > Groups , create malware detection groups, and then add the appropriate roles and members to the groups Organization Administrator on the Red Hat account View results See the results of system scans in the Hybrid Cloud Console Membership in a User Access group with the Malware detection viewer role 2.1. Installing YARA and configuring the Insights client Perform the following procedure to install YARA and the malware detection controller on the RHEL system, then run test and full malware detection scans and report data to the Insights for Red Hat Enterprise Linux application. Prerequisites The system operating system version must be RHEL8 or RHEL9. The administrator must have sudo access on the system. The system must have the Insights client package installed, and be registered to Insights for Red Hat Enterprise Linux. Procedure Install YARA. Yara RPMs for RHEL8 and RHEL9 are available on the Red Hat Customer Portal: Note Insights for Red Hat Enterprise Linux malware detection is not supported on RHEL7. If not yet completed, register the system with Insights for Red Hat Enterprise Linux. Important The Insights client package must be installed on the system and the system registered with Insights for Red Hat Enterprise Linux before the malware detection service can be used. Install the Insights client RPM. Test the connection to Insights for Red Hat Enterprise Linux. Register the system with Insights for Red Hat Enterprise Linux. Run the Insights client malware detection collector. The collector takes the following actions for this initial run: Creates a malware detection configuration file in /etc/insights-client/malware-detection-config.yml Performs a test scan and uploads the results Note This is a very minimal scan of your system with a simple test rule. The test scan is mainly to help verify that the installation, operation, and uploads are working correctly for the malware detection service. There will be a couple of matches found but this is intentional and nothing to worry about. Results from the initial test scan will not appear in the malware detection service UI. Perform a full filesystem scan. Edit /etc/insights-client/malware-detection-config.yml and set the test_scan option to false. test_scan: false Consider setting the following options to minimize scan time: filesystem_scan_only - to only scan certain directories on the system filesystem_scan_exclude - to exclude certain directories from being scanned filesystem_scan_since - to scan only recently modified files Re-run the client collector: Optionally, scan processes. 
This will scan the filesystem first, followed by a scan of all processes. After the filesystem and process scans are complete, view the results at Security > Malware . Important By default, scanning processes is disabled. There is an issue with YARA and scanning processes on Linux systems that may cause poor system performance. This problem will be fixed in an upcoming release of YARA, but until then it is recommended to NOT scan processes . To enable process scanning, set scan_processes: true in /etc/insights-client/malware-detection-config.yml . scan_processes: true Note Consider setting these process-related options while you are there: processes_scan_only - to only scan certain processes on the system processes_scan_exclude - to exclude certain processes from being scanned processes_scan_since - to scan only recently started processes Save the changes and run the collector again. 2.2. User Access settings in the Red Hat Hybrid Cloud Console User Access is the Red Hat implementation of role-based access control (RBAC). Your Organization Administrator uses User Access to configure what users can see and do on the Red Hat Hybrid Cloud Console (the console): Control user access by organizing roles instead of assigning permissions individually to users. Create groups that include roles and their corresponding permissions. Assign users to these groups, allowing them to inherit the permissions associated with their group's roles. 2.2.1. Predefined User Access groups and roles To make groups and roles easier to manage, Red Hat provides two predefined groups and a set of predefined roles. 2.2.1.1. Predefined groups The Default access group contains all users in your organization. Many predefined roles are assigned to this group. It is automatically updated by Red Hat. Note If the Organization Administrator makes changes to the Default access group its name changes to Custom default access group and it is no longer updated by Red Hat. The Default admin access group contains only users who have Organization Administrator permissions. This group is automatically maintained and users and roles in this group cannot be changed. On the Hybrid Cloud Console navigate to Red Hat Hybrid Cloud Console > the Settings icon (β) > Identity & Access Management > User Access > Groups to see the current groups in your account. This view is limited to the Organization Administrator. 2.2.1.2. Predefined roles assigned to groups The Default access group contains many of the predefined roles. Because all users in your organization are members of the Default access group, they inherit all permissions assigned to that group. The Default admin access group includes many (but not all) predefined roles that provide update and delete permissions. The roles in this group usually include administrator in their name. On the Hybrid Cloud Console navigate to Red Hat Hybrid Cloud Console > the Settings icon (β) > Identity & Access Management > User Access > Roles to see the current roles in your account. You can see how many groups each role is assigned to. This view is limited to the Organization Administrator. See User Access Configuration Guide for Role-based Access Control (RBAC) for additional information. 2.2.2. Access permissions The Prerequisites for each procedure list which predefined role provides the permissions you must have. As a user, you can navigate to Red Hat Hybrid Cloud Console > the Settings icon (β) > My User Access to view the roles and application permissions currently inherited by you. 
If you try to access Insights for Red Hat Enterprise Linux features and see a message that you do not have permission to perform this action, you must obtain additional permissions. The Organization Administrator or the User Access administrator for your organization configures those permissions. Use the Red Hat Hybrid Cloud Console Virtual Assistant to ask "Contact my Organization Administrator". The assistant sends an email to the Organization Administrator on your behalf. 2.2.3. User Access roles for the Malware detection service The following predefined roles on the Red Hat Hybrid Cloud Console enable access to malware detection features in Insights for Red Hat Enterprise Linux. Important There is no "default-group" role for malware detection service users. For users to be able to view data or control settings in the malware detection service, they must be members of the User Access group with one of the following roles: Table 2.2. Permissions provided by the User Access roles User Access Role Permissions Malware detection viewer Read All Malware detection editor Read All Set user acknowledgment Malware detection administrator Read All Set user acknowledgment Delete hits Disable signatures permissions 2.3. Viewing malware detection scan results in the Red Hat Hybrid Cloud Console View results of system scans on the Hybrid Cloud Console. Prerequisites YARA and the Insights client are installed and configured on the RHEL system. You must be logged into the Hybrid Cloud Console. You are a member of a Hybrid Cloud Console User Access group with the Malware detection administrator or Malware detection viewer role . Procedures Navigate to Security > Malware > Systems . View the dashboard to get a quick synopsis of all of your RHEL systems with malware detection enabled and reporting results. To see results for a specific system, use the Filter by name search box to search for the system by name. | [
"sudo dnf install yara",
"sudo yum install insights-client",
"sudo insights-client --test-connection",
"sudo insights-client --register",
"sudo insights-client --collector malware-detection",
"test_scan: false",
"sudo insights-client --collector malware-detection",
"scan_processes: true",
"sudo insights-client --collector malware-detection"
] | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_and_reporting_malware_signatures_on_rhel_systems/malware-svc-getting-started |
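The procedure above names the scan-scoping options (filesystem_scan_only, filesystem_scan_exclude, filesystem_scan_since) but does not show them side by side. The sketch below illustrates how they might be set in /etc/insights-client/malware-detection-config.yml; the option names come from the procedure, while the paths and the 7-day window are hypothetical values to adapt before use.

# Edit the generated configuration and set the scoping keys discussed above,
# then re-run the collector. The values shown in the comments are examples only.
sudo vi /etc/insights-client/malware-detection-config.yml
#   test_scan: false
#   filesystem_scan_only:
#     - /usr/bin
#     - /etc
#   filesystem_scan_exclude:
#     - /var/cache
#   filesystem_scan_since: 7
sudo insights-client --collector malware-detection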
Chapter 4. ComponentStatus [v1] | Chapter 4. ComponentStatus [v1] Description ComponentStatus (and ComponentStatusList) holds the cluster validation info. Deprecated: This API is deprecated in v1.19+ Type object 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources conditions array List of component conditions observed conditions[] object Information about the condition of a component. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 4.1.1. .conditions Description List of component conditions observed Type array 4.1.2. .conditions[] Description Information about the condition of a component. Type object Required type status Property Type Description error string Condition error code for a component. For example, a health check error code. message string Message about the condition for a component. For example, information about a health check. status string Status of the condition for a component. Valid values for "Healthy": "True", "False", or "Unknown". type string Type of condition for a component. Valid value: "Healthy" 4.2. API endpoints The following API endpoints are available: /api/v1/componentstatuses GET : list objects of kind ComponentStatus /api/v1/componentstatuses/{name} GET : read the specified ComponentStatus 4.2.1. /api/v1/componentstatuses HTTP method GET Description list objects of kind ComponentStatus Table 4.1. HTTP responses HTTP code Reponse body 200 - OK ComponentStatusList schema 401 - Unauthorized Empty 4.2.2. /api/v1/componentstatuses/{name} Table 4.2. Global path parameters Parameter Type Description name string name of the ComponentStatus HTTP method GET Description read the specified ComponentStatus Table 4.3. HTTP responses HTTP code Reponse body 200 - OK ComponentStatus schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/metadata_apis/componentstatus-v1 |
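The read-only endpoints above are usually exercised through the CLI rather than raw HTTP. A short sketch follows; the component name in the second command is an example, and because the API is deprecated the client may print a warning.

# List all ComponentStatus objects ("cs" is the short name).
oc get componentstatuses

# Read a single ComponentStatus through the documented endpoint; "etcd-0" is an
# example name that may differ on your cluster.
oc get --raw /api/v1/componentstatuses/etcd-0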
Pipelines | Pipelines Red Hat OpenShift Service on AWS 4 A cloud-native continuous integration and continuous delivery solution based on Kubernetes resources Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/pipelines/index |
Chapter 55. TlsSidecar schema reference | Chapter 55. TlsSidecar schema reference The type TlsSidecar has been deprecated. Used in: CruiseControlSpec , EntityOperatorSpec Full list of TlsSidecar schema properties The TLS sidecar type is not used anymore. If set, it will be ignored. 55.1. TlsSidecar schema properties Property Property type Description image string The docker image for the container. resources ResourceRequirements CPU and memory resources to reserve. livenessProbe Probe Pod liveness checking. readinessProbe Probe Pod readiness checking. logLevel string (one of [emerg, debug, crit, err, alert, warning, notice, info]) The log level for the TLS sidecar. Default value is notice . | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-TlsSidecar-reference |
Chapter 1. Cryostat migration overview | Chapter 1. Cryostat migration overview As a cluster administrator in Red Hat OpenShift, you can upgrade from Cryostat 2.4 to Cryostat 3.0. This guide covers new updates available in the Cryostat 3.0 release, deprecated and unsupported features, and any required application and Operator configuration updates to maintain consistent behavior. Before you begin the migration process , complete the following steps: Review the major Cryostat Operator changes in Cryostat 3.0 . Review the application configuration changes . Review migration recommendations . 1.1. Major Cryostat Operator changes Red Hat build of Cryostat 3.0 includes major updates to the installation mode of the Cryostat Operator as well as the types of provided APIs. Installation mode changes Red Hat build of Cryostat version Installation modes 2.4 and earlier All namespaces on the cluster (default) A specific namespace on the cluster 3.0 All namespaces on the cluster (default) In Red Hat OpenShift, the Cryostat Operator can now only be installed on a cluster-wide basis ( All namespaces on the cluster ) rather than into a subset of cluster namespaces. Cluster-wide installation is the preferred mode for the Operator Lifecycle Manager and per-namespace installations are a deprecated feature. Figure 1.1. Installation modes in Cryostat 3.0 Provided API changes Red Hat build of Cryostat version Provided APIs 2.4 and earlier ClusterCryostat Cryostat 3.0 Cryostat ClusterCryostat and Cryostat APIs have now been unified into a singular Cryostat API. You can use the Cryostat API and its optional Target Namespaces field to create one or more Cryostat instances that correspond to namespaces or groups of namespaces that contain your applications. Figure 1.2. Provided APIs available in Cryostat 3.0 1.2. Application configuration changes For applications that are deployed with the Cryostat agent, an updated 0.4.0 version of the agent is available and required for using Red Hat build of Cryostat 3.0. For information about the latest available build version of the Cryostat agent, refer to the Red Hat Maven repository . In addition to upgrading the agent, some of the agent configuration properties have changed: Agent configuration property Cryostat 2.4 value Cryostat 3.0 value Details CRYOSTAT_AGENT_BASEURI http://cryostat.mynamespace.mycluster.svc:8181 http://cryostat.mynamespace.mycluster.svc:4180 Service port has changed from 8181 to 4180. CRYOSTAT_AGENT_AUTHORIZATION Bearer Base64 token Bearer raw (plain-text) token Bearer token no longer needs to be Base64-encoded. For more information about agent configuration changes and other new features and enhancements, see the Release notes for the Red Hat build of Cryostat 3.0 . Note For applications that are using remote JMX connections, Red Hat build of Cryostat does not include any configuration changes. 1.3. Migration recommendations Before migrating, consider backing up any Cryostat data to ensure that customizations can be restored after upgrading. This data includes: Custom profile templates Custom dashboard layouts Active and archived recordings Custom automated rules If there are SSL certificates and stored credentials configurations, ensure that these are available during migration. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/migrating_cryostat_2.4_to_cryostat_3.0/cryostat-migration-overview_cryostat |
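To make the agent property changes in the table above concrete, the sketch below updates a workload's environment with oc set env. The deployment name, namespace, and the way the bearer token is obtained are assumptions; the base URI reuses the service host shown in the table with the new 4180 port.

# Point the 0.4.0 agent at the 4180 service port and pass a plain (not Base64)
# bearer token. "my-app" and "my-serviceaccount" are placeholders.
oc set env deployment/my-app \
  CRYOSTAT_AGENT_BASEURI=http://cryostat.mynamespace.mycluster.svc:4180 \
  CRYOSTAT_AGENT_AUTHORIZATION="Bearer $(oc create token my-serviceaccount)"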
Chapter 3. LocalSubjectAccessReview [authorization.openshift.io/v1] | Chapter 3. LocalSubjectAccessReview [authorization.openshift.io/v1] Description LocalSubjectAccessReview is an object for requesting information about whether a user or group can perform an action in a particular namespace Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required namespace verb resourceAPIGroup resourceAPIVersion resource resourceName path isNonResourceURL user groups scopes 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources content RawExtension Content is the actual content of the request for create and update groups array (string) Groups is optional. Groups is the list of groups to which the User belongs. isNonResourceURL boolean IsNonResourceURL is true if this is a request for a non-resource URL (outside of the resource hierarchy) kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds namespace string Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces path string Path is the path of a non resource URL resource string Resource is one of the existing resource types resourceAPIGroup string Group is the API group of the resource Serialized as resourceAPIGroup to avoid confusion with the 'groups' field when inlined resourceAPIVersion string Version is the API version of the resource Serialized as resourceAPIVersion to avoid confusion with TypeMeta.apiVersion and ObjectMeta.resourceVersion when inlined resourceName string ResourceName is the name of the resource being requested for a "get" or deleted for a "delete" scopes array (string) Scopes to use for the evaluation. Empty means "use the unscoped (full) permissions of the user/groups". Nil for a self-SAR, means "use the scopes on this request". Nil for a regular SAR, means the same as empty. user string User is optional. If both User and Groups are empty, the current authenticated user is used. verb string Verb is one of: get, list, watch, create, update, delete 3.2. API endpoints The following API endpoints are available: /apis/authorization.openshift.io/v1/namespaces/{namespace}/localsubjectaccessreviews POST : create a LocalSubjectAccessReview 3.2.1. /apis/authorization.openshift.io/v1/namespaces/{namespace}/localsubjectaccessreviews Table 3.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create a LocalSubjectAccessReview Table 3.2. Body parameters Parameter Type Description body LocalSubjectAccessReview schema Table 3.3. HTTP responses HTTP code Reponse body 200 - OK LocalSubjectAccessReview schema 201 - Created LocalSubjectAccessReview schema 202 - Accepted LocalSubjectAccessReview schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/authorization_apis/localsubjectaccessreview-authorization-openshift-io-v1 |
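The create endpoint above takes a LocalSubjectAccessReview body whose fields match the spec table. One way to exercise it is sketched below, assuming a client version that supports oc create --raw; the namespace, user, and resource values are examples only.

# Build a request body with the required fields from the spec table (values are
# examples), then POST it to the documented namespaced endpoint.
cat <<'EOF' > lsar.json
{
  "apiVersion": "authorization.openshift.io/v1",
  "kind": "LocalSubjectAccessReview",
  "namespace": "my-project",
  "verb": "get",
  "resourceAPIGroup": "",
  "resourceAPIVersion": "v1",
  "resource": "pods",
  "resourceName": "",
  "path": "",
  "isNonResourceURL": false,
  "user": "alice",
  "groups": [],
  "scopes": []
}
EOF
oc create --raw /apis/authorization.openshift.io/v1/namespaces/my-project/localsubjectaccessreviews -f lsar.json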
Chapter 7. Console [config.openshift.io/v1] | Chapter 7. Console [config.openshift.io/v1] Description Console holds cluster-wide configuration for the web console, including the logout URL, and reports the public URL of the console. The canonical name is cluster . Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 7.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 7.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description authentication object ConsoleAuthentication defines a list of optional configuration for console authentication. 7.1.2. .spec.authentication Description ConsoleAuthentication defines a list of optional configuration for console authentication. Type object Property Type Description logoutRedirect string An optional, absolute URL to redirect web browsers to after logging out of the console. If not specified, it will redirect to the default login page. This is required when using an identity provider that supports single sign-on (SSO) such as: - OpenID (Keycloak, Azure) - RequestHeader (GSSAPI, SSPI, SAML) - OAuth (GitHub, GitLab, Google) Logging out of the console will destroy the user's token. The logoutRedirect provides the user the option to perform single logout (SLO) through the identity provider to destroy their single sign-on session. 7.1.3. .status Description status holds observed values from the cluster. They may not be overridden. Type object Property Type Description consoleURL string The URL for the console. This will be derived from the host for the route that is created for the console. 7.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/consoles DELETE : delete collection of Console GET : list objects of kind Console POST : create a Console /apis/config.openshift.io/v1/consoles/{name} DELETE : delete a Console GET : read the specified Console PATCH : partially update the specified Console PUT : replace the specified Console /apis/config.openshift.io/v1/consoles/{name}/status GET : read status of the specified Console PATCH : partially update status of the specified Console PUT : replace status of the specified Console 7.2.1. /apis/config.openshift.io/v1/consoles Table 7.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Console Table 7.2. 
Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 7.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Console Table 7.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 7.5. HTTP responses HTTP code Reponse body 200 - OK ConsoleList schema 401 - Unauthorized Empty HTTP method POST Description create a Console Table 7.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.7. Body parameters Parameter Type Description body Console schema Table 7.8. 
HTTP responses HTTP code Reponse body 200 - OK Console schema 201 - Created Console schema 202 - Accepted Console schema 401 - Unauthorized Empty 7.2.2. /apis/config.openshift.io/v1/consoles/{name} Table 7.9. Global path parameters Parameter Type Description name string name of the Console Table 7.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Console Table 7.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 7.12. Body parameters Parameter Type Description body DeleteOptions schema Table 7.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Console Table 7.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 7.15. HTTP responses HTTP code Reponse body 200 - OK Console schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Console Table 7.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.17. Body parameters Parameter Type Description body Patch schema Table 7.18. HTTP responses HTTP code Reponse body 200 - OK Console schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Console Table 7.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.20. Body parameters Parameter Type Description body Console schema Table 7.21. HTTP responses HTTP code Reponse body 200 - OK Console schema 201 - Created Console schema 401 - Unauthorized Empty 7.2.3. /apis/config.openshift.io/v1/consoles/{name}/status Table 7.22. Global path parameters Parameter Type Description name string name of the Console Table 7.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Console Table 7.24. 
Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 7.25. HTTP responses HTTP code Reponse body 200 - OK Console schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Console Table 7.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.27. Body parameters Parameter Type Description body Patch schema Table 7.28. HTTP responses HTTP code Reponse body 200 - OK Console schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Console Table 7.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. 
This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.30. Body parameters Parameter Type Description body Console schema Table 7.31. HTTP responses HTTP code Response body 200 - OK Console schema 201 - Created Console schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/config_apis/console-config-openshift-io-v1
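As a minimal sketch of how the dryRun and fieldValidation query parameters above combine on a single request, the following curl call sends a merge patch to the Console status subresource without persisting anything. The API server address, the bearer token variable, and the object name cluster are assumptions for illustration, not values taken from this reference.

$ curl -k -X PATCH \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/merge-patch+json" \
    -d '{"status": {}}' \
    "https://<api_server>:6443/apis/config.openshift.io/v1/consoles/cluster/status?dryRun=All&fieldValidation=Strict"

With dryRun=All, all dry run stages are processed but nothing is stored; with fieldValidation=Strict, any unknown or duplicate field in the body produces a BadRequest error instead of being silently dropped.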
Pipelines | Pipelines OpenShift Container Platform 4.14 A cloud-native continuous integration and continuous delivery solution based on Kubernetes resources Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/pipelines/index |
Chapter 30. ImageService | Chapter 30. ImageService 30.1. ExportImages GET /v1/export/images 30.1.1. Description 30.1.2. Parameters 30.1.2.1. Query Parameters Name Description Required Default Pattern timeout - null query - null 30.1.3. Return Type Stream_result_of_v1ExportImageResponse 30.1.4. Content Type application/json 30.1.5. Responses Table 30.1. HTTP Response Codes Code Message Datatype 200 A successful response.(streaming responses) Stream_result_of_v1ExportImageResponse 0 An unexpected error response. RuntimeError 30.1.6. Samples 30.1.7. Common object reference 30.1.7.1. CVSSV2AccessComplexity Enum Values ACCESS_HIGH ACCESS_MEDIUM ACCESS_LOW 30.1.7.2. CVSSV2Authentication Enum Values AUTH_MULTIPLE AUTH_SINGLE AUTH_NONE 30.1.7.3. CVSSV3Complexity Enum Values COMPLEXITY_LOW COMPLEXITY_HIGH 30.1.7.4. CVSSV3Privileges Enum Values PRIVILEGE_NONE PRIVILEGE_LOW PRIVILEGE_HIGH 30.1.7.5. CVSSV3UserInteraction Enum Values UI_NONE UI_REQUIRED 30.1.7.6. EmbeddedVulnerabilityVulnerabilityType Enum Values UNKNOWN_VULNERABILITY IMAGE_VULNERABILITY K8S_VULNERABILITY ISTIO_VULNERABILITY NODE_VULNERABILITY OPENSHIFT_VULNERABILITY 30.1.7.7. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 30.1.7.7.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. 
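As a hedged example of the ExportImages call described above: the endpoint streams one JSON object per image, each carrying either a result or an error field as defined by Stream_result_of_v1ExportImageResponse. The host and token environment variable names below, as well as the timeout value and search string, are assumptions for illustration only.

$ curl -sk -H "Authorization: Bearer $ROX_API_TOKEN" \
    "https://$ROX_ENDPOINT/v1/export/images?timeout=60&query=Cluster:production"

Each streamed object can be processed independently; an object with a populated error field instead of a result indicates a RuntimeStreamError for that item.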
As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 30.1.7.8. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 30.1.7.9. RuntimeStreamError Field Name Required Nullable Type Description Format grpcCode Integer int32 httpCode Integer int32 message String httpStatus String details List of ProtobufAny 30.1.7.10. StorageCVSSV2 Field Name Required Nullable Type Description Format vector String attackVector StorageCVSSV2AttackVector ATTACK_LOCAL, ATTACK_ADJACENT, ATTACK_NETWORK, accessComplexity CVSSV2AccessComplexity ACCESS_HIGH, ACCESS_MEDIUM, ACCESS_LOW, authentication CVSSV2Authentication AUTH_MULTIPLE, AUTH_SINGLE, AUTH_NONE, confidentiality StorageCVSSV2Impact IMPACT_NONE, IMPACT_PARTIAL, IMPACT_COMPLETE, integrity StorageCVSSV2Impact IMPACT_NONE, IMPACT_PARTIAL, IMPACT_COMPLETE, availability StorageCVSSV2Impact IMPACT_NONE, IMPACT_PARTIAL, IMPACT_COMPLETE, exploitabilityScore Float float impactScore Float float score Float float severity StorageCVSSV2Severity UNKNOWN, LOW, MEDIUM, HIGH, 30.1.7.11. StorageCVSSV2AttackVector Enum Values ATTACK_LOCAL ATTACK_ADJACENT ATTACK_NETWORK 30.1.7.12. StorageCVSSV2Impact Enum Values IMPACT_NONE IMPACT_PARTIAL IMPACT_COMPLETE 30.1.7.13. StorageCVSSV2Severity Enum Values UNKNOWN LOW MEDIUM HIGH 30.1.7.14. StorageCVSSV3 Field Name Required Nullable Type Description Format vector String exploitabilityScore Float float impactScore Float float attackVector StorageCVSSV3AttackVector ATTACK_LOCAL, ATTACK_ADJACENT, ATTACK_NETWORK, ATTACK_PHYSICAL, attackComplexity CVSSV3Complexity COMPLEXITY_LOW, COMPLEXITY_HIGH, privilegesRequired CVSSV3Privileges PRIVILEGE_NONE, PRIVILEGE_LOW, PRIVILEGE_HIGH, userInteraction CVSSV3UserInteraction UI_NONE, UI_REQUIRED, scope StorageCVSSV3Scope UNCHANGED, CHANGED, confidentiality StorageCVSSV3Impact IMPACT_NONE, IMPACT_LOW, IMPACT_HIGH, integrity StorageCVSSV3Impact IMPACT_NONE, IMPACT_LOW, IMPACT_HIGH, availability StorageCVSSV3Impact IMPACT_NONE, IMPACT_LOW, IMPACT_HIGH, score Float float severity StorageCVSSV3Severity UNKNOWN, NONE, LOW, MEDIUM, HIGH, CRITICAL, 30.1.7.15. StorageCVSSV3AttackVector Enum Values ATTACK_LOCAL ATTACK_ADJACENT ATTACK_NETWORK ATTACK_PHYSICAL 30.1.7.16. StorageCVSSV3Impact Enum Values IMPACT_NONE IMPACT_LOW IMPACT_HIGH 30.1.7.17. StorageCVSSV3Scope Enum Values UNCHANGED CHANGED 30.1.7.18. StorageCVSSV3Severity Enum Values UNKNOWN NONE LOW MEDIUM HIGH CRITICAL 30.1.7.19. StorageCosignSignature Field Name Required Nullable Type Description Format rawSignature byte[] byte signaturePayload byte[] byte certPem byte[] byte certChainPem byte[] byte 30.1.7.20. StorageDataSource Field Name Required Nullable Type Description Format id String name String mirror String 30.1.7.21. StorageEmbeddedImageScanComponent Field Name Required Nullable Type Description Format name String version String license StorageLicense vulns List of StorageEmbeddedVulnerability layerIndex Integer int32 priority String int64 source StorageSourceType OS, PYTHON, JAVA, RUBY, NODEJS, GO, DOTNETCORERUNTIME, INFRASTRUCTURE, location String topCvss Float float riskScore Float float fixedBy String Component version that fixes all the fixable vulnerabilities in this component. 
executables List of StorageEmbeddedImageScanComponentExecutable 30.1.7.22. StorageEmbeddedImageScanComponentExecutable Field Name Required Nullable Type Description Format path String dependencies List of string 30.1.7.23. StorageEmbeddedVulnerability Field Name Required Nullable Type Description Format cve String cvss Float float summary String link String fixedBy String scoreVersion StorageEmbeddedVulnerabilityScoreVersion V2, V3, cvssV2 StorageCVSSV2 cvssV3 StorageCVSSV3 publishedOn Date date-time lastModified Date date-time vulnerabilityType EmbeddedVulnerabilityVulnerabilityType UNKNOWN_VULNERABILITY, IMAGE_VULNERABILITY, K8S_VULNERABILITY, ISTIO_VULNERABILITY, NODE_VULNERABILITY, OPENSHIFT_VULNERABILITY, vulnerabilityTypes List of EmbeddedVulnerabilityVulnerabilityType suppressed Boolean suppressActivation Date date-time suppressExpiry Date date-time firstSystemOccurrence Date Time when the CVE was first seen, for this specific distro, in the system. date-time firstImageOccurrence Date Time when the CVE was first seen in this image. date-time severity StorageVulnerabilitySeverity UNKNOWN_VULNERABILITY_SEVERITY, LOW_VULNERABILITY_SEVERITY, MODERATE_VULNERABILITY_SEVERITY, IMPORTANT_VULNERABILITY_SEVERITY, CRITICAL_VULNERABILITY_SEVERITY, state StorageVulnerabilityState OBSERVED, DEFERRED, FALSE_POSITIVE, 30.1.7.24. StorageEmbeddedVulnerabilityScoreVersion Enum Values V2 V3 30.1.7.25. StorageImage Field Name Required Nullable Type Description Format id String name StorageImageName names List of StorageImageName This should deprecate the ImageName field long-term, allowing images with the same digest to be associated with different locations. TODO(dhaus): For now, this message will be without search tags due to duplicated search tags otherwise. metadata StorageImageMetadata scan StorageImageScan signatureVerificationData StorageImageSignatureVerificationData signature StorageImageSignature components Integer int32 cves Integer int32 fixableCves Integer int32 lastUpdated Date date-time notPullable Boolean isClusterLocal Boolean priority String int64 riskScore Float float topCvss Float float notes List of StorageImageNote 30.1.7.26. StorageImageLayer Field Name Required Nullable Type Description Format instruction String value String created Date date-time author String empty Boolean 30.1.7.27. StorageImageMetadata Field Name Required Nullable Type Description Format v1 StorageV1Metadata v2 StorageV2Metadata layerShas List of string dataSource StorageDataSource version String uint64 30.1.7.28. StorageImageName Field Name Required Nullable Type Description Format registry String remote String tag String fullName String 30.1.7.29. StorageImageNote Enum Values MISSING_METADATA MISSING_SCAN_DATA MISSING_SIGNATURE MISSING_SIGNATURE_VERIFICATION_DATA 30.1.7.30. StorageImageScan Field Name Required Nullable Type Description Format scannerVersion String scanTime Date date-time components List of StorageEmbeddedImageScanComponent operatingSystem String dataSource StorageDataSource notes List of StorageImageScanNote hash String uint64 30.1.7.31. StorageImageScanNote Enum Values UNSET OS_UNAVAILABLE PARTIAL_SCAN_DATA OS_CVES_UNAVAILABLE OS_CVES_STALE LANGUAGE_CVES_UNAVAILABLE CERTIFIED_RHEL_SCAN_UNAVAILABLE 30.1.7.32. StorageImageSignature Field Name Required Nullable Type Description Format signatures List of StorageSignature fetched Date date-time 30.1.7.33. 
StorageImageSignatureVerificationData Field Name Required Nullable Type Description Format results List of StorageImageSignatureVerificationResult 30.1.7.34. StorageImageSignatureVerificationResult Field Name Required Nullable Type Description Format verificationTime Date date-time verifierId String verifier_id correlates to the ID of the signature integration used to verify the signature. status StorageImageSignatureVerificationResultStatus UNSET, VERIFIED, FAILED_VERIFICATION, INVALID_SIGNATURE_ALGO, CORRUPTED_SIGNATURE, GENERIC_ERROR, description String description is set in the case of an error with the specific error's message. Otherwise, this will not be set. verifiedImageReferences List of string The full image names that are verified by this specific signature integration ID. 30.1.7.35. StorageImageSignatureVerificationResultStatus Status represents the status of the result. VERIFIED: VERIFIED is set when the signature's verification was successful. FAILED_VERIFICATION: FAILED_VERIFICATION is set when the signature's verification failed. INVALID_SIGNATURE_ALGO: INVALID_SIGNATURE_ALGO is set when the signature's algorithm is invalid and unsupported. CORRUPTED_SIGNATURE: CORRUPTED_SIGNATURE is set when the raw signature is corrupted, i.e. wrong base64 encoding. GENERIC_ERROR: GENERIC_ERROR is set when an error occurred during verification that cannot be associated with a specific status. Enum Values UNSET VERIFIED FAILED_VERIFICATION INVALID_SIGNATURE_ALGO CORRUPTED_SIGNATURE GENERIC_ERROR 30.1.7.36. StorageLicense Field Name Required Nullable Type Description Format name String type String url String 30.1.7.37. StorageSignature Field Name Required Nullable Type Description Format cosign StorageCosignSignature 30.1.7.38. StorageSourceType Enum Values OS PYTHON JAVA RUBY NODEJS GO DOTNETCORERUNTIME INFRASTRUCTURE 30.1.7.39. StorageV1Metadata Field Name Required Nullable Type Description Format digest String created Date date-time author String layers List of StorageImageLayer user String command List of string entrypoint List of string volumes List of string labels Map of string 30.1.7.40. StorageV2Metadata Field Name Required Nullable Type Description Format digest String 30.1.7.41. StorageVulnerabilitySeverity Enum Values UNKNOWN_VULNERABILITY_SEVERITY LOW_VULNERABILITY_SEVERITY MODERATE_VULNERABILITY_SEVERITY IMPORTANT_VULNERABILITY_SEVERITY CRITICAL_VULNERABILITY_SEVERITY 30.1.7.42. StorageVulnerabilityState VulnerabilityState indicates if vulnerability is being observed or deferred(/suppressed). By default, it vulnerabilities are observed. Enum Values OBSERVED DEFERRED FALSE_POSITIVE 30.1.7.43. StreamResultOfV1ExportImageResponse Field Name Required Nullable Type Description Format result V1ExportImageResponse error RuntimeStreamError 30.1.7.44. V1ExportImageResponse Field Name Required Nullable Type Description Format image StorageImage 30.2. InvalidateScanAndRegistryCaches GET /v1/images/cache/invalidate InvalidateScanAndRegistryCaches removes the image metadata cache. 30.2.1. Description 30.2.2. Parameters 30.2.3. Return Type Object 30.2.4. Content Type application/json 30.2.5. Responses Table 30.2. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 30.2.6. Samples 30.2.7. Common object reference 30.2.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. 
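A sketch of the InvalidateScanAndRegistryCaches call just described, reusing the same assumed host and token variables; a successful call returns an empty object with HTTP 200.

$ curl -sk -H "Authorization: Bearer $ROX_API_TOKEN" \
    "https://$ROX_ENDPOINT/v1/images/cache/invalidate"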
Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 30.2.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 30.2.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 30.3. CountImages GET /v1/imagescount CountImages returns a count of images that match the input query. 30.3.1. Description 30.3.2. Parameters 30.3.2.1. Query Parameters Name Description Required Default Pattern query - null pagination.limit - null pagination.offset - null pagination.sortOption.field - null pagination.sortOption.reversed - null pagination.sortOption.aggregateBy.aggrFunc - UNSET pagination.sortOption.aggregateBy.distinct - null 30.3.3. Return Type V1CountImagesResponse 30.3.4. Content Type application/json 30.3.5. Responses Table 30.3. HTTP Response Codes Code Message Datatype 200 A successful response. V1CountImagesResponse 0 An unexpected error response. RuntimeError 30.3.6. Samples 30.3.7. Common object reference 30.3.7.1. 
ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 30.3.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 30.3.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 30.3.7.3. V1CountImagesResponse Field Name Required Nullable Type Description Format count Integer int32 30.4. DeleteImages DELETE /v1/images DeleteImage removes the images based on a query 30.4.1. Description 30.4.2. Parameters 30.4.2.1. Query Parameters Name Description Required Default Pattern query.query - null query.pagination.limit - null query.pagination.offset - null query.pagination.sortOption.field - null query.pagination.sortOption.reversed - null query.pagination.sortOption.aggregateBy.aggrFunc - UNSET query.pagination.sortOption.aggregateBy.distinct - null confirm - null 30.4.3. Return Type V1DeleteImagesResponse 30.4.4. Content Type application/json 30.4.5. Responses Table 30.4. 
HTTP Response Codes Code Message Datatype 200 A successful response. V1DeleteImagesResponse 0 An unexpected error response. RuntimeError 30.4.6. Samples 30.4.7. Common object reference 30.4.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 30.4.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 30.4.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 30.4.7.3. V1DeleteImagesResponse Field Name Required Nullable Type Description Format numDeleted Long int64 dryRun Boolean 30.5. ListImages GET /v1/images ListImages returns all the images that match the input query. 30.5.1. Description 30.5.2. Parameters 30.5.2.1. 
Query Parameters Name Description Required Default Pattern query - null pagination.limit - null pagination.offset - null pagination.sortOption.field - null pagination.sortOption.reversed - null pagination.sortOption.aggregateBy.aggrFunc - UNSET pagination.sortOption.aggregateBy.distinct - null 30.5.3. Return Type V1ListImagesResponse 30.5.4. Content Type application/json 30.5.5. Responses Table 30.5. HTTP Response Codes Code Message Datatype 200 A successful response. V1ListImagesResponse 0 An unexpected error response. RuntimeError 30.5.6. Samples 30.5.7. Common object reference 30.5.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 30.5.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 30.5.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 30.5.7.3. 
StorageListImage Field Name Required Nullable Type Description Format id String name String components Integer int32 cves Integer int32 fixableCves Integer int32 created Date date-time lastUpdated Date date-time priority String int64 30.5.7.4. V1ListImagesResponse Field Name Required Nullable Type Description Format images List of StorageListImage 30.6. GetImage GET /v1/images/{id} GetImage returns the image given its ID. 30.6.1. Description 30.6.2. Parameters 30.6.2.1. Path Parameters Name Description Required Default Pattern id X null 30.6.2.2. Query Parameters Name Description Required Default Pattern includeSnoozed - null stripDescription - null 30.6.3. Return Type StorageImage 30.6.4. Content Type application/json 30.6.5. Responses Table 30.6. HTTP Response Codes Code Message Datatype 200 A successful response. StorageImage 0 An unexpected error response. RuntimeError 30.6.6. Samples 30.6.7. Common object reference 30.6.7.1. CVSSV2AccessComplexity Enum Values ACCESS_HIGH ACCESS_MEDIUM ACCESS_LOW 30.6.7.2. CVSSV2Authentication Enum Values AUTH_MULTIPLE AUTH_SINGLE AUTH_NONE 30.6.7.3. CVSSV3Complexity Enum Values COMPLEXITY_LOW COMPLEXITY_HIGH 30.6.7.4. CVSSV3Privileges Enum Values PRIVILEGE_NONE PRIVILEGE_LOW PRIVILEGE_HIGH 30.6.7.5. CVSSV3UserInteraction Enum Values UI_NONE UI_REQUIRED 30.6.7.6. EmbeddedVulnerabilityVulnerabilityType Enum Values UNKNOWN_VULNERABILITY IMAGE_VULNERABILITY K8S_VULNERABILITY ISTIO_VULNERABILITY NODE_VULNERABILITY OPENSHIFT_VULNERABILITY 30.6.7.7. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 30.6.7.7.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. 
* Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 30.6.7.8. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 30.6.7.9. StorageCVSSV2 Field Name Required Nullable Type Description Format vector String attackVector StorageCVSSV2AttackVector ATTACK_LOCAL, ATTACK_ADJACENT, ATTACK_NETWORK, accessComplexity CVSSV2AccessComplexity ACCESS_HIGH, ACCESS_MEDIUM, ACCESS_LOW, authentication CVSSV2Authentication AUTH_MULTIPLE, AUTH_SINGLE, AUTH_NONE, confidentiality StorageCVSSV2Impact IMPACT_NONE, IMPACT_PARTIAL, IMPACT_COMPLETE, integrity StorageCVSSV2Impact IMPACT_NONE, IMPACT_PARTIAL, IMPACT_COMPLETE, availability StorageCVSSV2Impact IMPACT_NONE, IMPACT_PARTIAL, IMPACT_COMPLETE, exploitabilityScore Float float impactScore Float float score Float float severity StorageCVSSV2Severity UNKNOWN, LOW, MEDIUM, HIGH, 30.6.7.10. StorageCVSSV2AttackVector Enum Values ATTACK_LOCAL ATTACK_ADJACENT ATTACK_NETWORK 30.6.7.11. StorageCVSSV2Impact Enum Values IMPACT_NONE IMPACT_PARTIAL IMPACT_COMPLETE 30.6.7.12. StorageCVSSV2Severity Enum Values UNKNOWN LOW MEDIUM HIGH 30.6.7.13. StorageCVSSV3 Field Name Required Nullable Type Description Format vector String exploitabilityScore Float float impactScore Float float attackVector StorageCVSSV3AttackVector ATTACK_LOCAL, ATTACK_ADJACENT, ATTACK_NETWORK, ATTACK_PHYSICAL, attackComplexity CVSSV3Complexity COMPLEXITY_LOW, COMPLEXITY_HIGH, privilegesRequired CVSSV3Privileges PRIVILEGE_NONE, PRIVILEGE_LOW, PRIVILEGE_HIGH, userInteraction CVSSV3UserInteraction UI_NONE, UI_REQUIRED, scope StorageCVSSV3Scope UNCHANGED, CHANGED, confidentiality StorageCVSSV3Impact IMPACT_NONE, IMPACT_LOW, IMPACT_HIGH, integrity StorageCVSSV3Impact IMPACT_NONE, IMPACT_LOW, IMPACT_HIGH, availability StorageCVSSV3Impact IMPACT_NONE, IMPACT_LOW, IMPACT_HIGH, score Float float severity StorageCVSSV3Severity UNKNOWN, NONE, LOW, MEDIUM, HIGH, CRITICAL, 30.6.7.14. StorageCVSSV3AttackVector Enum Values ATTACK_LOCAL ATTACK_ADJACENT ATTACK_NETWORK ATTACK_PHYSICAL 30.6.7.15. StorageCVSSV3Impact Enum Values IMPACT_NONE IMPACT_LOW IMPACT_HIGH 30.6.7.16. StorageCVSSV3Scope Enum Values UNCHANGED CHANGED 30.6.7.17. StorageCVSSV3Severity Enum Values UNKNOWN NONE LOW MEDIUM HIGH CRITICAL 30.6.7.18. StorageCosignSignature Field Name Required Nullable Type Description Format rawSignature byte[] byte signaturePayload byte[] byte certPem byte[] byte certChainPem byte[] byte 30.6.7.19. StorageDataSource Field Name Required Nullable Type Description Format id String name String mirror String 30.6.7.20. 
StorageEmbeddedImageScanComponent Field Name Required Nullable Type Description Format name String version String license StorageLicense vulns List of StorageEmbeddedVulnerability layerIndex Integer int32 priority String int64 source StorageSourceType OS, PYTHON, JAVA, RUBY, NODEJS, GO, DOTNETCORERUNTIME, INFRASTRUCTURE, location String topCvss Float float riskScore Float float fixedBy String Component version that fixes all the fixable vulnerabilities in this component. executables List of StorageEmbeddedImageScanComponentExecutable 30.6.7.21. StorageEmbeddedImageScanComponentExecutable Field Name Required Nullable Type Description Format path String dependencies List of string 30.6.7.22. StorageEmbeddedVulnerability Field Name Required Nullable Type Description Format cve String cvss Float float summary String link String fixedBy String scoreVersion StorageEmbeddedVulnerabilityScoreVersion V2, V3, cvssV2 StorageCVSSV2 cvssV3 StorageCVSSV3 publishedOn Date date-time lastModified Date date-time vulnerabilityType EmbeddedVulnerabilityVulnerabilityType UNKNOWN_VULNERABILITY, IMAGE_VULNERABILITY, K8S_VULNERABILITY, ISTIO_VULNERABILITY, NODE_VULNERABILITY, OPENSHIFT_VULNERABILITY, vulnerabilityTypes List of EmbeddedVulnerabilityVulnerabilityType suppressed Boolean suppressActivation Date date-time suppressExpiry Date date-time firstSystemOccurrence Date Time when the CVE was first seen, for this specific distro, in the system. date-time firstImageOccurrence Date Time when the CVE was first seen in this image. date-time severity StorageVulnerabilitySeverity UNKNOWN_VULNERABILITY_SEVERITY, LOW_VULNERABILITY_SEVERITY, MODERATE_VULNERABILITY_SEVERITY, IMPORTANT_VULNERABILITY_SEVERITY, CRITICAL_VULNERABILITY_SEVERITY, state StorageVulnerabilityState OBSERVED, DEFERRED, FALSE_POSITIVE, 30.6.7.23. StorageEmbeddedVulnerabilityScoreVersion Enum Values V2 V3 30.6.7.24. StorageImage Field Name Required Nullable Type Description Format id String name StorageImageName names List of StorageImageName This should deprecate the ImageName field long-term, allowing images with the same digest to be associated with different locations. TODO(dhaus): For now, this message will be without search tags due to duplicated search tags otherwise. metadata StorageImageMetadata scan StorageImageScan signatureVerificationData StorageImageSignatureVerificationData signature StorageImageSignature components Integer int32 cves Integer int32 fixableCves Integer int32 lastUpdated Date date-time notPullable Boolean isClusterLocal Boolean priority String int64 riskScore Float float topCvss Float float notes List of StorageImageNote 30.6.7.25. StorageImageLayer Field Name Required Nullable Type Description Format instruction String value String created Date date-time author String empty Boolean 30.6.7.26. StorageImageMetadata Field Name Required Nullable Type Description Format v1 StorageV1Metadata v2 StorageV2Metadata layerShas List of string dataSource StorageDataSource version String uint64 30.6.7.27. StorageImageName Field Name Required Nullable Type Description Format registry String remote String tag String fullName String 30.6.7.28. StorageImageNote Enum Values MISSING_METADATA MISSING_SCAN_DATA MISSING_SIGNATURE MISSING_SIGNATURE_VERIFICATION_DATA 30.6.7.29. 
StorageImageScan Field Name Required Nullable Type Description Format scannerVersion String scanTime Date date-time components List of StorageEmbeddedImageScanComponent operatingSystem String dataSource StorageDataSource notes List of StorageImageScanNote hash String uint64 30.6.7.30. StorageImageScanNote Enum Values UNSET OS_UNAVAILABLE PARTIAL_SCAN_DATA OS_CVES_UNAVAILABLE OS_CVES_STALE LANGUAGE_CVES_UNAVAILABLE CERTIFIED_RHEL_SCAN_UNAVAILABLE 30.6.7.31. StorageImageSignature Field Name Required Nullable Type Description Format signatures List of StorageSignature fetched Date date-time 30.6.7.32. StorageImageSignatureVerificationData Field Name Required Nullable Type Description Format results List of StorageImageSignatureVerificationResult 30.6.7.33. StorageImageSignatureVerificationResult Field Name Required Nullable Type Description Format verificationTime Date date-time verifierId String verifier_id correlates to the ID of the signature integration used to verify the signature. status StorageImageSignatureVerificationResultStatus UNSET, VERIFIED, FAILED_VERIFICATION, INVALID_SIGNATURE_ALGO, CORRUPTED_SIGNATURE, GENERIC_ERROR, description String description is set in the case of an error with the specific error's message. Otherwise, this will not be set. verifiedImageReferences List of string The full image names that are verified by this specific signature integration ID. 30.6.7.34. StorageImageSignatureVerificationResultStatus Status represents the status of the result. VERIFIED: VERIFIED is set when the signature's verification was successful. FAILED_VERIFICATION: FAILED_VERIFICATION is set when the signature's verification failed. INVALID_SIGNATURE_ALGO: INVALID_SIGNATURE_ALGO is set when the signature's algorithm is invalid and unsupported. CORRUPTED_SIGNATURE: CORRUPTED_SIGNATURE is set when the raw signature is corrupted, i.e. wrong base64 encoding. GENERIC_ERROR: GENERIC_ERROR is set when an error occurred during verification that cannot be associated with a specific status. Enum Values UNSET VERIFIED FAILED_VERIFICATION INVALID_SIGNATURE_ALGO CORRUPTED_SIGNATURE GENERIC_ERROR 30.6.7.35. StorageLicense Field Name Required Nullable Type Description Format name String type String url String 30.6.7.36. StorageSignature Field Name Required Nullable Type Description Format cosign StorageCosignSignature 30.6.7.37. StorageSourceType Enum Values OS PYTHON JAVA RUBY NODEJS GO DOTNETCORERUNTIME INFRASTRUCTURE 30.6.7.38. StorageV1Metadata Field Name Required Nullable Type Description Format digest String created Date date-time author String layers List of StorageImageLayer user String command List of string entrypoint List of string volumes List of string labels Map of string 30.6.7.39. StorageV2Metadata Field Name Required Nullable Type Description Format digest String 30.6.7.40. StorageVulnerabilitySeverity Enum Values UNKNOWN_VULNERABILITY_SEVERITY LOW_VULNERABILITY_SEVERITY MODERATE_VULNERABILITY_SEVERITY IMPORTANT_VULNERABILITY_SEVERITY CRITICAL_VULNERABILITY_SEVERITY 30.6.7.41. StorageVulnerabilityState VulnerabilityState indicates if vulnerability is being observed or deferred(/suppressed). By default, it vulnerabilities are observed. Enum Values OBSERVED DEFERRED FALSE_POSITIVE 30.7. ScanImage POST /v1/images/scan ScanImage scans a single image and returns the result 30.7.1. Description 30.7.2. Parameters 30.7.2.1. Body Parameter Name Description Required Default Pattern body V1ScanImageRequest X 30.7.3. Return Type StorageImage 30.7.4. Content Type application/json 30.7.5. 
Responses Table 30.7. HTTP Response Codes Code Message Datatype 200 A successful response. StorageImage 0 An unexpected error response. RuntimeError 30.7.6. Samples 30.7.7. Common object reference 30.7.7.1. CVSSV2AccessComplexity Enum Values ACCESS_HIGH ACCESS_MEDIUM ACCESS_LOW 30.7.7.2. CVSSV2Authentication Enum Values AUTH_MULTIPLE AUTH_SINGLE AUTH_NONE 30.7.7.3. CVSSV3Complexity Enum Values COMPLEXITY_LOW COMPLEXITY_HIGH 30.7.7.4. CVSSV3Privileges Enum Values PRIVILEGE_NONE PRIVILEGE_LOW PRIVILEGE_HIGH 30.7.7.5. CVSSV3UserInteraction Enum Values UI_NONE UI_REQUIRED 30.7.7.6. EmbeddedVulnerabilityVulnerabilityType Enum Values UNKNOWN_VULNERABILITY IMAGE_VULNERABILITY K8S_VULNERABILITY ISTIO_VULNERABILITY NODE_VULNERABILITY OPENSHIFT_VULNERABILITY 30.7.7.7. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 30.7.7.7.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 30.7.7.8. 
RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 30.7.7.9. StorageCVSSV2 Field Name Required Nullable Type Description Format vector String attackVector StorageCVSSV2AttackVector ATTACK_LOCAL, ATTACK_ADJACENT, ATTACK_NETWORK, accessComplexity CVSSV2AccessComplexity ACCESS_HIGH, ACCESS_MEDIUM, ACCESS_LOW, authentication CVSSV2Authentication AUTH_MULTIPLE, AUTH_SINGLE, AUTH_NONE, confidentiality StorageCVSSV2Impact IMPACT_NONE, IMPACT_PARTIAL, IMPACT_COMPLETE, integrity StorageCVSSV2Impact IMPACT_NONE, IMPACT_PARTIAL, IMPACT_COMPLETE, availability StorageCVSSV2Impact IMPACT_NONE, IMPACT_PARTIAL, IMPACT_COMPLETE, exploitabilityScore Float float impactScore Float float score Float float severity StorageCVSSV2Severity UNKNOWN, LOW, MEDIUM, HIGH, 30.7.7.10. StorageCVSSV2AttackVector Enum Values ATTACK_LOCAL ATTACK_ADJACENT ATTACK_NETWORK 30.7.7.11. StorageCVSSV2Impact Enum Values IMPACT_NONE IMPACT_PARTIAL IMPACT_COMPLETE 30.7.7.12. StorageCVSSV2Severity Enum Values UNKNOWN LOW MEDIUM HIGH 30.7.7.13. StorageCVSSV3 Field Name Required Nullable Type Description Format vector String exploitabilityScore Float float impactScore Float float attackVector StorageCVSSV3AttackVector ATTACK_LOCAL, ATTACK_ADJACENT, ATTACK_NETWORK, ATTACK_PHYSICAL, attackComplexity CVSSV3Complexity COMPLEXITY_LOW, COMPLEXITY_HIGH, privilegesRequired CVSSV3Privileges PRIVILEGE_NONE, PRIVILEGE_LOW, PRIVILEGE_HIGH, userInteraction CVSSV3UserInteraction UI_NONE, UI_REQUIRED, scope StorageCVSSV3Scope UNCHANGED, CHANGED, confidentiality StorageCVSSV3Impact IMPACT_NONE, IMPACT_LOW, IMPACT_HIGH, integrity StorageCVSSV3Impact IMPACT_NONE, IMPACT_LOW, IMPACT_HIGH, availability StorageCVSSV3Impact IMPACT_NONE, IMPACT_LOW, IMPACT_HIGH, score Float float severity StorageCVSSV3Severity UNKNOWN, NONE, LOW, MEDIUM, HIGH, CRITICAL, 30.7.7.14. StorageCVSSV3AttackVector Enum Values ATTACK_LOCAL ATTACK_ADJACENT ATTACK_NETWORK ATTACK_PHYSICAL 30.7.7.15. StorageCVSSV3Impact Enum Values IMPACT_NONE IMPACT_LOW IMPACT_HIGH 30.7.7.16. StorageCVSSV3Scope Enum Values UNCHANGED CHANGED 30.7.7.17. StorageCVSSV3Severity Enum Values UNKNOWN NONE LOW MEDIUM HIGH CRITICAL 30.7.7.18. StorageCosignSignature Field Name Required Nullable Type Description Format rawSignature byte[] byte signaturePayload byte[] byte certPem byte[] byte certChainPem byte[] byte 30.7.7.19. StorageDataSource Field Name Required Nullable Type Description Format id String name String mirror String 30.7.7.20. StorageEmbeddedImageScanComponent Field Name Required Nullable Type Description Format name String version String license StorageLicense vulns List of StorageEmbeddedVulnerability layerIndex Integer int32 priority String int64 source StorageSourceType OS, PYTHON, JAVA, RUBY, NODEJS, GO, DOTNETCORERUNTIME, INFRASTRUCTURE, location String topCvss Float float riskScore Float float fixedBy String Component version that fixes all the fixable vulnerabilities in this component. executables List of StorageEmbeddedImageScanComponentExecutable 30.7.7.21. StorageEmbeddedImageScanComponentExecutable Field Name Required Nullable Type Description Format path String dependencies List of string 30.7.7.22. 
StorageEmbeddedVulnerability Field Name Required Nullable Type Description Format cve String cvss Float float summary String link String fixedBy String scoreVersion StorageEmbeddedVulnerabilityScoreVersion V2, V3, cvssV2 StorageCVSSV2 cvssV3 StorageCVSSV3 publishedOn Date date-time lastModified Date date-time vulnerabilityType EmbeddedVulnerabilityVulnerabilityType UNKNOWN_VULNERABILITY, IMAGE_VULNERABILITY, K8S_VULNERABILITY, ISTIO_VULNERABILITY, NODE_VULNERABILITY, OPENSHIFT_VULNERABILITY, vulnerabilityTypes List of EmbeddedVulnerabilityVulnerabilityType suppressed Boolean suppressActivation Date date-time suppressExpiry Date date-time firstSystemOccurrence Date Time when the CVE was first seen, for this specific distro, in the system. date-time firstImageOccurrence Date Time when the CVE was first seen in this image. date-time severity StorageVulnerabilitySeverity UNKNOWN_VULNERABILITY_SEVERITY, LOW_VULNERABILITY_SEVERITY, MODERATE_VULNERABILITY_SEVERITY, IMPORTANT_VULNERABILITY_SEVERITY, CRITICAL_VULNERABILITY_SEVERITY, state StorageVulnerabilityState OBSERVED, DEFERRED, FALSE_POSITIVE, 30.7.7.23. StorageEmbeddedVulnerabilityScoreVersion Enum Values V2 V3 30.7.7.24. StorageImage Field Name Required Nullable Type Description Format id String name StorageImageName names List of StorageImageName This should deprecate the ImageName field long-term, allowing images with the same digest to be associated with different locations. TODO(dhaus): For now, this message will be without search tags due to duplicated search tags otherwise. metadata StorageImageMetadata scan StorageImageScan signatureVerificationData StorageImageSignatureVerificationData signature StorageImageSignature components Integer int32 cves Integer int32 fixableCves Integer int32 lastUpdated Date date-time notPullable Boolean isClusterLocal Boolean priority String int64 riskScore Float float topCvss Float float notes List of StorageImageNote 30.7.7.25. StorageImageLayer Field Name Required Nullable Type Description Format instruction String value String created Date date-time author String empty Boolean 30.7.7.26. StorageImageMetadata Field Name Required Nullable Type Description Format v1 StorageV1Metadata v2 StorageV2Metadata layerShas List of string dataSource StorageDataSource version String uint64 30.7.7.27. StorageImageName Field Name Required Nullable Type Description Format registry String remote String tag String fullName String 30.7.7.28. StorageImageNote Enum Values MISSING_METADATA MISSING_SCAN_DATA MISSING_SIGNATURE MISSING_SIGNATURE_VERIFICATION_DATA 30.7.7.29. StorageImageScan Field Name Required Nullable Type Description Format scannerVersion String scanTime Date date-time components List of StorageEmbeddedImageScanComponent operatingSystem String dataSource StorageDataSource notes List of StorageImageScanNote hash String uint64 30.7.7.30. StorageImageScanNote Enum Values UNSET OS_UNAVAILABLE PARTIAL_SCAN_DATA OS_CVES_UNAVAILABLE OS_CVES_STALE LANGUAGE_CVES_UNAVAILABLE CERTIFIED_RHEL_SCAN_UNAVAILABLE 30.7.7.31. StorageImageSignature Field Name Required Nullable Type Description Format signatures List of StorageSignature fetched Date date-time 30.7.7.32. StorageImageSignatureVerificationData Field Name Required Nullable Type Description Format results List of StorageImageSignatureVerificationResult 30.7.7.33. 
StorageImageSignatureVerificationResult Field Name Required Nullable Type Description Format verificationTime Date date-time verifierId String verifier_id correlates to the ID of the signature integration used to verify the signature. status StorageImageSignatureVerificationResultStatus UNSET, VERIFIED, FAILED_VERIFICATION, INVALID_SIGNATURE_ALGO, CORRUPTED_SIGNATURE, GENERIC_ERROR, description String description is set in the case of an error with the specific error's message. Otherwise, this will not be set. verifiedImageReferences List of string The full image names that are verified by this specific signature integration ID. 30.7.7.34. StorageImageSignatureVerificationResultStatus Status represents the status of the result. VERIFIED: VERIFIED is set when the signature's verification was successful. FAILED_VERIFICATION: FAILED_VERIFICATION is set when the signature's verification failed. INVALID_SIGNATURE_ALGO: INVALID_SIGNATURE_ALGO is set when the signature's algorithm is invalid and unsupported. CORRUPTED_SIGNATURE: CORRUPTED_SIGNATURE is set when the raw signature is corrupted, i.e. wrong base64 encoding. GENERIC_ERROR: GENERIC_ERROR is set when an error occurred during verification that cannot be associated with a specific status. Enum Values UNSET VERIFIED FAILED_VERIFICATION INVALID_SIGNATURE_ALGO CORRUPTED_SIGNATURE GENERIC_ERROR 30.7.7.35. StorageLicense Field Name Required Nullable Type Description Format name String type String url String 30.7.7.36. StorageSignature Field Name Required Nullable Type Description Format cosign StorageCosignSignature 30.7.7.37. StorageSourceType Enum Values OS PYTHON JAVA RUBY NODEJS GO DOTNETCORERUNTIME INFRASTRUCTURE 30.7.7.38. StorageV1Metadata Field Name Required Nullable Type Description Format digest String created Date date-time author String layers List of StorageImageLayer user String command List of string entrypoint List of string volumes List of string labels Map of string 30.7.7.39. StorageV2Metadata Field Name Required Nullable Type Description Format digest String 30.7.7.40. StorageVulnerabilitySeverity Enum Values UNKNOWN_VULNERABILITY_SEVERITY LOW_VULNERABILITY_SEVERITY MODERATE_VULNERABILITY_SEVERITY IMPORTANT_VULNERABILITY_SEVERITY CRITICAL_VULNERABILITY_SEVERITY 30.7.7.41. StorageVulnerabilityState VulnerabilityState indicates if vulnerability is being observed or deferred(/suppressed). By default, it vulnerabilities are observed. Enum Values OBSERVED DEFERRED FALSE_POSITIVE 30.7.7.42. V1ScanImageRequest Field Name Required Nullable Type Description Format imageName String force Boolean includeSnoozed Boolean cluster String Cluster to delegate scan to, may be the cluster's name or ID. 30.8. UnwatchImage DELETE /v1/watchedimages UnwatchImage marks an image name to no longer be watched. It returns successfully if the image is no longer being watched after the call, irrespective of whether the image was already being watched. 30.8.1. Description 30.8.2. Parameters 30.8.2.1. Query Parameters Name Description Required Default Pattern name The name of the image to unwatch. Should match the name of a previously watched image. - null 30.8.3. Return Type Object 30.8.4. Content Type application/json 30.8.5. Responses Table 30.8. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 30.8.6. Samples 30.8.7. Common object reference 30.8.7.1. 
ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 30.8.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 30.8.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 30.9. GetWatchedImages GET /v1/watchedimages GetWatchedImages returns the list of image names that are currently being watched. 30.9.1. Description 30.9.2. Parameters 30.9.3. Return Type V1GetWatchedImagesResponse 30.9.4. Content Type application/json 30.9.5. Responses Table 30.9. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetWatchedImagesResponse 0 An unexpected error response. RuntimeError 30.9.6. Samples 30.9.7. Common object reference 30.9.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. 
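Pulling together the ScanImage, UnwatchImage, and GetWatchedImages endpoints described above, a minimal command-line sketch looks as follows. The image name is hypothetical, the host and token variables are the same assumptions as in the earlier examples, and only request fields shown in this reference (imageName, force, name) are used.

# Force a fresh scan of a single image (body fields from V1ScanImageRequest)
$ curl -sk -X POST \
    -H "Authorization: Bearer $ROX_API_TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"imageName": "quay.io/example/app:1.0", "force": true}' \
    "https://$ROX_ENDPOINT/v1/images/scan"

# List the image names currently being watched
$ curl -sk -H "Authorization: Bearer $ROX_API_TOKEN" \
    "https://$ROX_ENDPOINT/v1/watchedimages"

# Stop watching an image; per UnwatchImage, this succeeds whether or not the image was being watched
$ curl -sk -X DELETE \
    -H "Authorization: Bearer $ROX_API_TOKEN" \
    "https://$ROX_ENDPOINT/v1/watchedimages?name=quay.io/example/app:1.0"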
Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 30.9.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 30.9.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 30.9.7.3. StorageWatchedImage Field Name Required Nullable Type Description Format name String 30.9.7.4. V1GetWatchedImagesResponse Field Name Required Nullable Type Description Format watchedImages List of StorageWatchedImage 30.10. WatchImage POST /v1/watchedimages WatchImage marks an image name as to be watched. 30.10.1. Description 30.10.2. Parameters 30.10.2.1. Body Parameter Name Description Required Default Pattern body V1WatchImageRequest X 30.10.3. Return Type V1WatchImageResponse 30.10.4. Content Type application/json 30.10.5. Responses Table 30.10. HTTP Response Codes Code Message Datatype 200 A successful response. V1WatchImageResponse 0 An unexpected error response. RuntimeError 30.10.6. Samples 30.10.7. Common object reference 30.10.7.1. 
ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 30.10.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 30.10.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 30.10.7.3. V1WatchImageRequest Field Name Required Nullable Type Description Format name String The name of the image. This must be fully qualified, including a tag, but must NOT include a SHA. 30.10.7.4. V1WatchImageResponse Field Name Required Nullable Type Description Format normalizedName String errorType WatchImageResponseErrorType NO_ERROR, INVALID_IMAGE_NAME, NO_VALID_INTEGRATION, SCAN_FAILED, errorMessage String Only set if error_type is NOT equal to \"NO_ERROR\". 30.10.7.5. WatchImageResponseErrorType Enum Values NO_ERROR INVALID_IMAGE_NAME NO_VALID_INTEGRATION SCAN_FAILED | [
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Next Tag: 13",
"Next Tag: 21",
"Next Tag: 19",
"If any fields of ImageMetadata are modified including subfields, please check pkg/images/enricher/metadata.go to ensure that those changes will be automatically picked up Next Tag: 6",
"Next tag: 8",
"Next Tag: 6",
"Stream result of v1ExportImageResponse",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Next Tag: 13",
"Next Tag: 21",
"Next Tag: 19",
"If any fields of ImageMetadata are modified including subfields, please check pkg/images/enricher/metadata.go to ensure that those changes will be automatically picked up Next Tag: 6",
"Next tag: 8",
"Next Tag: 6",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Next Tag: 13",
"Next Tag: 21",
"Next Tag: 19",
"If any fields of ImageMetadata are modified including subfields, please check pkg/images/enricher/metadata.go to ensure that those changes will be automatically picked up Next Tag: 6",
"Next tag: 8",
"Next Tag: 6",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/api_reference/imageservice |
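As a practical illustration of the watched-image endpoints documented above (WatchImage, GetWatchedImages, UnwatchImage), the following curl sketch shows one possible way to call them. The Central address (central.example.com), the image name, and the ROX_API_TOKEN environment variable are assumptions for illustration only; the paths, methods, and request and response shapes are the ones listed in this reference.
# Watch an image: POST /v1/watchedimages with a fully qualified name (tag required, no SHA)
curl -sk -X POST \
  -H "Authorization: Bearer $ROX_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name": "quay.io/example/my-app:1.0"}' \
  https://central.example.com/v1/watchedimages
# List the image names currently being watched: GET /v1/watchedimages
curl -sk -H "Authorization: Bearer $ROX_API_TOKEN" \
  https://central.example.com/v1/watchedimages
# Stop watching the image: DELETE /v1/watchedimages?name=<previously watched name>
curl -sk -X DELETE -H "Authorization: Bearer $ROX_API_TOKEN" \
  "https://central.example.com/v1/watchedimages?name=quay.io/example/my-app:1.0"
In the WatchImage response, an errorType other than NO_ERROR (for example INVALID_IMAGE_NAME or SCAN_FAILED) indicates that the image was not added to the watch list, and errorMessage carries the details.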
Chapter 13. Using a vault to obtain secrets | Chapter 13. Using a vault to obtain secrets Red Hat build of Keycloak currently provides two out-of-the-box implementations of the Vault SPI: a plain-text file-based vault and a Java KeyStore-based vault. To obtain a secret from a vault rather than entering it directly, enter the following specially crafted string into the appropriate field: where the key is the name of the secret recognized by the vault. To prevent secrets from leaking across realms, Red Hat build of Keycloak combines the realm name with the key obtained from the vault expression. This method means that the key does not directly map to an entry in the vault but creates the final entry name according to the algorithm used to combine the key with the realm name. In the case of the file-based vault, this combination maps to a specific file name; for the Java KeyStore-based vault, it maps to a specific alias name. You can obtain the secret from the vault in the following fields: SMTP password In the realm SMTP settings LDAP bind credential In the LDAP settings of LDAP-based user federation. OIDC identity provider secret In the Client Secret inside identity provider OpenID Connect Config 13.1. Key resolvers All built-in providers support the configuration of key resolvers. A key resolver implements the algorithm or strategy for combining the realm name with the key, obtained from the USD{vault.key} expression, into the final entry name used to retrieve the secret from the vault. Red Hat build of Keycloak uses the keyResolvers property to configure the resolvers that the provider uses. The value is a comma-separated list of resolver names. An example of the configuration for the files-plaintext provider follows: kc.[sh|bat] start --spi-vault-file-key-resolvers=REALM_UNDERSCORE_KEY,KEY_ONLY The resolvers run in the same order you declare them in the configuration. For each resolver, Red Hat build of Keycloak uses the last entry name the resolver produces, which combines the realm with the vault key to search for the vault's secret. If Red Hat build of Keycloak finds a secret, it returns the secret. If not, Red Hat build of Keycloak uses the next resolver. This search continues until Red Hat build of Keycloak finds a non-empty secret or runs out of resolvers. If Red Hat build of Keycloak finds no secret, Red Hat build of Keycloak returns an empty secret. In the example, Red Hat build of Keycloak uses the REALM_UNDERSCORE_KEY resolver first. If Red Hat build of Keycloak finds an entry in the vault by using that resolver, Red Hat build of Keycloak returns that entry. If not, Red Hat build of Keycloak searches again using the KEY_ONLY resolver. If Red Hat build of Keycloak finds an entry by using the KEY_ONLY resolver, Red Hat build of Keycloak returns that entry. If Red Hat build of Keycloak uses all resolvers, Red Hat build of Keycloak returns an empty secret. A list of the currently available resolvers follows: Name Description KEY_ONLY Red Hat build of Keycloak ignores the realm name and uses the key from the vault expression. REALM_UNDERSCORE_KEY Red Hat build of Keycloak combines the realm and key by using an underscore character. Red Hat build of Keycloak escapes occurrences of underscores in the realm or key with another underscore character. For example, if the realm is called master_realm and the key is smtp_key , the combined key is master__realm_smtp__key . REALM_FILESEPARATOR_KEY Red Hat build of Keycloak combines the realm and key by using the platform file separator character.
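To make the resolver behavior concrete, here is a minimal, hedged sketch for the files-plaintext vault using the REALM_UNDERSCORE_KEY example above (realm master_realm, key smtp_key). The vault directory path is an arbitrary example, and the --vault file and --vault-dir options are an assumption about how the file-based vault is enabled; verify the exact options against the server configuration guide for your installed version.
# Assumed vault directory; the file-based vault stores each secret as one plain-text file
mkdir -p /opt/keycloak-vault
# Realm "master_realm" + key "smtp_key", with underscores escaped, gives this file name
printf '%s' 'my-smtp-password' > /opt/keycloak-vault/master__realm_smtp__key
# Enable the file-based vault (assumed flags) and declare the resolver order shown above
kc.sh start --vault file --vault-dir /opt/keycloak-vault \
  --spi-vault-file-key-resolvers=REALM_UNDERSCORE_KEY,KEY_ONLY
With this in place, entering ${vault.smtp_key} in the SMTP password field of the master_realm realm resolves to the contents of that file; if the file is absent, the KEY_ONLY resolver would next look for a file named smtp_key.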
If you have not configured a resolver for the built-in providers, Red Hat build of Keycloak selects the REALM_UNDERSCORE_KEY . | [
"USD{vault.key}",
"kc.[sh|bat] start --spi-vault-file-key-resolvers=REALM_UNDERSCORE_KEY,KEY_ONLY"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/server_administration_guide/vault-administration |
Chapter 2. Managing Red Hat subscriptions | Chapter 2. Managing Red Hat subscriptions Red Hat Satellite can import content from the Red Hat Content Delivery Network (CDN). Satellite requires a Red Hat subscription manifest to find, access, and download content from the corresponding repositories. You must have a Red Hat subscription manifest containing a subscription allocation for each organization on Satellite Server. All subscription information is available in your Red Hat Customer Portal account. Use this chapter to import a Red Hat subscription manifest and manage the manifest within the Satellite web UI. Subscription allocations and organizations You can manage more than one organization if you have more than one subscription allocation. Satellite requires a single allocation for each organization configured in Satellite Server. The advantage of this is that each organization maintains separate subscriptions so that you can support multiple organizations, each with their own Red Hat accounts. Future-dated subscriptions You can use future-dated subscriptions in a subscription manifest. When you add future-dated subscriptions to your manifest before the expiry date of the existing subscriptions, you can have uninterrupted access to repositories. Prerequisites Ensure you have a Red Hat subscription manifest. If your Satellite is connected, use the Red Hat Hybrid Cloud Console to create the manifest. For more information, see Creating and managing manifests for a connected Satellite Server in Subscription Central . If your Satellite is disconnected, use the Red Hat Customer Portal to create the manifest. For more information, see Using manifests for a disconnected Satellite Server in Subscription Central . Additional resources Configuring Satellite Server to Consume Content from a Custom CDN in Installing Satellite Server in a disconnected network environment 2.1. Importing a Red Hat subscription manifest into Satellite Server Use the following procedure to import a Red Hat subscription manifest into Satellite Server. Note Simple Content Access (SCA) is set on the organization, not the manifest. Importing a manifest does not change your organization's Simple Content Access status. Simple Content Access simplifies the subscription experience for administrators. For more information, see the Subscription Management Administration Guide for Red Hat Enterprise Linux on the Red Hat Customer Portal. Prerequisites Ensure you have a Red Hat subscription manifest. If your Satellite is connected, use the Red Hat Hybrid Cloud Console to create and export the manifest. For more information, see Creating and managing manifests for a connected Satellite Server in Subscription Central . If your Satellite is disconnected, use the Red Hat Customer Portal to create and export the manifest. For more information, see Using manifests for a disconnected Satellite Server in Subscription Central . Procedure In the Satellite web UI, ensure the context is set to the organization you want to use. In the Satellite web UI, navigate to Content > Subscriptions and click Manage Manifest . In the Manage Manifest window, click Choose File . Navigate to the location that contains the Red Hat subscription manifest file, then click Open . CLI procedure Copy the Red Hat subscription manifest file from your local machine to Satellite Server: Log in to Satellite Server as the root user and import the Red Hat subscription manifest file: You can now enable repositories and import Red Hat content. 
For more information, see Importing Content in Managing content . 2.2. Locating a Red Hat subscription When you import a Red Hat subscription manifest into Satellite Server, the subscriptions from your manifest are listed in the Subscriptions window. If you have a high volume of subscriptions, you can filter the results to find a specific subscription. Prerequisites You must have a Red Hat subscription manifest file imported to Satellite Server. For more information, see Section 2.1, "Importing a Red Hat subscription manifest into Satellite Server" . Procedure In the Satellite web UI, ensure the context is set to the organization you want to use. In the Satellite web UI, navigate to Content > Subscriptions . In the Subscriptions window, click the Search field to view the list of search criteria for building your search query. Select search criteria to display further options. When you have built your search query, click the search icon. For example, if you place your cursor in the Search field and select expires , then press the space bar, another list appears with the options of placing a > , < , or = character. If you select > and press the space bar, another list of automatic options appears. You can also enter your own criteria. 2.3. Adding Red Hat subscriptions to subscription manifests Use the following procedure to add Red Hat subscriptions to a subscription manifest in the Satellite web UI. Prerequisites You must have a Red Hat subscription manifest file imported to Satellite Server. For more information, see Section 2.1, "Importing a Red Hat subscription manifest into Satellite Server" . Procedure In the Satellite web UI, ensure the context is set to the organization you want to use. In the Satellite web UI, navigate to Content > Subscriptions . In the Subscriptions window, click Add Subscriptions . On the row of each subscription you want to add, enter the quantity in the Quantity to Allocate column. Click Submit 2.4. Removing Red Hat subscriptions from subscription manifests Use the following procedure to remove Red Hat subscriptions from a subscription manifest in the Satellite web UI. Note Manifests must not be deleted. If you delete the manifest from the Red Hat Customer Portal or in the Satellite web UI, all of the entitlements for all of your content hosts will be removed. Prerequisites You must have a Red Hat subscription manifest file imported to Satellite Server. For more information, see Section 2.1, "Importing a Red Hat subscription manifest into Satellite Server" . Procedure In the Satellite web UI, ensure the context is set to the organization you want to use. In the Satellite web UI, navigate to Content > Subscriptions . On the row of each subscription you want to remove, select the corresponding checkbox. Click Delete , and then confirm deletion. 2.5. Updating and refreshing Red Hat subscription manifests Every time that you change a subscription allocation, you must refresh the manifest to reflect these changes. For example, you must refresh the manifest if you take any of the following actions: Renewing a subscription Adjusting subscription quantities Purchasing additional subscriptions You can refresh the manifest directly in the Satellite web UI. Alternatively, you can import an updated manifest that contains the changes. Procedure In the Satellite web UI, ensure the context is set to the organization you want to use. In the Satellite web UI, navigate to Content > Subscriptions . In the Subscriptions window, click Manage Manifest . 
In the Manage Manifest window, click Refresh . 2.6. Content Delivery Network structure Red Hat Content Delivery Network (CDN), located at cdn.redhat.com , is a geographically distributed series of static webservers which include content and errata designed to be used by systems. This content can be accessed directly through a system registered by using Subscription Manager or through the Satellite web UI. The accessible subset of the CDN is configured through content available to a system by using Red Hat Subscription Management or by using Satellite Server. Red Hat Content Delivery network is protected by X.509 certificate authentication to ensure that only valid users can access it. Directory structure of the CDN 1 The content directory. 2 Directory responsible for the lifecycle of the content. Common directories include beta (for Beta code), dist (for Production) and eus (For Extended Update Support) directories. 3 Directory responsible for the product name. Usually rhel for Red Hat Enterprise Linux. 4 Directory responsible for the type of the product. For Red Hat Enterprise Linux this might include server , workstation , and computenode directories. 5 Directory responsible for the release version, such as 7 , 7.2 or 7Server . 6 Directory responsible for the base architecture, such as i386 or x86_64 . 7 Directory responsible for the repository name, such as sat-tools , kickstart , rhscl . Some components have additional subdirectories which might vary. This directory structure is also used in the Red Hat Subscription Manifest. | [
"scp ~/ manifest_file .zip root@ satellite.example.com :~/.",
"hammer subscription upload --file ~/ manifest_file .zip --organization \" My_Organization \"",
"tree -d -L 11 βββ content 1 βββ beta 2 β βββ rhel 3 β βββ server 4 β βββ 7 5 β βββ x86_64 6 β βββ sat-tools 7 βββ dist βββ rhel βββ server βββ 7 βββ 7.2 β βββ x86_64 β βββ kickstart βββ 7Server βββ x86_64 βββ os"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_content/managing_red_hat_subscriptions_content-management |
Chapter 3. Unsupported features and deprecated features | Chapter 3. Unsupported features and deprecated features 3.1. Unsupported features Support for some technologies is removed due to the high maintenance cost, low community interest, and better alternative solutions. The following features are not supported in JBoss EAP XP 5.0: MicroProfile Metrics MicroProfile Metrics is no longer supported, and is replaced by Micrometer. MicroProfile OpenTracing MicroProfile OpenTracing is no longer supported, and is replaced by OpenTelemetry Tracing. Red Hat OpenShift Streams for Apache Kafka Since Red Hat no longer offers Red Hat OpenShift Streams for Apache Kafka (RHOSAK), the MicroProfile Reactive Messaging with Kafka quickstart has been updated to not demonstrate connecting to RHOSAK. Snappy compression disabled for Windows systems JBoss EAP XP 5.0 does not support Snappy compression on Windows platforms for the MicroProfile Reactive Messaging Kafka connector. The Kafka connector for MicroProfile Reactive Messaging introduced Snappy compression in JBoss EAP XP 4.0.2 using Snappy version 1.1.8.4. However, Java Snappy compression was removed in version 1.1.9.0 of Snappy due to data corruption concerns. JBoss EAP XP 5.0 now uses the latest version of Snappy, 1.1.10.5, which uses native code for compression. The Red Hat build of the Snappy jar includes only the Linux natives. MicroProfile Reactive Messaging Kafka client The MicroProfile Reactive Messaging Kafka client available with JBoss EAP XP 5.0 is not supported on Windows platforms. For a complete list of unsupported features in JBoss EAP 8.0, see the Unsupported features section in JBoss EAP 8.0 Release Notes. 3.2. Deprecated features Some features have been deprecated with this release. This means that no enhancements are made to these features, and they might be removed in the future, usually the major release. For more information, see Deprecated features in Red Hat JBoss Enterprise Application Platform expansion pack (EAP XP) 5 . Red Hat continues to provide full support and bug fixes under our standard support terms and conditions. For more information about the Red Hat support policy for JBoss EAP XP, see the Red Hat JBoss Enterprise Application Platform expansion pack life cycle and support policies located on the Red Hat Customer Portal. Important All of the features that were deprecated in Red Hat JBoss Enterprise Application Platform 8.0 are also deprecated in JBoss EAP XP 5.0.0. For more information about deprecated features in JBoss EAP 8.0, see Deprecated features in the Release notes for Red Hat JBoss Enterprise Application Platform 8.0 . JBoss EAP OpenShift templates JBoss EAP templates for OpenShift are removed. Legacy patching for bootable jar The legacy patching feature for bootable jar is removed. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/red_hat_jboss_eap_xp_5.0_release_notes/unsupported_features_and_deprecated_features |