title | content | commands | url
---|---|---|---|
8.8. Selecting Network Team Configuration Methods
|
8.8. Selecting Network Team Configuration Methods To configure a network team using NetworkManager's text user interface tool, nmtui, proceed to Section 8.9, "Configure a Network Team Using the Text User Interface, nmtui". To create a network team using the command-line tool, nmcli, proceed to Section 8.10.1, "Configure Network Teaming Using nmcli". To create a network team using the Team daemon, teamd, proceed to Section 8.10.2, "Creating a Network Team Using teamd". To create a network team using configuration files, proceed to Section 8.10.3, "Creating a Network Team Using ifcfg Files". To configure a network team using a graphical user interface, see Section 8.14, "Creating a Network Team Using a GUI".
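For readers deciding between these methods, the following is a minimal sketch of the nmcli approach; the connection name team0 and the port interfaces em1 and em2 are illustrative placeholders, not values taken from this guide.

```bash
# Create a team interface (the default runner applies unless a JSON config is supplied)
nmcli connection add type team con-name team0 ifname team0
# Attach two example ports to the team
nmcli connection add type team-slave con-name team0-port1 ifname em1 master team0
nmcli connection add type team-slave con-name team0-port2 ifname em2 master team0
# Activate the team connection
nmcli connection up team0
```

The referenced procedures cover runner configuration and verification in more detail.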
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-selecting_network_team_configuration_methods
|
Red Hat Ansible Security Automation Guide
|
Red Hat Ansible Security Automation Guide Red Hat Ansible Automation Platform 2.3 This guide provides procedures for automating and streamlining various security processes needed to identify, triage, and respond to security events using Ansible. Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_security_automation_guide/index
|
6.4. Changing the Default Mapping
|
6.4. Changing the Default Mapping In Red Hat Enterprise Linux 6, Linux users are mapped to the SELinux __default__ login by default (which is in turn mapped to the SELinux unconfined_u user). If you would like new Linux users, and Linux users not specifically mapped to an SELinux user, to be confined by default, change the default mapping with the semanage login command. For example, run the following command as the Linux root user to change the default mapping from unconfined_u to user_u: Run the semanage login -l command as the Linux root user to verify that the __default__ login is mapped to user_u: If a new Linux user is created and an SELinux user is not specified, or if an existing Linux user logs in and does not match a specific entry in the semanage login -l output, they are mapped to user_u, as per the __default__ login. To change back to the default behavior, run the following command as the Linux root user to map the __default__ login to the SELinux unconfined_u user:
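The commands accompanying this section change the __default__ mapping itself. As a related, hedged illustration, a single existing Linux account can be confined without touching the default; the login name exampleuser below is hypothetical.

```bash
# Map one specific Linux login to the confined SELinux user_u user
semanage login -a -s user_u exampleuser
# Confirm the new mapping appears alongside __default__
semanage login -l | grep exampleuser
```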
|
[
"~]# semanage login -m -S targeted -s \"user_u\" -r s0 __default__",
"~]# semanage login -l Login Name SELinux User MLS/MCS Range __default__ user_u s0 root unconfined_u s0-s0:c0.c1023 system_u system_u s0-s0:c0.c1023",
"~]# semanage login -m -S targeted -s \"unconfined_u\" -r s0-s0:c0.c1023 __default__"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-confining_users-changing_the_default_mapping
|
Chapter 56. Kernel
|
Chapter 56. Kernel Cache information is missing in sysfs if firmware does not support ACPI PPTT The kernel-alt package has been updated to use the Advanced Configuration and Power Interface Processor Properties Topology Table (ACPI PPTT) to populate CPU topology, including the CPU's cache information. Consequently, on systems whose firmware does not support ACPI PPTT, the /sys/devices/system/cpu/cpu0/cache file does not contain the cache information. To work around this problem, check with your hardware vendor for updated firmware that includes ACPI PPTT support. (BZ#1615370) PCI-passthrough of devices connected to PCIe slots is not possible with default settings of HPE ProLiant Gen8 and Gen9 Default settings of HPE ProLiant Gen8 and Gen9 systems disallow use of PCI-passthrough for devices connected to PCIe slots. Consequently, any attempt to pass through such devices fails with the following message in the kernel log: To work around this problem: On HPE ProLiant Gen8 systems, reconfigure the relevant system settings with the conrep tool provided by HPE. On HPE ProLiant Gen9 systems, update the system firmware or the NIC firmware, depending on the type of NICs used. For more details about the workaround, see https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c04781229. (BZ#1615210) Attaching a non-RoCE device to the RXE driver no longer causes a kernel to panic When a user created a Soft RDMA Over Converged Ethernet (Soft RoCE) interface and attached a non-RoCE device, certain issues were observed in the RXE driver. As a consequence, the kernel panicked when rebooting or shutting down the host. With this update, disabling the Soft RoCE interface before rebooting or shutting down a host fixes the issue. As a result, the host no longer panics in the described scenario. (BZ#1520302) Enabling the BCC packages for the 64-bit AMD and Intel architectures only The BPF Compiler Collection (BCC) library and the pcp-pmda-bcc plugins use the bpf() system call, which is enabled only on the 64-bit AMD and Intel CPU architectures. As a result, Red Hat Enterprise Linux 7 only supports BCC and pcp-pmda-bcc for the 64-bit AMD and Intel CPU architectures. (BZ#1633185) Branch prediction of ternary operators no longer causes a system panic Previously, the branch prediction of ternary operators caused the compiler to incorrectly call the blk_queue_nonrot() function before checking the mddev->queue structure. As a consequence, the system panicked. With this update, checking mddev->queue and then calling blk_queue_nonrot() prevents the bug from appearing. As a result, the system no longer panics in the described scenario. (BZ#1627563) RAID1 write-behind causes a kernel panic Write-behind mode in the Redundant Array of Independent Disks Mode 1 (RAID1) virtualization technology uses the upper layer bio structures, which are freed immediately after the bio structures written to bottom layer disks come back. As a consequence, a kernel panic is triggered and the write-behind function cannot be used. (BZ#1632575) The i40iw module does not load automatically on boot Some i40e NICs do not support iWarp, and the i40iw module does not fully support suspend and resume operations. Consequently, the i40iw module is not automatically loaded by default to ensure suspend and resume operations work properly. To work around this problem, edit the /lib/udev/rules.d/90-rdma-hw-modules.rules file to enable automatic loading of i40iw.
Also note that if there is another RDMA device installed with an i40e device on the same machine, the non-i40e RDMA device triggers the rdma service, which loads all enabled RDMA stack modules, including the i40iw module. (BZ#1622413)
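As a rough, hedged illustration of the i40iw workaround (the exact udev rule content depends on the installed rdma-core version, so it is not reproduced here), the module can also be checked and loaded manually:

```bash
# Check whether the i40iw module is currently loaded
lsmod | grep i40iw
# Load it for the current boot if it is missing
modprobe i40iw
# Optionally make the load persistent across reboots via systemd's modules-load.d
echo i40iw > /etc/modules-load.d/i40iw.conf
```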
|
[
"Device is ineligible for IOMMU domain attach due to platform RMRR requirement. Contact your platform vendor."
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.6_release_notes/known_issues_kernel
|
Deploying RHEL 8 on Amazon Web Services
|
Deploying RHEL 8 on Amazon Web Services Red Hat Enterprise Linux 8 Obtaining RHEL system images and creating RHEL instances on AWS Red Hat Customer Content Services
|
[
"yum install python3 python3-pip",
"pip3 install awscli",
"aws configure AWS Access Key ID [None]: AWS Secret Access Key [None]: Default region name [None]: Default output format [None]:",
"BUCKET= bucketname aws s3 mb s3://USDBUCKET",
"{ \"Version\": \"2022-10-17\", \"Statement\": [{ \"Effect\": \"Allow\", \"Principal\": { \"Service\": \"vmie.amazonaws.com\" }, \"Action\": \"sts:AssumeRole\", \"Condition\": { \"StringEquals\": { \"sts:Externalid\": \"vmimport\" } } }] }",
"{ \"Version\": \"2012-10-17\", \"Statement\": [{ \"Effect\": \"Allow\", \"Action\": [\"s3:GetBucketLocation\", \"s3:GetObject\", \"s3:ListBucket\"], \"Resource\": [\"arn:aws:s3:::%s\", \"arn:aws:s3:::%s/ \"] }, { \"Effect\": \"Allow\", \"Action\": [\"ec2:ModifySnapshotAttribute\", \"ec2:CopySnapshot\", \"ec2:RegisterImage\", \"ec2:Describe \"], \"Resource\": \"*\" }] } USDBUCKET USDBUCKET",
"aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json",
"aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json",
"provider = \"aws\" [settings] accessKeyID = \" AWS_ACCESS_KEY_ID \" secretAccessKey = \"AWS_SECRET_ACCESS_KEY\" bucket = \"AWS_BUCKET\" region = \"AWS_REGION\" key = \"IMAGE_KEY\"",
"composer-cli compose start blueprint-name image-type image-key configuration-file .toml",
"composer-cli compose status",
"chmod 400 <_your-instance-name.pem_>",
"ssh -i <_your-instance-name.pem_> ec2-user@<_your-instance-IP-address_>",
"virt-install --name kvmtest --memory 2048 --vcpus 2 --cdrom /home/username/Downloads/rhel8.iso,bus=virtio --os-variant=rhel8.0",
"subscription-manager register --auto-attach",
"yum install cloud-init systemctl enable --now cloud-init.service",
"dracut -f --add-drivers \"nvme xen-netfront xen-blkfront\"",
"dracut -f --add-drivers \"nvme\"",
"yum install awscli",
"aws --version aws-cli/1.19.77 Python/3.6.15 Linux/5.14.16-201.fc34.x86_64 botocore/1.20.77",
"aws configure AWS Access Key ID [None]: AWS Secret Access Key [None]: Default region name [None]: Default output format [None]:",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Service\": \"vmie.amazonaws.com\" }, \"Action\": \"sts:AssumeRole\", \"Condition\": { \"StringEquals\":{ \"sts:Externalid\": \"vmimport\" } } } ] }",
"aws iam create-role --role-name vmimport --assume-role-policy-document file:///home/sample/ImportService/trust-policy.json",
"{ \"Version\":\"2012-10-17\", \"Statement\":[ { \"Effect\":\"Allow\", \"Action\":[ \"s3:GetBucketLocation\", \"s3:GetObject\", \"s3:ListBucket\" ], \"Resource\":[ \"arn:aws:s3:::s3-bucket-name\", \"arn:aws:s3:::s3-bucket-name/*\" ] }, { \"Effect\":\"Allow\", \"Action\":[ \"ec2:ModifySnapshotAttribute\", \"ec2:CopySnapshot\", \"ec2:RegisterImage\", \"ec2:Describe*\" ], \"Resource\":\"*\" } ] }",
"aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file:///home/sample/ImportService/role-policy.json",
"qemu-img convert -f qcow2 -O raw rhel-8.0-sample.qcow2 rhel-8.0-sample.raw",
"aws s3 cp rhel-8.0-sample.raw s3://s3-bucket-name",
"{ \"Description\": \"rhel-8.0-sample.raw\", \"Format\": \"raw\", \"UserBucket\": { \"S3Bucket\": \"s3-bucket-name\", \"S3Key\": \"s3-key\" } }",
"aws ec2 import-snapshot --disk-container file://containers.json",
"{ \"SnapshotTaskDetail\": { \"Status\": \"active\", \"Format\": \"RAW\", \"DiskImageSize\": 0.0, \"UserBucket\": { \"S3Bucket\": \"s3-bucket-name\", \"S3Key\": \"rhel-8.0-sample.raw\" }, \"Progress\": \"3\", \"StatusMessage\": \"pending\" }, \"ImportTaskId\": \"import-snap-06cea01fa0f1166a8\" }",
"aws ec2 describe-import-snapshot-tasks --import-task-ids import-snap-06cea01fa0f1166a8",
"aws ec2 register-image --name \"myimagename\" --description \"myimagedescription\" --architecture x86_64 --virtualization-type hvm --root-device-name \"/dev/sda1\" --ena-support --block-device-mappings \"{\\\"DeviceName\\\": \\\"/dev/sda1\\\",\\\"Ebs\\\": {\\\"SnapshotId\\\": \\\"snap-0ce7f009b69ab274d\\\"}}\"",
"subscription-manager register --auto-attach",
"insights-client register --display-name <display-name-value>",
"subscription-manager identity system identity: fdc46662-c536-43fb-a18a-bbcb283102b7 name: 192.168.122.222 org name: 6340056 org ID: 6340056",
"yum install awscli",
"aws --version aws-cli/1.19.77 Python/3.6.15 Linux/5.14.16-201.fc34.x86_64 botocore/1.20.77",
"aws configure AWS Access Key ID [None]: AWS Secret Access Key [None]: Default region name [None]: Default output format [None]:",
"chmod 400 KeyName.pem",
"sudo -i yum -y remove rh-amazon-rhui-client *",
"subscription-manager register --auto-attach",
"subscription-manager repos --disable= *",
"subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms",
"yum update -y",
"yum install pcs pacemaker fence-agents-aws",
"passwd hacluster",
"firewall-cmd --permanent --add-service=high-availability firewall-cmd --reload",
"systemctl start pcsd.service systemctl enable pcsd.service",
"systemctl status pcsd.service pcsd.service - PCS GUI and remote configuration interface Loaded: loaded (/usr/lib/systemd/system/pcsd.service; enabled; vendor preset: disabled) Active: active (running) since Thu 2018-03-01 14:53:28 UTC; 28min ago Docs: man:pcsd(8) man:pcs(8) Main PID: 5437 (pcsd) CGroup: /system.slice/pcsd.service └─5437 /usr/bin/ruby /usr/lib/pcsd/pcsd > /dev/null & Mar 01 14:53:27 ip-10-0-0-48.ec2.internal systemd[1]: Starting PCS GUI and remote configuration interface... Mar 01 14:53:28 ip-10-0-0-48.ec2.internal systemd[1]: Started PCS GUI and remote configuration interface.",
"pcs host auth <hostname1> <hostname2> <hostname3>",
"pcs host auth node01 node02 node03 Username: hacluster Password: node01: Authorized node02: Authorized node03: Authorized",
"pcs cluster setup <cluster_name> <hostname1> <hostname2> <hostname3>",
"pcs cluster setup new_cluster node01 node02 node03 [...] Synchronizing pcsd certificates on nodes node01, node02, node03 node02: Success node03: Success node01: Success Restarting pcsd on the nodes in order to reload the certificates node02: Success node03: Success node01: Success",
"pcs cluster enable --all node02: Cluster Enabled node03: Cluster Enabled node01: Cluster Enabled",
"pcs cluster start --all node02: Starting Cluster node03: Starting Cluster node01: Starting Cluster",
"echo USD(curl -s http://169.254.169.254/latest/meta-data/instance-id)",
"echo USD(curl -s http://169.254.169.254/latest/meta-data/instance-id) i-07f1ac63af0ec0ac6",
"pcs stonith create <name> fence_aws access_key=access-key secret_key= <secret-access-key> region= <region> pcmk_host_map=\"rhel-hostname-1:Instance-ID-1;rhel-hostname-2:Instance-ID-2;rhel-hostname-3:Instance-ID-3\" power_timeout=240 pcmk_reboot_timeout=480 pcmk_reboot_retries=4",
"pcs stonith create clusterfence fence_aws access_key=AKIAI123456MRMJA secret_key=a75EYIG4RVL3hdsdAslK7koQ8dzaDyn5yoIZ/ region=us-east-1 pcmk_host_map=\"ip-10-0-0-48:i-07f1ac63af0ec0ac6;ip-10-0-0-46:i-063fc5fe93b4167b2;ip-10-0-0-58:i-08bd39eb03a6fd2c7\" power_timeout=240 pcmk_reboot_timeout=480 pcmk_reboot_retries=4",
"aws ec2 describe-vpcs --output text --filters \"Name=tag:Name,Values= <clustername> -vpc\" --query 'Vpcs[ * ].VpcId' vpc-06bc10ac8f6006664",
"aws ec2 describe-instances --output text --filters \"Name=vpc-id,Values=vpc-06bc10ac8f6006664\" --query 'Reservations[ * ].Instances[ * ].{Name:Tags[? Key== Name ]|[0].Value,Instance:InstanceId}' | grep \"\\-node[a-c]\" i-0b02af8927a895137 <clustername> -nodea-vm i-0cceb4ba8ab743b69 <clustername> -nodeb-vm i-0502291ab38c762a5 <clustername> -nodec-vm",
"CLUSTER= <clustername> && pcs stonith create fenceUSD{CLUSTER} fence_aws access_key=XXXXXXXXXXXXXXXXXXXX pcmk_host_map=USD(for NODE in node{a..c}; do ssh USD{NODE} \"echo -n \\USD{HOSTNAME}:\\USD(curl -s http://169.254.169.254/latest/meta-data/instance-id)\\;\"; done) pcmk_reboot_retries=4 pcmk_reboot_timeout=480 power_timeout=240 region=xx-xxxx-x secret_key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
"pcs stonith config fenceUSD{CLUSTER} Resource: <clustername> (class=stonith type=fence_aws) Attributes: access_key=XXXXXXXXXXXXXXXXXXXX pcmk_host_map=nodea:i-0b02af8927a895137;nodeb:i-0cceb4ba8ab743b69;nodec:i-0502291ab38c762a5; pcmk_reboot_retries=4 pcmk_reboot_timeout=480 power_timeout=240 region=xx-xxxx-x secret_key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Operations: monitor interval=60s ( <clustername> -monitor-interval-60s)",
"pcs stonith fence <awsnodename>",
"pcs stonith fence ip-10-0-0-58 Node: ip-10-0-0-58 fenced",
"pcs status",
"pcs status Cluster name: newcluster Stack: corosync Current DC: ip-10-0-0-46 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum Last updated: Fri Mar 2 19:55:41 2018 Last change: Fri Mar 2 19:24:59 2018 by root via cibadmin on ip-10-0-0-46 3 nodes configured 1 resource configured Online: [ ip-10-0-0-46 ip-10-0-0-48 ] OFFLINE: [ ip-10-0-0-58 ] Full list of resources: clusterfence (stonith:fence_aws): Started ip-10-0-0-46 Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled",
"pcs cluster start <awshostname>",
"pcs status",
"pcs status Cluster name: newcluster Stack: corosync Current DC: ip-10-0-0-46 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum Last updated: Fri Mar 2 20:01:31 2018 Last change: Fri Mar 2 19:24:59 2018 by root via cibadmin on ip-10-0-0-48 3 nodes configured 1 resource configured Online: [ ip-10-0-0-46 ip-10-0-0-48 ip-10-0-0-58 ] Full list of resources: clusterfence (stonith:fence_aws): Started ip-10-0-0-46 Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled",
"aws ec2 describe-instances --output text --query 'Reservations[ * ].Instances[ * ].[InstanceId,Tags[?Key== Name ].Value]' i-07f1ac63af0ec0ac6 ip-10-0-0-48 i-063fc5fe93b4167b2 ip-10-0-0-46 i-08bd39eb03a6fd2c7 ip-10-0-0-58",
"yum install resource-agents",
"aws ec2 allocate-address --domain vpc --output text eipalloc-4c4a2c45 vpc 35.169.153.122",
"pcs resource describe awseip",
"pcs resource create <resource-id> awseip elastic_ip= <Elastic-IP-Address> allocation_id= <Elastic-IP-Association-ID> --group networking-group",
"pcs resource create elastic awseip elastic_ip=35.169.153.122 allocation_id=eipalloc-4c4a2c45 --group networking-group",
"pcs status",
"pcs status Cluster name: newcluster Stack: corosync Current DC: ip-10-0-0-58 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum Last updated: Mon Mar 5 16:27:55 2018 Last change: Mon Mar 5 15:57:51 2018 by root via cibadmin on ip-10-0-0-46 3 nodes configured 4 resources configured Online: [ ip-10-0-0-46 ip-10-0-0-48 ip-10-0-0-58 ] Full list of resources: clusterfence (stonith:fence_aws): Started ip-10-0-0-46 Resource Group: networking-group vip (ocf::heartbeat:IPaddr2): Started ip-10-0-0-48 elastic (ocf::heartbeat:awseip): Started ip-10-0-0-48 Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled",
"ssh -l <user-name> -i ~/.ssh/<KeyName>.pem <elastic-IP>",
"ssh -l ec2-user -i ~/.ssh/cluster-admin.pem 35.169.153.122",
"yum install resource-agents",
"pcs resource describe awsvip",
"pcs resource create <resource-id> awsvip secondary_private_ip= <Unused-IP-Address> --group <group-name>",
"pcs resource create privip awsvip secondary_private_ip=10.0.0.68 --group networking-group",
"pcs resource create <resource-id> IPaddr2 ip= <secondary-private-IP> --group <group-name>",
"root@ip-10-0-0-48 ~]# pcs resource create vip IPaddr2 ip=10.0.0.68 --group networking-group",
"pcs status",
"pcs status Cluster name: newcluster Stack: corosync Current DC: ip-10-0-0-46 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum Last updated: Fri Mar 2 22:34:24 2018 Last change: Fri Mar 2 22:14:58 2018 by root via cibadmin on ip-10-0-0-46 3 nodes configured 3 resources configured Online: [ ip-10-0-0-46 ip-10-0-0-48 ip-10-0-0-58 ] Full list of resources: clusterfence (stonith:fence_aws): Started ip-10-0-0-46 Resource Group: networking-group privip (ocf::heartbeat:awsvip): Started ip-10-0-0-48 vip (ocf::heartbeat:IPaddr2): Started ip-10-0-0-58 Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled",
"yum install resource-agents",
"pcs resource describe aws-vpc-move-ip",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"Stmt1424870324000\", \"Effect\": \"Allow\", \"Action\": \"ec2:DescribeRouteTables\", \"Resource\": \"*\" }, { \"Sid\": \"Stmt1424860166260\", \"Action\": [ \"ec2:CreateRoute\", \"ec2:ReplaceRoute\" ], \"Effect\": \"Allow\", \"Resource\": \"arn:aws:ec2:<region>:<account-id>:route-table/<ClusterRouteTableID>\" } ] }",
"aws ec2 create-route --route-table-id <ClusterRouteTableID> --destination-cidr-block <NewCIDRblockIP/NetMask> --instance-id <ClusterNodeID>",
"pcs resource create vpcip aws-vpc-move-ip ip= 192.168.0.15 interface=eth0 routing_table= <ClusterRouteTableID>",
"192.168.0.15 vpcip",
"pcs resource move vpcip",
"pcs resource clear vpcip",
"aws ec2 create-volume --availability-zone <availability_zone> --no-encrypted --size 1024 --volume-type io1 --iops 51200 --multi-attach-enabled",
"aws ec2 create-volume --availability-zone us-east-1a --no-encrypted --size 1024 --volume-type io1 --iops 51200 --multi-attach-enabled { \"AvailabilityZone\": \"us-east-1a\", \"CreateTime\": \"2020-08-27T19:16:42.000Z\", \"Encrypted\": false, \"Size\": 1024, \"SnapshotId\": \"\", \"State\": \"creating\", \"VolumeId\": \"vol-042a5652867304f09\", \"Iops\": 51200, \"Tags\": [ ], \"VolumeType\": \"io1\" }",
"aws ec2 attach-volume --device /dev/xvdd --instance-id <instance_id> --volume-id <volume_id>",
"aws ec2 attach-volume --device /dev/xvdd --instance-id i-0eb803361c2c887f2 --volume-id vol-042a5652867304f09 { \"AttachTime\": \"2020-08-27T19:26:16.086Z\", \"Device\": \"/dev/xvdd\", \"InstanceId\": \"i-0eb803361c2c887f2\", \"State\": \"attaching\", \"VolumeId\": \"vol-042a5652867304f09\" }",
"ssh <ip_address> \"hostname ; lsblk -d | grep ' 1T '\"",
"ssh 198.51.100.3 \"hostname ; lsblk -d | grep ' 1T '\" nodea nvme2n1 259:1 0 1T 0 disk",
"ssh <ip_address> \"hostname ; lsblk -d | grep ' 1T ' | awk '{print \\USD1}' | xargs -i udevadm info --query=all --name=/dev/{} | grep '^E: ID_SERIAL='\"",
"ssh 198.51.100.3 \"hostname ; lsblk -d | grep ' 1T ' | awk '{print \\USD1}' | xargs -i udevadm info --query=all --name=/dev/{} | grep '^E: ID_SERIAL='\" nodea E: ID_SERIAL=Amazon Elastic Block Store_vol0fa5342e7aedf09f7"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html-single/deploying_rhel_8_on_amazon_web_services/console.redhat.com
|
Chapter 7. Configure disk encryption
|
Chapter 7. Configure disk encryption 7.1. Configuring Network-Bound Disk Encryption key servers Prerequisites You must have installed a Network-Bound Disk Encryption key server ( Installing Network-Bound Disk Encryption key servers ). Procedure Start and enable the tangd service: Run the following command on each Network-Bound Disk Encryption (NBDE) key server. Verify that hyperconverged hosts have access to the key server. Log in to a hyperconverged host. Request a decryption key from the key server. If you see output like the following, the key server is accessible and advertising keys correctly. 7.2. Configuring hyperconverged hosts as Network-Bound Disk Encryption clients 7.2.1. Defining disk encryption configuration details Log in to the first hyperconverged host. Change into the hc-ansible-deployment directory: Make a copy of the luks_tang_inventory.yml file for future reference. Define your configuration in the luks_tang_inventory.yml file. Use the example luks_tang_inventory.yml file to define the details of disk encryption on each host. A complete outline of this file is available in Understanding the luks_tang_inventory.yml file . Encrypt the luks_tang_inventory.yml file and specify a password using ansible-vault . The required variables in luks_tang_inventory.yml include password values, so it is important to encrypt the file to protect the password values. Enter and confirm a new vault password when prompted. 7.2.2. Executing the disk encryption configuration playbook Prerequisites Define configuration in the luks_tang_inventory.yml playbook: Section 7.2.1, "Defining disk encryption configuration details" . Hyperconverged hosts must have encrypted boot disks. Procedure Log in to the first hyperconverged host. Change into the hc-ansible-deployment directory. Run the following command as the root user to start the configuration process. Enter the vault password for this file when prompted to start disk encryption configuration. Verify Reboot each host and verify that they are able to boot to a login prompt without requiring manual entry of the decryption passphrase. Note that the devices that use disk encryption have a path of /dev/mapper/luks_sdX when you continue with Red Hat Hyperconverged Infrastructure for Virtualization setup. Troubleshooting The given boot device /dev/sda2 is not encrypted. Solution: Reinstall the hyperconverged hosts using the process outlined in Section 3.1, "Installing hyperconverged hosts" , ensuring that you select Encrypt my data during the installation process and follow all directives related to disk encryption. The output has been hidden due to the fact that no_log: true was specified for this result. This output has been censored in order to not expose a passphrase. If you see this output for the Encrypt devices using key file task, the device failed to encrypt. You may have provided the incorrect disk in the inventory file. Solution: Clean up the deployment attempt using Cleaning up Network-Bound Disk Encryption after a failed deployment . Then correct the disk names in the inventory file. Non-zero return code from Tang server This error indicates that the server cannot access the url provided, either because the FQDN provided is incorrect or because it cannot be found from the host. Solution: Correct the url value provided for the NBDE key server or ensure that the url value is accessible from the host. 
Then run the playbook again with the bindtang tag: For any other playbook failures, use the instructions in Cleaning up Network-Bound Disk Encryption after a failed deployment to clean up your deployment. Review the playbook and inventory files for incorrect values and test access to all servers before executing the configuration playbook again.
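As an optional, hedged verification step beyond rebooting (not part of the documented playbook), the Clevis binding on an encrypted device can be inspected from a hyperconverged host; /dev/sdb is a placeholder device name.

```bash
# List the Clevis/Tang bindings on an encrypted device; the output should
# reference the NBDE key server URL defined in luks_tang_inventory.yml
clevis luks list -d /dev/sdb
```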
|
[
"systemctl enable tangd.socket --now",
"curl key-server.example.com /adv",
"{\"payload\":\"eyJrZXlzIjpbeyJhbGciOiJFQ01SIiwiY3J2IjoiUC01MjEiLCJrZXlfb3BzIjpbImRlcml2ZUtleSJdLCJrdHkiOiJFQyIsIngiOiJBQ2ZjNVFwVmlhal9wNWcwUlE4VW52dmdNN1AyRTRqa21XUEpSM3VRUkFsVWp0eWlfZ0Y5WEV3WmU5TmhIdHhDaG53OXhMSkphajRieVk1ZVFGNGxhcXQ2IiwieSI6IkFOMmhpcmNpU2tnWG5HV2VHeGN1Nzk3N3B3empCTzZjZWt5TFJZdlh4SkNvb3BfNmdZdnR2bEpJUk4wS211Y1g3WHUwMlNVWlpqTVVxU3EtdGwyeEQ1SGcifSx7ImFsZyI6IkVTNTEyIiwiY3J2IjoiUC01MjEiLCJrZXlfb3BzIjpbInZlcmlmeSJdLCJrdHkiOiJFQyIsIngiOiJBQXlXeU8zTTFEWEdIaS1PZ04tRFhHU29yNl9BcUlJdzQ5OHhRTzdMam1kMnJ5bDN2WUFXTUVyR1l2MVhKdzdvbEhxdEdDQnhqV0I4RzZZV09vLWRpTUxwIiwieSI6IkFVWkNXUTAxd3lVMXlYR2R0SUMtOHJhVUVadWM5V3JyekFVbUIyQVF5VTRsWDcxd1RUWTJEeDlMMzliQU9tVk5oRGstS2lQNFZfYUlsZDFqVl9zdHRuVGoifV19\",\"protected\":\"eyJhbGciOiJFUzUxMiIsImN0eSI6Imp3ay1zZXQranNvbiJ9\",\"signature\":\"ARiMIYnCj7-1C-ZAQ_CKee676s_vYpi9J94WBibroou5MRsO6ZhRohqh_SCbW1jWWJr8btymTfQgBF_RwzVNCnllAXt_D5KSu8UDc4LnKU-egiV-02b61aiWB0udiEfYkF66krIajzA9y5j7qTdZpWsBObYVvuoJvlRo_jpzXJv0qEMi\"}",
"cd /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment",
"cp luks_tang_inventory.yml luks_tang_inventory.yml.backup",
"ansible-vault encrypt luks_tang_inventory.yml",
"cd /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment",
"ansible-playbook -i luks_tang_inventory.yml tasks/luks_tang_setup.yml --tags=blacklistdevices,luksencrypt,bindtang --ask-vault-pass",
"TASK [Check if root device is encrypted] fatal: [server1.example.com]: FAILED! => {\"changed\": false, \"msg\": \" The given boot device /dev/sda2 is not encrypted. \"}",
"TASK [gluster.infra/roles/backend_setup : Encrypt devices using key file ] failed: [host1.example.com] (item=None) => {\"censored\": \" the output has been hidden due to the fact that no_log: true was specified for this result \", \"changed\": true}",
"TASK [gluster.infra/roles/backend_setup : Download the advertisement from tang server for IPv4] * failed: [host1.example.com] (item={ url : http://tang-server.example.com }) => {\"ansible_index_var\": \"index\", \"ansible_loop_var\": \"item\", \"changed\": true, \"cmd\": \"curl -sfg \\\"http://tang-server.example.com/adv\\\" -o /etc/adv0.jws\", \"delta\": \"0:02:08.703711\", \"end\": \"2020-06-10 18:18:09.853701\", \"index\": 0, \"item\": {\"url\": \" http://tang-server.example.com \"}, \"msg\": \" non-zero return code *\", \"rc\": 7, \"start\": \"2020-06-10 18:16:01.149990\", \"stderr\": \"\", \"stderr_lines\": [], \"stdout\": \"\", \"stdout_lines\": []}",
"ansible-playbook -i luks_tang_inventory.yml tasks/luks_tang_setup.yml --ask-vault-pass --tags=bindtang"
] |
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/automating_rhhi_for_virtualization_deployment/assembly_configure-disk-encryption
|
Chapter 8. Adjusting IdM clients during recovery
|
Chapter 8. Adjusting IdM clients during recovery While IdM servers are being restored, you may need to adjust IdM clients to reflect changes in the replica topology. Procedure Adjusting DNS configuration : If /etc/hosts contains any references to IdM servers, ensure that hard-coded IP-to-hostname mappings are valid. If IdM clients are using IdM DNS for name resolution, ensure that the nameserver entries in /etc/resolv.conf point to working IdM replicas providing DNS services. Adjusting Kerberos configuration : By default, IdM clients look to DNS Service records for Kerberos servers, and will adjust to changes in the replica topology: If IdM clients have been hard-coded to use specific IdM servers in /etc/krb5.conf : make sure kdc , master_kdc and admin_server entries in /etc/krb5.conf are pointing to IdM servers that work properly: Adjusting SSSD configuration : By default, IdM clients look to DNS Service records for LDAP servers and adjust to changes in the replica topology: If IdM clients have been hard-coded to use specific IdM servers in /etc/sssd/sssd.conf , make sure the ipa_server entry points to IdM servers that are working properly: Clearing SSSD's cached information : The SSSD cache may contain outdated information pertaining to lost servers. If users experience inconsistent authentication problems, purge the SSSD cache : Verification Verify the Kerberos configuration by retrieving a Kerberos Ticket-Granting-Ticket as an IdM user. Verify the SSSD configuration by retrieving IdM user information.
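Because clients normally locate Kerberos and LDAP servers through DNS service (SRV) records, a quick way to confirm which replicas are currently advertised is to query those records directly. This is a minimal sketch; example.com stands in for the actual IdM domain.

```bash
# KDCs advertised in DNS for the Kerberos realm
dig +short -t SRV _kerberos._udp.example.com
# LDAP servers advertised for the IdM domain
dig +short -t SRV _ldap._tcp.example.com
```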
|
[
"grep dns_lookup_kdc /etc/krb5.conf dns_lookup_kdc = true",
"grep dns_lookup_kdc /etc/krb5.conf dns_lookup_kdc = false",
"[realms] EXAMPLE.COM = { kdc = functional-server.example.com :88 master_kdc = functional-server.example.com :88 admin_server = functional-server.example.com :749 default_domain = example.com pkinit_anchors = FILE:/var/lib/ipa-client/pki/kdc-ca-bundle.pem pkinit_pool = FILE:/var/lib/ipa-client/pki/ca-bundle.pem }",
"grep ipa_server /etc/sssd/sssd.conf ipa_server = _srv_ , functional-server.example.com",
"grep ipa_server /etc/sssd/sssd.conf ipa_server = functional-server.example.com",
"sss_cache -E",
"kinit admin Password for [email protected]: klist Ticket cache: KCM:0 Default principal: [email protected] Valid starting Expires Service principal 10/31/2019 18:44:58 11/25/2019 18:44:55 krbtgt/[email protected]",
"id admin uid=1965200000(admin) gid=1965200000(admins) groups=1965200000(admins)"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/performing_disaster_recovery_with_identity_management/adjusting-idm-clients-during-recovery_performing-disaster-recovery
|
Chapter 13. Coordination APIs
|
Chapter 13. Coordination APIs 13.1. Coordination APIs 13.1.1. Lease [coordination.k8s.io/v1] Description Lease defines a lease concept. Type object 13.2. Lease [coordination.k8s.io/v1] Description Lease defines a lease concept. Type object 13.2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object LeaseSpec is a specification of a Lease. 13.2.1.1. .spec Description LeaseSpec is a specification of a Lease. Type object Property Type Description acquireTime MicroTime acquireTime is a time when the current lease was acquired. holderIdentity string holderIdentity contains the identity of the holder of a current lease. leaseDurationSeconds integer leaseDurationSeconds is a duration that candidates for a lease need to wait to force acquire it. This is measure against time of last observed renewTime. leaseTransitions integer leaseTransitions is the number of transitions of a lease between holders. renewTime MicroTime renewTime is a time when the current holder of a lease has last updated the lease. 13.2.2. API endpoints The following API endpoints are available: /apis/coordination.k8s.io/v1/leases GET : list or watch objects of kind Lease /apis/coordination.k8s.io/v1/watch/leases GET : watch individual changes to a list of Lease. deprecated: use the 'watch' parameter with a list operation instead. /apis/coordination.k8s.io/v1/namespaces/{namespace}/leases DELETE : delete collection of Lease GET : list or watch objects of kind Lease POST : create a Lease /apis/coordination.k8s.io/v1/watch/namespaces/{namespace}/leases GET : watch individual changes to a list of Lease. deprecated: use the 'watch' parameter with a list operation instead. /apis/coordination.k8s.io/v1/namespaces/{namespace}/leases/{name} DELETE : delete a Lease GET : read the specified Lease PATCH : partially update the specified Lease PUT : replace the specified Lease /apis/coordination.k8s.io/v1/watch/namespaces/{namespace}/leases/{name} GET : watch changes to an object of kind Lease. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 13.2.2.1. /apis/coordination.k8s.io/v1/leases Table 13.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. 
Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind Lease Table 13.2. HTTP responses HTTP code Reponse body 200 - OK LeaseList schema 401 - Unauthorized Empty 13.2.2.2. /apis/coordination.k8s.io/v1/watch/leases Table 13.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. 
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. 
This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Lease. deprecated: use the 'watch' parameter with a list operation instead. Table 13.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 13.2.2.3. /apis/coordination.k8s.io/v1/namespaces/{namespace}/leases Table 13.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 13.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Lease Table 13.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. 
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. 
Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 13.8. Body parameters Parameter Type Description body DeleteOptions schema Table 13.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Lease Table 13.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. 
This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 13.11. HTTP responses HTTP code Reponse body 200 - OK LeaseList schema 401 - Unauthorized Empty HTTP method POST Description create a Lease Table 13.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.13. Body parameters Parameter Type Description body Lease schema Table 13.14. HTTP responses HTTP code Response body 200 - OK Lease schema 201 - Created Lease schema 202 - Accepted Lease schema 401 - Unauthorized Empty 13.2.2.4. /apis/coordination.k8s.io/v1/watch/namespaces/{namespace}/leases Table 13.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 13.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results.
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantics of the watch request are as follows: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is sent when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Lease. deprecated: use the 'watch' parameter with a list operation instead.
Table 13.17. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 13.2.2.5. /apis/coordination.k8s.io/v1/namespaces/{namespace}/leases/{name} Table 13.18. Global path parameters Parameter Type Description name string name of the Lease namespace string object name and auth scope, such as for teams and projects Table 13.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Lease Table 13.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 13.21. Body parameters Parameter Type Description body DeleteOptions schema Table 13.22. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Lease Table 13.23. HTTP responses HTTP code Response body 200 - OK Lease schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Lease Table 13.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23.
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 13.25. Body parameters Parameter Type Description body Patch schema Table 13.26. HTTP responses HTTP code Response body 200 - OK Lease schema 201 - Created Lease schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Lease Table 13.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.28. Body parameters Parameter Type Description body Lease schema Table 13.29. HTTP responses HTTP code Response body 200 - OK Lease schema 201 - Created Lease schema 401 - Unauthorized Empty 13.2.2.6. /apis/coordination.k8s.io/v1/watch/namespaces/{namespace}/leases/{name} Table 13.30. Global path parameters Parameter Type Description name string name of the Lease namespace string object name and auth scope, such as for teams and projects Table 13.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.
continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`.
In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantics of the watch request are as follows: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is sent when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind Lease. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 13.32. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty
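As a quick illustration of how the list and watch parameters described above combine, the following shell sketch pages through Leases in a namespace and then opens a watch. It is an example only: the API server URL, token path, namespace, and the placeholder continue token and resourceVersion are assumptions, not values taken from the tables above.
# Placeholder values - adjust for your cluster.
API=https://api.example.com:6443
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# Chunked list: ask for at most two Leases per page.
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  "${API}/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases?limit=2"
# Follow the pagination token returned in .metadata.continue of the previous response.
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  "${API}/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases?limit=2&continue=<token-from-previous-response>"
# Watch for changes, starting from a resourceVersion returned by an earlier list.
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  "${API}/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases?watch=true&allowWatchBookmarks=true&resourceVersion=<rv-from-earlier-list>"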
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/api_reference/coordination-apis-1
|
Part I. Installing Identity Management; Servers and Services
|
Part I. Installing Identity Management; Servers and Services
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/install
|
OperatorHub APIs
|
OperatorHub APIs OpenShift Container Platform 4.15 Reference guide for OperatorHub APIs Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/operatorhub_apis/index
|
Chapter 3. Default FIPS configurations in Red Hat build of OpenJDK 8
|
Chapter 3. Default FIPS configurations in Red Hat build of OpenJDK 8 3.1. Security providers The Red Hat build of OpenJDK security policy is controlled by the global java security policy file. You can find the java security policy file at $JRE_HOME/lib/security/java.security . With FIPS mode enabled, Red Hat build of OpenJDK replaces the installed security providers with the following ones (in descending priority order): SunPKCS11-NSS-FIPS Initialized with a Network Security Services (NSS) Software Token (PKCS#11 backend). The NSS Software Token is configured as follows:
name = NSS-FIPS
nssLibraryDirectory = /usr/lib64
nssSecmodDirectory = /etc/pki/nssdb
nssDbMode = readOnly
nssModule = fips
The NSS library implements a FIPS-compliant Software Token and is also FIPS policy-aware in RHEL. SUN For X.509 certificate support only. Make sure that your application is not using other cryptographic algorithms from this provider. For example, MessageDigest.getInstance("SHA-256", Security.getProvider("SUN")) would work but lead to a non-FIPS compliant MessageDigest service. SunEC For SunPKCS11 auxiliary helpers only. Make sure that your application is not explicitly using this provider. SunJSSE Initialized with the SunPKCS11-NSS-FIPS provider for all cryptographic primitives required by the TLS engine, including key derivation. 3.2. Crypto-policies With FIPS mode enabled, Red Hat build of OpenJDK takes configuration values of cryptographic algorithms from global crypto-policies. You can find these values at /etc/crypto-policies/back-ends/java.config . You can use the update-crypto-policies tooling from RHEL to manage crypto-policies in a consistent way. Note A crypto-policies approved algorithm might not be usable in Red Hat build of OpenJDK's FIPS mode. This occurs when a FIPS-compliant implementation is not available in the NSS library or when it is not supported in Red Hat build of OpenJDK's SunPKCS11 security provider. 3.3. Trust Anchor certificates Red Hat build of OpenJDK uses the global Trust Anchor certificates repository when in FIPS mode. You can locate this repository at /etc/pki/java/cacerts . Use the update-ca-trust tooling from RHEL to manage certificates in a consistent way. 3.4. Key store With FIPS mode enabled, Red Hat build of OpenJDK uses the NSS DB as a read-only PKCS#11 store for keys. As a result, the keystore.type security property is set to PKCS11 . You can locate the NSS DB repository at /etc/pki/nssdb . Use the modutil tooling in RHEL to manage NSS DB keys.
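A few read-only commands can confirm where each of the configuration points described in this chapter lives on a RHEL host. This is an illustrative sketch only: the paths are the ones quoted above, and keytool prompts for the keystore password (the usual default is changeit, which may differ on your system).
# Active system-wide crypto policy (reports FIPS when the host is in FIPS mode)
update-crypto-policies --show
# Algorithm constraints that Red Hat build of OpenJDK picks up in FIPS mode
cat /etc/crypto-policies/back-ends/java.config
# NSS DB that backs the SunPKCS11-NSS-FIPS provider and the PKCS11 keystore
modutil -dbdir /etc/pki/nssdb -list
# Trust Anchor certificates shared with the rest of the system
keytool -list -keystore /etc/pki/java/cacerts | head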
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/configuring_red_hat_build_of_openjdk_8_on_rhel_with_fips/openjdk-default-fips-configuration
|
Chapter 7. Technology Previews
|
Chapter 7. Technology Previews This chapter provides a list of all Technology Previews available in Red Hat Enterprise Linux 7. For information on Red Hat scope of support for Technology Preview features, see Technology Preview Features Support Scope . 7.1. General Updates The systemd-importd VM and container image import and export service Latest systemd version now contains the systemd-importd daemon that was not enabled in the earlier build, which caused the machinectl pull-* commands to fail. Note that the systemd-importd daemon is offered as a Technology Preview and should not be considered stable. ( BZ#1284974 ) 7.2. Authentication and Interoperability Containerized Identity Management server available as Technology Preview The rhel7/ipa-server container image is available as a Technology Preview feature. Note that the rhel7/sssd container image is now fully supported. For details, see Using Containerized Identity Management Services . (BZ#1405325) DNSSEC available as Technology Preview in IdM Identity Management (IdM) servers with integrated DNS now support DNS Security Extensions (DNSSEC), a set of extensions to DNS that enhance security of the DNS protocol. DNS zones hosted on IdM servers can be automatically signed using DNSSEC. The cryptographic keys are automatically generated and rotated. Users who decide to secure their DNS zones with DNSSEC are advised to read and follow these documents: DNSSEC Operational Practices, Version 2 Secure Domain Name System (DNS) Deployment Guide DNSSEC Key Rollover Timing Considerations Note that IdM servers with integrated DNS use DNSSEC to validate DNS answers obtained from other DNS servers. This might affect the availability of DNS zones that are not configured in accordance with recommended naming practices described in the Red Hat Enterprise Linux Networking Guide . ( BZ#1115294 ) Identity Management JSON-RPC API available as a Technology Preview An API is available for Identity Management (IdM). To view the API, IdM also provides an API browser as a Technology Preview. In RHEL 7.3, the IdM API was enhanced to enable multiple versions of API commands. Previously, enhancements could change the behavior of a command in an incompatible way. Users are now able to continue using existing tools and scripts even if the IdM API changes. This enables: Administrators to use previous or later versions of IdM on the server than on the managing client. Developers to use a specific version of an IdM call, even if the IdM version changes on the server. In all cases, the communication with the server is possible, regardless if one side uses, for example, a newer version that introduces new options for a feature. For details on using the API, see the related Knowledgebase article . ( BZ#1298286 ) Setting up IdM as a hidden replica is now available as a Technology Preview This enhancement enables administrators to set up an Identity Management (IdM) replica as a hidden replica. A hidden replica is an IdM server that has all services running and available. However, it is not advertised to other clients or masters because no SRV records exist for the services in DNS, and LDAP server roles are not enabled. Therefore, clients cannot use service discovery to detect hidden replicas. Hidden replicas are primarily designed for dedicated services that can otherwise disrupt clients. For example, a full backup of IdM requires shutting down all IdM services on the master or replica.
Since no clients use a hidden replica, administrators can temporarily shut down the services on this host without affecting any clients. Other use cases include high-load operations on the IdM API or the LDAP server, such as a mass import or extensive queries. To install a new hidden replica, use the ipa-replica-install --hidden-replica command. To change the state of an existing replica, use the ipa server-state command. ( BZ#1518939 ) Use of AD and LDAP sudo providers The Active Directory (AD) provider is a back end used to connect to an AD server. Starting with RHEL 7.2, using the AD sudo provider together with the LDAP provider is available as a Technology Preview. To enable the AD sudo provider, add the sudo_provider=ad setting in the [domain] section of the sssd.conf file. ( BZ#1068725 ) The Custodia secrets service provider is available as a Technology Preview As a Technology Preview, you can use Custodia, a secrets service provider. Custodia stores or serves as a proxy for secrets, such as keys or passwords. For details, see the upstream documentation at http://custodia.readthedocs.io . Note that since Red Hat Enterprise Linux 7.6, Custodia has been deprecated. ( BZ#1403214 ) 7.3. Clustering Heuristics in corosync-qdevice available as a Technology Preview Heuristics are a set of commands executed locally on startup, cluster membership change, successful connect to corosync-qnetd , and, optionally, on a periodic basis. When all commands finish successfully on time (their return error code is zero), heuristics have passed; otherwise, they have failed. The heuristics result is sent to corosync-qnetd where it is used in calculations to determine which partition should be quorate. ( BZ#1413573 ) New fence-agents-heuristics-ping fence agent As a Technology Preview, Pacemaker now supports the fence_heuristics_ping agent. This agent aims to open a class of experimental fence agents that do no actual fencing by themselves but instead exploit the behavior of fencing levels in a new way. If the heuristics agent is configured on the same fencing level as the fence agent that does the actual fencing but is configured before that agent in sequence, fencing issues an off action on the heuristics agent before it attempts to do so on the agent that does the fencing. If the heuristics agent gives a negative result for the off action it is already clear that the fencing level is not going to succeed, causing Pacemaker fencing to skip the step of issuing the off action on the agent that does the fencing. A heuristics agent can exploit this behavior to prevent the agent that does the actual fencing from fencing a node under certain conditions. A user might want to use this agent, especially in a two-node cluster, when it would not make sense for a node to fence the peer if it can know beforehand that it would not be able to take over the services properly. For example, it might not make sense for a node to take over services if it has problems reaching the networking uplink, making the services unreachable to clients, a situation which a ping to a router might detect in that case. (BZ#1476401) The pcs tool now manages bundle resources in Pacemaker As a Technology Preview starting with Red Hat Enterprise Linux 7.4, Pacemaker supports a special syntax for launching a Docker container with any infrastructure it requires: the bundle. After you have created a Pacemaker bundle, you can create a Pacemaker resource that the bundle encapsulates. 
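For illustration, a bundle and a resource running inside it might be created as follows. This is a sketch only: the bundle name, container image, port, and directory values are made-up placeholders, not part of the release note above.
# Create a bundle that runs two replicas of a containerized web server
pcs resource bundle create httpd-bundle container docker \
    image=registry.example.com/httpd:latest replicas=2 \
    network control-port=3121 port-map port=80 \
    storage-map source-dir=/var/local/www target-dir=/var/www/html
# Create a resource that the bundle encapsulates
pcs resource create httpd-in-bundle ocf:heartbeat:apache \
    statusurl="http://localhost/server-status" bundle httpd-bundle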
For information on Pacemaker support for containers, see the High Availability Add-On Reference . There is one exception to this feature being Technology Preview: As of RHEL 7.4, Red Hat fully supports the usage of Pacemaker bundles for Red Hat Openstack Platform (RHOSP) deployments. ( BZ#1433016 ) New LVM and LVM lock manager resource agents As a Technology Preview, Red Hat Enterprise Linux 7.6 introduces two new resource agents: lvmlockd and LVM-activate . The LVM-activate agent provides a choice from multiple methods for LVM management throughout a cluster: tagging: the same as tagging with the existing lvm resource agent clvmd: the same as clvmd with the existing lvm resource agent system ID: a new option for using system ID for volume group failover (an alternative to tagging). lvmlockd: a new option for using lvmlockd and dlm for volume group sharing (an alternative to clvmd ). The new lvmlockd resource agent is used to start the lvmlockd daemon when LVM-activate is configured to use lvmlockd . For information on the lvmlockd and LVM-activate resource agent, see the PCS help screens for those agents. For information on setting up LVM for use with lvmlockd , see the lvmlockd(8) man page. (BZ#1513957) 7.4. Desktop Wayland available as a Technology Preview The Wayland display server protocol is available in Red Hat Enterprise Linux as a Technology Preview with the dependent packages required to enable Wayland support in GNOME, which supports fractional scaling. Wayland uses the libinput library as its input driver. The following features are currently unavailable or do not work correctly: Multiple GPU support is not possible at this time. The NVIDIA binary driver does not work under Wayland . The xrandr utility does not work under Wayland due to its different approach to handling, resolutions, rotations, and layout. Screen recording, remote desktop, and accessibility do not always work correctly under Wayland . No clipboard manager is available. It is currently impossible to restart GNOME Shell under Wayland . Wayland ignores keyboard grabs issued by X11 applications, such as virtual machines viewers. (BZ#1481411) Fractional Scaling available as a Technology Preview Starting with Red Hat Enterprise Linux 7.5, GNOME provides, as a Technology Preview, fractional scaling to address problems with monitors whose DPI lies in the middle between lo (scale 1) and hi (scale 2). Due to technical limitations, fractional scaling is available only on Wayland. ( BZ#1481395 ) 7.5. File Systems File system DAX is now available for ext4 and XFS as a Technology Preview Starting with Red Hat Enterprise Linux 7.3, Direct Access (DAX) provides, as a Technology Preview, a means for an application to directly map persistent memory into its address space. To use DAX, a system must have some form of persistent memory available, usually in the form of one or more Non-Volatile Dual In-line Memory Modules (NVDIMMs), and a file system that supports DAX must be created on the NVDIMM(s). Also, the file system must be mounted with the dax mount option. Then, an mmap of a file on the dax-mounted file system results in a direct mapping of storage into the application's address space. (BZ#1274459) pNFS block layout is now available As a Technology Preview, Red Hat Enterprise Linux clients can now mount pNFS shares with the block layout feature. Note that Red Hat recommends using the pNFS SCSI layout instead, which is similar to block layout but easier to use. 
(BZ#1111712) OverlayFS OverlayFS is a type of union file system. It allows the user to overlay one file system on top of another. Changes are recorded in the upper file system, while the lower file system remains unmodified. This allows multiple users to share a file-system image, such as a container or a DVD-ROM, where the base image is on read-only media. See the Linux kernel documentation for additional information. OverlayFS remains a Technology Preview under most circumstances. As such, the kernel will log warnings when this technology is activated. Full support is available for OverlayFS when used with Docker under the following restrictions: OverlayFS is only supported for use as a Docker graph driver. Its use can only be supported for container COW content, not for persistent storage. Any persistent storage must be placed on non-OverlayFS volumes to be supported. Only default Docker configuration can be used; that is, one level of overlay, one lowerdir, and both lower and upper levels are on the same file system. Only XFS is currently supported for use as a lower layer file system. On Red Hat Enterprise Linux 7.3 and earlier, SELinux must be enabled and in enforcing mode on the physical machine, but must be disabled in the container when performing container separation, that is the /etc/sysconfig/docker file must not contain --selinux-enabled . Starting with Red Hat Enterprise Linux 7.4, OverlayFS supports SELinux security labels, and you can enable SELinux support for containers by specifying --selinux-enabled in /etc/sysconfig/docker . The OverlayFS kernel ABI and userspace behavior are not considered stable, and may see changes in future updates. In order to make the yum and rpm utilities work properly inside the container, the user should be using the yum-plugin-ovl packages. Note that OverlayFS provides a restricted set of the POSIX standards. Test your application thoroughly before deploying it with OverlayFS. Note that XFS file systems must be created with the -n ftype=1 option enabled for use as an overlay. With the rootfs and any file systems created during system installation, set the --mkfsoptions=-n ftype=1 parameters in the Anaconda kickstart. When creating a new file system after the installation, run the # mkfs -t xfs -n ftype=1 /PATH/TO/DEVICE command. To determine whether an existing file system is eligible for use as an overlay, run the # xfs_info /PATH/TO/DEVICE | grep ftype command to see if the ftype=1 option is enabled. There are also several known issues associated with OverlayFS in this release. For details, see Non-standard behavior in the Linux kernel documentation . (BZ#1206277) Btrfs file system The B-Tree file system, Btrfs , is available as a Technology Preview in Red Hat Enterprise Linux 7. Red Hat Enterprise Linux 7.4 introduced the last planned update to this feature. Btrfs has been deprecated, which means Red Hat will not be moving Btrfs to a fully supported feature and it will be removed in a future major release of Red Hat Enterprise Linux. (BZ#1477977) 7.6. Hardware Enablement LSI Syncro CS HA-DAS adapters Red Hat Enterprise Linux 7.1 included code in the megaraid_sas driver to enable LSI Syncro CS high-availability direct-attached storage (HA-DAS) adapters. While the megaraid_sas driver is fully supported for previously enabled adapters, the use of this driver for Syncro CS is available as a Technology Preview. Support for this adapter is provided directly by LSI, your system integrator, or system vendor. 
Users deploying Syncro CS on Red Hat Enterprise Linux 7.2 and later are encouraged to provide feedback to Red Hat and LSI. (BZ#1062759) tss2 enables TPM 2.0 for IBM Power LE The tss2 package adds an IBM implementation of a Trusted Computing Group Software Stack (TSS) 2.0 as a Technology Preview for the IBM Power LE architecture. This package enables users to interact with TPM 2.0 devices. (BZ#1384452) The ibmvnic device driver available as a Technology Preview Since Red Hat Enterprise Linux 7.3, the IBM Virtual Network Interface Controller (vNIC) driver for IBM POWER architectures, ibmvnic , has been available as a Technology Preview. vNIC is a PowerVM virtual networking technology that delivers enterprise capabilities and simplifies network management. It is a high-performance, efficient technology that, when combined with an SR-IOV NIC, provides bandwidth control Quality of Service (QoS) capabilities at the virtual NIC level. vNIC significantly reduces virtualization overhead, resulting in lower latencies and fewer server resources, including CPU and memory, required for network virtualization. In Red Hat Enterprise Linux 7.6, the ibmvnic driver was upgraded to version 1.0, which provides a number of bug fixes and enhancements over the previous version. Notable changes include: The code that previously requested error information has been removed because no error ID is provided by the Virtual Input-Output (VIOS) Server. Error reporting has been updated with the cause string. As a result, during a recovery, the driver classifies the string as a warning rather than an error. Error recovery on a login failure has been fixed. The failed state that occurred after a failover while migrating Logical Partitioning (LPAR) has been fixed. The driver can now handle all possible login response return values. A driver crash that happened during a failover or Link Power Management (LPM) if the Transmit and Receive (Tx/Rx) queues have changed has been fixed. (BZ#1519746) The igc driver available as a Technology Preview The Intel(R) 2.5G Ethernet Linux Driver ( igc.ko.xz ) is available as a Technology Preview. (BZ#1454918) The ice driver available as a Technology Preview The Intel(R) Ethernet Connection E800 Series Linux Driver ( ice.ko.xz ) is available as a Technology Preview. (BZ#1454916) 7.7. Kernel eBPF system call for tracing Red Hat Enterprise Linux 7.6 introduced the Extended Berkeley Packet Filter tool (eBPF) as a Technology Preview. This tool is enabled only for the tracing subsystem. For details, see the related Red Hat Knowledgebase article . (BZ#1559615) Heterogeneous memory management included as a Technology Preview Red Hat Enterprise Linux 7 introduced the heterogeneous memory management (HMM) feature as a Technology Preview. This feature has been added to the kernel as a helper layer for devices that want to mirror a process address space into their own memory management unit (MMU). Thus a non-CPU device processor is able to read system memory using the unified system address space. To enable this feature, add experimental_hmm=enable to the kernel command line. (BZ#1230959) kexec as a Technology Preview The kexec system call has been provided as a Technology Preview. This system call enables loading and booting into another kernel from the currently running kernel, thus performing the function of the boot loader from within the kernel.
Hardware initialization, which is normally done during a standard system boot, is not performed during a kexec boot, which significantly reduces the time required for a reboot. (BZ#1460849) kexec fast reboot as a Technology Preview The kexec fast reboot feature, which was introduced in Red Hat Enterprise Linux 7.5, continues to be available as a Technology Preview. kexec fast reboot makes the reboot significantly faster. To use this feature, you must load the kexec kernel manually, and then reboot the operating system. It is not possible to make kexec fast reboot the default reboot action. Using kexec fast reboot with Anaconda is a special case: it still cannot be made the default, but when used with Anaconda , the operating system can automatically use kexec fast reboot after the installation is complete if the user boots the kernel with the anaconda option. To schedule a kexec reboot, use the inst.kexec command on the kernel command line, or include a reboot --kexec line in the Kickstart file. (BZ#1464377) perf cqm has been replaced by resctrl The Intel Cache Allocation Technology (CAT) was introduced in Red Hat Enterprise Linux 7.4 as a Technology Preview. However, the perf cqm tool did not work correctly due to an incompatibility between perf infrastructure and Cache Quality of Service Monitoring (CQM) hardware support. Consequently, multiple problems occurred when using perf cqm . These problems included most notably: perf cqm did not support the group of tasks which is allocated using resctrl perf cqm gave random and inaccurate data due to several problems with recycling perf cqm did not provide enough support when running different kinds of events together (the different events are, for example, tasks, system-wide, and cgroup events) perf cqm provided only partial support for cgroup events The partial support for cgroup events did not work in cases with a hierarchy of cgroup events, or when monitoring a task in a cgroup and the cgroup together Monitoring tasks for the lifetime caused perf overhead perf cqm reported the aggregate cache occupancy or memory bandwidth over all sockets, while in most cloud and VMM-based use cases the individual per-socket usage is needed In Red Hat Enterprise Linux 7.5, perf cqm was replaced by the approach based on the resctrl file system, which addressed all of the aforementioned problems. (BZ#1457533) TC HW offloading available as a Technology Preview Starting with Red Hat Enterprise Linux 7.6, Traffic Control (TC) Hardware offloading has been provided as a Technology Preview. Hardware offloading enables selected functions of network traffic processing, such as shaping, scheduling, policing and dropping, to be executed directly in the hardware instead of waiting for software processing, which improves performance. (BZ#1503123) AMD xgbe network driver available as a Technology Preview Starting with Red Hat Enterprise Linux 7.6, the AMD xgbe network driver has been provided as a Technology Preview. (BZ#1589397) Secure Memory Encryption is available only as a Technology Preview Currently, Secure Memory Encryption (SME) is incompatible with kdump functionality, as the kdump kernel lacks the memory key to decrypt SME-encrypted memory. Red Hat found that with SME enabled, servers under testing might fail to perform some functions and therefore the feature is unfit for use in production. Consequently, SME is changing the support level from Supported to Technology Preview.
Customers are encouraged to report any issues found while testing in pre-production to Red Hat or their system vendor. (BZ#1726642) criu available as a Technology Preview Red Hat Enterprise Linux 7.2 introduced the criu tool as a Technology Preview. This tool implements Checkpoint/Restore in User-space (CRIU) , which can be used to freeze a running application and store it as a collection of files. Later, the application can be restored from its frozen state. Note that the criu tool depends on Protocol Buffers , a language-neutral, platform-neutral extensible mechanism for serializing structured data. The protobuf and protobuf-c packages, which provide this dependency, were also introduced in Red Hat Enterprise Linux 7.2 as a Technology Preview. Since Red Hat Enterprise Linux 7.8, the criu package provides support for Podman to do a container checkpoint and restore. The newly added functionality only works without SELinux support. ( BZ#1400230 ) 7.8. Networking Cisco usNIC driver Cisco Unified Communication Manager (UCM) servers have an optional feature to provide a Cisco proprietary User Space Network Interface Controller (usNIC), which allows performing Remote Direct Memory Access (RDMA)-like operations for user-space applications. The libusnic_verbs driver, which is available as a Technology Preview, makes it possible to use usNIC devices through the standard InfiniBand RDMA programming based on the Verbs API. (BZ#916384) Cisco VIC kernel driver The Cisco VIC Infiniband kernel driver, which is available as a Technology Preview, allows the use of Remote Directory Memory Access (RDMA)-like semantics on proprietary Cisco architectures. (BZ#916382) Trusted Network Connect Trusted Network Connect, available as a Technology Preview, is used with existing network access control (NAC) solutions, such as TLS, 802.1X, or IPsec to integrate endpoint posture assessment; that is, collecting an endpoint's system information (such as operating system configuration settings, installed packages, and others, termed as integrity measurements). Trusted Network Connect is used to verify these measurements against network access policies before allowing the endpoint to access the network. (BZ#755087) SR-IOV functionality in the qlcnic driver Support for Single-Root I/O virtualization (SR-IOV) has been added to the qlcnic driver as a Technology Preview. Support for this functionality will be provided directly by QLogic, and customers are encouraged to provide feedback to QLogic and Red Hat. Other functionality in the qlcnic driver remains fully supported. Note that the qlcnic driver has been deprecated and is not available in RHEL 8. (BZ#1259547) The flower classifier with off-loading support flower is a Traffic Control (TC) classifier intended to allow users to configure matching on well-known packet fields for various protocols. It is intended to make it easier to configure rules over the u32 classifier for complex filtering and classification tasks. flower also supports the ability to off-load classification and action rules to underlying hardware if the hardware supports it. The flower TC classifier is now provided as a Technology Preview. (BZ#1393375) 7.9. Red Hat Enterprise Linux System Roles The postfix role of RHEL System Roles available as a Technology Preview Red Hat Enterprise Linux System Roles provides a configuration interface for Red Hat Enterprise Linux subsystems, which makes system configuration easier through the inclusion of Ansible Roles. 
This interface enables managing system configurations across multiple versions of Red Hat Enterprise Linux, as well as adopting new major releases. Since Red Hat Enterprise Linux 7.4, the rhel-system-roles packages have been distributed through the Extras repository. The postfix role is available as a Technology Preview. The following roles are fully supported: kdump network selinux storage timesync For more information, see the Knowledgebase article about RHEL System Roles . (BZ#1439896) rhel-system-roles-sap available as a Technology Preview The rhel-system-roles-sap package provides Red Hat Enterprise Linux (RHEL) System Roles for SAP, which can be used to automate the configuration of a RHEL system to run SAP workloads. These roles greatly reduce the time to configure a system to run SAP workloads by automatically applying the optimal settings that are based on best practices outlined in relevant SAP Notes. Access is limited to RHEL for SAP Solutions offerings. Please contact Red Hat Customer Support if you need assistance with your subscription. The following new roles in the rhel-system-roles-sap package are available as a Technology Preview: sap-preconfigure sap-netweaver-preconfigure sap-hana-preconfigure For more information, see Red Hat Enterprise Linux System Roles for SAP . Note: RHEL 7.8 for SAP Solutions is currently not scheduled to be validated for use with SAP HANA on Intel 64 architecture and IBM POWER8. Other SAP applications and database products, for example, SAP NetWeaver and SAP ASE, can use RHEL 7.8 features. Please consult SAP Notes 2369910 and 2235581 for the latest information about validated releases and SAP support. (BZ#1660838) 7.10. Security SECCOMP can be now enabled in libreswan As a Technology Preview, the seccomp=enabled|tolerant|disabled option has been added to the ipsec.conf configuration file, which makes it possible to use the Secure Computing mode (SECCOMP). This improves the syscall security by whitelisting all the system calls that Libreswan is allowed to execute. For more information, see the ipsec.conf(5) man page. ( BZ#1375750 ) pk12util can now import certificates with RSA-PSS keys The pk12util tool now provides importing a certificate signed with the RSA-PSS algorithm as a Technology Preview. Note that if the corresponding private key is imported and has the PrivateKeyInfo.privateKeyAlgorithm field that restricts the signing algorithm to RSA-PSS , it is ignored when importing the key. See MZBZ#1413596 for more information. ( BZ#1431210 ) Support for certificates signed with RSA-PSS in certutil has been improved Support for certificates signed with the RSA-PSS algorithm in the certutil tool has been improved. Notable enhancements and fixes include: The --pss option is now documented. The PKCS#1 v1.5 algorithm is no longer used for self-signed signatures when a certificate is restricted to use RSA-PSS . Empty RSA-PSS parameters in the subjectPublicKeyInfo field are no longer printed as invalid when listing certificates. The --pss-sign option for creating regular RSA certificates signed with the RSA-PSS algorithm has been added. Support for certificates signed with RSA-PSS in certutil is provided as a Technology Preview. ( BZ#1425514 ) NSS is now able to verify RSA-PSS signatures on certificates Since the RHEL 7.5 version of the nss package, the Network Security Services (NSS) libraries provide verifying RSA-PSS signatures on certificates as a Technology Preview. 
Prior to this update, clients using NSS as the SSL backend were not able to establish a TLS connection to a server that offered only certificates signed with the RSA-PSS algorithm. Note that the functionality has the following limitations: The algorithm policy settings in the /etc/pki/nss-legacy/rhel7.config file do not apply to the hash algorithms used in RSA-PSS signatures. RSA-PSS parameters restrictions between certificate chains are ignored and only a single certificate is taken into account. ( BZ#1432142 ) USBGuard enables blocking USB devices while the screen is locked as a Technology Preview With the USBGuard framework, you can influence how an already running usbguard-daemon instance handles newly inserted USB devices by setting the value of the InsertedDevicePolicy runtime parameter. This functionality is provided as a Technology Preview, and the default choice is to apply the policy rules to figure out whether to authorize the device or not. See the Blocking USB devices while the screen is locked Knowledgebase article. (BZ#1480100) 7.11. Storage Multi-queue I/O scheduling for SCSI Red Hat Enterprise Linux 7 includes a new multiple-queue I/O scheduling mechanism for block devices known as blk-mq . The scsi-mq package allows the Small Computer System Interface (SCSI) subsystem to make use of this new queuing mechanism. This functionality is provided as a Technology Preview and is not enabled by default. To enable it, add scsi_mod.use_blk_mq=Y to the kernel command line. Also note that although blk-mq is intended to offer improved performance, particularly for low-latency devices, it is not guaranteed to always provide better performance. Notably, in some cases, enabling scsi-mq can result in significantly deteriorated performance, especially on systems with many CPUs. (BZ#1109348) Targetd plug-in from the libStorageMgmt API Since Red Hat Enterprise Linux 7.1, storage array management with libStorageMgmt, a storage array independent API, has been fully supported. The provided API is stable, consistent, and allows developers to programmatically manage different storage arrays and utilize the hardware-accelerated features provided. System administrators can also use libStorageMgmt to manually configure storage and to automate storage management tasks with the included command-line interface. The Targetd plug-in is not fully supported and remains a Technology Preview. (BZ#1119909) SCSI-MQ as a Technology Preview in the qla2xxx and lpfc drivers The qla2xxx driver updated in Red Hat Enterprise Linux 7.4 can enable the use of SCSI-MQ (multiqueue) with the ql2xmqsupport=1 module parameter. The default value is 0 (disabled). The SCSI-MQ functionality is provided as a Technology Preview when used with the qla2xxx or the lpfc drivers. Note that recent performance testing at Red Hat with async IO over Fibre Channel adapters using SCSI-MQ has shown significant performance degradation under certain conditions. (BZ#1414957) 7.12. System and Subscription Management YUM 4 available as Technology Preview YUM version 4, a new generation of the YUM package manager, is available as a Technology Preview in the Red Hat Enterprise Linux 7 Extras repository . YUM 4 is based on the DNF technology and offers the following advantages over the standard YUM 3 used on RHEL 7: Increased performance Support for modular content Well-designed stable API for integration with tooling To install YUM 4 , run the yum install nextgen-yum4 command.
Make sure to install the dnf-plugin-subscription-manager package, which includes the subscription-manager plug-in. This plug-in is required for accessing protected repositories provided by the Red Hat Customer Portal or Red Hat Satellite 6, and for automatic updates of the /etc/yum.repos.d/redhat.repo file. To manage packages, use the yum4 command and its particular options the same way as the yum command. For detailed information about differences between the new YUM 4 tool and YUM 3 , see Changes in DNF CLI compared to YUM . For instructions on how to enable the Extras repository, see the Knowledgebase article How to subscribe to the Extras channel/repo . (BZ#1461652) 7.13. Virtualization USB 3.0 support for KVM guests USB 3.0 host adapter (xHCI) emulation for KVM guests remains a Technology Preview in Red Hat Enterprise Linux 7. (BZ#1103193) No-IOMMU mode for VFIO drivers As a Technology Preview, this update adds No-IOMMU mode for virtual function I/O (VFIO) drivers. The No-IOMMU mode provides the user with full user-space I/O (UIO) access to a direct memory access (DMA)-capable device without a I/O memory management unit (IOMMU). Note that in addition to not being supported, using this mode is not secure due to the lack of I/O management provided by IOMMU. ( BZ#1299662 ) Azure M416v2 as a host for RHEL 7 guests As a Technology Preview, the Azure M416v2 instance type can now be used as a host for virtual machines that use RHEL 7.6 and later as the guest operating systems. (BZ#1661654) virt-v2v can convert Debian and Ubuntu guests As a Technology Preview, the virt-v2v utility can now convert Debian and Ubuntu guest virtual machines. Note that the following problems currently occur when performing this conversion: virt-v2v cannot change the default kernel in the GRUB2 configuration, and the kernel configured in the guest is not changed during the conversion, even if a more optimal version of the kernel is available on the guest. After converting a Debian or Ubuntu VMware guest to KVM, the name of the guest's network interface may change, and thus requires manual configuration. ( BZ#1387213 ) GPU-based mediated devices now support the VNC console As a Technology Preview, the Virtual Network Computing (VNC) console is now available for use with GPU-based mediated devices, such as the NVIDIA vGPU technology. As a result, it is now possible to use these mediated devices for real-time rendering of a virtual machine's graphical output. (BZ#1475770) Open Virtual Machine Firmware The Open Virtual Machine Firmware (OVMF) is available as a Technology Preview in Red Hat Enterprise Linux 7. OVMF is a UEFI secure boot environment for AMD64 and Intel 64 guests. However, OVMF is not bootable with virtualization components available in RHEL 7. Note that OVMF is fully supported in RHEL 8. (BZ#653382) 7.14. RHEL in cloud environments Select Intel network adapters now support SR-IOV in RHEL guests on Hyper-V As a Technology Preview, Red Hat Enterprise Linux guest operating systems running on a Hyper-V hypervisor can now use the single-root I/O virtualization (SR-IOV) feature for Intel network adapters supported by the ixgbevf and iavf drivers. 
This feature is enabled when the following conditions are met: SR-IOV support is enabled for the network interface controller (NIC) SR-IOV support is enabled for the virtual NIC SR-IOV support is enabled for the virtual switch The virtual function (VF) from the NIC is attached to the virtual machine The feature is currently supported with Microsoft Windows Server 2019 and 2016. (BZ#1348508)
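A quick, unofficial way to confirm from inside the RHEL guest that the virtual function is actually in use is to look for the VF PCI device and check which driver backs the interface; the interface name eth1 below is an assumption and will differ per system:
lspci | grep -i 'virtual function'
ethtool -i eth1
If ethtool reports the ixgbevf or iavf driver, traffic on that interface is going through the SR-IOV virtual function rather than the synthetic Hyper-V adapter.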
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.9_release_notes/technology_previews
|
Chapter 5. Configuring pod topology spread constraints for monitoring
|
Chapter 5. Configuring pod topology spread constraints for monitoring You can use pod topology spread constraints to control how Prometheus, Thanos Ruler, and Alertmanager pods are spread across a network topology when OpenShift Container Platform pods are deployed in multiple availability zones. Pod topology spread constraints are suitable for controlling pod scheduling within hierarchical topologies in which nodes are spread across different infrastructure levels, such as regions and zones within those regions. Additionally, by being able to schedule pods in different zones, you can improve network latency in certain scenarios. Additional resources Controlling pod placement by using pod topology spread constraints Kubernetes Pod Topology Spread Constraints documentation 5.1. Setting up pod topology spread constraints for Prometheus For core OpenShift Container Platform platform monitoring, you can set up pod topology spread constraints for Prometheus to fine tune how pod replicas are scheduled to nodes across zones. Doing so helps ensure that Prometheus pods are highly available and run more efficiently, because workloads are spread across nodes in different data centers or hierarchical infrastructure zones. You configure pod topology spread constraints for Prometheus in the cluster-monitoring-config config map. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring namespace: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add values for the following settings under data/config.yaml/prometheusK8s to configure pod topology spread constraints: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: topologySpreadConstraints: - maxSkew: 1 1 topologyKey: monitoring 2 whenUnsatisfiable: DoNotSchedule 3 labelSelector: matchLabels: 4 app.kubernetes.io/name: prometheus 1 Specify a numeric value for maxSkew , which defines the degree to which pods are allowed to be unevenly distributed. This field is required, and the value must be greater than zero. The value specified has a different effect depending on what value you specify for whenUnsatisfiable . 2 Specify a key of node labels for topologyKey . This field is required. Nodes that have a label with this key and identical values are considered to be in the same topology. The scheduler will try to put a balanced number of pods into each domain. 3 Specify a value for whenUnsatisfiable . This field is required. Available options are DoNotSchedule and ScheduleAnyway . Specify DoNotSchedule if you want the maxSkew value to define the maximum difference allowed between the number of matching pods in the target topology and the global minimum. Specify ScheduleAnyway if you want the scheduler to still schedule the pod but to give higher priority to nodes that might reduce the skew. 4 Specify a value for matchLabels . This value is used to identify the set of matching pods to which to apply the constraints. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. 5.2. 
Setting up pod topology spread constraints for Alertmanager For core OpenShift Container Platform platform monitoring, you can set up pod topology spread constraints for Alertmanager to fine tune how pod replicas are scheduled to nodes across zones. Doing so helps ensure that Alertmanager pods are highly available and run more efficiently, because workloads are spread across nodes in different data centers or hierarchical infrastructure zones. You configure pod topology spread constraints for Alertmanager in the cluster-monitoring-config config map. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring namespace: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add values for the following settings under data/config.yaml/alertmanagerMain to configure pod topology spread constraints: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: topologySpreadConstraints: - maxSkew: 1 1 topologyKey: monitoring 2 whenUnsatisfiable: DoNotSchedule 3 labelSelector: matchLabels: 4 app.kubernetes.io/name: alertmanager 1 Specify a numeric value for maxSkew , which defines the degree to which pods are allowed to be unevenly distributed. This field is required, and the value must be greater than zero. The value specified has a different effect depending on what value you specify for whenUnsatisfiable . 2 Specify a key of node labels for topologyKey . This field is required. Nodes that have a label with this key and identical values are considered to be in the same topology. The scheduler will try to put a balanced number of pods into each domain. 3 Specify a value for whenUnsatisfiable . This field is required. Available options are DoNotSchedule and ScheduleAnyway . Specify DoNotSchedule if you want the maxSkew value to define the maximum difference allowed between the number of matching pods in the target topology and the global minimum. Specify ScheduleAnyway if you want the scheduler to still schedule the pod but to give higher priority to nodes that might reduce the skew. 4 Specify a value for matchLabels . This value is used to identify the set of matching pods to which to apply the constraints. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. 5.3. Setting up pod topology spread constraints for Thanos Ruler For user-defined monitoring, you can set up pod topology spread constraints for Thanos Ruler to fine tune how pod replicas are scheduled to nodes across zones. Doing so helps ensure that Thanos Ruler pods are highly available and run more efficiently, because workloads are spread across nodes in different data centers or hierarchical infrastructure zones. You configure pod topology spread constraints for Thanos Ruler in the user-workload-monitoring-config config map. Prerequisites A cluster administrator has enabled monitoring for user-defined projects. You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. You have installed the OpenShift CLI ( oc ).
Procedure Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring namespace: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add values for the following settings under data/config.yaml/thanosRuler to configure pod topology spread constraints: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: topologySpreadConstraints: - maxSkew: 1 1 topologyKey: monitoring 2 whenUnsatisfiable: ScheduleAnyway 3 labelSelector: matchLabels: 4 app.kubernetes.io/name: thanos-ruler 1 Specify a numeric value for maxSkew , which defines the degree to which pods are allowed to be unevenly distributed. This field is required, and the value must be greater than zero. The value specified has a different effect depending on what value you specify for whenUnsatisfiable . 2 Specify a key of node labels for topologyKey . This field is required. Nodes that have a label with this key and identical values are considered to be in the same topology. The scheduler will try to put a balanced number of pods into each domain. 3 Specify a value for whenUnsatisfiable . This field is required. Available options are DoNotSchedule and ScheduleAnyway . Specify DoNotSchedule if you want the maxSkew value to define the maximum difference allowed between the number of matching pods in the target topology and the global minimum. Specify ScheduleAnyway if you want the scheduler to still schedule the pod but to give higher priority to nodes that might reduce the skew. 4 Specify a value for matchLabels . This value is used to identify the set of matching pods to which to apply the constraints. Save the file to apply the changes automatically. Warning When you save changes to the user-workload-monitoring-config config map, the pods and other resources in the openshift-user-workload-monitoring project might be redeployed. The running monitoring processes in that project might also restart. 5.4. Setting log levels for monitoring components You can configure the log level for Alertmanager, Prometheus Operator, Prometheus, Thanos Querier, and Thanos Ruler. The following log levels can be applied to the relevant component in the cluster-monitoring-config and user-workload-monitoring-config ConfigMap objects: debug . Log debug, informational, warning, and error messages. info . Log informational, warning, and error messages. warn . Log warning and error messages only. error . Log error messages only. The default log level is info . Prerequisites If you are setting a log level for Alertmanager, Prometheus Operator, Prometheus, or Thanos Querier in the openshift-monitoring project : You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. If you are setting a log level for Prometheus Operator, Prometheus, or Thanos Ruler in the openshift-user-workload-monitoring project : You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). 
Procedure Edit the ConfigMap object: To set a log level for a component in the openshift-monitoring project : Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add logLevel: <log_level> for a component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 logLevel: <log_level> 2 1 The monitoring stack component for which you are setting a log level. For default platform monitoring, available component values are prometheusK8s , alertmanagerMain , prometheusOperator , and thanosQuerier . 2 The log level to set for the component. The available values are error , warn , info , and debug . The default value is info . To set a log level for a component in the openshift-user-workload-monitoring project : Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add logLevel: <log_level> for a component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 logLevel: <log_level> 2 1 The monitoring stack component for which you are setting a log level. For user workload monitoring, available component values are alertmanager , prometheus , prometheusOperator , and thanosRuler . 2 The log level to apply to the component. The available values are error , warn , info , and debug . The default value is info . Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Confirm that the log-level has been applied by reviewing the deployment or pod configuration in the related project. The following example checks the log level in the prometheus-operator deployment in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep "log-level" Example output - --log-level=debug Check that the pods for the component are running. The following example lists the status of pods in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring get pods Note If an unrecognized logLevel value is included in the ConfigMap object, the pods for the component might not restart successfully. 5.5. Enabling the query log file for Prometheus You can configure Prometheus to write all queries that have been run by the engine to a log file. You can do so for default platform monitoring and for user-defined workload monitoring. Important Because log rotation is not supported, only enable this feature temporarily when you need to troubleshoot an issue. After you finish troubleshooting, disable query logging by reverting the changes you made to the ConfigMap object to enable the feature. Prerequisites If you are enabling the query log file feature for Prometheus in the openshift-monitoring project : You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. 
If you are enabling the query log file feature for Prometheus in the openshift-user-workload-monitoring project : You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure To set the query log file for Prometheus in the openshift-monitoring project : Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add queryLogFile: <path> for prometheusK8s under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: queryLogFile: <path> 1 1 The full path to the file in which queries will be logged. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Verify that the pods for the component are running. The following sample command lists the status of pods in the openshift-monitoring project: USD oc -n openshift-monitoring get pods Read the query log: USD oc -n openshift-monitoring exec prometheus-k8s-0 -- cat <path> Important Revert the setting in the config map after you have examined the logged query information. To set the query log file for Prometheus in the openshift-user-workload-monitoring project : Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add queryLogFile: <path> for prometheus under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: queryLogFile: <path> 1 1 The full path to the file in which queries will be logged. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Verify that the pods for the component are running. The following example command lists the status of pods in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring get pods Read the query log: USD oc -n openshift-user-workload-monitoring exec prometheus-user-workload-0 -- cat <path> Important Revert the setting in the config map after you have examined the logged query information. Additional resources See Preparing to configure the monitoring stack for steps to create monitoring config maps See Enabling monitoring for user-defined projects for steps to enable user-defined monitoring. 5.6. Enabling query logging for Thanos Querier For default platform monitoring in the openshift-monitoring project, you can enable the Cluster Monitoring Operator (CMO) to log all queries run by Thanos Querier. Important Because log rotation is not supported, only enable this feature temporarily when you need to troubleshoot an issue. After you finish troubleshooting, disable query logging by reverting the changes you made to the ConfigMap object to enable the feature. Prerequisites You have installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. 
Procedure You can enable query logging for Thanos Querier in the openshift-monitoring project: Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add a thanosQuerier section under data/config.yaml and add values as shown in the following example: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | thanosQuerier: enableRequestLogging: <value> 1 logLevel: <value> 2 1 Set the value to true to enable logging and false to disable logging. The default value is false . 2 Set the value to debug , info , warn , or error . If no value exists for logLevel , the log level defaults to error . Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Verification Verify that the Thanos Querier pods are running. The following sample command lists the status of pods in the openshift-monitoring project: USD oc -n openshift-monitoring get pods Run a test query using the following sample commands as a model: USD token=`oc create token prometheus-k8s -n openshift-monitoring` USD oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H "Authorization: Bearer USDtoken" 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_version' Run the following command to read the query log: USD oc -n openshift-monitoring logs <thanos_querier_pod_name> -c thanos-query Note Because the thanos-querier pods are highly available (HA) pods, you might be able to see logs in only one pod. After you examine the logged query information, disable query logging by changing the enableRequestLogging value to false in the config map. Additional resources See Preparing to configure the monitoring stack for steps to create monitoring config maps. 5.7. Setting audit log levels for the Prometheus Adapter In default platform monitoring, you can configure the audit log level for the Prometheus Adapter. Prerequisites You have installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. Procedure You can set an audit log level for the Prometheus Adapter in the default openshift-monitoring project: Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add profile: in the k8sPrometheusAdapter/audit section under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | k8sPrometheusAdapter: audit: profile: <audit_log_level> 1 1 The audit log level to apply to the Prometheus Adapter. Set the audit log level by using one of the following values for the profile: parameter: None : Do not log events. Metadata : Log only the metadata for the request, such as user, timestamp, and so forth. Do not log the request text and the response text. Metadata is the default audit log level. Request : Log only the metadata and the request text but not the response text. This option does not apply for non-resource requests. RequestResponse : Log event metadata, request text, and response text. This option does not apply for non-resource requests. Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. 
Verification In the config map, under k8sPrometheusAdapter/audit/profile , set the log level to Request and save the file. Confirm that the pods for the Prometheus Adapter are running. The following example lists the status of pods in the openshift-monitoring project: USD oc -n openshift-monitoring get pods Confirm that the audit log level and audit log file path are correctly configured: USD oc -n openshift-monitoring get deploy prometheus-adapter -o yaml Example output ... - --audit-policy-file=/etc/audit/request-profile.yaml - --audit-log-path=/var/log/adapter/audit.log Confirm that the correct log level has been applied in the prometheus-adapter deployment in the openshift-monitoring project: USD oc -n openshift-monitoring exec deploy/prometheus-adapter -c prometheus-adapter -- cat /etc/audit/request-profile.yaml Example output "apiVersion": "audit.k8s.io/v1" "kind": "Policy" "metadata": "name": "Request" "omitStages": - "RequestReceived" "rules": - "level": "Request" Note If you enter an unrecognized profile value for the Prometheus Adapter in the ConfigMap object, no changes are made to the Prometheus Adapter, and an error is logged by the Cluster Monitoring Operator. Review the audit log for the Prometheus Adapter: USD oc -n openshift-monitoring exec -c <prometheus_adapter_pod_name> -- cat /var/log/adapter/audit.log Additional resources See Preparing to configure the monitoring stack for steps to create monitoring config maps. 5.8. Disabling the local Alertmanager A local Alertmanager that routes alerts from Prometheus instances is enabled by default in the openshift-monitoring project of the OpenShift Container Platform monitoring stack. If you do not need the local Alertmanager, you can disable it by configuring the cluster-monitoring-config config map in the openshift-monitoring project. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config config map. You have installed the OpenShift CLI ( oc ). Procedure Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add enabled: false for the alertmanagerMain component under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: enabled: false Save the file to apply the changes. The Alertmanager instance is disabled automatically when you apply the change. Additional resources Prometheus Alertmanager documentation xref:[Managing alerts]
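Beyond the steps above, a rough verification sketch (not part of the official procedure) is to list the Alertmanager pods by the same label used earlier in this chapter and confirm that none remain once the change is reconciled; the equivalent query for Prometheus also shows how the topology spread constraints distribute replicas across nodes:
oc -n openshift-monitoring get pods -l app.kubernetes.io/name=alertmanager
oc -n openshift-monitoring get pods -l app.kubernetes.io/name=prometheus -o wide
The first command is expected to return no resources after the local Alertmanager is disabled; the second lists the node each Prometheus replica landed on.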
|
[
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: topologySpreadConstraints: - maxSkew: 1 1 topologyKey: monitoring 2 whenUnsatisfiable: DoNotSchedule 3 labelSelector: matchLabels: 4 app.kubernetes.io/name: prometheus",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: topologySpreadConstraints: - maxSkew: 1 1 topologyKey: monitoring 2 whenUnsatisfiable: DoNotSchedule 3 labelSelector: matchLabels: 4 app.kubernetes.io/name: alertmanager",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: topologySpreadConstraints: - maxSkew: 1 1 topologyKey: monitoring 2 whenUnsatisfiable: ScheduleAnyway 3 labelSelector: matchLabels: 4 app.kubernetes.io/name: thanos-ruler",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 logLevel: <log_level> 2",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 logLevel: <log_level> 2",
"oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"",
"- --log-level=debug",
"oc -n openshift-user-workload-monitoring get pods",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: queryLogFile: <path> 1",
"oc -n openshift-monitoring get pods",
"oc -n openshift-monitoring exec prometheus-k8s-0 -- cat <path>",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: queryLogFile: <path> 1",
"oc -n openshift-user-workload-monitoring get pods",
"oc -n openshift-user-workload-monitoring exec prometheus-user-workload-0 -- cat <path>",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | thanosQuerier: enableRequestLogging: <value> 1 logLevel: <value> 2",
"oc -n openshift-monitoring get pods",
"token=`oc create token prometheus-k8s -n openshift-monitoring` oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H \"Authorization: Bearer USDtoken\" 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_version'",
"oc -n openshift-monitoring logs <thanos_querier_pod_name> -c thanos-query",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | k8sPrometheusAdapter: audit: profile: <audit_log_level> 1",
"oc -n openshift-monitoring get pods",
"oc -n openshift-monitoring get deploy prometheus-adapter -o yaml",
"- --audit-policy-file=/etc/audit/request-profile.yaml - --audit-log-path=/var/log/adapter/audit.log",
"oc -n openshift-monitoring exec deploy/prometheus-adapter -c prometheus-adapter -- cat /etc/audit/request-profile.yaml",
"\"apiVersion\": \"audit.k8s.io/v1\" \"kind\": \"Policy\" \"metadata\": \"name\": \"Request\" \"omitStages\": - \"RequestReceived\" \"rules\": - \"level\": \"Request\"",
"oc -n openshift-monitoring exec -c <prometheus_adapter_pod_name> -- cat /var/log/adapter/audit.log",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: enabled: false"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/monitoring/configuring_pod_topology_spread_constraintsfor_monitoring_configuring-the-monitoring-stack
|
2.5.4. The Sysstat Suite of Resource Monitoring Tools
|
2.5.4. The Sysstat Suite of Resource Monitoring Tools While the tools may be helpful for gaining more insight into system performance over very short time frames, they are of little use beyond providing a snapshot of system resource utilization. In addition, there are aspects of system performance that cannot be easily monitored using such simplistic tools. Therefore, a more sophisticated tool is necessary. Sysstat is such a tool. Sysstat contains the following tools related to collecting I/O and CPU statistics: iostat Displays an overview of CPU utilization, along with I/O statistics for one or more disk drives. mpstat Displays more in-depth CPU statistics. Sysstat also contains tools that collect system resource utilization data and create daily reports based on that data. These tools are: sadc Known as the system activity data collector, sadc collects system resource utilization information and writes it to a file. sar Producing reports from the files created by sadc , sar reports can be generated interactively or written to a file for more intensive analysis. The following sections explore each of these tools in more detail. 2.5.4.1. The iostat command The iostat command at its most basic provides an overview of CPU and disk I/O statistics: Below the first line (which contains the system's kernel version and hostname, along with the current date), iostat displays an overview of the system's average CPU utilization since the last reboot. The CPU utilization report includes the following percentages: Percentage of time spent in user mode (running applications, etc.) Percentage of time spent in user mode (for processes that have altered their scheduling priority using nice(2) ) Percentage of time spent in kernel mode Percentage of time spent idle Below the CPU utilization report is the device utilization report. This report contains one line for each active disk device on the system and includes the following information: The device specification, displayed as dev <major-number> - sequence-number , where <major-number> is the device's major number [6] , and <sequence-number> is a sequence number starting at zero. The number of transfers (or I/O operations) per second. The number of 512-byte blocks read per second. The number of 512-byte blocks written per second. The total number of 512-byte blocks read. The total number of 512-byte block written. This is just a sample of the information that can be obtained using iostat . For more information, refer to the iostat(1) man page. [6] Device major numbers can be found by using ls -l to display the desired device file in /dev/ . The major number appears after the device's group specification.
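All of the interactive Sysstat tools accept an interval and a count, and sar can also read the daily files written by sadc. The following invocations are illustrative only; the intervals, counts, and the sa11 file name (the 11th day of the month) are arbitrary examples:
iostat 2 5
mpstat -P ALL 2 5
sar -u 2 5
sar -f /var/log/sa/sa11
The first three commands sample their statistics every two seconds, five times; the last produces a report from a previously collected data file instead of sampling live.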
|
[
"Linux 2.4.20-1.1931.2.231.2.10.ent (pigdog.example.com) 07/11/2003 avg-cpu: %user %nice %sys %idle 6.11 2.56 2.15 89.18 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn dev3-0 1.68 15.69 22.42 31175836 44543290"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s2-resource-tools-sar
|
Index
|
Index Symbols /etc/multipath.conf package, Setting Up DM Multipath A active/active configuration definition, Overview of DM Multipath illustration, Overview of DM Multipath active/passive configuration definition, Overview of DM Multipath illustration, Overview of DM Multipath alias parameter , Multipaths Device Configuration Attributes configuration file, Multipath Device Identifiers alias_prefix parameter, Configuration File Devices all_devs parameter, Configuration File Devices all_tg_pt parameter, Configuration File Defaults , Configuration File Devices B blacklist configuration file, Configuration File Blacklist default devices, Blacklisting By Device Name device name, Blacklisting By Device Name device protocol, Blacklisting By Device Protocol (Red Hat Enterprise Linux 7.6 and Later) device type, Blacklisting By Device Type udev property, Blacklisting By udev Property (Red Hat Enterprise Linux 7.5 and Later) WWID, Blacklisting by WWID blacklist_exceptions section multipath.conf file, Blacklist Exceptions C checker_timeout parameter, Configuration File Defaults configuration file alias parameter, Multipaths Device Configuration Attributes alias_prefix parameter, Configuration File Devices all_devs parameter, Configuration File Devices all_tg_pt parameter, Configuration File Defaults , Configuration File Devices blacklist, Configuration File Blacklist checker_timeout parameter, Configuration File Defaults config_dir parameter, Configuration File Defaults deferred_remove parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices delay_wait_checks parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices delay_watch_checks parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices detect_path_checker parameter, Configuration File Defaults , Configuration File Devices detect_prio parameter, Configuration File Defaults , Multipaths Device Configuration Attributes dev_loss_tmo parameter, Configuration File Defaults , Configuration File Devices disable_changed_wwids parameter, Configuration File Defaults failback parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices fast_io_fail_tmo parameter, Configuration File Defaults , Configuration File Devices features parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices flush_on_last_del parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices force_sync parameter, Configuration File Defaults hardware_handler parameter, Configuration File Devices hw_string_match parameter, Configuration File Defaults ignore_new_boot_devs parameter, Configuration File Defaults log_checker_err parameter, Configuration File Defaults max_fds parameter, Configuration File Defaults max_sectors_kb parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices new_bindings_in_boot parameter, Configuration File Defaults no_path_retry parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices overview, Configuration File Overview path_checker parameter, Configuration File Defaults , Configuration File Devices path_grouping_policy parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices 
path_selector parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices polling-interval parameter, Configuration File Defaults prio parameter, Configuration File Defaults , Configuration File Devices prkeys_file parameter, Configuration File Defaults , Multipaths Device Configuration Attributes product parameter, Configuration File Devices product_blacklist parameter, Configuration File Devices queue_without_daemon parameter, Configuration File Defaults reassign_maps parameter, Configuration File Defaults remove_retries parameter, Configuration File Defaults retain_attached_hw_handler parameter, Configuration File Defaults , Multipaths Device Configuration Attributes retrigger_delay parameter, Configuration File Defaults retrigger_tries parameter, Configuration File Defaults revision parameter, Configuration File Devices rr_min_io parameter, Configuration File Defaults , Multipaths Device Configuration Attributes rr_weight parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices skip_kpartx parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices uid_attribute parameter, Configuration File Defaults , Configuration File Devices user_friendly_names parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices vendor parameter, Configuration File Devices verbosity parameter, Configuration File Defaults wwid parameter, Multipaths Device Configuration Attributes configuring DM Multipath, Setting Up DM Multipath config_dir parameter, Configuration File Defaults D defaults section multipath.conf file, Configuration File Defaults deferred_remove parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices delay_wait_checks parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices delay_watch_checks parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices detect_path_checker parameter, Configuration File Defaults , Configuration File Devices detect_prio parameter, Configuration File Defaults , Multipaths Device Configuration Attributes dev/mapper directory, Multipath Device Identifiers device name, Multipath Device Identifiers device-mapper-multipath package, Setting Up DM Multipath devices adding, Configuring Storage Devices , Configuration File Devices devices section multipath.conf file, Configuration File Devices dev_loss_tmo parameter, Configuration File Defaults , Configuration File Devices disable_changed_wwids parameter, Configuration File Defaults DM Multipath and LVM, Multipath Devices in Logical Volumes components, DM Multipath Components configuration file, The DM Multipath Configuration File configuring, Setting Up DM Multipath definition, Device Mapper Multipathing device name, Multipath Device Identifiers devices, Multipath Devices failover, Overview of DM Multipath overview, Overview of DM Multipath redundancy, Overview of DM Multipath setup, Setting Up DM Multipath setup, overview, DM Multipath Setup Overview dm-n devices, Multipath Device Identifiers dmsetup command, determining device mapper entries, Determining Device Mapper Entries with the dmsetup Command dm_multipath kernel module , DM Multipath Components F failback parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , 
Configuration File Devices failover, Overview of DM Multipath fast_io_fail_tmo parameter, Configuration File Defaults , Configuration File Devices features parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices features, new and changed, New and Changed Features flush_on_last_del parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices force_sync parameter, Configuration File Defaults H hardware_handler parameter, Configuration File Devices hw_string_match parameter, Configuration File Defaults I ignore_new_boot_devs parameter, Configuration File Defaults initramfs starting multipath, Setting Up Multipathing in the initramfs File System K kpartx command , DM Multipath Components L local disks, ignoring, Ignoring Local Disks when Generating Multipath Devices log_checker_err parameter, Configuration File Defaults LVM physical volumes multipath devices, Multipath Devices in Logical Volumes lvm.conf file , Multipath Devices in Logical Volumes M max_fds parameter, Configuration File Defaults max_sectors_kb parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices mpathconf command , DM Multipath Components multipath command , DM Multipath Components options, Multipath Command Options output, Multipath Command Output queries, Multipath Queries with multipath Command multipath daemon (multipathd), The Multipath Daemon multipath devices, Multipath Devices logical volumes, Multipath Devices in Logical Volumes LVM physical volumes, Multipath Devices in Logical Volumes Multipath Helper, Automatic Configuration File Generation with Multipath Helper multipath.conf file, Storage Array Support , The DM Multipath Configuration File blacklist_exceptions section, Blacklist Exceptions defaults section, Configuration File Defaults devices section, Configuration File Devices multipaths section, Multipaths Device Configuration Attributes multipathd command, Troubleshooting with the multipathd Interactive Console interactive console, Troubleshooting with the multipathd Interactive Console multipathd daemon , DM Multipath Components multipathd start command, Setting Up DM Multipath multipathed root file system, Moving root File Systems from a Single Path Device to a Multipath Device multipathed swap file system, Moving swap File Systems from a Single Path Device to a Multipath Device multipaths section multipath.conf file, Multipaths Device Configuration Attributes N new_bindings_in_boot parameter, Configuration File Defaults no_path_retry parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices O overview features, new and changed, New and Changed Features P path_checker parameter, Configuration File Defaults , Configuration File Devices path_grouping_policy parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices path_selector parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices polling_interval parameter, Configuration File Defaults prio parameter, Configuration File Defaults , Configuration File Devices prkeys_file parameter, Configuration File Defaults , Multipaths Device Configuration Attributes product parameter, Configuration File Devices product_blacklist parameter, Configuration File Devices Q queue_without_daemon parameter, Configuration File Defaults R reassign_maps 
parameter, Configuration File Defaults remove_retries parameter, Configuration File Defaults resizing a multipath device, Resizing an Online Multipath Device retain_attached_hw_handler parameter, Configuration File Defaults , Multipaths Device Configuration Attributes retrigger_delay parameter, Configuration File Defaults retrigger_tries parameter, Configuration File Defaults revision parameter, Configuration File Devices root file system, Moving root File Systems from a Single Path Device to a Multipath Device rr_min_io parameter, Configuration File Defaults , Multipaths Device Configuration Attributes rr_weight parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices S setup DM Multipath, Setting Up DM Multipath skip_kpartxr parameter, Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices storage array support, Storage Array Support storage arrays adding, Configuring Storage Devices , Configuration File Devices swap file system, Moving swap File Systems from a Single Path Device to a Multipath Device U uid_attribute parameter, Configuration File Defaults , Configuration File Devices user_friendly_names parameter , Multipath Device Identifiers , Configuration File Defaults , Multipaths Device Configuration Attributes , Configuration File Devices V vendor parameter, Configuration File Devices verbosity parameter, Configuration File Defaults W World Wide Identifier (WWID), Multipath Device Identifiers wwid parameter, Multipaths Device Configuration Attributes
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/dm_multipath/ix01
|
18.12.8. References to Other Filters
|
18.12.8. References to Other Filters Any filter may hold references to other filters. Individual filters may be referenced multiple times in a filter tree but references between filters must not introduce loops. Example 18.7. An Example of a clean traffic filter The following shows the XML of the clean-traffic network filter referencing several other filters. To reference another filter, the XML node filterref needs to be provided inside a filter node. This node must have the attribute filter whose value contains the name of the filter to be referenced. New network filters can be defined at any time and may contain references to network filters that are not known to libvirt, yet. However, once a virtual machine is started or a network interface referencing a filter is to be hotplugged, all network filters in the filter tree must be available. Otherwise the virtual machine will not start or the network interface cannot be attached.
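In practice the only reference that normally appears in a guest's configuration is a single filterref on the network interface; libvirt then resolves the rest of the filter tree. A minimal interface definition using the filter above might look like this (the bridge name br0 is an assumption):
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
  <filterref filter='clean-traffic'/>
</interface>
When this guest is started, or the interface is hotplugged, clean-traffic and every filter it references must already be defined, as described above.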
|
[
"<filter name='clean-traffic'> <uuid>6ef53069-ba34-94a0-d33d-17751b9b8cb1</uuid> <filterref filter='no-mac-spoofing'/> <filterref filter='no-ip-spoofing'/> <filterref filter='allow-incoming-ipv4'/> <filterref filter='no-arp-spoofing'/> <filterref filter='no-other-l2-traffic'/> <filterref filter='qemu-announce-self'/> </filter>"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-ref-filter
|
Chapter 5. Enabling Windows container workloads
|
Chapter 5. Enabling Windows container workloads Before adding Windows workloads to your cluster, you must install the Windows Machine Config Operator (WMCO), which is available in the OpenShift Container Platform OperatorHub. The WMCO orchestrates the process of deploying and managing Windows workloads on a cluster. Note Dual NIC is not supported on WMCO-managed Windows instances. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have installed your cluster using installer-provisioned infrastructure, or using user-provisioned infrastructure with the platform: none field set in your install-config.yaml file. You have configured hybrid networking with OVN-Kubernetes for your cluster. This must be completed during the installation of your cluster. For more information, see Configuring hybrid networking . You are running an OpenShift Container Platform cluster version 4.6.8 or later. Note Windows instances deployed by the WMCO are configured with the containerd container runtime. Because WMCO installs and manages the runtime, it is recommanded that you do not manually install containerd on nodes. Additional resources For the comprehensive prerequisites for the Windows Machine Config Operator, see Windows Machine Config Operator prerequisites . 5.1. Installing the Windows Machine Config Operator You can install the Windows Machine Config Operator using either the web console or OpenShift CLI ( oc ). Note The WMCO is not supported in clusters that use a cluster-wide proxy because the WMCO is not able to route traffic through the proxy connection for the workloads. Due to a limitation within the Windows operating system, clusterNetwork CIDR addresses of class E, such as 240.0.0.0 , are not compatible with Windows nodes. 5.1.1. Installing the Windows Machine Config Operator using the web console You can use the OpenShift Container Platform web console to install the Windows Machine Config Operator (WMCO). Note Dual NIC is not supported on WMCO-managed Windows instances. Procedure From the Administrator perspective in the OpenShift Container Platform web console, navigate to the Operators OperatorHub page. Use the Filter by keyword box to search for Windows Machine Config Operator in the catalog. Click the Windows Machine Config Operator tile. Review the information about the Operator and click Install . On the Install Operator page: Select the stable channel as the Update Channel . The stable channel enables the latest stable release of the WMCO to be installed. The Installation Mode is preconfigured because the WMCO must be available in a single namespace only. Choose the Installed Namespace for the WMCO. The default Operator recommended namespace is openshift-windows-machine-config-operator . Click the Enable Operator recommended cluster monitoring on the Namespace checkbox to enable cluster monitoring for the WMCO. Select an Approval Strategy . The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . The WMCO is now listed on the Installed Operators page. Note The WMCO is installed automatically into the namespace you defined, like openshift-windows-machine-config-operator . Verify that the Status shows Succeeded to confirm successful installation of the WMCO. 5.1.2. 
Installing the Windows Machine Config Operator using the CLI You can use the OpenShift CLI ( oc ) to install the Windows Machine Config Operator (WMCO). Note Dual NIC is not supported on WMCO-managed Windows instances. Procedure Create a namespace for the WMCO. Create a Namespace object YAML file for the WMCO. For example, wmco-namespace.yaml : apiVersion: v1 kind: Namespace metadata: name: openshift-windows-machine-config-operator 1 labels: openshift.io/cluster-monitoring: "true" 2 1 It is recommended to deploy the WMCO in the openshift-windows-machine-config-operator namespace. 2 This label is required for enabling cluster monitoring for the WMCO. Create the namespace: USD oc create -f <file-name>.yaml For example: USD oc create -f wmco-namespace.yaml Create the Operator group for the WMCO. Create an OperatorGroup object YAML file. For example, wmco-og.yaml : apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: windows-machine-config-operator namespace: openshift-windows-machine-config-operator spec: targetNamespaces: - openshift-windows-machine-config-operator Create the Operator group: USD oc create -f <file-name>.yaml For example: USD oc create -f wmco-og.yaml Subscribe the namespace to the WMCO. Create a Subscription object YAML file. For example, wmco-sub.yaml : apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: windows-machine-config-operator namespace: openshift-windows-machine-config-operator spec: channel: "stable" 1 installPlanApproval: "Automatic" 2 name: "windows-machine-config-operator" source: "redhat-operators" 3 sourceNamespace: "openshift-marketplace" 4 1 Specify stable as the channel. 2 Set an approval strategy. You can set Automatic or Manual . 3 Specify the redhat-operators catalog source, which contains the windows-machine-config-operator package manifests. If your OpenShift Container Platform is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator LifeCycle Manager (OLM). 4 Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources. Create the subscription: USD oc create -f <file-name>.yaml For example: USD oc create -f wmco-sub.yaml The WMCO is now installed to the openshift-windows-machine-config-operator . Verify the WMCO installation: USD oc get csv -n openshift-windows-machine-config-operator Example output NAME DISPLAY VERSION REPLACES PHASE windows-machine-config-operator.2.0.0 Windows Machine Config Operator 2.0.0 Succeeded 5.2. Configuring a secret for the Windows Machine Config Operator To run the Windows Machine Config Operator (WMCO), you must create a secret in the WMCO namespace containing a private key. This is required to allow the WMCO to communicate with the Windows virtual machine (VM). Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You created a PEM-encoded file containing an RSA key. Procedure Define the secret required to access the Windows VMs: USD oc create secret generic cloud-private-key --from-file=private-key.pem=USD{HOME}/.ssh/<key> \ -n openshift-windows-machine-config-operator 1 1 You must create the private key in the WMCO namespace, like openshift-windows-machine-config-operator . It is recommended to use a different private key than the one used when installing the cluster. 5.3. 
Using Windows containers in a proxy-enabled cluster The Windows Machine Config Operator (WMCO) can consume and use a cluster-wide egress proxy configuration when making external requests outside the cluster's internal network. This allows you to add Windows nodes and run workloads in a proxy-enabled cluster, allowing your Windows nodes to pull images from registries that are secured behind your proxy server or to make requests to off-cluster services and services that use a custom public key infrastructure. Note The cluster-wide proxy affects system components only, not user workloads. In proxy-enabled clusters, the WMCO is aware of the NO_PROXY , HTTP_PROXY , and HTTPS_PROXY values that are set for the cluster. The WMCO periodically checks whether the proxy environment variables have changed. If there is a discrepancy, the WMCO reconciles and updates the proxy environment variables on the Windows instances. Windows workloads created on Windows nodes in proxy-enabled clusters do not inherit proxy settings from the node by default, the same as with Linux nodes. Also, by default PowerShell sessions do not inherit proxy settings on Windows nodes in proxy-enabled clusters. Additional resources Configuring the cluster-wide proxy . 5.4. Rebooting a node gracefully The Windows Machine Config Operator (WMCO) minimizes node reboots whenever possible. However, certain operations and updates require a reboot to ensure that changes are applied correctly and securely. To safely reboot your Windows nodes, use the graceful reboot process. For information on gracefully rebooting a standard OpenShift Container Platform node, see "Rebooting a node gracefully" in the Nodes documentation. Before rebooting a node, it is recommended to backup etcd data to avoid any data loss on the node. Note For single-node OpenShift clusters that require users to perform the oc login command rather than having the certificates in kubeconfig file to manage the cluster, the oc adm commands might not be available after cordoning and draining the node. This is because the openshift-oauth-apiserver pod is not running due to the cordon. You can use SSH to access the nodes as indicated in the following procedure. In a single-node OpenShift cluster, pods cannot be rescheduled when cordoning and draining. However, doing so gives the pods, especially your workload pods, time to properly stop and release associated resources. Procedure To perform a graceful restart of a node: Mark the node as unschedulable: USD oc adm cordon <node1> Drain the node to remove all the running pods: USD oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force You might receive errors that pods associated with custom pod disruption budgets (PDB) cannot be evicted. Example error error when evicting pods/"rails-postgresql-example-1-72v2w" -n "rails" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. In this case, run the drain command again, adding the disable-eviction flag, which bypasses the PDB checks: USD oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force --disable-eviction SSH into the Windows node and enter PowerShell by running the following command: C:\> powershell Restart the node by running the following command: C:\> Restart-Computer -Force Windows nodes on Amazon Web Services (AWS) do not return to READY state after a graceful reboot due to an inconsistency with the EC2 instance metadata routes and the Host Network Service (HNS) networks. 
After the reboot, SSH into any Windows node on AWS and add the route by running the following command in a shell prompt: C:\> route add 169.254.169.254 mask 255.255.255.0 <gateway_ip> where: 169.254.169.254 Specifies the address of the EC2 instance metadata endpoint. 255.255.255.255 Specifies the network mask of the EC2 instance metadata endpoint. <gateway_ip> Specifies the corresponding IP address of the gateway in the Windows instance, which you can find by running the following command: C:\> ipconfig | findstr /C:"Default Gateway" After the reboot is complete, mark the node as schedulable by running the following command: USD oc adm uncordon <node1> Verify that the node is ready: USD oc get node <node1> Example output NAME STATUS ROLES AGE VERSION <node1> Ready worker 6d22h v1.18.3+b0068a8 Additional resources Rebooting a OpenShift Container Platform node gracefully Backing up etcd data 5.5. Additional resources Generating a key pair for cluster node SSH access Adding Operators to a cluster
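As a supplement to the key-pair resource linked above, the PEM-encoded RSA private key required in the secret step can be generated with ssh-keygen before creating the cloud-private-key secret; the file name and the empty passphrase below are assumptions, not requirements:
ssh-keygen -t rsa -b 4096 -m PEM -N '' -f $HOME/.ssh/winkey.pem
oc create secret generic cloud-private-key --from-file=private-key.pem=$HOME/.ssh/winkey.pem -n openshift-windows-machine-config-operator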
|
[
"apiVersion: v1 kind: Namespace metadata: name: openshift-windows-machine-config-operator 1 labels: openshift.io/cluster-monitoring: \"true\" 2",
"oc create -f <file-name>.yaml",
"oc create -f wmco-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: windows-machine-config-operator namespace: openshift-windows-machine-config-operator spec: targetNamespaces: - openshift-windows-machine-config-operator",
"oc create -f <file-name>.yaml",
"oc create -f wmco-og.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: windows-machine-config-operator namespace: openshift-windows-machine-config-operator spec: channel: \"stable\" 1 installPlanApproval: \"Automatic\" 2 name: \"windows-machine-config-operator\" source: \"redhat-operators\" 3 sourceNamespace: \"openshift-marketplace\" 4",
"oc create -f <file-name>.yaml",
"oc create -f wmco-sub.yaml",
"oc get csv -n openshift-windows-machine-config-operator",
"NAME DISPLAY VERSION REPLACES PHASE windows-machine-config-operator.2.0.0 Windows Machine Config Operator 2.0.0 Succeeded",
"oc create secret generic cloud-private-key --from-file=private-key.pem=USD{HOME}/.ssh/<key> -n openshift-windows-machine-config-operator 1",
"oc adm cordon <node1>",
"oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force",
"error when evicting pods/\"rails-postgresql-example-1-72v2w\" -n \"rails\" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.",
"oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force --disable-eviction",
"C:\\> powershell",
"C:\\> Restart-Computer -Force",
"C:\\> route add 169.254.169.254 mask 255.255.255.0 <gateway_ip>",
"C:\\> ipconfig | findstr /C:\"Default Gateway\"",
"oc adm uncordon <node1>",
"oc get node <node1>",
"NAME STATUS ROLES AGE VERSION <node1> Ready worker 6d22h v1.18.3+b0068a8"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/windows_container_support_for_openshift/enabling-windows-container-workloads
|
Chapter 1. Activating Red Hat Ansible Automation Platform
|
Chapter 1. Activating Red Hat Ansible Automation Platform Red Hat Ansible Automation Platform uses available subscriptions or a subscription manifest to authorize the use of Ansible Automation Platform. To obtain a subscription, you can do either of the following: Use your Red Hat customer or Satellite credentials when you launch Ansible Automation Platform. Upload a subscriptions manifest file either using the Red Hat Ansible Automation Platform interface or manually in an Ansible playbook. 1.1. Activate with credentials When Ansible Automation Platform launches for the first time, the Ansible Automation Platform Subscription screen automatically displays. You can use your Red Hat credentials to retrieve and import your subscription directly into Ansible Automation Platform. Procedures Enter your Red Hat username and password. Click Get Subscriptions . Note You can also use your Satellite username and password if your cluster nodes are registered to Satellite through Subscription Manager. Review the End User License Agreement and select I agree to the End User License Agreement . The Tracking and Analytics options are checked by default. These selections help Red Hat improve the product by delivering you a much better user experience. You can opt out by deselecting the options. Click Submit . Once your subscription has been accepted, the license screen displays and navigates you to the Dashboard of the Ansible Automation Platform interface. You can return to the license screen by clicking the Settings icon ⚙ and selecting the License tab from the Settings screen. 1.2. Activate with a manifest file If you have a subscriptions manifest, you can upload the manifest file either using the Red Hat Ansible Automation Platform interface or manually in an Ansible playbook. Prerequisites You must have a Red Hat Subscription Manifest file exported from the Red Hat Customer Portal. For more information, see Obtaining a manifest file . Uploading with the interface Complete steps to generate and download the manifest file Log in to Red Hat Ansible Automation Platform. If you are not immediately prompted for a manifest file, go to Settings License . Make sure the Username and Password fields are empty. Click Browse and select the manifest file. Click . Note If the BROWSE button is disabled on the License page, clear the USERNAME and PASSWORD fields. Uploading manually If you are unable to apply or update the subscription info using the Red Hat Ansible Automation Platform interface, you can upload the subscriptions manifest manually in an Ansible playbook using the license module in the ansible.controller collection. - name: Set the license using a file license: manifest: "/tmp/my_manifest.zip"
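For reference, one way to run that task from a workstation is sketched below. It assumes the ansible.controller collection is installed, that apply_manifest.yml is a playbook file of your own naming that contains the task above, and that the collection reads the controller connection details from its CONTROLLER_* environment variables:
export CONTROLLER_HOST=https://<automation_controller_host>
export CONTROLLER_USERNAME=admin
export CONTROLLER_PASSWORD='<password>'
ansible-playbook apply_manifest.yml
Adjust the manifest path in the task to match where you saved the downloaded manifest file.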
|
[
"- name: Set the license using a file license: manifest: \"/tmp/my_manifest.zip\""
] |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_operations_guide/assembly-aap-activate
|
Chapter 6. Monitoring the cluster on the Ceph dashboard
|
Chapter 6. Monitoring the cluster on the Ceph dashboard As a storage administrator, you can use Red Hat Ceph Storage Dashboard to monitor specific aspects of the cluster based on types of hosts, services, data access methods, and more. This section covers the following administrative tasks: Monitoring hosts of the Ceph cluster on the dashboard . Viewing and editing the configuration of the Ceph cluster on the dashboard . Viewing and editing the manager modules of the Ceph cluster on the dashboard . Monitoring monitors of the Ceph cluster on the dashboard . Monitoring services of the Ceph cluster on the dashboard . Monitoring Ceph OSDs on the dashboard . Monitoring HAProxy on the dashboard . Viewing the CRUSH map of the Ceph cluster on the dashboard . Filtering logs of the Ceph cluster on the dashboard . Viewing centralized logs of the Ceph cluster on the dashboard . Monitoring pools of the Ceph cluster on the dashboard. Monitoring Ceph file systems on the dashboard. Monitoring Ceph Object Gateway daemons on the dashboard. Monitoring block device images on the Ceph dashboard. 6.1. Monitoring hosts of the Ceph cluster on the dashboard You can monitor the hosts of the cluster on the Red Hat Ceph Storage Dashboard. The following are the different tabs on the hosts page. Each tab contains a table with the relevant information. The tables are searchable and customizable by column and row. To change the order of the columns, select the column name and drag to place within the table. To select which columns are displayed, click the toggle columns button and select or clear column names. Enter the number of rows to be displayed in the row selector field. Devices This tab has a table that details the device ID, state of the device health, life expectancy, device name, prediction creation date, and the daemons on the hosts. Physical Disks This tab has a table that details all disks attached to a selected host, as well as their type, size, and other attributes. It has details such as device path, type of device, available, vendor, model, size, and the OSDs deployed. To identify which disk is where on the physical device, select the device and click Identify . Select how long the LED should blink to help you find the selected disk. Daemons This tab has a table that details all services that have been deployed on the selected host, which container they are running in, and their current status. The table has details such as daemon name, daemon version, status, when the daemon was last refreshed, CPU usage, memory usage (in MiB), and daemon events. Daemon actions can be run from this tab. For more details, see Daemon actions . Performance Details This tab has details such as OSDs deployed, CPU utilization, RAM usage, network load, network drop rate, and OSD disk performance statistics. View performance information through the embedded Grafana Dashboard. Device health For SMART-enabled devices, you can get the individual health status and SMART data only on the OSD deployed hosts. Prerequisites Before you begin, make sure that you have the following prerequisites in place: A running Red Hat Ceph Storage cluster. Dashboard is installed. Hosts are added to the storage cluster. All the services, monitor, manager, and OSD daemons are deployed on the storage cluster. Procedure From the dashboard navigation, go to Cluster->Hosts . On the Hosts List tab, expand the host row and select the host with the daemon to perform the action on. On the Daemons tab of the host, select the row with the daemon.
Note The Daemons table can be searched and filtered. Select the action that needs to be run on the daemon. The options are Start , Stop , Restart , and Redeploy . Figure 6.1. Monitoring hosts of the Ceph cluster Additional Resources See the Ceph performance counters in the Red Hat Ceph Storage Administration Guide for more details. 6.2. Viewing and editing the configuration of the Ceph cluster on the dashboard You can view various configuration options of the Ceph cluster on the dashboard. You can edit only some configuration options. Prerequisites Before you begin, make sure that you have the following prerequisites in place: A running Red Hat Ceph Storage cluster. Dashboard is installed. All the services are deployed on the storage cluster. Procedure From the dashboard navigation, go to Administration->Configuration . To view the details of the configuration, expand the row contents. Figure 6.2. Configuration options Optional: Use the search field to find a configuration. Optional: You can filter for a specific configuration. Use the following filters: Level - Basic, advanced, or dev Service - Any, mon, mgr, osd, mds, common, mds_client, rgw, and similar filters. Source - Any, mon, and similar filters Modified - yes or no To edit a configuration, select the configuration row and click Edit . Use the Edit form to edit the required parameters, and click Update . A notification displays that the configuration was updated successfully. Additional Resources See the Ceph Network Configuration chapter in the Red Hat Ceph Storage Configuration Guide for more details. 6.3. Viewing and editing the manager modules of the Ceph cluster on the dashboard Manager modules are used to manage module-specific configuration settings. For example, you can enable alerts for the health of the cluster. You can view, enable or disable, and edit the manager modules of a cluster on the Red Hat Ceph Storage dashboard. Prerequisites Before you begin, make sure that you have the following prerequisites in place: A running Red Hat Ceph Storage cluster. Dashboard is installed. Viewing the manager modules From the dashboard navigation, go to Administration->Manager Modules . To view the details of a specific manager module, expand the row contents. Figure 6.3. Manager modules Enabling a manager module Select the row and click Enable from the action drop-down. Disabling a manager module Select the row and click Disable from the action drop-down. Editing a manager module Select the row: Note Not all modules have configurable parameters. If a module is not configurable, the Edit button is disabled. Edit the required parameters and click Update . A notification displays that the module was updated successfully. 6.4. Monitoring monitors of the Ceph cluster on the dashboard You can monitor the performance of the Ceph monitors on the landing page of the Red Hat Ceph Storage dashboard. You can also view the details such as status, quorum, number of open sessions, and performance counters of the monitors in the Monitors panel. Prerequisites Before you begin, make sure that you have the following prerequisites in place: A running Red Hat Ceph Storage cluster. Dashboard is installed. Monitors are deployed in the storage cluster. Procedure From the dashboard navigation, go to Cluster->Monitors . The Monitors panel displays information about the overall monitor status and monitor hosts that are in and out of quorum. To see the number of open sessions, in the In Quorum table, hover the cursor over the Open Sessions .
To see performance counters for any monitor, click the Name in the In Quorum and Not In Quorum tables. Figure 6.4. Viewing monitor Performance Counters Additional Resources See the Ceph monitors section in the Red Hat Ceph Storage Operations guide . See the Ceph performance counters in the Red Hat Ceph Storage Administration Guide for more details. 6.5. Monitoring services of the Ceph cluster on the dashboard You can monitor the services of the cluster on the Red Hat Ceph Storage Dashboard. You can view the details such as hostname, daemon type, daemon ID, container ID, container image name, container image ID, version status and last refreshed time. Prerequisites Before you begin, make sure that you have the following prerequisites in place: A running Red Hat Ceph Storage cluster. Dashboard is installed. Hosts are added to the storage cluster. All the services are deployed on the storage cluster. Procedure From the dashboard navigation, go to Administration->Services . Expand the service for more details. Figure 6.5. Monitoring services of the Ceph cluster Additional Resources See the Introduction to the Ceph Orchestrator in the Red Hat Ceph Storage Operations Guide for more details. 6.6. Monitoring Ceph OSDs on the dashboard You can monitor the status of the Ceph OSDs on the landing page of the Red Hat Ceph Storage Dashboard. You can also view the details such as host, status, device class, number of placement groups (PGs), size flags, usage, and read or write operations time in the OSDs tab. The following are the different tabs on the OSDs page: Devices - This tab has details such as Device ID, state of health, life expectancy, device name, and the daemons on the hosts. Attributes (OSD map) - This tab shows the cluster address, details of heartbeat, OSD state, and the other OSD attributes. Metadata - This tab shows the details of the OSD object store, the devices, the operating system, and the kernel details. Device health - For SMART-enabled devices, you can get the individual health status and SMART data. Performance counter - This tab gives details of the bytes written on the devices. Performance Details - This tab has details such as OSDs deployed, CPU utilization, RAM usage, network load, network drop rate, and OSD disk performance statistics. View performance information through the embedded Grafana Dashboard. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Hosts are added to the storage cluster. All the services including OSDs are deployed on the storage cluster. Procedure From the dashboard navigation, go to Cluster->OSDs . To view the details of a specific OSD, from the OSDs List tab, expand an OSD row. Figure 6.6. Monitoring OSDs of the Ceph cluster You can view additional details such as Devices , Attributes (OSD map) , Metadata , Device Health , Performance counter , and Performance Details , by clicking on the respective tabs. Additional Resources See the Introduction to the Ceph Orchestrator in the Red Hat Ceph Storage Operations Guide for more details. 6.7. Monitoring HAProxy on the dashboard The Ceph Object Gateway allows you to assign many instances of the object gateway to a single zone, so that you can scale out as load increases. Since each object gateway instance has its own IP address, you can use HAProxy to balance the load across Ceph Object Gateway servers. You can monitor the following HAProxy metrics on the dashboard: Total responses by HTTP code. Total requests/responses. Total number of connections. 
Current total number of incoming / outgoing bytes. You can also get the Grafana details by running the ceph dashboard get-grafana-api-url command. Prerequisites A running Red Hat Ceph Storage cluster. Admin level access on the storage dashboard. An existing Ceph Object Gateway service, without SSL. If you want SSL service, the certificate should be configured on the ingress service, not the Ceph Object Gateway service. Ingress service deployed using the Ceph Orchestrator. Monitoring stack components are created on the dashboard. Procedure Log in to the Grafana URL and select the RGW_Overview panel: Syntax Example Verify the HAProxy metrics on the Grafana URL. From the Ceph dashboard navigation, go to Object->Gateways . From the Overall Performance tab, verify the Ceph Object Gateway HAProxy metrics. Figure 6.7. HAProxy metrics Additional Resources See the Configuring high availability for the Ceph Object Gateway in the Red Hat Ceph Storage Object Gateway Guide for more details. 6.8. Viewing the CRUSH map of the Ceph cluster on the dashboard You can view the CRUSH map, which contains a list of OSDs and related information, on the Red Hat Ceph Storage dashboard. Together, the CRUSH map and CRUSH algorithm determine how and where data is stored. The dashboard allows you to view different aspects of the CRUSH map, including OSD hosts, OSD daemons, ID numbers, device class, and more. The CRUSH map allows you to determine which host a specific OSD ID is running on. This is helpful if there is an issue with an OSD. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. OSD daemons deployed on the storage cluster. Procedure From the dashboard navigation, go to Cluster->CRUSH map . To view the details of the specific OSD, click its row. Figure 6.8. CRUSH Map detail view Additional Resources For more information about the CRUSH map, see CRUSH admin overview in the Red Hat Ceph Storage Storage strategies guide . 6.9. Filtering logs of the Ceph cluster on the dashboard You can view and filter logs of the Red Hat Ceph Storage cluster on the dashboard based on several criteria. The criteria include Priority , Keyword , Date , and Time range . You can download the logs to the system or copy the logs to the clipboard as well for further analysis. Prerequisites A running Red Hat Ceph Storage cluster. The Dashboard is installed. Log entries have been generated since the Ceph Monitor was last started. Note The Dashboard logging feature only displays the thirty latest high level events. The events are stored in memory by the Ceph Monitor. The entries disappear after restarting the Monitor. If you need to review detailed or older logs, refer to the file based logs. Procedure From the dashboard navigation, go to Observability->Logs . From the Cluster Logs tab, view cluster logs. Figure 6.9. Cluster logs Use the Priority filter to filter by Debug , Info , Warning , Error , or All . Use the Keyword field to enter text to search by keyword. Use the Date picker to filter by a specific date. Use the Time range fields to enter a range, using the HH:MM - HH:MM format. Hours must be entered using numbers 0 to 23 . To combine filters, set two or more filters. To save the logs, use the Download or Copy to Clipboard buttons. Additional Resources See the Configuring Logging chapter in the Red Hat Ceph Storage Troubleshooting Guide for more information. See the Understanding Ceph Logs section in the Red Hat Ceph Storage Troubleshooting Guide for more information. 6.10.
Viewing centralized logs of the Ceph cluster on the dashboard Ceph Dashboard allows you to view logs from all the clients in a centralized space in the Red Hat Ceph Storage cluster for efficient monitoring. This is achieved through using Loki, a log aggregation system designed to store and query logs, and Promtail, an agent that ships the contents of local logs to a private Grafana Loki instance. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Grafana is configured and logged into on the cluster. Procedure From the dashboard navigation, go to Administration->Services . From Services , click Create . In the Create Service form, from the Type list, select loki . Fill in the remaining details, and click Create Service . Repeat the step to create the Promtail service. Select promtail from the Type list. The loki and promtail services are displayed in the Services table, after being created successfully. Figure 6.10. Creating Loki and Promtail services Note By default, Promtail service is deployed on all the running hosts. Enable logging to files. Go to Administration->Configuration . Select log_to_file and click Edit . In the Edit log_to_file form, set the global value to true . Figure 6.11. Configuring log files Click Update . The Updated config option log_to_file notification displays and you are returned to the Configuration table. Repeat these steps for mon_cluster_log_to_file , setting the global value to true . Note Both log_to_file and mon_cluster_log_to_file files need to be configured. Optional : To view the Ceph Object Gateway 'ops_log', rgw_enable_ops_log must be set to true by using the following command: To do it from the dashboard, follow the below steps: Go to Administration Configuration . Change level from 'basic' to 'Dev'. Search for rgw_enable_ops_log and edit the value to true . , under the Daemon Logs tab, locate the logs file in the filename field and run the query to view the ops log. To view the centralized logs, go to Observability->Logs and switch to the Daemon Logs tab. Use Log browser to select files and click Show logs to view the logs from that file. Figure 6.12. View centralized logs 6.11. Monitoring pools of the Ceph cluster on the dashboard You can view the details, performance details, configuration, and overall performance of the pools in a cluster on the Red Hat Ceph Storage Dashboard. A pool plays a critical role in how the Ceph storage cluster distributes and stores data. If you have deployed a cluster without creating a pool, Ceph uses the default pools for storing data. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Pools are created Procedure From the dashboard navigation, go to Cluster->Pools . View the Pools List tab, which gives the details of Data protection and the application for which the pool is enabled. Hover the mouse over Usage , Read bytes , and Write bytes for the required details. Expand the pool row for detailed information about a specific pool. Figure 6.13. Monitoring pools For general information, go to the Overall Performance tab. Additional Resources For more information about pools, see Ceph pools in the Red Hat Ceph Storage Architecture guide . See the Creating pools on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more details. 6.12. Monitoring Ceph File Systems on the dashboard You can use the Red Hat Ceph Storage Dashboard to monitor Ceph File Systems (CephFS) and related components. 
For each File System listed, the following tabs are available: Details View the metadata servers (MDS) and their rank plus any standby daemons, pools and their usage, and performance counters. Directories View a list of directories, their quotas, and snapshots. Select a directory to set and unset maximum file and size quotas and to create and delete snapshots for the specific directory. Subvolumes Create, edit, and view subvolume information. These can be filtered by subvolume groups. Subvolume groups Create, edit, and view subvolume group information. Snapshots Create, clone, and view snapshot information. These can be filtered by subvolume groups and subvolumes. Snapshot schedules Enable, create, edit, and delete snapshot schedules. Clients View and evict Ceph File System client information. Performance Details View the performance of the file systems through the embedded Grafana Dashboard. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. MDS service is deployed on at least one of the hosts. Ceph File System is installed. Procedure From the dashboard navigation, go to File->File Systems . To view more information about an individual file system, expand the file system row. Additional Resources For more information, see the File System Guide . 6.13. Monitoring Ceph object gateway daemons on the dashboard You can use the Red Hat Ceph Storage Dashboard to monitor Ceph object gateway daemons. You can view the details, performance counters, and performance details of the Ceph object gateway daemons. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. At least one Ceph object gateway daemon configured in the storage cluster. Procedure From the dashboard navigation, go to Object->Gateways . View information about individual gateways, from the Gateways List tab. To view more information about an individual gateway, expand the gateway row. If you have configured multiple Ceph Object Gateway daemons, click the Sync Performance tab and view the multi-site performance counters. Additional Resources For more information, see the Red Hat Ceph Storage Ceph object gateway Guide . 6.14. Monitoring Block Device images on the Ceph dashboard You can use the Red Hat Ceph Storage Dashboard to monitor and manage Block Device images. You can view the details, snapshots, configuration details, and performance details of the images. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. A pool with the rbd application enabled is created. An image is created. Procedure From the dashboard navigation, go to Block->Images . Expand the image row to see detailed information. Figure 6.14. Monitoring Block Device images Additional Resources See the Creating images on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more details.
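As a supplementary cross-check that is not part of the dashboard procedures above, the same high-level health, capacity, and OSD topology information can be confirmed from a Cephadm shell on an admin node:
ceph status
ceph df
ceph osd tree
These commands only read cluster state, so they can be run safely alongside the dashboard.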
|
[
"https:// DASHBOARD_URL :3000",
"https://dashboard_url:3000",
"ceph config set client.rgw rgw_enable_ops_log true"
] |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/dashboard_guide/monitor-the-cluster-on-the-ceph-dashboard
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_ibm_cloud/making-open-source-more-inclusive
|
Chapter 5. Administering bare metal nodes
|
Chapter 5. Administering bare metal nodes After you deploy an overcloud that includes the Bare Metal Provisioning service (ironic), you can provision a physical machine on an enrolled bare metal node and launch bare metal instances in your overcloud. Prerequisites A successful overcloud deployment that includes the Bare Metal Provisioning service. For more information, see Deploying an overcloud with the Bare Metal Provisioning service . 5.1. Launching bare metal instances You can launch instances either from the command line or from the OpenStack dashboard. Prerequisites A successful overcloud deployment that includes the Bare Metal Provisioning service. For more information, see Deploying an overcloud with the Bare Metal Provisioning service . 5.1.1. Launching instances with the command line interface You can create a bare-metal instance by using the OpenStack Client CLI. Prerequisites A successful overcloud deployment that includes the Bare Metal Provisioning service. For more information, see Deploying an overcloud with the Bare Metal Provisioning service . Procedure Configure the shell to access the Identity service (keystone) as the administrative user: Create your bare-metal instance: Replace <network_uuid> with the unique identifier for the network that you created to use with the Bare Metal Provisioning service. Replace <image_uuid> with the unique identifier for the image that has the software profile that your instance requires. Check the status of the instance: 5.1.2. Launching instances with the dashboard Use the dashboard graphical user interface to deploy a bare metal instance. Prerequisites A successful overcloud deployment that includes the Bare Metal Provisioning service. For more information, see Deploying an overcloud with the Bare Metal Provisioning service . Procedure Log in to the dashboard at http[s]:// DASHBOARD_IP /dashboard . Click Project > Compute > Instances Click Launch Instance . In the Details tab, specify the Instance Name and select 1 for Count . In the Source tab, select an Image from Select Boot Source , then click the + (plus) symbol to select an operating system disk image. The image that you choose moves to Allocated . In the Flavor tab, select baremetal . In the Networks tab, use the + (plus) and - (minus) buttons to move required networks from Available to Allocated . Ensure that the shared network that you created for the Bare Metal Provisioning service is selected here. If you want to assign the instance to a security group, in the Security Groups tab, use the arrow to move the group to Allocated . Click Launch Instance . 5.2. Configuring port groups in the Bare Metal Provisioning service Note Port group functionality for bare metal nodes is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should be used only for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . Port groups (bonds) provide a method to aggregate multiple network interfaces into a single 'bonded' interface. Port group configuration always takes precedence over an individual port configuration. If a port group has a physical network, then all the ports in that port group must have the same physical network. The Bare Metal Provisioning service uses configdrive to support configuration of port groups in the instances. Note Bare Metal Provisioning service API version 1.26 supports port group configuration. 
.Prerequisites A successful overcloud deployment that includes the Bare Metal Provisioning service. For more information, see Deploying an overcloud with the Bare Metal Provisioning service . 5.2.1. Configuring port groups on switches manually To configure port groups in a bare metal deployment, you must configure the port groups on the switches manually. You must ensure that the mode and properties on the switch correspond to the mode and properties on the bare metal side as the naming can vary on the switch. Note You cannot use port groups for provisioning and cleaning if you need to boot a deployment using iPXE. With port group fallback, all the ports in a port group can fallback to individual switch ports when a connection fails. Based on whether a switch supports port group fallback or not, you can use the --support-standalone-ports and --unsupport-standalone-ports options. Prerequisites A successful overcloud deployment that includes the Bare Metal Provisioning service. For more information, see Deploying an overcloud with the Bare Metal Provisioning service . 5.2.2. Configuring port groups in the Bare Metal Provisioning service Create a port group to aggregate multiple network interfaces into a single bonded interface . Prerequisites A successful overcloud deployment that includes the Bare Metal Provisioning service. For more information, see Deploying an overcloud with the Bare Metal Provisioning service . Procedure Create a port group by specifying the node to which it belongs, its name, address, mode, properties and whether it supports fallback to standalone ports. You can also use the openstack baremetal port group set command to update a port group. If you do not specify an address, the deployed instance port group address is the same as the OpenStack Networking port. If you do not attach the neutron port, the port group configuration fails. During interface attachment, port groups have a higher priority than the ports, so they are used first. Currently, it is not possible to specify whether a port group or a port is desired in an interface attachment request. Port groups that do not have any ports are ignored. Note You must configure port groups manually in standalone mode either in the image or by generating the configdrive and adding it to the node's instance_info . Ensure that you have cloud-init version 0.7.7 or later for the port group configuration to work. Associate a port with a port group: During port creation: During port update: Boot an instance by providing an image that has cloud-init or supports bonding. To check if the port group is configured properly, run the following command: Here, X is a number that cloud-init generates automatically for each configured port group, starting with a 0 and incremented by one for each configured port group. 5.3. Determining the host to IP address mapping Use the following commands to determine which IP addresses are assigned to which host and bare metal node. With these commands, you can view the host to IP mapping from the undercloud without accessing the hosts directly. Prerequisites A successful overcloud deployment that includes the Bare Metal Provisioning service. For more information, see Deploying an overcloud with the Bare Metal Provisioning service . Procedure Run the following command to display the IP address for each host: To filter a particular host, run the following command: To map the hosts to bare metal nodes, run the following command: 5.4. 
Attaching and detaching virtual network interfaces The Bare Metal Provisioning service has an API that you can use to manage the mapping between virtual network interfaces. For example, the interfaces in the OpenStack Networking service and your physical interfaces (NICs). You can configure these interfaces for each Bare Metal Provisioning node to set the virtual network interface (VIF) to physical network interface (PIF) mapping logic. To configure the interfaces, use the openstack baremetal node vif* commands. Prerequisites A successful overcloud deployment that includes the Bare Metal Provisioning service. For more information, see Deploying an overcloud with the Bare Metal Provisioning service . Procedure List the VIF IDs currently connected to the bare metal node: After the VIF is attached, the Bare Metal Provisioning service updates the virtual port in the OpenStack Networking service with the actual MAC address of the physical port. Check this port address: Create a new port on the network where you created the baremetal-0 node: Remove a port from the instance: Check that the IP address no longer exists on the list: Check if there are VIFs attached to the node: Add the newly created port: Verify that the new IP address shows the new port: Check if the VIF ID is the UUID of the new port: Check if the OpenStack Networking port MAC address is updated and matches one of the Bare Metal Provisioning service ports: Reboot the bare metal node so that it recognizes the new IP address: After you detach or attach interfaces, the bare metal OS removes, adds, or modifies the network interfaces that have changed. When you replace a port, a DHCP request obtains the new IP address, but this might take some time because the old DHCP lease is still valid. To initiate these changes immediately, reboot the bare metal host. 5.5. Configuring notifications for the Bare Metal Provisioning service You can configure the Bare Metal Provisioning service (ironic) to display notifications for different events that occur within the service. External services can use these notifications for billing purposes, monitoring a data store, and other purposes. To enable notifications for the Bare Metal Provisioning service, you must set the following options in your ironic.conf configuration file. Prerequisites A successful overcloud deployment that includes the Bare Metal Provisioning service. For more information, see Deploying an overcloud with the Bare Metal Provisioning service . Procedure The notification_level option in the [DEFAULT] section determines the minimum priority level for which notifications are sent. You can set the values for this option to debug , info , warning , error , or critical . If the option is set to warning , all notifications with priority level warning , error , or critical are sent, but not notifications with priority level debug or info . If this option is not set, no notifications are sent. The priority level of each available notification is documented below. The transport_url option in the [oslo_messaging_notifications] section determines the message bus used when sending notifications. If this is not set, the default transport used for RPC is used. All notifications are emitted on the ironic_versioned_notifications topic in the message bus. Generally, each type of message that traverses the message bus is associated with a topic that describes the contents of the message. 5.6. 
Configuring automatic power fault recovery The Bare Metal Provisioning service (ironic) has a string field fault that records power, cleaning, and rescue abort failures for nodes. Table 5.1. Ironic node faults Fault Description power failure The node is in maintenance mode due to power sync failures that exceed the maximum number of retries. clean failure The node is in maintenance mode due to the failure of a cleaning operation. rescue abort failure The node is in maintenance mode due to the failure of a cleaning operation during rescue abort. none There is no fault present. Conductor checks the value of this field periodically. If the conductor detects a power failure state and can successfully restore power to the node, the node is removed from maintenance mode and restored to operation. Note If the operator places a node in maintenance mode manually, the conductor does not automatically remove the node from maintenance mode. The default interval is 300 seconds, however, you can configure this interval with director using hieradata. Prerequisites A successful overcloud deployment that includes the Bare Metal Provisioning service. For more information, see Deploying an overcloud with the Bare Metal Provisioning service . Procedure Include the following hieradata to configure a custom recovery interval: To disable automatic power fault recovery, set the value to 0 . 5.7. Introspecting overcloud nodes Perform introspection of overcloud nodes to identify and store the specification of your nodes in director. Procedure Log in to the undercloud host as the stack user. Source the overcloudrc credentials file: Run the introspection command: Replace <NODENAME> with the name or UUID of the node that you want to inspect. Check the introspection status: Replace <NODENAME> with the name or UUID of the node. steps Extract introspection data: Replace <NODENAME> with the name or UUID of the node.
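As a quick command-line complement to the sections above, you can review a node's provisioning state, maintenance flag, and recorded fault without changing anything. The node name is a placeholder, and the fault column is available only if your Bare Metal Provisioning API version exposes it:
openstack baremetal node list
openstack baremetal node show <NODENAME> -f value -c provision_state -c maintenance -c fault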
|
[
"source ~/overcloudrc",
"openstack server create --nic net-id=<network_uuid> --flavor baremetal --image <image_uuid> myBareMetalInstance",
"openstack server list --name myBareMetalInstance",
"openstack baremetal port group create --node NODE_UUID --name NAME --address MAC_ADDRESS --mode MODE --property miimon=100 --property xmit_hash_policy=\"layer2+3\" --support-standalone-ports",
"openstack baremetal port create --node NODE_UUID --address MAC_ADDRESS --port-group test",
"openstack baremetal port set PORT_UUID --port-group PORT_GROUP_UUID",
"cat /proc/net/bonding/bondX",
"(undercloud) [stack@host01 ~]USD openstack stack output show overcloud HostsEntry --max-width 80 +--------------+---------------------------------------------------------------+ | Field | Value | +--------------+---------------------------------------------------------------+ | description | The content that should be appended to your /etc/hosts if you | | | want to get | | | hostname-based access to the deployed nodes (useful for | | | testing without | | | setting up a DNS). | | | | | output_key | HostsEntry | | output_value | 172.17.0.10 overcloud-controller-0.localdomain overcloud- | | | controller-0 | | | 10.8.145.18 overcloud-controller-0.external.localdomain | | | overcloud-controller-0.external | | | 172.17.0.10 overcloud-controller-0.internalapi.localdomain | | | overcloud-controller-0.internalapi | | | 172.18.0.15 overcloud-controller-0.storage.localdomain | | | overcloud-controller-0.storage | | | 172.21.2.12 overcloud-controller-0.storagemgmt.localdomain | | | overcloud-controller-0.storagemgmt | | | 172.16.0.15 overcloud-controller-0.tenant.localdomain | | | overcloud-controller-0.tenant | | | 10.8.146.13 overcloud-controller-0.management.localdomain | | | overcloud-controller-0.management | | | 10.8.146.13 overcloud-controller-0.ctlplane.localdomain | | | overcloud-controller-0.ctlplane | | | | | | 172.17.0.21 overcloud-compute-0.localdomain overcloud- | | | compute-0 | | | 10.8.146.12 overcloud-compute-0.external.localdomain | | | overcloud-compute-0.external | | | 172.17.0.21 overcloud-compute-0.internalapi.localdomain | | | overcloud-compute-0.internalapi | | | 172.18.0.20 overcloud-compute-0.storage.localdomain | | | overcloud-compute-0.storage | | | 10.8.146.12 overcloud-compute-0.storagemgmt.localdomain | | | overcloud-compute-0.storagemgmt | | | 172.16.0.16 overcloud-compute-0.tenant.localdomain overcloud- | | | compute-0.tenant | | | 10.8.146.12 overcloud-compute-0.management.localdomain | | | overcloud-compute-0.management | | | 10.8.146.12 overcloud-compute-0.ctlplane.localdomain | | | overcloud-compute-0.ctlplane | | | | | | | | | | | | | | | 10.8.145.16 overcloud.localdomain | | | 10.8.146.7 overcloud.ctlplane.localdomain | | | 172.17.0.19 overcloud.internalapi.localdomain | | | 172.18.0.19 overcloud.storage.localdomain | | | 172.21.2.16 overcloud.storagemgmt.localdomain | +--------------+---------------------------------------------------------------+",
"(undercloud) [stack@host01 ~]USD openstack stack output show overcloud HostsEntry -c output_value -f value | grep overcloud-controller-0 172.17.0.12 overcloud-controller-0.localdomain overcloud-controller-0 10.8.145.18 overcloud-controller-0.external.localdomain overcloud-controller-0.external 172.17.0.12 overcloud-controller-0.internalapi.localdomain overcloud-controller-0.internalapi 172.18.0.12 overcloud-controller-0.storage.localdomain overcloud-controller-0.storage 172.21.2.13 overcloud-controller-0.storagemgmt.localdomain overcloud-controller-0.storagemgmt 172.16.0.19 overcloud-controller-0.tenant.localdomain overcloud-controller-0.tenant 10.8.146.13 overcloud-controller-0.management.localdomain overcloud-controller-0.management 10.8.146.13 overcloud-controller-0.ctlplane.localdomain overcloud-controller-0.ctlplane",
"(undercloud) [stack@host01 ~]USD openstack baremetal node list --fields uuid name instance_info -f json [ { \"UUID\": \"c0d2568e-1825-4d34-96ec-f08bbf0ba7ae\", \"Instance Info\": { \"root_gb\": \"40\", \"display_name\": \"overcloud-compute-0\", \"image_source\": \"24a33990-e65a-4235-9620-9243bcff67a2\", \"capabilities\": \"{\\\"boot_option\\\": \\\"local\\\"}\", \"memory_mb\": \"4096\", \"vcpus\": \"1\", \"local_gb\": \"557\", \"configdrive\": \"******\", \"swap_mb\": \"0\", \"nova_host_id\": \"host01.lab.local\" }, \"Name\": \"host2\" }, { \"UUID\": \"8c3faec8-bc05-401c-8956-99c40cdea97d\", \"Instance Info\": { \"root_gb\": \"40\", \"display_name\": \"overcloud-controller-0\", \"image_source\": \"24a33990-e65a-4235-9620-9243bcff67a2\", \"capabilities\": \"{\\\"boot_option\\\": \\\"local\\\"}\", \"memory_mb\": \"4096\", \"vcpus\": \"1\", \"local_gb\": \"557\", \"configdrive\": \"******\", \"swap_mb\": \"0\", \"nova_host_id\": \"host01.lab.local\" }, \"Name\": \"host3\" } ]",
"openstack baremetal node vif list baremetal-0 +--------------------------------------+ | ID | +--------------------------------------+ | 4475bc5a-6f6e-466d-bcb6-6c2dce0fba16 | +--------------------------------------+",
"openstack port show 4475bc5a-6f6e-466d-bcb6-6c2dce0fba16 -c mac_address -c fixed_ips +-------------+-----------------------------------------------------------------------------+ | Field | Value | +-------------+-----------------------------------------------------------------------------+ | fixed_ips | ip_address='192.168.24.9', subnet_id='1d11c677-5946-4733-87c3-23a9e06077aa' | | mac_address | 00:2d:28:2f:8d:95 | +-------------+-----------------------------------------------------------------------------+",
"openstack port create --network baremetal --fixed-ip ip-address=192.168.24.24 baremetal-0-extra",
"openstack server remove port overcloud-baremetal-0 4475bc5a-6f6e-466d-bcb6-6c2dce0fba16",
"openstack server list",
"openstack baremetal node vif list baremetal-0 openstack port list",
"openstack server add port overcloud-baremetal-0 baremetal-0-extra",
"openstack server list +--------------------------------------+-------------------------+--------+------------------------+----------------+---------+ | ID | Name | Status | Networks | Image | Flavor | +--------------------------------------+-------------------------+--------+------------------------+----------------+---------+ | 53095a64-1646-4dd1-bbf3-b51cbcc38789 | overcloud-controller-2 | ACTIVE | ctlplane=192.168.24.7 | overcloud-full | control | | 3a1bc89c-5d0d-44c7-a569-f2a3b4c73d65 | overcloud-controller-0 | ACTIVE | ctlplane=192.168.24.8 | overcloud-full | control | | 6b01531a-f55d-40e9-b3a2-6d02be0b915b | overcloud-controller-1 | ACTIVE | ctlplane=192.168.24.16 | overcloud-full | control | | c61cc52b-cc48-4903-a971-073c60f53091 | overcloud-novacompute-0overcloud-baremetal-0 | ACTIVE | ctlplane=192.168.24.24 | overcloud-full | compute | +--------------------------------------+-------------------------+--------+------------------------+----------------+---------+",
"openstack baremetal node vif list baremetal-0 +--------------------------------------+ | ID | +--------------------------------------+ | 6181c089-7e33-4f1c-b8fe-2523ff431ffc | +--------------------------------------+",
"openstack port show 6181c089-7e33-4f1c-b8fe-2523ff431ffc -c mac_address -c fixed_ips +-------------+------------------------------------------------------------------------------+ | Field | Value | +-------------+------------------------------------------------------------------------------+ | fixed_ips | ip_address='192.168.24.24', subnet_id='1d11c677-5946-4733-87c3-23a9e06077aa' | | mac_address | 00:2d:28:2f:8d:95 | +-------------+------------------------------------------------------------------------------+",
"openstack server reboot overcloud-baremetal-0",
"ironic::conductor::power_failure_recovery_interval",
"source ~/overcloudrc",
"openstack baremetal introspection start [--wait] <NODENAME>",
"openstack baremetal introspection status <NODENAME>",
"openstack baremetal introspection data save <NODE-UUID>"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/bare_metal_provisioning/administering-bare-metal-nodes
|
5.6.4. Use TCP Wrappers To Control Access
|
5.6.4. Use TCP Wrappers To Control Access Use TCP wrappers to control access to either FTP daemon as outlined in Section 5.1.1, "Enhancing Security With TCP Wrappers" .
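For example, a minimal policy that admits FTP clients from one trusted subnet and rejects all other hosts could use rules like the following. The vsftpd daemon name applies to vsftpd; if you are protecting the other FTP daemon, substitute the daemon name under which it is invoked, and adjust the subnet to match your environment:
/etc/hosts.allow: vsftpd : 192.168.1.
/etc/hosts.deny: vsftpd : ALL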
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s2-server-ftp-tcpw
|
4.3. Editing an Image Builder blueprint in the web console interface
|
4.3. Editing an Image Builder blueprint in the web console interface To change the specifications for a custom system image, edit the corresponding blueprint. Prerequisites You have opened the Image Builder interface of the RHEL 7 web console in a browser. A blueprint exists. Procedure 1. Locate the blueprint that you want to edit by entering its name or a part of it into the search box at top left, and press Enter . The search is added to the list of filters under the text entry field, and the list of blueprints below is reduced to those that match the search. If the list of blueprints is too long, add further search terms in the same way. 2. On the right side of the blueprint, press the Edit Blueprint button that belongs to the blueprint. The view changes to the blueprint editing screen. 3. Remove unwanted components by clicking the ⫶ button at the far right of the component's entry in the right pane, and select Remove in the menu. 4. Change the version of existing components: i. In the Blueprint Components search field, enter the component name or a part of it into the field under the heading Blueprint Components and press Enter . The search is added to the list of filters under the text entry field, and the list of components below is reduced to those that match the search. If the list of components is too long, add further search terms in the same way. ii. Click the ⫶ button at the far right of the component entry, and select View in the menu. A component details screen opens in the right pane. iii. Select the desired version in the Version Release drop-down menu and click Apply Change in the top right. The change is saved and the right pane returns to listing the blueprint components. 5. Add new components: i. On the left, enter the component name or a part of it into the field under the heading Available Components and press Enter. The search is added to the list of filters under the text entry field, and the list of components below is reduced to those that match the search. If the list of components is too long, add further search terms in the same way. ii. The list of components is paged. To move to other result pages, use the arrows and entry field above the component list. iii. Click on the name of the component you intend to use to display its details. The right pane fills with details of the component, such as its version and dependencies. iv. Select the version you want to use in the Component Options box, with the Version Release drop-down menu. v. Click Add in the top right. vi. If you added a component by mistake, remove it by clicking the ⫶ button at the far right of its entry in the right pane, and select Remove in the menu. Note If you do not intend to select a version for some components, you can skip the component details screen and version selection by clicking the + buttons on the right side of the component list. 6. Commit a new version of the blueprint with your changes: i. Click the Commit button in the top right. A pop-up window with a summary of your changes appears. ii. Review your changes and confirm them by clicking Commit. A small pop-up on the right informs you of the saving progress and then the result. A new version of the blueprint is created. iii. In the top left, click Back to Blueprints to exit the editing screen. The Image Builder view opens, listing existing blueprints.
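If you prefer to make the same kind of change from the command line, a rough equivalent is to export the blueprint, edit the resulting TOML file, and push it back as a new version. This assumes the composer-cli tool is installed and the Image Builder service is running; the web console procedure above remains the documented workflow in this chapter:
composer-cli blueprints save BLUEPRINT-NAME
composer-cli blueprints push BLUEPRINT-NAME.toml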
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/image_builder_guide/sect-documentation-image_builder-chapter4-section_3
|
4.9. Configuring Global Cluster Resources
|
4.9. Configuring Global Cluster Resources You can configure global resources that can be used by any service running in the cluster, and you can configure resources that are available only to a specific service. To add a global cluster resource, follow the steps in this section. You can add a resource that is local to a particular service when you configure the service, as described in Section 4.10, "Adding a Cluster Service to the Cluster" . From the cluster-specific page, you can add resources to that cluster by clicking on Resources along the top of the cluster display. This displays the resources that have been configured for that cluster. Click Add . This displays the Add Resource to Cluster drop-down menu. Click the drop-down box under Add Resource to Cluster and select the type of resource to configure. Enter the resource parameters for the resource you are adding. Appendix B, HA Resource Parameters describes resource parameters. Click Submit . Clicking Submit returns you to the Resources page, which displays the added resource (and other resources). To modify an existing resource, perform the following steps. From the luci Resources page, click on the name of the resource to modify. This displays the parameters for that resource. Edit the resource parameters. Click Apply . To delete an existing resource, perform the following steps. From the luci Resources page, click the check box for any resources to delete. Click Delete . As of the Red Hat Enterprise Linux 6.6 release, you can sort the columns in a resource list by clicking on the header for the sort category. Clicking on the Name/IP header once sorts the resources alphabetically, according to resource name. Clicking on the Name/IP header a second time sorts the resources in reverse alphabetic order, according to resource name. Clicking on the Type header once sorts the resources alphabetically, according to resource type. Clicking on the Type header a second time sorts the resources in reverse alphabetic order, according to resource type. Clicking on the In Use header once sorts the resources so that they are grouped according to whether they are in use or not.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-config-add-resource-conga-CA
|
9.3.6. Tuning Domain Process CPU Pinning with virsh
|
9.3.6. Tuning Domain Process CPU Pinning with virsh Important These are example commands only. You will need to substitute values according to your environment. The emulatorpin option applies CPU affinity settings to threads that are associated with each domain process. For complete pinning, you must use both virsh vcpupin (as shown previously) and virsh emulatorpin for each guest. For example:
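To illustrate complete pinning for a two-vCPU guest, the following sketch (example values only; substitute your own domain name and host CPU numbers) pins each vCPU and then the emulator threads:
% virsh vcpupin rhel6u4 0 0
% virsh vcpupin rhel6u4 1 1
% virsh emulatorpin rhel6u4 3-4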
|
[
"% virsh emulatorpin rhel6u4 3-4"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_tuning_and_optimization_guide/sect-virtualization_tuning_optimization_guide-numa-numa_and_libvirt-domain_cpu_pinning_with_virsh
|
Providing feedback on Red Hat build of OpenJDK documentation
|
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_eclipse_temurin_17.0.6/proc-providing-feedback-on-redhat-documentation
|
3.16. Example: Check System Events
|
3.16. Example: Check System Events The start action for the vm1 creates several entries in the events collection. This example lists the events collection and identifies events specific to the API starting a virtual machine. Example 3.19. List the events collection Request: cURL command: Result: The following events occur: id="101" - The API authenticates with the admin user's user name and password. id="102" - The API, acting as the admin user, starts vm1 on the hypervisor host. id="103" - The API logs out of the admin user account.
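Each event in the result can also be retrieved on its own through the href shown in the output, which is useful when you only need to re-check a single entry. The event ID below is taken from the example result above: Request: GET /ovirt-engine/api/events/102 HTTP/1.1 Accept: application/xml cURL command: curl -X GET -H "Accept: application/xml" -u [USER:PASS] --cacert [CERT] https:// [RHEVM Host] :443/ovirt-engine/api/events/102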
|
[
"GET /ovirt-engine/api/events HTTP/1.1 Accept: application/xml",
"curl -X GET -H \"Accept: application/xml\" -u [USER:PASS] --cacert [CERT] https:// [RHEVM Host] :443/ovirt-engine/api/events",
"<events> <event id=\"103\" href=\"/ovirt-engine/api/events/103\"> <description>User admin logged out.</description> <code>31</code> <severity>normal</severity> <time>2011-06-29T17:42:41.544+10:00</time> <user id=\"80b71bae-98a1-11e0-8f20-525400866c73\" href=\"/ovirt-engine/api/users/80b71bae-98a1-11e0-8f20-525400866c73\"/> </event> <event id=\"102\" href=\"/ovirt-engine/api/events/102\"> <description>vm1 was started by admin (Host: hypervisor).</description> <code>153</code> <severity>normal</severity> <time>2011-06-29T17:42:41.499+10:00</time> <user id=\"80b71bae-98a1-11e0-8f20-525400866c73\" href=\"/ovirt-engine/api/users/80b71bae-98a1-11e0-8f20-525400866c73\"/> <vm id=\"6efc0cfa-8495-4a96-93e5-ee490328cf48\" href=\"/ovirt-engine/api/vms/6efc0cfa-8495-4a96-93e5-ee490328cf48\"/> <host id=\"0656f432-923a-11e0-ad20-5254004ac988\" href=\"/ovirt-engine/api/hosts/0656f432-923a-11e0-ad20-5254004ac988\"/> </event> <event id=\"101\" href=\"/ovirt-engine/api/events/101\"> <description>User admin logged in.</description> <code>30</code> <severity>normal</severity> <time>2011-06-29T17:42:40.505+10:00</time> <user id=\"80b71bae-98a1-11e0-8f20-525400866c73\" href=\"/ovirt-engine/api/users/80b71bae-98a1-11e0-8f20-525400866c73\"/> </event> </events>"
] |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/example_check_system_events
|
Chapter 3. About Kafka
|
Chapter 3. About Kafka Apache Kafka is an open-source distributed publish-subscribe messaging system for fault-tolerant real-time data feeds. For more information about Apache Kafka, see the Apache Kafka documentation . 3.1. How Kafka operates as a message broker To maximise your experience of using Streams for Apache Kafka, you need to understand how Kafka operates as a message broker. A Kafka cluster comprises multiple nodes. Nodes operating as brokers contain topics that receive and store data. Topics are split by partitions, where the data is written. Partitions are replicated across brokers for fault tolerance. Kafka brokers and topics Broker A broker orchestrates the storage and passing of messages. Topic A topic provides a destination for the storage of data. Each topic is split into one or more partitions. Cluster A group of broker instances. Partition The number of topic partitions is defined by a topic partition count . Partition leader A partition leader handles all producer requests for a topic. Partition follower A partition follower replicates the partition data of a partition leader, optionally handling consumer requests. Topics use a replication factor to configure the number of replicas of each partition within the cluster. A topic comprises at least one partition. An in-sync replica has the same number of messages as the leader. Configuration defines how many replicas must be in-sync to be able to produce messages, ensuring that a message is committed only after it has been successfully copied to the replica partition. In this way, if the leader fails, the message is not lost. In the Kafka brokers and topics diagram, we can see that each numbered partition has a leader and two followers in replicated topics. 3.2. Producers and consumers Producers and consumers send and receive messages (publish and subscribe) through brokers. Messages comprise an optional key and a value that contains the message data, plus headers and related metadata. The key is used to identify the subject of the message, or a property of the message. Messages are delivered in batches, and batches and records contain headers and metadata that provide details that are useful for filtering and routing by clients, such as the timestamp and offset position for the record. Producers and consumers Producer A producer sends messages to a broker topic to be written to the end offset of a partition. Messages are written to partitions by a producer on a round robin basis, or to a specific partition based on the message key. Consumer A consumer subscribes to a topic and reads messages according to topic, partition, and offset. Consumer group Consumer groups are used to share a typically large data stream generated by multiple producers from a given topic. Consumers are grouped using a group.id , allowing messages to be spread across the members. Consumers within a group do not read data from the same partition, but can receive data from one or more partitions. Offsets Offsets describe the position of messages within a partition. Each message in a given partition has a unique offset, which helps identify the position of a consumer within the partition to track the number of records that have been consumed. Committed offsets are written to an offset commit log. A __consumer_offsets topic stores information on committed offsets, including the position of the last and next offset, according to consumer group. Producing and consuming data
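To see these concepts in practice, the following sketch uses the command line tools that ship with Apache Kafka. It assumes a broker is reachable at localhost:9092, and the topic and group names are arbitrary examples; on OpenShift you would typically run the tools from inside a Kafka pod:
bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic my-topic --partitions 3 --replication-factor 3
bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic my-topic
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-topic --group my-group --from-beginning
Creating the topic with three partitions and a replication factor of three mirrors the layout in the Kafka brokers and topics diagram, and the producer and consumer commands demonstrate publishing to and subscribing from that topic as part of a consumer group.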
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_on_openshift_overview/kafka-concepts_str
|
Appendix C. Journaler configuration reference
|
Appendix C. Journaler configuration reference Reference of the configuration options that can be used for the journaler. journaler_write_head_interval Description How frequently to update the journal head object. Type Integer Required No Default 15 journaler_prefetch_periods Description How many stripe periods to read ahead on journal replay. Type Integer Required No Default 10 journaler_prezero_periods Description How many stripe periods to zero ahead of the write position. Type Integer Required No Default 10 journaler_batch_interval Description Maximum additional latency, in seconds, to incur artificially. Type Double Required No Default .001 journaler_batch_max Description Maximum number of bytes for which flushing will be delayed. Type 64-bit Unsigned Integer Required No Default 0
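As a hedged sketch only: on releases where these options are exposed through the centralized configuration database, a value might be changed and read back with the generic ceph config commands shown below. The target section (mds) and the example value are assumptions; verify the option name and section against your Ceph release before applying it.
# ceph config set mds journaler_prefetch_periods 20
# ceph config get mds journaler_prefetch_periods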
| null |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/file_system_guide/journaler-configuration-reference_fs
|
Chapter 8. HostFirmwareSettings [metal3.io/v1alpha1]
|
Chapter 8. HostFirmwareSettings [metal3.io/v1alpha1] Description HostFirmwareSettings is the Schema for the hostfirmwaresettings API. Type object 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object HostFirmwareSettingsSpec defines the desired state of HostFirmwareSettings. status object HostFirmwareSettingsStatus defines the observed state of HostFirmwareSettings. 8.1.1. .spec Description HostFirmwareSettingsSpec defines the desired state of HostFirmwareSettings. Type object Required settings Property Type Description settings integer-or-string Settings are the desired firmware settings stored as name/value pairs. 8.1.2. .status Description HostFirmwareSettingsStatus defines the observed state of HostFirmwareSettings. Type object Required settings Property Type Description conditions array Track whether settings stored in the spec are valid based on the schema conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } lastUpdated string Time that the status was last updated schema object FirmwareSchema is a reference to the Schema used to describe each FirmwareSetting. By default, this will be a Schema in the same Namespace as the settings but it can be overwritten in the Spec settings object (string) Settings are the firmware settings stored as name/value pairs 8.1.3. .status.conditions Description Track whether settings stored in the spec are valid based on the schema Type array 8.1.4. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. 
// Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 8.1.5. .status.schema Description FirmwareSchema is a reference to the Schema used to describe each FirmwareSetting. By default, this will be a Schema in the same Namespace as the settings but it can be overwritten in the Spec Type object Required name namespace Property Type Description name string name is the reference to the schema. namespace string namespace is the namespace of the where the schema is stored. 8.2. API endpoints The following API endpoints are available: /apis/metal3.io/v1alpha1/hostfirmwaresettings GET : list objects of kind HostFirmwareSettings /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwaresettings DELETE : delete collection of HostFirmwareSettings GET : list objects of kind HostFirmwareSettings POST : create HostFirmwareSettings /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwaresettings/{name} DELETE : delete HostFirmwareSettings GET : read the specified HostFirmwareSettings PATCH : partially update the specified HostFirmwareSettings PUT : replace the specified HostFirmwareSettings /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwaresettings/{name}/status GET : read status of the specified HostFirmwareSettings PATCH : partially update status of the specified HostFirmwareSettings PUT : replace status of the specified HostFirmwareSettings 8.2.1. /apis/metal3.io/v1alpha1/hostfirmwaresettings HTTP method GET Description list objects of kind HostFirmwareSettings Table 8.1. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettingsList schema 401 - Unauthorized Empty 8.2.2. /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwaresettings HTTP method DELETE Description delete collection of HostFirmwareSettings Table 8.2. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind HostFirmwareSettings Table 8.3. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettingsList schema 401 - Unauthorized Empty HTTP method POST Description create HostFirmwareSettings Table 8.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.5. Body parameters Parameter Type Description body HostFirmwareSettings schema Table 8.6. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 201 - Created HostFirmwareSettings schema 202 - Accepted HostFirmwareSettings schema 401 - Unauthorized Empty 8.2.3. /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwaresettings/{name} Table 8.7. Global path parameters Parameter Type Description name string name of the HostFirmwareSettings HTTP method DELETE Description delete HostFirmwareSettings Table 8.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 8.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified HostFirmwareSettings Table 8.10. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified HostFirmwareSettings Table 8.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.12. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified HostFirmwareSettings Table 8.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.14. Body parameters Parameter Type Description body HostFirmwareSettings schema Table 8.15. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 201 - Created HostFirmwareSettings schema 401 - Unauthorized Empty 8.2.4. /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwaresettings/{name}/status Table 8.16. Global path parameters Parameter Type Description name string name of the HostFirmwareSettings HTTP method GET Description read status of the specified HostFirmwareSettings Table 8.17. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified HostFirmwareSettings Table 8.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.19. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified HostFirmwareSettings Table 8.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.21. Body parameters Parameter Type Description body HostFirmwareSettings schema Table 8.22. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareSettings schema 201 - Created HostFirmwareSettings schema 401 - Unauthorized Empty
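As a sketch of how these endpoints are commonly exercised with the oc client instead of raw HTTP, the commands below read and patch a HostFirmwareSettings resource. The resource name (worker-0), the namespace (openshift-machine-api), and the ProcTurboMode setting are illustrative assumptions; use the names and settings that your hardware schema actually exposes.
$ oc get hostfirmwaresettings -n openshift-machine-api
$ oc get hostfirmwaresettings worker-0 -n openshift-machine-api -o yaml
$ oc patch hostfirmwaresettings worker-0 -n openshift-machine-api --type merge -p '{"spec": {"settings": {"ProcTurboMode": "Disabled"}}}'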
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/provisioning_apis/hostfirmwaresettings-metal3-io-v1alpha1
|
7.4. Displaying Constraints
|
7.4. Displaying Constraints There are several commands that you can use to display constraints that have been configured. The following command lists all current location, order, and colocation constraints. The following command lists all current location constraints. If resources is specified, location constraints are displayed per resource. This is the default behavior. If nodes is specified, location constraints are displayed per node. If specific resources or nodes are specified, then only information about those resources or nodes is displayed. The following command lists all current ordering constraints. If the --full option is specified, the internal constraint IDs are also displayed. The following command lists all current colocation constraints. If the --full option is specified, the internal constraint IDs are also displayed. The following command lists the constraints that reference specific resources.
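For example, to display every constraint that references one resource, and to display its location constraints with internal IDs, you might run the following commands; VirtualIP is a hypothetical resource name used only for illustration.
# pcs constraint ref VirtualIP
# pcs constraint location show resources VirtualIP --full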
|
[
"pcs constraint list|show",
"pcs constraint location [show resources|nodes [ specific nodes | resources ]] [--full]",
"pcs constraint order show [--full]",
"pcs constraint colocation show [--full]",
"pcs constraint ref resource"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-constraintlist-haar
|
4.2. Deployment Considerations for Replicas
|
4.2. Deployment Considerations for Replicas 4.2.1. Distribution of Server Services in the Topology IdM servers can run a number of services, such as a certificate authority (CA) or DNS. A replica can run the same services as the server it was created from, but it is not necessary. For example, you can install a replica without DNS services, even if the initial server runs DNS. Similarly, you can set up a replica as a DNS server even if the initial server was installed without DNS. Figure 4.2. Replicas with Different Services CA Services on Replicas If you set up a replica without a CA, it will forward all requests for certificate operations to the CA server in your topology. Warning Red Hat strongly recommends to keep the CA services installed on more than one server. For information on installing a replica of the initial server including the CA services, see Section 4.5.4, "Installing a Replica with a CA" . If you install the CA on only one server, you risk losing the CA configuration without a chance of recovery if the CA server fails. See Section B.2.6, "Recovering a Lost CA Server" for details. If you set up a CA on the replica, its configuration must mirror the CA configuration of the initial server. For example, if the server includes an integrated IdM CA as the root CA, the replica must also be installed with an integrated CA as the root CA. See Section 2.3.2, "Determining What CA Configuration to Use" for the supported CA configuration options. 4.2.2. Replica Topology Recommendations Red Hat recommends to follow these guidelines: Configure no more than 60 replicas in a single IdM domain Red Hat guarantees to support environments with 60 replicas or less. Configure at least two , but no more than four replication agreements per each replica Configuring additional replication agreements ensures that information is replicated not just between the initial replica and the master server, but between other replicas as well. If you create replica B from server A and then replica C from server A, replicas B and C are not directly joined, so data from replica B must first be replicated to server A before propagating to replica C. Figure 4.3. Replicas B and C Are Not Joined in a Replication Agreement Setting up an additional replication agreement between replica B and replica C ensures the data is replicated directly, which improves data availability, consistency, failover tolerance, and performance. Figure 4.4. Replicas B and C Are Joined in a Replication Agreement See Chapter 6, Managing Replication Topology for details on managing replication agreements. Configuring more than four replication agreements per replica is unnecessary. A large number of replication agreements per server does not bring significant additional benefits, because one consumer server can only be updated by one master at a time, so the other agreements are meanwhile idle and waiting. Additionally, configuring too many replication agreements can have a negative impact on overall performance. Note The ipa topologysuffix-verify command checks if your topology meets the most important recommendations. Run ipa topologysuffix-verify --help for details. The command requires you to specify the topology suffix. See Section 6.1, "Explaining Replication Agreements, Topology Suffixes, and Topology Segments" for details. Figure 4.5. Topology Example 4.2.2.1. 
Tight Cell Topology One of the most resilient topologies is to create a cell configuration for the servers and replicas with a small number of servers in a cell: Each of the cells is a tight cell , where all servers have replication agreements with each other. Each server has one replication agreement with another server outside the cell. This ensures that every cell is loosely coupled to every other cell in the domain. To accomplish a tight cell topology: Have at least one IdM server in each main office, data center, or locality. Preferably, have two IdM servers. Do not have more than four servers per data center. In small offices, rather than using a replica, use SSSD to cache credentials and an off-site IdM server as the data back end. 4.2.3. The Hidden Replica Mode By default, when you set up a new replica, the installer automatically creates service ( SRV ) resource records in DNS. These records enable clients to auto-discover the replica and its services. A hidden replica is an IdM server that has all services running and available. However, it has no SRV records in DNS, and LDAP server roles are not enabled. Therefore, clients cannot use service discovery to detect these hidden replicas. Note The hidden replica feature is available in Red Hat Enterprise Linux 7.7 and later as a Technology Preview and is, therefore, not supported. Hidden replicas are primarily designed for dedicated services that can otherwise disrupt clients. For example, a full backup of IdM requires shutting down all IdM services on the master or replica. Since no clients use a hidden replica, administrators can temporarily shut down the services on this host without affecting any clients. Other use cases include high-load operations on the IdM API or the LDAP server, such as a mass import or extensive queries. To install a replica as hidden, pass the --hidden-replica parameter to the ipa-replica-install command. For further details about installing a replica, see Section 4.5, "Creating the Replica: Introduction" . Alternatively, you can change the state of an existing replica. For details, see Section 6.5.4, "Demotion and Promotion of Hidden Replicas" .
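A minimal sketch of the commands referenced above, assuming the default domain topology suffix and a host that already meets the replica prerequisites; adjust options such as DNS and CA services to your environment.
# ipa topologysuffix-verify domain
# ipa-replica-install --hidden-replica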
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/replica-considerations
|
Chapter 6. Using the dynamic plugins cache
|
Chapter 6. Using the dynamic plugins cache The dynamic plugins cache in Red Hat Developer Hub (RHDH) enhances the installation process and reduces platform boot time by storing previously installed plugins. If the configuration remains unchanged, this feature prevents the need to re-download plugins on subsequent boots. When you enable the dynamic plugins cache: The system calculates a checksum of each plugin's YAML configuration (excluding pluginConfig ). The checksum is stored in a file named dynamic-plugin-config.hash within the plugin's directory. During boot, if a plugin's package reference matches the installation and the checksum is unchanged, the download is skipped. Plugins that have been disabled since the previous boot are automatically removed. 6.1. Enabling the dynamic plugins cache To enable the dynamic plugins cache in RHDH, the plugins directory dynamic-plugins-root must be a persistent volume. For Helm chart installations, a persistent volume named dynamic-plugins-root is automatically created. For operator-based installations, you must manually create the PersistentVolumeClaim (PVC) as follows: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: dynamic-plugins-root spec: accessModes: - ReadWriteOnce resources: requests: storage: 5Gi --- apiVersion: rhdh.redhat.com/v1alpha3 kind: Backstage metadata: name: developer-hub spec: deployment: patch: spec: template: spec: volumes: - USDpatch: replace name: dynamic-plugins-root persistentVolumeClaim: claimName: dynamic-plugins-root Note Future versions of the RHDH operator are expected to create the PVC automatically. 6.2. Configuring the dynamic plugins cache You can set the following optional dynamic plugins cache parameters: forceDownload : Set to true to force a reinstall of the plugin, bypassing the cache. Default is false . For example, modify your dynamic-plugins.yaml file as follows: plugins: - disabled: false forceDownload: true package: 'oci://quay.io/example-org/example-plugin:v1.0.0!internal-backstage-plugin-example'
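As a hedged example of applying and verifying the PVC from the operator-based instructions above, assuming the manifest is saved as dynamic-plugins-root-pvc.yaml and Developer Hub runs in a namespace named rhdh (both names are assumptions):
$ oc apply -f dynamic-plugins-root-pvc.yaml -n rhdh
$ oc get pvc dynamic-plugins-root -n rhdh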
|
[
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: dynamic-plugins-root spec: accessModes: - ReadWriteOnce resources: requests: storage: 5Gi --- apiVersion: rhdh.redhat.com/v1alpha3 kind: Backstage metadata: name: developer-hub spec: deployment: patch: spec: template: spec: volumes: - USDpatch: replace name: dynamic-plugins-root persistentVolumeClaim: claimName: dynamic-plugins-root",
"plugins: - disabled: false forceDownload: true package: 'oci://quay.io/example-org/example-plugin:v1.0.0!internal-backstage-plugin-example'"
] |
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/configuring/con-dynamic-plugin-cache_running-behind-a-proxy
|
Chapter 3. JMC Agent Plugin
|
Chapter 3. JMC Agent Plugin You can use the JMC Agent Plugin to add the JMC Agent implementation to Cryostat. You can then use the JMC Agent to add custom JFR events into a running target JVM application. This operation does not require you to restart your JVM application. Additional resources See Using JDK Flight Recorder with Red Hat build of Cryostat 3.1. Adding custom events by using the JMC Agent Plugin Cryostat in combination with the JMC Agent can provide you with more information when you need to diagnose issues with your running JVM application. The JMC Agent JAR file must be in the same Red Hat OpenShift container as the target JVM application. Otherwise, Cryostat cannot use the JMC Agent functionality on the application. From the Cryostat web console, you can upload probe templates and then insert these templates into the JVM application. You can remove these template probes at a later stage, if required. A probe template describes a set of objects that Cryostat can process, so that Cryostat can complete a sequence of JMC Agent operations on the JVM application. When you start a target JVM application with the JMC Agent, Cryostat automatically detects if the application is running with the JMC Agent. Important For RHEL, the JMC package is provided by the CodeReady Linux Builder (CRB), also known as Builder , repository. You must enable the CRB repository on RHEL, so that you can install JMC on RHEL. CRB packages are built with the Source Red Hat Package Manager (SRPM) as productized RHEL packages, so CRB packages regularly receive updates. See, Downloading and installing JDK Mission Control (JMC) (Using JDK Flight Recorder with Red Hat build of Cryostat) Prerequisites Downloaded and installed the jmc package. Downloaded the Adoptium Agent JAR file. See adoptium/jmc-build (GitHub). Started your Java application with the --add-opens=java.base/jdk.internal.misc=ALL-UNNAMED flag. For example, ./<your_application> --add-opens=java.base/jdk.internal.misc=ALL-UNNAMED . Started the JMC Agent for your Java application. See Starting a JDK Mission Control (JMC) Agent (Using JDK Flight Recorder with Red Hat build of Cryostat). Procedure From your Cryostat web console, go to the Events menu. If the JMC Agent was successfully added to your Cryostat instance then a Probe Templates pane opens under the Event Templates pane. Figure 3.1. The Probe Templates tab on the Cryostat web console Note An Authentication Required dialog box might open on your web console. If prompted, enter your Username and Password in the Authentication Required dialog box, and click Save to provide your JMX credentials to the target JVM. Use your preferred text editor to create an XML configuration file. Populate the file with JFR event information, such as what events Cryostat must perform on the application. The following example shows a custom probe template XML file that contains JFR event information. When this file is uploaded to Cryostat, Cryostat can add a custom JFR event, called Cryostat Agent Plugin Demo Event, to an application. Cryostat starts the JFR event when the retrieveEventProbes method of the JMC Agent is called. Click the Upload button to add a custom event template to Cryostat. A Create Custom Probe Template opens on your Cryostat web console. Figure 3.2. The Create Custom Probe Template window on the Cryostat web console Tip Click the Clear button if you want to remove the uploaded file from this Template XML field. Click the Browse button to locate your XML file. 
After you upload the file, click Submit . Your custom probe template file opens in the Probe Templates table. Click the overflow menu that is next to your probe template. Click Insert Probes . The probes are displayed in the table under the Probe Templates tab and in the table under the Live Configuration tab. Optional: Go to the Live Configuration tab, where you can view information, such as Name , Class , and so on, for each active probe. Optional: From the Live Configuration tab, you can click Remove All Probes to delete probes that are listed in the table. Verification From the Events menu, click the Event Types tab. Check that the named JFR event from your XML configuration is listed in the table. For the example used in this procedure, Cryostat Agent Plugin Demo Event is displayed in the table. Additional resources See Using JDK Flight Recorder with Red Hat build of Cryostat Revised on 2023-12-12 18:07:31 UTC
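As a hedged sketch of the prerequisites listed above, a target JVM is typically started with the JMC Agent attached through the -javaagent option, optionally pointing at an XML probe configuration; the JAR paths and application name below are placeholders only.
$ java --add-opens=java.base/jdk.internal.misc=ALL-UNNAMED -javaagent:/path/to/agent.jar=/path/to/probe_template.xml -jar your-application.jar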
|
[
"<jfragent> <!-- Global configuration options --> <config> <classprefix>__JFREvent</classprefix> <allowtostring>true</allowtostring> <allowconverter>true</allowconverter> </config> <events> <event id=\"cryostat.demo.jfr.event9\"> <label>Cryostat Agent Plugin Demo Event</label> <description>Event for the agent plugin demo</description> <path>io/cryostat/demo/events</path> <stacktrace>true</stacktrace> <class>io.cryostat.core.agent.AgentJMXHelper</class> <method> <name>retrieveEventProbes</name> <descriptor>()Ljava/lang/String;</descriptor> </method> <location>WRAP</location> </event> </events> </jfragent>"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/configuring_advanced_cryostat_configurations/jmc_agent_plugin_assembly_graphql-api
|
Images
|
Images OpenShift Container Platform 4.18 Creating and managing images and imagestreams in OpenShift Container Platform Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/images/index
|
Part I. Setting Up a Development Workstation
|
Part I. Setting Up a Development Workstation Red Hat Enterprise Linux 7 supports development of custom applications. To support this, the system must be set up with the required tools and utilities. This part lists the most common development use cases and the items to install.
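As a hedged example, a C and C++ development setup is usually bootstrapped by installing the Development Tools group and a few common packages with yum; the chapters in this part list the packages for other use cases.
# yum groupinstall "Development Tools"
# yum install gcc gcc-c++ gdb git make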
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/developer_guide/setting-up
|
Chapter 5. Troubleshooting problems related to SELinux
|
Chapter 5. Troubleshooting problems related to SELinux If you plan to enable SELinux on systems where it has been previously disabled or if you run a service in a non-standard configuration, you might need to troubleshoot situations potentially blocked by SELinux. Note that in most cases, SELinux denials are signs of misconfiguration. 5.1. Identifying SELinux denials Follow only the necessary steps from this procedure; in most cases, you need to perform just step 1. Procedure When your scenario is blocked by SELinux, the /var/log/audit/audit.log file is the first place to check for more information about a denial. To query Audit logs, use the ausearch tool. Because the SELinux decisions, such as allowing or disallowing access, are cached and this cache is known as the Access Vector Cache (AVC), use the AVC and USER_AVC values for the message type parameter, for example: If there are no matches, check if the Audit daemon is running. If it does not, repeat the denied scenario after you start auditd and check the Audit log again. In case auditd is running, but there are no matches in the output of ausearch , check messages provided by the systemd Journal: If SELinux is active and the Audit daemon is not running on your system, then search for certain SELinux messages in the output of the dmesg command: Even after the three checks, it is still possible that you have not found anything. In this case, AVC denials can be silenced because of dontaudit rules. To temporarily disable dontaudit rules, allowing all denials to be logged: After re-running your denied scenario and finding denial messages using the steps, the following command enables dontaudit rules in the policy again: If you apply all four steps, and the problem still remains unidentified, consider if SELinux really blocks your scenario: Switch to permissive mode: Repeat your scenario. If the problem still occurs, something different than SELinux is blocking your scenario. 5.2. Analyzing SELinux denial messages After identifying that SELinux is blocking your scenario, you might need to analyze the root cause before you choose a fix. Prerequisites The policycoreutils-python-utils and setroubleshoot-server packages are installed on your system. Procedure List more details about a logged denial using the sealert command, for example: If the output obtained in the step does not contain clear suggestions: Enable full-path auditing to see full paths to accessed objects and to make additional Linux Audit event fields visible: Clear the setroubleshoot cache: Reproduce the problem. Repeat step 1. After you finish the process, disable full-path auditing: If sealert returns only catchall suggestions or suggests adding a new rule using the audit2allow tool, match your problem with examples listed and explained in SELinux denials in the Audit log . Additional resources sealert(8) man page on your system 5.3. Fixing analyzed SELinux denials In most cases, suggestions provided by the sealert tool give you the right guidance about how to fix problems related to the SELinux policy. See Analyzing SELinux denial messages for information how to use sealert to analyze SELinux denials. Be careful when the tool suggests using the audit2allow tool for configuration changes. You should not use audit2allow to generate a local policy module as your first option when you see an SELinux denial. Troubleshooting should start with a check if there is a labeling problem. 
The second most often case is that you have changed a process configuration, and you forgot to tell SELinux about it. Labeling problems A common cause of labeling problems is when a non-standard directory is used for a service. For example, instead of using /var/www/html/ for a website, an administrator might want to use /srv/myweb/ . On Red Hat Enterprise Linux, the /srv directory is labeled with the var_t type. Files and directories created in /srv inherit this type. Also, newly-created objects in top-level directories, such as /myserver , can be labeled with the default_t type. SELinux prevents the Apache HTTP Server ( httpd ) from accessing both of these types. To allow access, SELinux must know that the files in /srv/myweb/ are to be accessible by httpd : This semanage command adds the context for the /srv/myweb/ directory and all files and directories under it to the SELinux file-context configuration. The semanage utility does not change the context. As root, use the restorecon utility to apply the changes: Incorrect context The matchpathcon utility checks the context of a file path and compares it to the default label for that path. The following example demonstrates the use of matchpathcon on a directory that contains incorrectly labeled files: USD matchpathcon -V /var/www/html/* /var/www/html/index.html has context unconfined_u:object_r:user_home_t:s0, should be system_u:object_r:httpd_sys_content_t:s0 /var/www/html/page1.html has context unconfined_u:object_r:user_home_t:s0, should be system_u:object_r:httpd_sys_content_t:s0 In this example, the index.html and page1.html files are labeled with the user_home_t type. This type is used for files in user home directories. Using the mv command to move files from your home directory may result in files being labeled with the user_home_t type. This type should not exist outside of home directories. Use the restorecon utility to restore such files to their correct type: # restorecon -v /var/www/html/index.html restorecon reset /var/www/html/index.html context unconfined_u:object_r:user_home_t:s0->system_u:object_r:httpd_sys_content_t:s0 To restore the context for all files under a directory, use the -R option: # restorecon -R -v /var/www/html/ restorecon reset /var/www/html/page1.html context unconfined_u:object_r:samba_share_t:s0->system_u:object_r:httpd_sys_content_t:s0 restorecon reset /var/www/html/index.html context unconfined_u:object_r:samba_share_t:s0->system_u:object_r:httpd_sys_content_t:s0 Confined applications configured in non-standard ways Services can be run in a variety of ways. To account for that, you need to specify how you run your services. You can achieve this through SELinux booleans that allow parts of SELinux policy to be changed at runtime. This enables changes, such as allowing services access to NFS volumes, without reloading or recompiling SELinux policy. Also, running services on non-default port numbers requires policy configuration to be updated using the semanage command. For example, to allow the Apache HTTP Server to communicate with MariaDB, enable the httpd_can_network_connect_db boolean: Note that the -P option makes the setting persistent across reboots of the system. If access is denied for a particular service, use the getsebool and grep utilities to see if any booleans are available to allow access. For example, use the getsebool -a | grep ftp command to search for FTP related booleans: To get a list of booleans and to find out if they are enabled or disabled, use the getsebool -a command. 
To get a list of booleans including their meaning, and to find out if they are enabled or disabled, install the selinux-policy-devel package and use the semanage boolean -l command as root. Port numbers Depending on policy configuration, services can only be allowed to run on certain port numbers. Attempting to change the port a service runs on without changing policy may result in the service failing to start. For example, run the semanage port -l | grep http command as root to list http related ports: # semanage port -l | grep http http_cache_port_t tcp 3128, 8080, 8118 http_cache_port_t udp 3130 http_port_t tcp 80, 443, 488, 8008, 8009, 8443 pegasus_http_port_t tcp 5988 pegasus_https_port_t tcp 5989 The http_port_t port type defines the ports Apache HTTP Server can listen on, which in this case, are TCP ports 80, 443, 488, 8008, 8009, and 8443. If an administrator configures httpd.conf so that httpd listens on port 9876 ( Listen 9876 ), but policy is not updated to reflect this, the following command fails: # systemctl start httpd.service Job for httpd.service failed. See 'systemctl status httpd.service' and 'journalctl -xn' for details. # systemctl status httpd.service httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled) Active: failed (Result: exit-code) since Thu 2013-08-15 09:57:05 CEST; 59s ago Process: 16874 ExecStop=/usr/sbin/httpd USDOPTIONS -k graceful-stop (code=exited, status=0/SUCCESS) Process: 16870 ExecStart=/usr/sbin/httpd USDOPTIONS -DFOREGROUND (code=exited, status=1/FAILURE) An SELinux denial message similar to the following is logged to /var/log/audit/audit.log : To allow httpd to listen on a port that is not listed for the http_port_t port type, use the semanage port command to assign a different label to the port: The -a option adds a new record; the -t option defines a type; and the -p option defines a protocol. The last argument is the port number to add. Corner cases, evolving or broken applications, and compromised systems Applications may contain bugs, causing SELinux to deny access. Also, SELinux rules are evolving - SELinux may not have seen an application running in a certain way, possibly causing it to deny access, even though the application is working as expected. For example, if a new version of PostgreSQL is released, it may perform actions the current policy does not account for, causing access to be denied, even though access should be allowed. For these situations, after access is denied, use the audit2allow utility to create a custom policy module to allow access. You can report missing rules in the SELinux policy in Red Hat Bugzilla . For Red Hat Enterprise Linux 8, create bugs against the Red Hat Enterprise Linux 8 product, and select the selinux-policy component. Include the output of the audit2allow -w -a and audit2allow -a commands in such bug reports. If an application asks for major security privileges, it could be a signal that the application is compromised. Use intrusion detection tools to inspect such suspicious behavior. The Solution Engine on the Red Hat Customer Portal can also provide guidance in the form of an article containing a possible solution for the same or very similar problem you have. Select the relevant product and version and use SELinux-related keywords, such as selinux or avc , together with the name of your blocked service or application, for example: selinux samba . 5.4. 
Creating a local SELinux policy module Adding specific SELinux policy modules to an active SELinux policy can fix certain problems with the SELinux policy. You can use this procedure to fix a specific Known Issue described in Red Hat release notes , or to implement a specific Red Hat Solution . Warning Use only rules provided by Red Hat. Red Hat does not support creating SELinux policy modules with custom rules, because this falls outside of the Production Support Scope of Coverage . If you are not an expert, contact your Red Hat sales representative and request consulting services. Prerequisites The setools-console and audit packages for verification. Procedure Open a new .cil file with a text editor, for example: To keep your local modules better organized, use the local_ prefix in the names of local SELinux policy modules. Insert the custom rules from a Known Issue or a Red Hat Solution. Important Do not write your own rules. Use only the rules provided in a specific Known Issue or Red Hat Solution. For example, to implement the SELinux denies cups-lpd read access to cups.sock in RHEL solution, insert the following rule: The example solution has been fixed permanently for RHEL in RHBA-2021:4420 . Therefore, the parts of this procedure specific to this solution have no effect on updated RHEL 8 and 9 systems, and are included only as examples of syntax. You can use either of the two SELinux rule syntaxes, Common Intermediate Language (CIL) and m4. For example, (allow cupsd_lpd_t cupsd_var_run_t (sock_file (read))) in CIL is equivalent to the following in m4: Save and close the file. Install the policy module: If you want to remove a local policy module which you created by using semodule -i , refer to the module name without the .cil suffix. To remove a local policy module, use semodule -r <local_module> . Restart any services related to the rules: Verification List the local modules installed in your SELinux policy: Because local modules have priority 400 , you can filter them from the list also by using that value, for example, by using the semodule -lfull | grep -v ^100 command. Search the SELinux policy for the relevant allow rules: Where <SOURCENAME> is the source SELinux type, <TARGETNAME> is the target SELinux type, <CLASSNAME> is the security class or object class name, and <P1> and <P2> are the specific permissions of the rule. For example, for the SELinux denies cups-lpd read access to cups.sock in RHEL solution: The last line should now include the read operation. Verify that the relevant service runs confined by SELinux: Identify the process related to the relevant service: Check the SELinux context of the process listed in the output of the command: Verify that the service does not cause any SELinux denials: The -i option interprets the numeric values into human-readable text. Additional resources How to create custom SELinux policy module wisely Knowledgebase article 5.5. SELinux denials in the Audit log The Linux Audit system stores log entries in the /var/log/audit/audit.log file by default. 
To list only SELinux-related records, use the ausearch command with the message type parameter set to AVC and AVC_USER at a minimum, for example: An SELinux denial entry in the Audit log file can look as follows: The most important parts of this entry are: avc: denied - the action performed by SELinux and recorded in Access Vector Cache (AVC) { read } - the denied action pid=6591 - the process identifier of the subject that tried to perform the denied action comm="httpd" - the name of the command that was used to invoke the analyzed process httpd_t - the SELinux type of the process nfs_t - the SELinux type of the object affected by the process action tclass=dir - the target object class The log entry can be translated to: SELinux denied the httpd process with PID 6591 and the httpd_t type to read from a directory with the nfs_t type. The following SELinux denial message occurs when the Apache HTTP Server attempts to access a directory labeled with a type for the Samba suite: { getattr } - the getattr entry indicates the source process was trying to read the target file's status information. This occurs before reading files. SELinux denies this action because the process accesses the file and it does not have an appropriate label. Commonly seen permissions include getattr , read , and write . path="/var/www/html/file1" - the path to the object (target) the process attempted to access. scontext="unconfined_u:system_r:httpd_t:s0" - the SELinux context of the process (source) that attempted the denied action. In this case, it is the SELinux context of the Apache HTTP Server, which is running with the httpd_t type. tcontext="unconfined_u:object_r:samba_share_t:s0" - the SELinux context of the object (target) the process attempted to access. In this case, it is the SELinux context of file1 . This SELinux denial can be translated to: SELinux denied the httpd process with PID 2465 to access the /var/www/html/file1 file with the samba_share_t type, which is not accessible to processes running in the httpd_t domain unless configured otherwise. Additional resources auditd(8) and ausearch(8) man pages on your system 5.6. Additional resources Basic SELinux Troubleshooting in CLI What is SELinux trying to tell me? The 4 key causes of SELinux errors
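Where the sections above mention audit2allow for corner cases, a typical flow, shown here only as a hedged sketch, is to feed recent AVC denials into audit2allow and load the generated module; my-postgresql is an illustrative module name, and you should always review the generated rules before installing them.
# ausearch -m AVC -ts recent | audit2allow -M my-postgresql
# semodule -X 300 -i my-postgresql.pp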
|
[
"ausearch -m AVC,USER_AVC,SELINUX_ERR,USER_SELINUX_ERR -ts recent",
"journalctl -t setroubleshoot",
"dmesg | grep -i -e type=1300 -e type=1400",
"semodule -DB",
"semodule -B",
"setenforce 0 getenforce Permissive",
"sealert -l \"*\" SELinux is preventing /usr/bin/passwd from write access on the file /root/test. ***** Plugin leaks (86.2 confidence) suggests ***************************** If you want to ignore passwd trying to write access the test file, because you believe it should not need this access. Then you should report this as a bug. You can generate a local policy module to dontaudit this access. Do ausearch -x /usr/bin/passwd --raw | audit2allow -D -M my-passwd semodule -X 300 -i my-passwd.pp ***** Plugin catchall (14.7 confidence) suggests ************************** Raw Audit Messages type=AVC msg=audit(1553609555.619:127): avc: denied { write } for pid=4097 comm=\"passwd\" path=\"/root/test\" dev=\"dm-0\" ino=17142697 scontext=unconfined_u:unconfined_r:passwd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:admin_home_t:s0 tclass=file permissive=0 Hash: passwd,passwd_t,admin_home_t,file,write",
"auditctl -w /etc/shadow -p w -k shadow-write",
"rm -f /var/lib/setroubleshoot/setroubleshoot.xml",
"auditctl -W /etc/shadow -p w -k shadow-write",
"semanage fcontext -a -t httpd_sys_content_t \"/srv/myweb(/.*)?\"",
"restorecon -R -v /srv/myweb",
"USD matchpathcon -V /var/www/html/* /var/www/html/index.html has context unconfined_u:object_r:user_home_t:s0, should be system_u:object_r:httpd_sys_content_t:s0 /var/www/html/page1.html has context unconfined_u:object_r:user_home_t:s0, should be system_u:object_r:httpd_sys_content_t:s0",
"# restorecon -v /var/www/html/index.html restorecon reset /var/www/html/index.html context unconfined_u:object_r:user_home_t:s0->system_u:object_r:httpd_sys_content_t:s0",
"# restorecon -R -v /var/www/html/ restorecon reset /var/www/html/page1.html context unconfined_u:object_r:samba_share_t:s0->system_u:object_r:httpd_sys_content_t:s0 restorecon reset /var/www/html/index.html context unconfined_u:object_r:samba_share_t:s0->system_u:object_r:httpd_sys_content_t:s0",
"# setsebool -P httpd_can_network_connect_db on",
"USD getsebool -a | grep ftp ftpd_anon_write --> off ftpd_full_access --> off ftpd_use_cifs --> off ftpd_use_nfs --> off ftpd_connect_db --> off httpd_enable_ftp_server --> off tftp_anon_write --> off",
"# semanage port -l | grep http http_cache_port_t tcp 3128, 8080, 8118 http_cache_port_t udp 3130 http_port_t tcp 80, 443, 488, 8008, 8009, 8443 pegasus_http_port_t tcp 5988 pegasus_https_port_t tcp 5989",
"# systemctl start httpd.service Job for httpd.service failed. See 'systemctl status httpd.service' and 'journalctl -xn' for details. # systemctl status httpd.service httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled) Active: failed (Result: exit-code) since Thu 2013-08-15 09:57:05 CEST; 59s ago Process: 16874 ExecStop=/usr/sbin/httpd USDOPTIONS -k graceful-stop (code=exited, status=0/SUCCESS) Process: 16870 ExecStart=/usr/sbin/httpd USDOPTIONS -DFOREGROUND (code=exited, status=1/FAILURE)",
"type=AVC msg=audit(1225948455.061:294): avc: denied { name_bind } for pid=4997 comm=\"httpd\" src=9876 scontext=unconfined_u:system_r:httpd_t:s0 tcontext=system_u:object_r:port_t:s0 tclass=tcp_socket",
"# semanage port -a -t http_port_t -p tcp 9876",
"vim <local_module> .cil",
"(allow cupsd_lpd_t cupsd_var_run_t (sock_file (read)))",
"module local_cupslpd-read-cupssock 1.0; require { type cupsd_var_run_t; type cupsd_lpd_t; class sock_file read; } #============= cupsd_lpd_t ============== allow cupsd_lpd_t cupsd_var_run_t:sock_file read;",
"semodule -i <local_module> .cil",
"systemctl restart <service-name>",
"semodule -lfull | grep \"local_\" 400 local_module cil",
"sesearch -A --source= <SOURCENAME> --target= <TARGETNAME> --class= <CLASSNAME> --perm= <P1> , <P2>",
"sesearch -A --source=cupsd_lpd_t --target=cupsd_var_run_t --class=sock_file --perm=read allow cupsd_lpd_t cupsd_var_run_t:sock_file { append getattr open read write };",
"systemctl status <service-name>",
"ps -efZ | grep <process-name>",
"ausearch -m AVC -i -ts recent <no matches>",
"ausearch -m AVC,USER_AVC,SELINUX_ERR,USER_SELINUX_ERR",
"type=AVC msg=audit(1395177286.929:1638): avc: denied { read } for pid=6591 comm=\"httpd\" name=\"webpages\" dev=\"0:37\" ino=2112 scontext=system_u:system_r:httpd_t:s0 tcontext=system_u:object_r:nfs_t:s0 tclass=dir",
"type=AVC msg=audit(1226874073.147:96): avc: denied { getattr } for pid=2465 comm=\"httpd\" path=\"/var/www/html/file1\" dev=dm-0 ino=284133 scontext=unconfined_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:samba_share_t:s0 tclass=file"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_selinux/troubleshooting-problems-related-to-selinux_using-selinux
|
Chapter 6. SubjectAccessReview [authorization.openshift.io/v1]
|
Chapter 6. SubjectAccessReview [authorization.openshift.io/v1] Description SubjectAccessReview is an object for requesting information about whether a user or group can perform an action Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required namespace verb resourceAPIGroup resourceAPIVersion resource resourceName path isNonResourceURL user groups scopes 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources content RawExtension Content is the actual content of the request for create and update groups array (string) GroupsSlice is optional. Groups is the list of groups to which the User belongs. isNonResourceURL boolean IsNonResourceURL is true if this is a request for a non-resource URL (outside of the resource hierarchy) kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds namespace string Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces path string Path is the path of a non resource URL resource string Resource is one of the existing resource types resourceAPIGroup string Group is the API group of the resource Serialized as resourceAPIGroup to avoid confusion with the 'groups' field when inlined resourceAPIVersion string Version is the API version of the resource Serialized as resourceAPIVersion to avoid confusion with TypeMeta.apiVersion and ObjectMeta.resourceVersion when inlined resourceName string ResourceName is the name of the resource being requested for a "get" or deleted for a "delete" scopes array (string) Scopes to use for the evaluation. Empty means "use the unscoped (full) permissions of the user/groups". Nil for a self-SAR, means "use the scopes on this request". Nil for a regular SAR, means the same as empty. user string User is optional. If both User and Groups are empty, the current authenticated user is used. verb string Verb is one of: get, list, watch, create, update, delete 6.2. API endpoints The following API endpoints are available: /apis/authorization.openshift.io/v1/subjectaccessreviews POST : create a SubjectAccessReview 6.2.1. /apis/authorization.openshift.io/v1/subjectaccessreviews Table 6.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create a SubjectAccessReview Table 6.2. Body parameters Parameter Type Description body SubjectAccessReview schema Table 6.3. HTTP responses HTTP code Response body 200 - OK SubjectAccessReview schema 201 - Created SubjectAccessReview schema 202 - Accepted SubjectAccessReview schema 401 - Unauthorized Empty
| null |
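For illustration only, the following sketch shows one way to POST a SubjectAccessReview from the command line with the oc client. The namespace, user, and resource values are hypothetical placeholders rather than values taken from this reference, and the empty groups and scopes lists simply fall back to the defaults described above.

oc create -f - <<'EOF'
apiVersion: authorization.openshift.io/v1
kind: SubjectAccessReview
namespace: my-project
verb: get
resourceAPIGroup: ""
resourceAPIVersion: v1
resource: pods
resourceName: ""
path: ""
isNonResourceURL: false
user: alice
groups: []
scopes: []
EOF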
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/authorization_apis/subjectaccessreview-authorization-openshift-io-v1
|
Chapter 2. Configuring the overcloud for IPv6
|
Chapter 2. Configuring the overcloud for IPv6 The following chapter provides the configuration required before running the openstack overcloud deploy command. This includes preparing nodes for provisioning, configuring an IPv6 address on the undercloud, and creating a network environment file to define the IPv6 parameters for the overcloud. Prerequisites A successful undercloud installation. For more information, see Installing director . Your network supports IPv6-native VLANs as well as IPv4-native VLANs. 2.1. Configuring an IPv6 address on the undercloud The undercloud requires access to the overcloud Public API, which is on the External network. To accomplish this, the undercloud host requires an IPv6 address on the interface that connects to the External network. Prerequisites A successful undercloud installation. For more information, see Installing director . Your network supports IPv6-native VLANs as well as IPv4-native VLANs. An IPv6 address available to the undercloud. Native VLAN or dedicated interface If the undercloud uses a native VLAN or a dedicated interface attached to the External network, use the ip command to add an IPv6 address to the interface. In this example, the dedicated interface is eth0 : Trunked VLAN interface If the undercloud uses a trunked VLAN on the same interface as the control plane bridge ( br-ctlplane ) to access the External network, create a new VLAN interface, attach it to the control plane, and add an IPv6 address to the VLAN. In this example, the External network VLAN ID is 100 : Confirming the IPv6 address Confirm the addition of the IPv6 address with the ip command: The IPv6 address appears on the chosen interface. Setting a persistent IPv6 address To make the IPv6 address permanent, modify or create the appropriate interface file in /etc/sysconfig/network-scripts/ . In this example, include the following lines in either ifcfg-eth0 or ifcfg-vlan100 : For more information, see How do I configure a network interface for IPv6? on the Red Hat Customer Portal. 2.2. Registering and inspecting nodes for IPv6 deployment A node definition template ( instackenv.json ) is a JSON format file that contains the hardware and power management details for registering nodes. For example: Prerequisites A successful undercloud installation. For more information, see Installing director . Nodes available for overcloud deployment. Procedure After you create the node definition template, save the file to the home directory of the stack user ( /home/stack/instackenv.json ), then import it into the director: This command imports the template and registers each node from the template into director. Assign the kernel and ramdisk images to all nodes: The nodes are now registered and configured in director. Verification steps After registering the nodes, inspect the hardware attribute of each node: Important The nodes must be in the manageable state. Make sure this process runs to completion. This process usually takes 15 minutes for bare metal nodes. 2.3. Tagging nodes for IPv6 deployment After you register and inspect the hardware of your nodes, tag each node into a specific profile. These profile tags map your nodes to flavors, and in turn the flavors are assigned to a deployment role. Prerequisites A successful undercloud installation. For more information, see Installing director . Procedure Retrieve a list of your nodes to identify their UUIDs: Add a profile option to the properties/capabilities parameter for each node. 
For example, to tag three nodes to use a controller profile and three nodes to use a compute profile, use the following commands: The addition of the profile:control and profile:compute options tag the nodes into each respective profile. Note As an alternative to manual tagging, use the automatic profile tagging to tag larger numbers of nodes based on benchmarking data. 2.4. Configuring IPv6 networking By default, the overcloud uses Internet Protocol version 4 (IPv4) to configure the service endpoints. However, the overcloud also supports Internet Protocol version 6 (IPv6) endpoints, which is useful for organizations that support IPv6 infrastructure. Director includes a set of environment files that you can use to create IPv6-based Overclouds. For more information about configuring IPv6 in the Overcloud, see the dedicated IPv6 Networking for the Overcloud guide for full instructions. 2.4.1. Configuring composable IPv6 networking Prerequisites A successful undercloud installation. For more information, see Installing director . Your network supports IPv6-native VLANs as well as IPv4-native VLANs. Procedure Copy the default network_data file: Edit the local copy of the network_data.yaml file and modify the parameters to suit your IPv6 networking requirements. For example, the External network contains the following default network details: name is the only mandatory value, however you can also use name_lower to normalize names for readability. For example, changing InternalApi to internal_api . vip: true creates a virtual IP address (VIP) on the new network with the remaining parameters setting the defaults for the new network. ipv6 defines whether to enable IPv6. ipv6_subnet and ipv6_allocation_pools , and gateway_ip6 set the default IPv6 subnet and IP range for the network. Include the custom network_data file with your deployment using the -n option. Without the -n option, the deployment command uses the default network details. 2.4.2. IPv6 network isolation in the overcloud The overcloud assigns services to the provisioning network by default. However, director can divide overcloud network traffic into isolated networks. These networks are defined in a file that you include in the deployment command line, by default named network_data.yaml . When services are listening on networks using IPv6 addresses, you must provide parameter defaults to indicate that the service is running on an IPv6 network. The network that each service runs on is defined by the file network/service_net_map.yaml , and can be overridden by declaring parameter defaults for individual ServiceNetMap entries. These services require the parameter default to be set in an environment file: The environments/network-isolation.j2.yaml file in the core heat templates is a Jinja2 file that defines all ports and VIPs for each IPv6 network in your composable network file. When rendered, it results in a network-isolation.yaml file in the same location with the full resource registry. 2.4.3. Configuring the IPv6 isolated network The default heat template collection contains a Jinja2-based environment file for the default networking configuration. This file is environments/network-environment.j2.yaml . When rendered with our network_data file, it results in a standard YAML file called network-environment.yaml . Some parts of this file might require overrides. Prerequisites A successful undercloud installation. For more information, see Installing director . Your network supports IPv6-native VLANs as well as IPv4-native VLANs. 
Procedure Create a custom environment file ( /home/stack/network-environment.yaml ) with the following details: The parameter_defaults section contains the customization for certain services that remain on IPv4. 2.4.4. IPv6 network interface templates The overcloud requires a set of network interface templates. Director contains a set of Jinja2-based Heat templates, which render based on your network_data file: NIC directory Description Environment file single-nic-vlans Single NIC ( nic1 ) with control plane and VLANs attached to default Open vSwitch bridge. environments/net-single-nic-with-vlans-v6.j2.yaml single-nic-linux-bridge-vlans Single NIC ( nic1 ) with control plane and VLANs attached to default Linux bridge. environments/net-single-nic-linux-bridge-with-vlans-v6.yaml bond-with-vlans Control plane attached to nic1 . Default Open vSwitch bridge with bonded NIC configuration ( nic2 and nic3 ) and VLANs attached. environments/net-bond-with-vlans-v6.yaml multiple-nics Control plane attached to nic1 . Assigns each sequential NIC to each network defined in the network_data file. By default, this is Storage to nic2 , Storage Management to nic3 , Internal API to nic4 , Tenant to nic5 on the br-tenant bridge, and External to nic6 on the default Open vSwitch bridge. environments/net-multiple-nics-v6.yaml 2.5. Deploying an IPv6 overcloud To deploy an overcloud that uses IPv6 networking, you must include additional arguments in the deployment command. Prerequisites A successful undercloud installation. For more information, see Installing director . Procedure The above command uses the following options: --templates - Creates the overcloud from the default heat template collection. -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml - Adds an additional environment file to the overcloud deployment. In this case, it is an environment file that initializes network isolation configuration for IPv6. -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml - Adds an additional environment file to the overcloud deployment. In this case, it is an environment file that initializes network isolation configuration for IPv6. -e /home/stack/network-environment.yaml - Adds an additional environment file to the overcloud deployment. In this case, it includes overrides related to IPv6. Ensure that the network_data.yaml file includes the setting ipv6: true . versions of Red Hat OpenStack director included two routes: one for IPv6 on the External network (default) and one for IPv4 on the Control Plane. To use both default routes, ensure that the Controller definition in the roles_data.yaml file contains both networks in the default_route_networks parameter. For example, default_route_networks: ['External', 'ControlPlane'] . --ntp-server pool.ntp.org - Sets the NTP server. The overcloud creation process begins and director provisions the overcloud nodes. This process takes some time to complete. To view the status of the overcloud creation, open a separate terminal as the stack user and run: Accessing the overcloud Director generates a script to configure and help authenticate interactions with your overcloud from the director host. The director saves this file ( overcloudrc ) in the home directory of the stack user. Run the following command to use this file: This loads the necessary environment variables to interact with your overcloud from the director host CLI. To return to interacting with the director host, run the following command:
|
[
"sudo ip link set dev eth0 up; sudo ip addr add 2001:db8::1/64 dev eth0",
"sudo ovs-vsctl add-port br-ctlplane vlan100 tag=100 -- set interface vlan100 type=internal sudo ip l set dev vlan100 up; sudo ip addr add 2001:db8::1/64 dev vlan100",
"ip addr",
"IPV6INIT=yes IPV6ADDR=2001:db8::1/64",
"{ \"nodes\":[ { \"mac\":[ \"bb:bb:bb:bb:bb:bb\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.205\" }, { \"mac\":[ \"cc:cc:cc:cc:cc:cc\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.206\" }, { \"mac\":[ \"dd:dd:dd:dd:dd:dd\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.207\" }, { \"mac\":[ \"ee:ee:ee:ee:ee:ee\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.208\" } { \"mac\":[ \"ff:ff:ff:ff:ff:ff\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.209\" } { \"mac\":[ \"gg:gg:gg:gg:gg:gg\" ], \"cpu\":\"4\", \"memory\":\"6144\", \"disk\":\"40\", \"arch\":\"x86_64\", \"pm_type\":\"ipmi\", \"pm_user\":\"admin\", \"pm_password\":\"p@55w0rd!\", \"pm_addr\":\"192.0.2.210\" } ] }",
"openstack overcloud node import ~/instackenv.json",
"openstack overcloud node configure",
"openstack overcloud node introspect --all-manageable",
"ironic node-list",
"ironic node-update 1a4e30da-b6dc-499d-ba87-0bd8a3819bc0 add properties/capabilities='profile:control,boot_option:local' ironic node-update 6faba1a9-e2d8-4b7c-95a2-c7fbdc12129a add properties/capabilities='profile:control,boot_option:local' ironic node-update 5e3b2f50-fcd9-4404-b0a2-59d79924b38e add properties/capabilities='profile:control,boot_option:local' ironic node-update 484587b2-b3b3-40d5-925b-a26a2fa3036f add properties/capabilities='profile:compute,boot_option:local' ironic node-update d010460b-38f2-4800-9cc4-d69f0d067efe add properties/capabilities='profile:compute,boot_option:local' ironic node-update d930e613-3e14-44b9-8240-4f3559801ea6 add properties/capabilities='profile:compute,boot_option:local'",
"cp /usr/share/openstack-tripleo-heat-templates/network_data.yaml /home/stack/.",
"- name: External vip: true name_lower: external vlan: 10 ipv6: true ipv6_subnet: '2001:db8:fd00:1000::/64' ipv6_allocation_pools: [{'start': '2001:db8:fd00:1000::10', 'end': '2001:db8:fd00:1000:ffff:ffff:ffff:fffe'}] gateway_ipv6: '2001:db8:fd00:1000::1'",
"parameter_defaults: # Enable IPv6 for Ceph. CephIPv6: True # Enable IPv6 for Corosync. This is required when Corosync is using an IPv6 IP in the cluster. CorosyncIPv6: True # Enable IPv6 for MongoDB. This is required when MongoDB is using an IPv6 IP. MongoDbIPv6: True # Enable various IPv6 features in Nova. NovaIPv6: True # Enable IPv6 environment for RabbitMQ. RabbitIPv6: True # Enable IPv6 environment for Memcached. MemcachedIPv6: True # Enable IPv6 environment for MySQL. MysqlIPv6: True # Enable IPv6 environment for Manila ManilaIPv6: True # Enable IPv6 environment for Redis. RedisIPv6: True",
"parameter_defaults: ControlPlaneDefaultRoute: 192.0.2.1 ControlPlaneSubnetCidr: \"24\"",
"openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml -e /home/stack/templates/network-environment.yaml --ntp-server pool.ntp.org [ADDITIONAL OPTIONS]",
"source ~/stackrc heat stack-list --show-nested",
"source ~/overcloudrc",
"source ~/stackrc"
] |
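As a minimal sketch of the persistent-address step above, assuming the dedicated-interface case, you can append the two IPv6 lines to the existing interface file instead of rewriting it; the interface name and address are the example values used in this chapter, and the interface must be restarted (or the host rebooted) for the change to take effect.

cat >> /etc/sysconfig/network-scripts/ifcfg-eth0 <<'EOF'
IPV6INIT=yes
IPV6ADDR=2001:db8::1/64
EOF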
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/ipv6_networking_for_the_overcloud/assembly_configuring-the-overcloud-for-ipv6
|
E.3. Installing GRUB
|
E.3. Installing GRUB In a vast majority of cases, GRUB is installed and configured by default during the installation of Red Hat Enterprise Linux. However, if for some reason GRUB is not installed, or if you need to install it again, it is possible to install GRUB manually. On systems without UEFI firmware, a valid GRUB configuration file must be present at /boot/grub/grub.conf . You can use the grub-install script (part of the grub package) to install GRUB. For example: Replace disk with the device name of your system's boot drive such as /dev/sda . On systems with UEFI firmware, a valid GRUB configuration file must be present at /boot/efi/EFI/redhat/grub.conf . An image of GRUB's first-stage boot loader is available on the EFI System Partition in the directory EFI/redhat/ with the filename grubx64.efi , and you can use the efibootmgr command to install this image into your system's EFI System Partition. For example: Replace disk with the name of the device containing the EFI System Partition (such as /dev/sda ) and partition_number with the partition number of your EFI System Partition (the default value is 1, meaning the first partition on the disk). Important The grub package does not automatically update the system boot loader when the package is updated using Yum or RPM . Therefore, updating the package will not automatically update the actual boot loader on your system. Use the grub-install command manually every time after the package is updated. For additional information about installing GRUB , see the GNU GRUB Manual and the grub-install(8) man page. For information about the EFI System Partition, see Section 9.18.1, "Advanced Boot Loader Configuration" . For information about the efibootmgr tool, see the efibootmgr(8) man page.
|
[
"grub-install disk",
"efibootmgr -c -d disk -p partition_number -l /EFI/redhat/grubx64.efi -L \"grub_uefi\""
] |
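On UEFI systems, a quick way to confirm the result of the efibootmgr command above is to check that the boot loader image is present and that the new boot entry was registered; both commands are read-only checks.

ls /boot/efi/EFI/redhat/grubx64.efi
efibootmgr -v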
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sect-grub-installing
|
Chapter 1. Migrating applications to Red Hat build of Quarkus 3.8
|
Chapter 1. Migrating applications to Red Hat build of Quarkus 3.8 As an application developer, you can migrate applications that are based on earlier versions of Red Hat build of Quarkus to version 3.8 by using the Quarkus CLI's update command . Important The Quarkus CLI is intended for dev mode only. Red Hat does not support using the Quarkus CLI in production environments. 1.1. Updating projects to the latest Red Hat build of Quarkus version You can update or upgrade your Red Hat build of Quarkus projects to the latest version by using an update command. The update command primarily employs OpenRewrite recipes to automate updates for most project dependencies, source code, and documentation. Although these recipes perform many migration tasks, they do not cover all the tasks detailed in the migration guide. Post-update, if expected updates are missing, consider the following reasons: The recipe applied by the update command might not include a migration task that your project requires. Your project might use an extension that is incompatible with the latest Red Hat build of Quarkus version. Important For projects that use Hibernate ORM or Hibernate Reactive, review the Hibernate ORM 5 to 6 migration quick reference. The following update command covers only a subset of this guide. 1.1.1. Prerequisites Roughly 30 minutes An IDE JDK 11+ installed with JAVA_HOME configured appropriately Apache Maven 3.8.6 or later Optionally, the Red Hat build of Quarkus CLI if you want to use it Optionally Mandrel or GraalVM installed and configured appropriately if you want to build a native executable (or Docker if you use a native container build) A project based on Red Hat build of Quarkus version 2.13 or later. 1.1.2. Procedure Create a working branch for your project by using your version control system. To use the Red Hat build of Quarkus CLI in the step, install the latest version of the Red Hat build of Quarkus CLI . Confirm the version number using quarkus -v . Configure your extension registry client as described in the Configuring Red Hat build of Quarkus extension registry client section of the Quarkus "Getting Started" guide. To update using the Red Hat build of Quarkus CLI, go to the project directory and update the project to the latest stream: quarkus update Optional: By default, this command updates to the latest current version. To update to a specific stream instead of latest current version, add the stream option to this command followed by the version; for example: --stream=3.2 To update using Maven instead of the Red Hat build of Quarkus CLI, go to the project directory and update the project to the latest stream: Ensure that the Red Hat build of Quarkus Maven plugin version aligns with the latest supported Red Hat build of Quarkus version. Configure your project according to the guidelines provided in the Getting started with Quarkus guide. mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.8.6.SP3-redhat-00002:update Optional: By default, this command updates to the latest current version. To update to a specific stream instead of latest current version, add the stream option to this command followed by the version; for example: -Dstream=3.2 Analyze the update command output for potential instructions and perform the suggested tasks if necessary. Use a diff tool to inspect all changes. Review the migration guide for items that were not updated by the update command. If your project has such items, implement the additional steps advised in these topics. 
Ensure the project builds without errors, all tests pass, and the application functions as required before deploying to production. Before deploying your updated Red Hat build of Quarkus application to production, ensure the following: The project builds without errors. All tests pass. The application functions as required. 1.2. Changes that affect compatibility with earlier versions This section describes changes in Red Hat build of Quarkus 3.8 that affect the compatibility of applications built with earlier product versions. Review these breaking changes and take the steps required to ensure that your applications continue functioning after you update them to Red Hat build of Quarkus 3.8. To automate many of these changes, use the quarkus update command to update your projects to the latest Red Hat build of Quarkus version . 1.2.1. Core 1.2.1.1. Changes in Stork load-balancer configuration You can no longer use the configuration names stork."service-name".load-balancer and quarkus.stork."service-name".load-balancer for configuring the Stork load balancer. Instead, use quarkus.stork."service-name".load-balancer.type for configuration settings. 1.2.1.2. Dependency management update for OkHttp and Okio OkHttp and Okio have been removed from the Quarkus Platform BOM, and their versions are no longer enforced, addressing issues related to outdated dependencies. This change affects test framework dependencies and streamlines runtime dependencies. Developers using these dependencies must now specify their versions in build files. Additionally, the quarkus-test-infinispan-client artifact has been removed due to the availability of robust Dev Services support for Infinispan. 1.2.1.3. Java version requirement update Beginning with this version of Red Hat build of Quarkus, support for Java 11, deprecated in the version, has been removed. Java 21 is now the recommended version, although Java 17 is also supported. 1.2.1.4. JAXB limitations with collections in RESTEasy Reactive In Red Hat build of Quarkus, using RESTEasy Reactive with Java Architecture for XML Binding (JAXB) does not support using collections, arrays, and maps as parameters or return types in REST methods. To overcome this limitation of JAXB, encapsulate these types within a class annotated with @XmlRootElement . 1.2.1.5. Mandatory specification of @StaticInitSafe at build time During the static initialization phase, Red Hat build of Quarkus collects the configuration to inject in CDI beans. The collected values are then compared with their runtime initialization counterparts, and if a mismatch is detected, the application startup fails. With Red Hat build of Quarkus 3.8, you can now annotate configuration objects with the @io.quarkus.runtime.annotations.StaticInitSafe annotation to inform users that the injected configuration: is set at build time cannot be changed is safe to be used at runtime, instructing Red Hat build of Quarkus to not fail the startup on configuration mismatch 1.2.1.6. Qute: Isolated execution of tag templates by default User tags in templates are now executed in isolation by default, restricting access to the calling template's context. This update can alter data handling within tag templates, potentially impacting their current functionality. To bypass this isolation and maintain access to the parent context, include _isolated=false or _unisolated in the tag call, for example, # itemDetail item showImage=true _isolated=false . This approach allows tags to access data from the parent context as before. 
This change minimizes unintended data exposure from the parent context to the tag, enhancing template data integrity. However, it might necessitate updates to existing templates reliant on shared context access, representing a notable change that could affect users unfamiliar with this isolation mechanism. 1.2.1.7. Qute: Resolving type pollution issues ResultNode class is updated to be an abstract class, not an interface, and should not be user-implemented despite being in the public API. The Qute API now limits CompletionStage implementations to java.util.concurrent.CompletableFuture and io.quarkus.qute.CompletedStage by default, a restriction alterable with -Dquarkus.qute.unrestricted-completion-stage-support=true . 1.2.1.8. quarkus-rest-client extensions renamed to quarkus-resteasy-client With Red Hat build of Quarkus 3.8, the following quarkus-rest-client extensions are renamed: Old name New name quarkus-rest-client quarkus-resteasy-client quarkus-rest-client-mutiny quarkus-resteasy-client-mutiny quarkus-rest-client-jackson quarkus-resteasy-client-jackson quarkus-rest-client-jaxb quarkus-resteasy-client-jaxb quarkus-rest-client-jsonb quarkus-resteasy-client-jsonb 1.2.1.9. Removing URI validation when @TestHTTPResource is injected The @TestHTTPResource annotation now supports path parameters. Validation as a URI string is no longer applied due to non-compliance with the URI format. 1.2.1.10. Updates to GraalVM SDK 23.1.2 with dependency adjustments The GraalVM SDK version has been updated to 23.1.2 in Red Hat build of Quarkus 3.8. Developers using extensions requiring GraalVM substitutions should switch from org.graalvm.sdk:graal-sdk to org.graalvm.sdk:nativeimage to access necessary classes. For those that use org.graalvm.js:js , replace this dependency with org.graalvm.polyglot:js-community for the community version. For the enterprise version, replace this dependency with org.graalvm.polyglot:js . The adjustment for the graal-sdk is automated with quarkus update . However, changes to the js dependency must be made manually. Even though it is highly unlikely, this change could affect users who depend on: org.graalvm.sdk:collections org.graalvm.sdk:word 1.2.1.11. Various adjustments to QuarkusComponentTest In this release, QuarkusComponentTest has undergone several adjustments. It remains experimental and is not supported by Red Hat build of Quarkus. This experimental status indicates that the API might change at any time, reflecting feedback received. The QuarkusComponentTestExtension is now immutable, requiring programmatic registration through the simplified constructor QuarkusComponentTestExtension(Class...) or the QuarkusComponentTestExtension.builder() method. The test instance lifecycle, either Lifecycle#PER_METHOD (default) or Lifecycle#PER_CLASS , dictates when the CDI container starts and stops; PER_METHOD starts the container before each test and stops it afterward, whereas PER_CLASS starts it before all tests and stops it after all tests. This represents a change from versions, where the container always started before and stopped after all tests. 1.2.2. Data 1.2.2.1. Hibernate ORM upgraded to 6.4 In Red Hat build of Quarkus 3.8, Hibernate Object-Relational Mapping (ORM) was upgraded to version 6.4 and introduced the following breaking changes: Compatibility with some older database versions is dropped. For more information about supported versions, see Supported dialects . Numeric literals are now interpreted as defined in Jakarta Persistence 3.2. 
For more information, see the Hibernate ORM 6.4 migration guide. 1.2.2.2. Hibernate Search upgraded to 7.0 In Red Hat build of Quarkus 3.8, Hibernate Search was upgraded to version 7.0 and introduced the following breaking changes: The values accepted by the quarkus.hibernate-search-orm.coordination.entity-mapping.outbox-event.uuid-type and quarkus.hibernate-search-orm.coordination.entity-mapping.agent.uuid-type configuration properties changed: uuid-binary is deprecated in favor of binary uuid-char is deprecated in favor of char The default value for the quarkus.hibernate-search-orm.elasticsearch.query.shard-failure.ignore property changed from true to false , meaning that Hibernate Search now throws an exception if at least one shard fails during a search operation. To get the behavior, set this configuration property to true . Note If you define multiple backends, you must set this configuration property for each Elasticsearch backend. The complement operator (~) in the regular expression predicate was removed with no alternative to replace it. Hibernate Search dependencies no longer have an -orm6 suffix in their artifact ID; for example, applications now depend on the hibernate-search-mapper-orm module instead of hibernate-search-mapper-orm-orm6 . For more information, see the following resources: Hibernate Search documentation Hibernate Search 7.0.0.Final: Migration guide from 6.2 1.2.2.3. SQL Server Dev Services upgraded to 2022-latest Dev Services for SQL Server updated its default image from mcr.microsoft.com/mssql/server:2019-latest to mcr.microsoft.com/mssql/server:2022-latest . Users preferring the version can specify an alternative by using the config property detailed in the References section in the Red Hat build of Quarkus "Configure data sources" guide. 1.2.2.4. Upgrade to Flyway adds additional dependency for Oracle users In Red Hat build of Quarkus 3.8, the Flyway extension is upgraded to Flyway 9.20.0, which delivers an additional dependency, flyway-database-oracle , for Oracle users. Oracle users must update the pom.xml file to include the flyway-database-oracle dependency. To do so, do the following: <dependency> <groupId>org.flywaydb</groupId> <artifactId>flyway-database-oracle</artifactId> </dependency> For more information, see the Quarkus Using Flyway guide. 1.2.3. Native 1.2.3.1. Strimzi OAuth support issue in the Kafka extension The Kafka extension's Strimzi OAuth support in quarkus-bom now uses io.strimzi:strimzi-kafka-oauth version 0.14.0, introducing a known issue that leads to native build failures. The error, Substitution target for `io.smallrye.reactive.kafka.graal.Target_com_jayway_jsonpath_internal_DefaultsImpl is not loaded can be bypassed by adding io.strimzi:kafka-oauth-common to your project's classpath. 1.2.4. Observability 1.2.4.1. @AddingSpanAttributes annotation added When using Opentelemetry (oTel) instrumentation with Red Hat build of Quarkus 3.8, you can now annotate a method in any Context Dependency Injection (CDI)-aware bean by using the io.opentelemetry.instrumentation.annotations.AddingSpanAttributes annotation, which does not create a new span but adds annotated method parameters to attributes in the current span. Note If you mistakenly annotate a method with both @AddingSpanAttributes and @WithSpan annotations, the @WithSpan annotation takes precedence. For more information, see the CDI section of the Quarkus "Using OpenTelemetry" guide. 1.2.4.2. 
quarkus-smallrye-metrics extension no longer supported With Red Hat build of Quarkus 3.8, the quarkus-smallrye-metrics extension is no longer supported. Now, it is available as a community extension only. Its use in production environments is discouraged. From Red Hat build of Quarkus 3.8, quarkus-smallrye-metrics is replaced by the fully supported quarkus-micrometer extension. 1.2.4.3. quarkus-smallrye-opentracing extension no longer supported With Red Hat build of Quarkus 3.8, SmallRye OpenTracing is no longer supported. To continue using distributed tracing, migrate your applications to SmallRye OpenTelemetry, which is now fully supported with this release and no longer a Technology Preview feature. If you still need to use quarkus-smallrye-opentracing , adjust your application to use the extensions from Quarkiverse by updating the groupId and specifying the version manually. 1.2.4.4. Refactoring of Scheduler and OpenTelemetry Tracing extensions In Red Hat build of Quarkus 3.8, integration of OpenTelemetry Tracing and the quarkus-scheduler extension has been refactored. Before this update, only @Scheduled methods had a new io.opentelemetry.api.trace.Span class, which is associated automatically when you enable tracing. That is, when the quarkus.scheduler.tracing.enabled configuration property is set to true , and the quarkus-opentelemetry extension is available. With this 3.8 release, all scheduled jobs, including those that are scheduled programmatically, have a Span associated automatically when tracing is enabled. The unique job identifier for each scheduled method is either generated, is specified by setting the io.quarkus.scheduler.Scheduled#identity attribute or with the JobDefinition method. Before this update, span names followed the <simpleclassname>.<methodName> format. For more information, see the following Quarkus resources: Scheduler reference Using OpenTelemetry 1.2.5. Security 1.2.5.1. Enhanced Security with mTLS and HTTP Restrictions When mTLS client authentication ( quarkus.http.ssl.client-auth ) is set to required , Red Hat build of Quarkus automatically disables plain HTTP ports to ensure that only secure HTTPS requests are accepted. To enable plain HTTP, configure quarkus.http.ssl.client-auth to request or set both quarkus.http.ssl.client-auth=required and quarkus.http.insecure-requests=enabled . 1.2.5.2. JWT extension removes unnecessary Reactive Routes dependency The JWT extension no longer transitively depends on the Reactive Routes extension. If your application uses both JWT and Reactive Routes features but does not declare an explicit dependency on Reactive Routes, you must add this dependency. 1.2.5.3. Keycloak Authorization dropped the keycloak-adapter-core dependency The quarkus-keycloak-authorization extension no longer includes the org.keycloak:keycloak-adapter-core dependency due to its update to Keycloak 22.0.0 and its irrelevance to the extension's functionality. In future Keycloak versions, it is planned to remove the Keycloak Java adapters code. If your application requires this dependency, manually add it to your project's pom.xml . 1.2.5.4. Using CDI interceptors to resolve OIDC tenants in RESTEasy Classic no longer supported You can no longer use Context and Dependency Injection (CDI) annotations and interceptors to resolve tenant OIDC configuration for RESTEasy Classic applications. 
Due to security checks that are enforced before CDI interceptors and checks requiring authentication are triggered, using CDI interceptors to resolve multiple OIDC provider configuration identifiers no longer works. Use @Tenant annotation or custom io.quarkus.oidc.TenantResolver instead. For more information, see the Resolve with annotations section of the Quarkus "Using OIDC multitenancy guide". 1.2.5.5. Using OIDC @Tenant annotation to bind OIDC features to tenants no longer possible In Red Hat build of Quarkus 3.8, you must now use the quarkus.oidc.TenantFeature annotation instead of quarkus.oidc.Tenant to bind OpenID Connect (OIDC) features to OIDC tenants. The quarkus.oidc.Tenant annotation is now used for resolving tenant configuration. 1.2.5.6. Security profile flexibility enhancement Red Hat build of Quarkus 3.8 allows runtime configuration of HTTP permissions and roles, enabling flexible security settings across profiles. This resolves the issue of native executables locking to build-time security configurations. Security can now be dynamically adjusted per profile, applicable in both JVM and native modes. 1.2.6. Standards 1.2.6.1. Correction in GraphQL directive application The application of annotation-based GraphQL directives has been corrected to ensure they are only applied to the schema element types for which they are declared. For example, if a directive was declared to apply to the GraphQL element type FIELD but was erroneously applied to a different element type, it was still visible in the schema on the element where it should not be applicable, leading to an invalid schema. This was now corrected, and directives have their usage checked against their applicability declaration. If you had directives applied incorrectly in this way, they will no longer appear in the schema, and Red Hat build of Quarkus 3.8 will log a warning during the build. 1.2.7. OpenAPI standardizes content type defaults for POJOs and primitives This change has standardized the default content type for generating OpenAPI documentation when a @ContentType annotation is not provided. Previously, the default content type varied across different extensions, such as RestEasy Reactive, RestEasy Classic, Spring Web, and OpenAPI. For instance, OpenAPI always used JSON as the default, whereas RestEasy used JSON for object types and text for primitive types. Now, all extensions have adopted uniform default settings, ensuring consistency: Primitive types are now uniformly set to text/plain . Complex POJO (Plain Old Java Object) types default to application/json . This unification ensures that while the behavior across extensions is consistent, it differentiates appropriately based on the type of data, with primitives using text/plain and POJOs using application/json . This approach does not imply that the same content type is used for all Java types but rather that all extensions now handle content types in the same manner, tailored to the nature of the data. 1.2.8. Web 1.2.8.1. Improved SSE handling in REST Client Red Hat build of Quarkus 3.8 has enhanced its REST Client's Server-Sent Events (SSE) capabilities, enabling complete event returns and filtering. These updates and new descriptions in REST Client provide developers with increased control and flexibility in managing real-time data streams. 1.2.8.2. 
Manual addition of the Reactive Routes dependency Until version 3.8, the Red Hat build of Quarkus SmallRye JWT automatically incorporated quarkus-reactive-routes , a feature discontinued from version 3.8 onwards. To ensure continued functionality, manually add quarkus-reactive-routes as a dependency in your build configuration. 1.3. Additional resources Release notes for Red Hat build of Quarkus version 3.2
|
[
"quarkus update",
"mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.8.6.SP3-redhat-00002:update",
"<dependency> <groupId>org.flywaydb</groupId> <artifactId>flyway-database-oracle</artifactId> </dependency>"
] |
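The following sketch strings the documented update steps together as shell commands. It assumes a Git working branch and the Maven wrapper ( ./mvnw ); substitute your own version control system and build invocation, and treat the branch name and stream value as illustrative.

git checkout -b quarkus-3.8-migration
quarkus update --stream=3.8
git diff
./mvnw clean verify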
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/migrating_applications_to_red_hat_build_of_quarkus_3.8/assembly_migrating-to-quarkus-3_quarkus-migration
|
Appendix A. Using your subscription
|
Appendix A. Using your subscription Streams for Apache Kafka is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. Accessing Your Account Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. Activating a Subscription Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. Downloading Zip and Tar Files To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required. Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Streams for Apache Kafka entries in the INTEGRATION AND AUTOMATION category. Select the desired Streams for Apache Kafka product. The Software Downloads page opens. Click the Download link for your component. Installing packages with DNF To install a package and all the package dependencies, use: dnf install <package_name> To install a previously-downloaded package from a local directory, use: dnf install <path_to_download_package> Revised on 2025-03-05 17:05:35 UTC
|
[
"dnf install <package_name>",
"dnf install <path_to_download_package>"
] |
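If you downloaded a zip or tar file from the customer portal, extract it before use; the file names below are placeholders in the same style as the dnf examples above.

unzip <downloaded_file>.zip
tar xzf <downloaded_file>.tar.gz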
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/using_your_subscription
|
Chapter 6. Creating Ansible playbooks with the all-in-one Red Hat OpenStack Platform environment
|
Chapter 6. Creating Ansible playbooks with the all-in-one Red Hat OpenStack Platform environment The deployment command applies Ansible playbooks to the environment automatically. However, you can modify the deployment command to generate Ansible playbooks without applying them to the deployment, and run the playbooks later. Include the --output-only option in the deploy command to generate the standalone-ansible-XXXXX directory. This directory contains a set of Ansible playbooks that you can run on other hosts. To generate the Ansible playbook directory, run the deploy command with the option --output-only : To run the Ansible playbooks, run the ansible-playbook command, and include the inventory.yaml file and the deploy_steps_playbook.yaml file:
|
[
"[stack@all-in-one]USD sudo openstack tripleo deploy --templates --local-ip=USDIP/USDNETMASK -e /usr/share/openstack-tripleo-heat-templates/environments/standalone/standalone-tripleo.yaml -r /usr/share/openstack-tripleo-heat-templates/roles/Standalone.yaml -e USDHOME/containers-prepare-parameters.yaml -e USDHOME/standalone_parameters.yaml --output-dir USDHOME --standalone --output-only",
"[stack@all-in-one]USD cd standalone-ansible-XXXXX [stack@all-in-one]USD sudo ansible-playbook -i inventory.yaml deploy_steps_playbook.yaml"
] |
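Before applying the generated playbooks, you can optionally sanity-check them with standard ansible-playbook options; the XXXXX suffix of the directory name varies per run.

cd standalone-ansible-XXXXX
ansible-playbook -i inventory.yaml deploy_steps_playbook.yaml --syntax-check
ansible-playbook -i inventory.yaml deploy_steps_playbook.yaml --list-hosts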
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/standalone_deployment_guide/creating-ansible-playbooks
|
18.11. Security Policy
|
18.11. Security Policy The Security Policy spoke allows you to configure the installed system following restrictions and recommendations ( compliance policies ) defined by the Security Content Automation Protocol (SCAP) standard. This functionality is provided by an add-on which has been enabled by default since Red Hat Enterprise Linux 7.2. When enabled, the packages necessary to provide this functionality will automatically be installed. However, by default, no policies are enforced, meaning that no checks are performed during or after installation unless specifically configured. The Red Hat Enterprise Linux 7 Security Guide provides detailed information about security compliance including background information, practical examples, and additional resources. Important Applying a security policy is not necessary on all systems. This screen should only be used when a specific policy is mandated by your organization rules or government regulations. If you apply a security policy to the system, it will be installed using restrictions and recommendations defined in the selected profile. The openscap-scanner package will also be added to your package selection, providing a preinstalled tool for compliance and vulnerability scanning. After the installation finishes, the system will be automatically scanned to verify compliance. The results of this scan will be saved to the /root/openscap_data directory on the installed system. Pre-defined policies which are available in this screen are provided by SCAP Security Guide . See the OpenSCAP Portal for links to detailed information about each available profile. You can also load additional profiles from an HTTP, HTTPS or FTP server. Figure 18.7. Security policy selection screen To configure the use of security policies on the system, first enable configuration by setting the Apply security policy switch to ON . If the switch is in the OFF position, controls in the rest of this screen have no effect. After enabling security policy configuration using the switch, select one of the profiles listed in the top window of the screen, and click the Select profile below. When a profile is selected, a green check mark will appear on the right side, and the bottom field will display whether any changes will be made before beginning the installation. Note None of the profiles available by default perform any changes before the installation begins. However, loading a custom profile as described below can require some pre-installation actions. To use a custom profile, click the Change content button in the top left corner. This will open another screen where you can enter an URL of a valid security content. To go back to the default security content selection screen, click Use SCAP Security Guide in the top left corner. Custom profiles can be loaded from an HTTP , HTTPS or FTP server. Use the full address of the content, including the protocol (such as http:// ). A network connection must be active (enabled in Section 18.13, "Network & Hostname" ) before you can load a custom profile. The content type will be detected automatically by the installer. After you select a profile, or if you want to leave the screen, click Done in the top left corner to return to Section 18.7, "The Installation Summary Screen" .
| null |
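To re-check compliance after installation, you can run the preinstalled scanner manually. The data-stream path and profile ID below are typical for the scap-security-guide package on Red Hat Enterprise Linux 7 but depend on the installed content version, so treat them as illustrative.

oscap xccdf eval \
  --profile xccdf_org.ssgproject.content_profile_pci-dss \
  --results /root/openscap_data/rescan-results.xml \
  /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml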
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-security-policy-s390
|
Chapter 1. Introduction to Control Groups (Cgroups)
|
Chapter 1. Introduction to Control Groups (Cgroups) Red Hat Enterprise Linux 6 provides a new kernel feature: control groups , which are called by their shorter name cgroups in this guide. Cgroups allow you to allocate resources - such as CPU time, system memory, network bandwidth, or combinations of these resources - among user-defined groups of tasks (processes) running on a system. You can monitor the cgroups you configure, deny cgroups access to certain resources, and even reconfigure your cgroups dynamically on a running system. The cgconfig ( control group config ) service can be configured to start up at boot time and reestablish your predefined cgroups, thus making them persistent across reboots. By using cgroups, system administrators gain fine-grained control over allocating, prioritizing, denying, managing, and monitoring system resources. Hardware resources can be appropriately divided up among tasks and users, increasing overall efficiency. 1.1. How Control Groups Are Organized Cgroups are organized hierarchically, like processes, and child cgroups inherit some of the attributes of their parents. However, there are differences between the two models. The Linux Process Model All processes on a Linux system are child processes of a common parent: the init process, which is executed by the kernel at boot time and starts other processes (which may in turn start child processes of their own). Because all processes descend from a single parent, the Linux process model is a single hierarchy, or tree. Additionally, every Linux process except init inherits the environment (such as the PATH variable) [1] and certain other attributes (such as open file descriptors) of its parent process. The Cgroup Model Cgroups are similar to processes in that: they are hierarchical, and child cgroups inherit certain attributes from their parent cgroup. The fundamental difference is that many different hierarchies of cgroups can exist simultaneously on a system. If the Linux process model is a single tree of processes, then the cgroup model is one or more separate, unconnected trees of tasks (i.e. processes). Multiple separate hierarchies of cgroups are necessary because each hierarchy is attached to one or more subsystems . A subsystem [2] represents a single resource, such as CPU time or memory. Red Hat Enterprise Linux 6 provides ten cgroup subsystems, listed below by name and function. Available Subsystems in Red Hat Enterprise Linux blkio - this subsystem sets limits on input/output access to and from block devices such as physical drives (disk, solid state, or USB). cpu - this subsystem uses the scheduler to provide cgroup tasks access to the CPU. cpuacct - this subsystem generates automatic reports on CPU resources used by tasks in a cgroup. cpuset - this subsystem assigns individual CPUs (on a multicore system) and memory nodes to tasks in a cgroup. devices - this subsystem allows or denies access to devices by tasks in a cgroup. freezer - this subsystem suspends or resumes tasks in a cgroup. memory - this subsystem sets limits on memory use by tasks in a cgroup and generates automatic reports on memory resources used by those tasks. net_cls - this subsystem tags network packets with a class identifier (classid) that allows the Linux traffic controller ( tc ) to identify packets originating from a particular cgroup task. net_prio - this subsystem provides a way to dynamically set the priority of network traffic per network interface. ns - the namespace subsystem. 
perf_event - this subsystem identifies cgroup membership of tasks and can be used for performance analysis. Note You may come across the term resource controller or simply controller in cgroup literature such as the man pages or kernel documentation. Both of these terms are synonymous with " subsystem " and arise from the fact that a subsystem typically schedules a resource or applies a limit to the cgroups in the hierarchy it is attached to. The definition of a subsystem (resource controller) is quite general: it is something that acts upon a group of tasks, i.e. processes. [1] The parent process is able to alter the environment before passing it to a child process. [2] You should be aware that subsystems are also called resource controllers , or simply controllers , in the libcgroup man pages and other documentation.
| null |
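To see which of these subsystems the running kernel provides, and where each is mounted, you can use the following read-only checks; lssubsys requires the libcgroup package to be installed.

cat /proc/cgroups
lssubsys -am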
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/resource_management_guide/ch01
|
10.2. SELinux and journald
|
10.2. SELinux and journald In systemd , the journald daemon (also known as systemd-journal ) is the alternative for the syslog utility, which is a system service that collects and stores logging data. It creates and maintains structured and indexed journals based on logging information that is received from the kernel, from user processes using the libc syslog() function, from standard and error output of system services, or using its native API. It implicitly collects numerous metadata fields for each log message in a secure way. The systemd-journal service can be used with SELinux to increase security. SELinux controls processes by only allowing them to do what they were designed to do; sometimes even less, depending on the security goals of the policy writer. For example, SELinux prevents a compromised ntpd process from doing anything other than handle Network Time. However, the ntpd process sends syslog messages, so that SELinux would allow the compromised process to continue to send those messages. The compromised ntpd could format syslog messages to match other daemons and potentially mislead an administrator, or even worse, a utility that reads the syslog file into compromising the whole system. The systemd-journal daemon verifies all log messages and, among other things, adds SELinux labels to them. It is then easy to detect inconsistencies in log messages and prevent an attack of this type before it occurs. You can use the journalctl utility to query logs of systemd journals. If no command-line arguments are specified, running this utility lists the full content of the journal, starting from the oldest entries. To see all logs generated on the system, including logs for system components, execute journalctl as root. If you execute it as a non-root user, the output will be limited only to logs related to the currently logged-in user. Example 10.2. Listing Logs with journalctl It is possible to use journalctl for listing all logs related to a particular SELinux label. For example, the following command lists all logs logged under the system_u:system_r:policykit_t:s0 label: For more information about journalctl , see the journalctl (1) manual page.
|
[
"~]# journalctl _SELINUX_CONTEXT=system_u:system_r:policykit_t:s0 Oct 21 10:22:42 localhost.localdomain polkitd[647]: Started polkitd version 0.112 Oct 21 10:22:44 localhost.localdomain polkitd[647]: Loading rules from directory /etc/polkit-1/rules.d Oct 21 10:22:44 localhost.localdomain polkitd[647]: Loading rules from directory /usr/share/polkit-1/rules.d Oct 21 10:22:44 localhost.localdomain polkitd[647]: Finished loading, compiling and executing 5 rules Oct 21 10:22:44 localhost.localdomain polkitd[647]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Oct 21 10:23:10 localhost polkitd[647]: Registered Authentication Agent for unix-session:c1 (system bus name :1.49, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus) Oct 21 10:23:35 localhost polkitd[647]: Unregistered Authentication Agent for unix-session:c1 (system bus name :1.80 [/usr/bin/gnome-shell --mode=classic], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.utf8)"
] |
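Tying this to the ntpd example above, you can show only messages produced by processes running in the ntpd domain since the last boot by filtering on the corresponding SELinux label; the label shown is the default ntpd context on Red Hat Enterprise Linux 7.

journalctl -b _SELINUX_CONTEXT=system_u:system_r:ntpd_t:s0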
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sec-systemd_access_control-journald
|
Preface
|
Preface Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. To provide feedback, highlight text in a document and add comments. Prerequisites You are logged in to the Red Hat Customer Portal. In the Red Hat Customer Portal, the document is in the HTML viewing format. Procedure To provide your feedback, perform the following steps: Click the Feedback button in the top-right corner of the document to see existing feedback. Note The feedback feature is enabled only in the HTML format. Highlight the section of the document where you want to provide feedback. Click the Add Feedback pop-up that appears near the highlighted text. A text box appears in the feedback section on the right side of the page. Enter your feedback in the text box and click Submit . A documentation issue is created. To view the issue, click the issue link in the feedback view.
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_debezium/2.7.3/html/installing_debezium_on_openshift/pr01
|
Chapter 8. Monitoring your cluster using JMX
|
Chapter 8. Monitoring your cluster using JMX ZooKeeper, the Kafka broker, Kafka Connect, and the Kafka clients all expose management information using Java Management Extensions (JMX). Most management information is in the form of metrics that are useful for monitoring the condition and performance of your Kafka cluster. Like other Java applications, Kafka provides this management information through managed beans or MBeans. JMX works at the level of the JVM (Java Virtual Machine). To obtain management information, external tools can connect to the JVM that is running ZooKeeper, the Kafka broker, and so on. By default, only tools on the same machine and running as the same user as the JVM are able to connect. Note Management information for ZooKeeper is not documented here. You can view ZooKeeper metrics in JConsole. For more information, see Monitoring using JConsole . 8.1. JMX configuration options You configure JMX using JVM system properties. The scripts provided with AMQ Streams ( bin/kafka-server-start.sh and bin/connect-distributed.sh , and so on) use the KAFKA_JMX_OPTS environment variable to set these system properties. The system properties for configuring JMX are the same, even though Kafka producer, consumer, and streams applications typically start the JVM in different ways. 8.2. Disabling the JMX agent You can prevent local JMX tools from connecting to the JVM (for example, for compliance reasons) by disabling the JMX agent for an AMQ Streams component. The following procedure explains how to disable the JMX agent for a Kafka broker. Procedure Use the KAFKA_JMX_OPTS environment variable to set com.sun.management.jmxremote to false . export KAFKA_JMX_OPTS=-Dcom.sun.management.jmxremote=false bin/kafka-server-start.sh Start the JVM. 8.3. Connecting to the JVM from a different machine You can connect to the JVM from a different machine by configuring the port that the JMX agent listens on. This is insecure because it allows JMX tools to connect from anywhere, with no authentication. Procedure Use the KAFKA_JMX_OPTS environment variable to set -Dcom.sun.management.jmxremote.port= <port> . For <port> , enter the name of the port on which you want the Kafka broker to listen for JMX connections. export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.port= <port> -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false" bin/kafka-server-start.sh Start the JVM. Important It is recommended that you configure authentication and SSL to ensure that the remote JMX connection is secure. For more information about the system properties needed to do this, see the JMX documentation . 8.4. Monitoring using JConsole The JConsole tool is distributed with the Java Development Kit (JDK). You can use JConsole to connect to a local or remote JVM and discover and display management information from Java applications. If you are using JConsole to connect to a local JVM, the following table lists the names of the JVM processes that correspond to the different components of AMQ Streams. Table 8.1. JVM processes for AMQ Streams components AMQ Streams component JVM process ZooKeeper org.apache.zookeeper.server.quorum.QuorumPeerMain Kafka broker kafka.Kafka Kafka Connect standalone org.apache.kafka.connect.cli.ConnectStandalone Kafka Connect distributed org.apache.kafka.connect.cli.ConnectDistributed A Kafka producer, consumer, or Streams application The name of the class containing the main method for the application.
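For a local connection, one simple approach (not specific to AMQ Streams) is to locate the broker JVM by the main class shown in the table and attach JConsole to its process ID; both jps and jconsole ship with the JDK, and <pid> stands for whatever process ID jps reports.

jps -l | grep kafka.Kafka
jconsole <pid>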
When using JConsole to connect to a remote JVM, use the appropriate host name and JMX port. Many other tools and monitoring products can be used to fetch the metrics using JMX and provide monitoring and alerting based on those metrics. Refer to the product documentation for those tools. 8.5. Important Kafka broker metrics Kafka provides many MBeans for monitoring the performance of the brokers in your Kafka cluster. These apply to an individual broker rather than the entire cluster. The following tables present a selection of these broker-level MBeans organized into server, network, logging, and controller metrics. 8.5.1. Kafka server metrics The following table shows a selection of metrics that report information about the Kafka server. Table 8.2. Metrics for the Kafka server Metric MBean Description Expected value Messages in per second kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec The rate at which individual messages are consumed by the broker. Approximately the same as the other brokers in the cluster. Bytes in per second kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec The rate at which data sent from producers is consumed by the broker. Approximately the same as the other brokers in the cluster. Replication bytes in per second kafka.server:type=BrokerTopicMetrics,name=ReplicationBytesInPerSec The rate at which data sent from other brokers is consumed by the follower broker. N/A Bytes out per second kafka.server:type=BrokerTopicMetrics,name=BytesOutPerSec The rate at which data is fetched and read from the broker by consumers. N/A Replication bytes out per second kafka.server:type=BrokerTopicMetrics,name=ReplicationBytesOutPerSec The rate at which data is sent from the broker to other brokers. This metric is useful to monitor if the broker is a leader for a group of partitions. N/A Under-replicated partitions kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions The number of partitions that have not been fully replicated in the follower replicas. Zero Under minimum ISR partition count kafka.server:type=ReplicaManager,name=UnderMinIsrPartitionCount The number of partitions under the minimum In-Sync Replica (ISR) count. The ISR count indicates the set of replicas that are up-to-date with the leader. Zero Partition count kafka.server:type=ReplicaManager,name=PartitionCount The number of partitions in the broker. Approximately even when compared with the other brokers. Leader count kafka.server:type=ReplicaManager,name=LeaderCount The number of replicas for which this broker is the leader. Approximately the same as the other brokers in the cluster. ISR shrinks per second kafka.server:type=ReplicaManager,name=IsrShrinksPerSec The rate at which the number of ISRs in the broker decreases Zero ISR expands per second kafka.server:type=ReplicaManager,name=IsrExpandsPerSec The rate at which the number of ISRs in the broker increases. Zero Maximum lag kafka.server:type=ReplicaFetcherManager,name=MaxLag,clientId=Replica The maximum lag between the time that messages are received by the leader replica and by the follower replicas. Proportional to the maximum batch size of a produce request. Requests in producer purgatory kafka.server:type=DelayedOperationPurgatory,name=PurgatorySize,delayedOperation=Produce The number of send requests in the producer purgatory. N/A Requests in fetch purgatory kafka.server:type=DelayedOperationPurgatory,name=PurgatorySize,delayedOperation=Fetch The number of fetch requests in the fetch purgatory. 
N/A Request handler average idle percent kafka.server:type=KafkaRequestHandlerPool,name=RequestHandlerAvgIdlePercent Indicates the percentage of time that the request handler (IO) threads are not in use. A lower value indicates that the workload of the broker is high. Request (Requests exempt from throttling) kafka.server:type=Request The number of requests that are exempt from throttling. N/A ZooKeeper request latency in milliseconds kafka.server:type=ZooKeeperClientMetrics,name=ZooKeeperRequestLatencyMs The latency for ZooKeeper requests from the broker, in milliseconds. N/A ZooKeeper session state kafka.server:type=SessionExpireListener,name=SessionState The status of the broker's connection to ZooKeeper. CONNECTED 8.5.2. Kafka network metrics The following table shows a selection of metrics that report information about requests. Metric MBean Description Expected value Requests per second kafka.network:type=RequestMetrics,name=RequestsPerSec,request={Produce|FetchConsumer|FetchFollower} The total number of requests made for the request type per second. The Produce , FetchConsumer , and FetchFollower request types each have their own MBeans. N/A Request bytes (request size in bytes) kafka.network:type=RequestMetrics,name=RequestBytes,request=([-.\w]+) The size of requests, in bytes, made for the request type identified by the request property of the MBean name. Separate MBeans for all available request types are listed under the RequestBytes node. N/A Temporary memory size in bytes kafka.network:type=RequestMetrics,name=TemporaryMemoryBytes,request={Produce|Fetch} The amount of temporary memory used for converting message formats and decompressing messages. N/A Message conversions time kafka.network:type=RequestMetrics,name=MessageConversionsTimeMs,request={Produce|Fetch} Time, in milliseconds, spent on converting message formats. N/A Total request time in milliseconds kafka.network:type=RequestMetrics,name=TotalTimeMs,request={Produce|FetchConsumer|FetchFollower} Total time, in milliseconds, spent processing requests. N/A Request queue time in milliseconds kafka.network:type=RequestMetrics,name=RequestQueueTimeMs,request={Produce|FetchConsumer|FetchFollower} The time, in milliseconds, that a request currently spends in the queue for the request type given in the request property. N/A Local time (leader local processing time) in milliseconds kafka.network:type=RequestMetrics,name=LocalTimeMs,request={Produce|FetchConsumer|FetchFollower} The time taken, in milliseconds, for the leader to process the request. N/A Remote time (leader remote processing time) in milliseconds kafka.network:type=RequestMetrics,name=RemoteTimeMs,request={Produce|FetchConsumer|FetchFollower} The length of time, in milliseconds, that the request waits for the follower. Separate MBeans for all available request types are listed under the RemoteTimeMs node. N/A Response queue time in milliseconds kafka.network:type=RequestMetrics,name=ResponseQueueTimeMs,request={Produce|FetchConsumer|FetchFollower} The length of time, in milliseconds, that the request waits in the response queue. N/A Response send time in milliseconds kafka.network:type=RequestMetrics,name=ResponseSendTimeMs,request={Produce|FetchConsumer|FetchFollower} The time taken, in milliseconds, to send the response. N/A Network processor average idle percent kafka.network:type=SocketServer,name=NetworkProcessorAvgIdlePercent The average percentage of time that the network processors are idle. Between zero and one. 8.5.3. 
Kafka log metrics The following table shows a selection of metrics that report information about logging. Metric MBean Description Expected Value Log flush rate and time in milliseconds kafka.log:type=LogFlushStats,name=LogFlushRateAndTimeMs The rate at which log data is written to disk, in milliseconds. N/A Offline log directory count kafka.log:type=LogManager,name=OfflineLogDirectoryCount The number of offline log directories (for example, after a hardware failure). Zero 8.5.4. Kafka controller metrics The following table shows a selection of metrics that report information about the controller of the cluster. Metric MBean Description Expected Value Active controller count kafka.controller:type=KafkaController,name=ActiveControllerCount The number of brokers designated as controllers. One indicates that the broker is the controller for the cluster. Leader election rate and time in milliseconds kafka.controller:type=ControllerStats,name=LeaderElectionRateAndTimeMs The rate at which new leader replicas are elected. Zero 8.5.5. Yammer metrics Metrics that express a rate or unit of time are provided as Yammer metrics. The class name of an MBean that uses Yammer metrics is prefixed with com.yammer.metrics . Yammer rate metrics have the following attributes for monitoring requests: Count EventType (Bytes) FifteenMinuteRate RateUnit (Seconds) MeanRate OneMinuteRate FiveMinuteRate Yammer time metrics have the following attributes for monitoring requests: Max Min Mean StdDev 75/95/98/99/99.9 th Percentile 8.6. Producer MBeans The following MBeans will exist in Kafka producer applications, including Kafka Streams applications and Kafka Connect with source connectors. 8.6.1. MBeans matching kafka.producer:type=producer-metrics,client-id=* These are metrics at the producer level. Attribute Description batch-size-avg The average number of bytes sent per partition per-request. batch-size-max The max number of bytes sent per partition per-request. batch-split-rate The average number of batch splits per second. batch-split-total The total number of batch splits. buffer-available-bytes The total amount of buffer memory that is not being used (either unallocated or in the free list). buffer-total-bytes The maximum amount of buffer memory the client can use (whether or not it is currently used). bufferpool-wait-time The fraction of time an appender waits for space allocation. compression-rate-avg The average compression rate of record batches. connection-close-rate Connections closed per second in the window. connection-count The current number of active connections. connection-creation-rate New connections established per second in the window. failed-authentication-rate Connections that failed authentication. incoming-byte-rate Bytes/second read off all sockets. io-ratio The fraction of time the I/O thread spent doing I/O. io-time-ns-avg The average length of time for I/O per select call in nanoseconds. io-wait-ratio The fraction of time the I/O thread spent waiting. io-wait-time-ns-avg The average length of time the I/O thread spent waiting for a socket ready for reads or writes in nanoseconds. metadata-age The age in seconds of the current producer metadata being used. network-io-rate The average number of network operations (reads or writes) on all connections per second. outgoing-byte-rate The average number of outgoing bytes sent per second to all servers. produce-throttle-time-avg The average time in ms a request was throttled by a broker. 
produce-throttle-time-max The maximum time in ms a request was throttled by a broker. record-error-rate The average per-second number of record sends that resulted in errors. record-error-total The total number of record sends that resulted in errors. record-queue-time-avg The average time in ms record batches spent in the send buffer. record-queue-time-max The maximum time in ms record batches spent in the send buffer. record-retry-rate The average per-second number of retried record sends. record-retry-total The total number of retried record sends. record-send-rate The average number of records sent per second. record-send-total The total number of records sent. record-size-avg The average record size. record-size-max The maximum record size. records-per-request-avg The average number of records per request. request-latency-avg The average request latency in ms. request-latency-max The maximum request latency in ms. request-rate The average number of requests sent per second. request-size-avg The average size of all requests in the window. request-size-max The maximum size of any request sent in the window. requests-in-flight The current number of in-flight requests awaiting a response. response-rate Responses received sent per second. select-rate Number of times the I/O layer checked for new I/O to perform per second. successful-authentication-rate Connections that were successfully authenticated using SASL or SSL. waiting-threads The number of user threads blocked waiting for buffer memory to enqueue their records. 8.6.2. MBeans matching kafka.producer:type=producer-metrics,client-id=*,node-id=* These are metrics at the producer level about connection to each broker. Attribute Description incoming-byte-rate The average number of responses received per second for a node. outgoing-byte-rate The average number of outgoing bytes sent per second for a node. request-latency-avg The average request latency in ms for a node. request-latency-max The maximum request latency in ms for a node. request-rate The average number of requests sent per second for a node. request-size-avg The average size of all requests in the window for a node. request-size-max The maximum size of any request sent in the window for a node. response-rate Responses received sent per second for a node. 8.6.3. MBeans matching kafka.producer:type=producer-topic-metrics,client-id=*,topic=* These are metrics at the topic level about topics the producer is sending messages to. Attribute Description byte-rate The average number of bytes sent per second for a topic. byte-total The total number of bytes sent for a topic. compression-rate The average compression rate of record batches for a topic. record-error-rate The average per-second number of record sends that resulted in errors for a topic. record-error-total The total number of record sends that resulted in errors for a topic. record-retry-rate The average per-second number of retried record sends for a topic. record-retry-total The total number of retried record sends for a topic. record-send-rate The average number of records sent per second for a topic. record-send-total The total number of records sent for a topic. 8.7. Consumer MBeans The following MBeans will exist in Kafka consumer applications, including Kafka Streams applications and Kafka Connect with sink connectors. 8.7.1. MBeans matching kafka.consumer:type=consumer-metrics,client-id=* These are metrics at the consumer level. 
Attribute Description connection-close-rate Connections closed per second in the window. connection-count The current number of active connections. connection-creation-rate New connections established per second in the window. failed-authentication-rate Connections that failed authentication. incoming-byte-rate Bytes/second read off all sockets. io-ratio The fraction of time the I/O thread spent doing I/O. io-time-ns-avg The average length of time for I/O per select call in nanoseconds. io-wait-ratio The fraction of time the I/O thread spent waiting. io-wait-time-ns-avg The average length of time the I/O thread spent waiting for a socket ready for reads or writes in nanoseconds. network-io-rate The average number of network operations (reads or writes) on all connections per second. outgoing-byte-rate The average number of outgoing bytes sent per second to all servers. request-rate The average number of requests sent per second. request-size-avg The average size of all requests in the window. request-size-max The maximum size of any request sent in the window. response-rate Responses received sent per second. select-rate Number of times the I/O layer checked for new I/O to perform per second. successful-authentication-rate Connections that were successfully authenticated using SASL or SSL. 8.7.2. MBeans matching kafka.consumer:type=consumer-metrics,client-id=*,node-id=* These are metrics at the consumer level about connection to each broker. Attribute Description incoming-byte-rate The average number of responses received per second for a node. outgoing-byte-rate The average number of outgoing bytes sent per second for a node. request-latency-avg The average request latency in ms for a node. request-latency-max The maximum request latency in ms for a node. request-rate The average number of requests sent per second for a node. request-size-avg The average size of all requests in the window for a node. request-size-max The maximum size of any request sent in the window for a node. response-rate Responses received sent per second for a node. 8.7.3. MBeans matching kafka.consumer:type=consumer-coordinator-metrics,client-id=* These are metrics at the consumer level about the consumer group. Attribute Description assigned-partitions The number of partitions currently assigned to this consumer. commit-latency-avg The average time taken for a commit request. commit-latency-max The max time taken for a commit request. commit-rate The number of commit calls per second. heartbeat-rate The average number of heartbeats per second. heartbeat-response-time-max The max time taken to receive a response to a heartbeat request. join-rate The number of group joins per second. join-time-avg The average time taken for a group rejoin. join-time-max The max time taken for a group rejoin. last-heartbeat-seconds-ago The number of seconds since the last controller heartbeat. sync-rate The number of group syncs per second. sync-time-avg The average time taken for a group sync. sync-time-max The max time taken for a group sync. 8.7.4. MBeans matching kafka.consumer:type=consumer-fetch-manager-metrics,client-id=* These are metrics at the consumer level about the consumer's fetcher. Attribute Description bytes-consumed-rate The average number of bytes consumed per second. bytes-consumed-total The total number of bytes consumed. fetch-latency-avg The average time taken for a fetch request. fetch-latency-max The max time taken for any fetch request. fetch-rate The number of fetch requests per second. 
fetch-size-avg The average number of bytes fetched per request. fetch-size-max The maximum number of bytes fetched per request. fetch-throttle-time-avg The average throttle time in ms. fetch-throttle-time-max The maximum throttle time in ms. fetch-total The total number of fetch requests. records-consumed-rate The average number of records consumed per second. records-consumed-total The total number of records consumed. records-lag-max The maximum lag in terms of number of records for any partition in this window. records-lead-min The minimum lead in terms of number of records for any partition in this window. records-per-request-avg The average number of records in each request. 8.7.5. MBeans matching kafka.consumer:type=consumer-fetch-manager-metrics,client-id=*,topic=* These are metrics at the topic level about the consumer's fetcher. Attribute Description bytes-consumed-rate The average number of bytes consumed per second for a topic. bytes-consumed-total The total number of bytes consumed for a topic. fetch-size-avg The average number of bytes fetched per request for a topic. fetch-size-max The maximum number of bytes fetched per request for a topic. records-consumed-rate The average number of records consumed per second for a topic. records-consumed-total The total number of records consumed for a topic. records-per-request-avg The average number of records in each request for a topic. 8.7.6. MBeans matching kafka.consumer:type=consumer-fetch-manager-metrics,client-id=*,topic=*,partition=* These are metrics at the partition level about the consumer's fetcher. Attribute Description preferred-read-replica The current read replica for the partition, or -1 if reading from leader. records-lag The latest lag of the partition. records-lag-avg The average lag of the partition. records-lag-max The max lag of the partition. records-lead The latest lead of the partition. records-lead-avg The average lead of the partition. records-lead-min The min lead of the partition. 8.8. Kafka Connect MBeans Note Kafka Connect will contain the producer MBeans for source connectors and consumer MBeans for sink connectors in addition to those documented here. 8.8.1. MBeans matching kafka.connect:type=connect-metrics,client-id=* These are metrics at the connect level. Attribute Description connection-close-rate Connections closed per second in the window. connection-count The current number of active connections. connection-creation-rate New connections established per second in the window. failed-authentication-rate Connections that failed authentication. incoming-byte-rate Bytes/second read off all sockets. io-ratio The fraction of time the I/O thread spent doing I/O. io-time-ns-avg The average length of time for I/O per select call in nanoseconds. io-wait-ratio The fraction of time the I/O thread spent waiting. io-wait-time-ns-avg The average length of time the I/O thread spent waiting for a socket ready for reads or writes in nanoseconds. network-io-rate The average number of network operations (reads or writes) on all connections per second. outgoing-byte-rate The average number of outgoing bytes sent per second to all servers. request-rate The average number of requests sent per second. request-size-avg The average size of all requests in the window. request-size-max The maximum size of any request sent in the window. response-rate Responses received sent per second. select-rate Number of times the I/O layer checked for new I/O to perform per second. 
successful-authentication-rate Connections that were successfully authenticated using SASL or SSL. 8.8.2. MBeans matching kafka.connect:type=connect-metrics,client-id=*,node-id=* These are metrics at the connect level about connection to each broker. Attribute Description incoming-byte-rate The average number of responses received per second for a node. outgoing-byte-rate The average number of outgoing bytes sent per second for a node. request-latency-avg The average request latency in ms for a node. request-latency-max The maximum request latency in ms for a node. request-rate The average number of requests sent per second for a node. request-size-avg The average size of all requests in the window for a node. request-size-max The maximum size of any request sent in the window for a node. response-rate Responses received sent per second for a node. 8.8.3. MBeans matching kafka.connect:type=connect-worker-metrics These are metrics at the connect level. Attribute Description connector-count The number of connectors run in this worker. connector-startup-attempts-total The total number of connector startups that this worker has attempted. connector-startup-failure-percentage The average percentage of this worker's connectors starts that failed. connector-startup-failure-total The total number of connector starts that failed. connector-startup-success-percentage The average percentage of this worker's connectors starts that succeeded. connector-startup-success-total The total number of connector starts that succeeded. task-count The number of tasks run in this worker. task-startup-attempts-total The total number of task startups that this worker has attempted. task-startup-failure-percentage The average percentage of this worker's tasks starts that failed. task-startup-failure-total The total number of task starts that failed. task-startup-success-percentage The average percentage of this worker's tasks starts that succeeded. task-startup-success-total The total number of task starts that succeeded. 8.8.4. MBeans matching kafka.connect:type=connect-worker-rebalance-metrics Attribute Description completed-rebalances-total The total number of rebalances completed by this worker. connect-protocol The Connect protocol used by this cluster. epoch The epoch or generation number of this worker. leader-name The name of the group leader. rebalance-avg-time-ms The average time in milliseconds spent by this worker to rebalance. rebalance-max-time-ms The maximum time in milliseconds spent by this worker to rebalance. rebalancing Whether this worker is currently rebalancing. time-since-last-rebalance-ms The time in milliseconds since this worker completed the most recent rebalance. 8.8.5. MBeans matching kafka.connect:type=connector-metrics,connector=* Attribute Description connector-class The name of the connector class. connector-type The type of the connector. One of 'source' or 'sink'. connector-version The version of the connector class, as reported by the connector. status The status of the connector. One of 'unassigned', 'running', 'paused', 'failed', or 'destroyed'. 8.8.6. MBeans matching kafka.connect:type=connector-task-metrics,connector=*,task=* Attribute Description batch-size-avg The average size of the batches processed by the connector. batch-size-max The maximum size of the batches processed by the connector. offset-commit-avg-time-ms The average time in milliseconds taken by this task to commit offsets. 
offset-commit-failure-percentage The average percentage of this task's offset commit attempts that failed. offset-commit-max-time-ms The maximum time in milliseconds taken by this task to commit offsets. offset-commit-success-percentage The average percentage of this task's offset commit attempts that succeeded. pause-ratio The fraction of time this task has spent in the pause state. running-ratio The fraction of time this task has spent in the running state. status The status of the connector task. One of 'unassigned', 'running', 'paused', 'failed', or 'destroyed'. 8.8.7. MBeans matching kafka.connect:type=sink-task-metrics,connector=*,task=* Attribute Description offset-commit-completion-rate The average per-second number of offset commit completions that were completed successfully. offset-commit-completion-total The total number of offset commit completions that were completed successfully. offset-commit-seq-no The current sequence number for offset commits. offset-commit-skip-rate The average per-second number of offset commit completions that were received too late and skipped/ignored. offset-commit-skip-total The total number of offset commit completions that were received too late and skipped/ignored. partition-count The number of topic partitions assigned to this task belonging to the named sink connector in this worker. put-batch-avg-time-ms The average time taken by this task to put a batch of sinks records. put-batch-max-time-ms The maximum time taken by this task to put a batch of sinks records. sink-record-active-count The number of records that have been read from Kafka but not yet completely committed/flushed/acknowledged by the sink task. sink-record-active-count-avg The average number of records that have been read from Kafka but not yet completely committed/flushed/acknowledged by the sink task. sink-record-active-count-max The maximum number of records that have been read from Kafka but not yet completely committed/flushed/acknowledged by the sink task. sink-record-lag-max The maximum lag in terms of number of records that the sink task is behind the consumer's position for any topic partitions. sink-record-read-rate The average per-second number of records read from Kafka for this task belonging to the named sink connector in this worker. This is before transformations are applied. sink-record-read-total The total number of records read from Kafka by this task belonging to the named sink connector in this worker, since the task was last restarted. sink-record-send-rate The average per-second number of records output from the transformations and sent/put to this task belonging to the named sink connector in this worker. This is after transformations are applied and excludes any records filtered out by the transformations. sink-record-send-total The total number of records output from the transformations and sent/put to this task belonging to the named sink connector in this worker, since the task was last restarted. 8.8.8. MBeans matching kafka.connect:type=source-task-metrics,connector=*,task=* Attribute Description poll-batch-avg-time-ms The average time in milliseconds taken by this task to poll for a batch of source records. poll-batch-max-time-ms The maximum time in milliseconds taken by this task to poll for a batch of source records. source-record-active-count The number of records that have been produced by this task but not yet completely written to Kafka. 
source-record-active-count-avg The average number of records that have been produced by this task but not yet completely written to Kafka. source-record-active-count-max The maximum number of records that have been produced by this task but not yet completely written to Kafka. source-record-poll-rate The average per-second number of records produced/polled (before transformation) by this task belonging to the named source connector in this worker. source-record-poll-total The total number of records produced/polled (before transformation) by this task belonging to the named source connector in this worker. source-record-write-rate The average per-second number of records output from the transformations and written to Kafka for this task belonging to the named source connector in this worker. This is after transformations are applied and excludes any records filtered out by the transformations. source-record-write-total The number of records output from the transformations and written to Kafka for this task belonging to the named source connector in this worker, since the task was last restarted. 8.8.9. MBeans matching kafka.connect:type=task-error-metrics,connector=*,task=* Attribute Description deadletterqueue-produce-failures The number of failed writes to the dead letter queue. deadletterqueue-produce-requests The number of attempted writes to the dead letter queue. last-error-timestamp The epoch timestamp when this task last encountered an error. total-errors-logged The number of errors that were logged. total-record-errors The number of record processing errors in this task. total-record-failures The number of record processing failures in this task. total-records-skipped The number of records skipped due to errors. total-retries The number of operations retried. 8.9. Kafka Streams MBeans Note A Streams application will contain the producer and consumer MBeans in addition to those documented here. 8.9.1. MBeans matching kafka.streams:type=stream-metrics,client-id=* These metrics are collected when the metrics.recording.level configuration parameter is info or debug . Attribute Description commit-latency-avg The average execution time in ms for committing, across all running tasks of this thread. commit-latency-max The maximum execution time in ms for committing across all running tasks of this thread. commit-rate The average number of commits per second. commit-total The total number of commit calls across all tasks. poll-latency-avg The average execution time in ms for polling, across all running tasks of this thread. poll-latency-max The maximum execution time in ms for polling across all running tasks of this thread. poll-rate The average number of polls per second. poll-total The total number of poll calls across all tasks. process-latency-avg The average execution time in ms for processing, across all running tasks of this thread. process-latency-max The maximum execution time in ms for processing across all running tasks of this thread. process-rate The average number of process calls per second. process-total The total number of process calls across all tasks. punctuate-latency-avg The average execution time in ms for punctuating, across all running tasks of this thread. punctuate-latency-max The maximum execution time in ms for punctuating across all running tasks of this thread. punctuate-rate The average number of punctuates per second. punctuate-total The total number of punctuate calls across all tasks. skipped-records-rate The average number of skipped records per second. 
skipped-records-total The total number of skipped records. task-closed-rate The average number of tasks closed per second. task-closed-total The total number of tasks closed. task-created-rate The average number of newly created tasks per second. task-created-total The total number of tasks created. 8.9.2. MBeans matching kafka.streams:type=stream-task-metrics,client-id=*,task-id=* Task metrics. These metrics are collected when the metrics.recording.level configuration parameter is debug . Attribute Description commit-latency-avg The average commit time in ns for this task. commit-latency-max The maximum commit time in ns for this task. commit-rate The average number of commit calls per second. commit-total The total number of commit calls. 8.9.3. MBeans matching kafka.streams:type=stream-processor-node-metrics,client-id=*,task-id=*,processor-node-id=* Processor node metrics. These metrics are collected when the metrics.recording.level configuration parameter is debug . Attribute Description create-latency-avg The average create execution time in ns. create-latency-max The maximum create execution time in ns. create-rate The average number of create operations per second. create-total The total number of create operations called. destroy-latency-avg The average destroy execution time in ns. destroy-latency-max The maximum destroy execution time in ns. destroy-rate The average number of destroy operations per second. destroy-total The total number of destroy operations called. forward-rate The average rate of records being forwarded downstream, from source nodes only, per second. forward-total The total number of of records being forwarded downstream, from source nodes only. process-latency-avg The average process execution time in ns. process-latency-max The maximum process execution time in ns. process-rate The average number of process operations per second. process-total The total number of process operations called. punctuate-latency-avg The average punctuate execution time in ns. punctuate-latency-max The maximum punctuate execution time in ns. punctuate-rate The average number of punctuate operations per second. punctuate-total The total number of punctuate operations called. 8.9.4. MBeans matching kafka.streams:type=stream-[store-scope]-metrics,client-id=*,task-id=*,[store-scope]-id=* State store metrics. These metrics are collected when the metrics.recording.level configuration parameter is debug . Attribute Description all-latency-avg The average all operation execution time in ns. all-latency-max The maximum all operation execution time in ns. all-rate The average all operation rate for this store. all-total The total number of all operation calls for this store. delete-latency-avg The average delete execution time in ns. delete-latency-max The maximum delete execution time in ns. delete-rate The average delete rate for this store. delete-total The total number of delete calls for this store. flush-latency-avg The average flush execution time in ns. flush-latency-max The maximum flush execution time in ns. flush-rate The average flush rate for this store. flush-total The total number of flush calls for this store. get-latency-avg The average get execution time in ns. get-latency-max The maximum get execution time in ns. get-rate The average get rate for this store. get-total The total number of get calls for this store. put-all-latency-avg The average put-all execution time in ns. put-all-latency-max The maximum put-all execution time in ns. 
put-all-rate The average put-all rate for this store. put-all-total The total number of put-all calls for this store. put-if-absent-latency-avg The average put-if-absent execution time in ns. put-if-absent-latency-max The maximum put-if-absent execution time in ns. put-if-absent-rate The average put-if-absent rate for this store. put-if-absent-total The total number of put-if-absent calls for this store. put-latency-avg The average put execution time in ns. put-latency-max The maximum put execution time in ns. put-rate The average put rate for this store. put-total The total number of put calls for this store. range-latency-avg The average range execution time in ns. range-latency-max The maximum range execution time in ns. range-rate The average range rate for this store. range-total The total number of range calls for this store. restore-latency-avg The average restore execution time in ns. restore-latency-max The maximum restore execution time in ns. restore-rate The average restore rate for this store. restore-total The total number of restore calls for this store. 8.9.5. MBeans matching kafka.streams:type=stream-record-cache-metrics,client-id=*,task-id=*,record-cache-id=* Record cache metrics. These metrics are collected when the metrics.recording.level configuration parameter is debug . Attribute Description hitRatio-avg The average cache hit ratio defined as the ratio of cache read hits over the total cache read requests. hitRatio-max The maximum cache hit ratio. hitRatio-min The minimum cache hit ratio.
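The broker metrics described in this chapter can also be read without a graphical tool. As a rough sketch, assuming the remote JMX port from Section 8.3 has been opened on the broker, the JmxTool class shipped with Kafka can poll a single MBean from the command line; the host name, port, and reporting interval below are placeholders to adapt to your environment.
bin/kafka-run-class.sh kafka.tools.JmxTool \
  --jmx-url service:jmx:rmi:///jndi/rmi://<host>:<port>/jmxrmi \
  --object-name kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions \
  --reporting-interval 10000
A steadily non-zero value for this particular metric usually warrants investigation, as noted in Table 8.2.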
|
[
"export KAFKA_JMX_OPTS=-Dcom.sun.management.jmxremote=false bin/kafka-server-start.sh",
"export KAFKA_JMX_OPTS=\"-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.port= <port> -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false\" bin/kafka-server-start.sh"
] |
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_amq_streams_on_rhel/monitoring-str
|
Chapter 2. Projects
|
Chapter 2. Projects 2.1. Working with projects A project allows a community of users to organize and manage their content in isolation from other communities. Note Projects starting with openshift- and kube- are default projects . These projects host cluster components that run as pods and other infrastructure components. As such, OpenShift Container Platform does not allow you to create projects starting with openshift- or kube- using the oc new-project command. Cluster administrators can create these projects using the oc adm new-project command. Note You cannot assign an SCC to pods created in one of the default namespaces: default , kube-system , kube-public , openshift-node , openshift-infra , and openshift . You cannot use these namespaces for running pods or services. 2.1.1. Creating a project using the web console If allowed by your cluster administrator, you can create a new project. Note Projects starting with openshift- and kube- are considered critical by OpenShift Container Platform. As such, OpenShift Container Platform does not allow you to create Projects starting with openshift- using the web console. Note You cannot assign an SCC to pods created in one of the default namespaces: default , kube-system , kube-public , openshift-node , openshift-infra , and openshift . You cannot use these namespaces for running pods or services. Procedure Navigate to Home Projects . Click Create Project . Enter your project details. Click Create . 2.1.2. Creating a project using the Developer perspective in the web console You can use the Developer perspective in the OpenShift Container Platform web console to create a project in your cluster. Note Projects starting with openshift- and kube- are considered critical by OpenShift Container Platform. As such, OpenShift Container Platform does not allow you to create projects starting with openshift- or kube- using the Developer perspective. Cluster administrators can create these projects using the oc adm new-project command. Note You cannot assign an SCC to pods created in one of the default namespaces: default , kube-system , kube-public , openshift-node , openshift-infra , and openshift . You cannot use these namespaces for running pods or services. Prerequisites Ensure that you have the appropriate roles and permissions to create projects, applications, and other workloads in OpenShift Container Platform. Procedure You can create a project using the Developer perspective, as follows: Click the Project drop-down menu to see a list of all available projects. Select Create Project . Figure 2.1. Create project In the Create Project dialog box, enter a unique name, such as myproject , in the Name field. Optional: Add the Display Name and Description details for the project. Click Create . Use the left navigation panel to navigate to the Project view and see the dashboard for your project. Optional: Use the Project drop-down menu at the top of the screen and select all projects to list all of the projects in your cluster. Use the Details tab to see the project details. If you have adequate permissions for a project, you can use the Project Access tab to provide or revoke administrator , edit , and view privileges for the project. 2.1.3. Creating a project using the CLI If allowed by your cluster administrator, you can create a new project. Note Projects starting with openshift- and kube- are considered critical by OpenShift Container Platform. 
As such, OpenShift Container Platform does not allow you to create Projects starting with openshift- or kube- using the oc new-project command. Cluster administrators can create these Projects using the oc adm new-project command. Note You cannot assign an SCC to pods created in one of the default namespaces: default , kube-system , kube-public , openshift-node , openshift-infra , and openshift . You cannot use these namespaces for running pods or services. Procedure Run: USD oc new-project <project_name> \ --description="<description>" --display-name="<display_name>" For example: USD oc new-project hello-openshift \ --description="This is an example project" \ --display-name="Hello OpenShift" Note The number of projects you are allowed to create might be limited by the system administrator. After your limit is reached, you might have to delete an existing project in order to create a new one. 2.1.4. Viewing a project using the web console Procedure Navigate to Home Projects . Select a project to view. On this page, click Workloads to see workloads in the project. 2.1.5. Viewing a project using the CLI When viewing projects, you are restricted to seeing only the projects you have access to view based on the authorization policy. Procedure To view a list of projects, run: USD oc get projects You can change from the current project to a different project for CLI operations. The specified project is then used in all subsequent operations that manipulate project-scoped content: USD oc project <project_name> 2.1.6. Providing access permissions to your project using the Developer perspective You can use the Project view in the Developer perspective to grant or revoke access permissions to your project. Procedure To add users to your project and provide Admin , Edit , or View access to them: In the Developer perspective, navigate to the Project view. In the Project page, select the Project Access tab. Click Add Access to add a new row of permissions to the default ones. Figure 2.2. Project permissions Enter the user name, click the Select a role drop-down list, and select an appropriate role. Click Save to add the new permissions. You can also use: The Select a role drop-down list, to modify the access permissions of an existing user. The Remove Access icon, to completely remove the access permissions of an existing user to the project. Note Advanced role-based access control is managed in the Roles and Roles Binding views in the Administrator perspective. 2.1.7. Customizing the available cluster roles using the Developer perspective The users of a project are assigned to a cluster role based on their access control. You can access these cluster roles by navigating to the Project Project access Role . By default, these roles are Admin , Edit , and View . To add or edit the cluster roles for a project, you can customize the YAML code of the cluster. Procedure To customize the different cluster roles of a project: In the Search view, use the Resources drop-down list to search for Console . From the available options, select the Console operator.openshift.io/v1 . Figure 2.3. Searching Console resource Select cluster under the Name list. Navigate to the YAML tab to view and edit the YAML code. In the YAML code under spec , add or edit the list of availableClusterRoles and save your changes: spec: customization: projectAccess: availableClusterRoles: - admin - edit - view 2.1.8. Adding to a project Procedure Select Developer from the context selector at the top of the web console navigation menu. 
Click +Add At the top of the page, select the name of the project that you want to add to. Click a method for adding to your project, and then follow the workflow. Note You can also add components to the topology using quick search. 2.1.9. Checking project status using the web console Procedure Navigate to Home Projects . Select a project to see its status. 2.1.10. Checking project status using the CLI Procedure Run: USD oc status This command provides a high-level overview of the current project, with its components and their relationships. 2.1.11. Deleting a project using the web console You can delete a project by using the OpenShift Container Platform web console. Note If you do not have permissions to delete the project, the Delete Project option is not available. Procedure Navigate to Home Projects . Locate the project that you want to delete from the list of projects. On the far right side of the project listing, select Delete Project from the Options menu . When the Delete Project pane opens, enter the name of the project that you want to delete in the field. Click Delete . 2.1.12. Deleting a project using the CLI When you delete a project, the server updates the project status to Terminating from Active . Then, the server clears all content from a project that is in the Terminating state before finally removing the project. While a project is in Terminating status, you cannot add new content to the project. Projects can be deleted from the CLI or the web console. Procedure Run: USD oc delete project <project_name> 2.2. Creating a project as another user Impersonation allows you to create a project as a different user. 2.2.1. API impersonation You can configure a request to the OpenShift Container Platform API to act as though it originated from another user. For more information, see User impersonation in the Kubernetes documentation. 2.2.2. Impersonating a user when you create a project You can impersonate a different user when you create a project request. Because system:authenticated:oauth is the only bootstrap group that can create project requests, you must impersonate that group. Procedure To create a project request on behalf of a different user: USD oc new-project <project> --as=<user> \ --as-group=system:authenticated --as-group=system:authenticated:oauth 2.3. Configuring project creation In OpenShift Container Platform, projects are used to group and isolate related objects. When a request is made to create a new project using the web console or oc new-project command, an endpoint in OpenShift Container Platform is used to provision the project according to a template, which can be customized. As a cluster administrator, you can allow and configure how developers and service accounts can create, or self-provision , their own projects. 2.3.1. About project creation The OpenShift Container Platform API server automatically provisions new projects based on the project template that is identified by the projectRequestTemplate parameter in the cluster's project configuration resource. If the parameter is not defined, the API server creates a default template that creates a project with the requested name, and assigns the requesting user to the admin role for that project. When a project request is submitted, the API substitutes the following parameters into the template: Table 2.1. Default project template parameters Parameter Description PROJECT_NAME The name of the project. Required. PROJECT_DISPLAYNAME The display name of the project. May be empty. 
PROJECT_DESCRIPTION The description of the project. May be empty. PROJECT_ADMIN_USER The user name of the administrating user. PROJECT_REQUESTING_USER The user name of the requesting user. Access to the API is granted to developers with the self-provisioner role and the self-provisioners cluster role binding. This role is available to all authenticated developers by default. 2.3.2. Modifying the template for new projects As a cluster administrator, you can modify the default project template so that new projects are created using your custom requirements. To create your own custom project template: Procedure Log in as a user with cluster-admin privileges. Generate the default project template: USD oc adm create-bootstrap-project-template -o yaml > template.yaml Use a text editor to modify the generated template.yaml file by adding objects or modifying existing objects. The project template must be created in the openshift-config namespace. Load your modified template: USD oc create -f template.yaml -n openshift-config Edit the project configuration resource using the web console or CLI. Using the web console: Navigate to the Administration Cluster Settings page. Click Configuration to view all configuration resources. Find the entry for Project and click Edit YAML . Using the CLI: Edit the project.config.openshift.io/cluster resource: USD oc edit project.config.openshift.io/cluster Update the spec section to include the projectRequestTemplate and name parameters, and set the name of your uploaded project template. The default name is project-request . Project configuration resource with custom project template apiVersion: config.openshift.io/v1 kind: Project metadata: ... spec: projectRequestTemplate: name: <template_name> After you save your changes, create a new project to verify that your changes were successfully applied. 2.3.3. Disabling project self-provisioning You can prevent an authenticated user group from self-provisioning new projects. Procedure Log in as a user with cluster-admin privileges. View the self-provisioners cluster role binding usage by running the following command: USD oc describe clusterrolebinding.rbac self-provisioners Example output Name: self-provisioners Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate=true Role: Kind: ClusterRole Name: self-provisioner Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated:oauth Review the subjects in the self-provisioners section. Remove the self-provisioner cluster role from the group system:authenticated:oauth . If the self-provisioners cluster role binding binds only the self-provisioner role to the system:authenticated:oauth group, run the following command: USD oc patch clusterrolebinding.rbac self-provisioners -p '{"subjects": null}' If the self-provisioners cluster role binding binds the self-provisioner role to more users, groups, or service accounts than the system:authenticated:oauth group, run the following command: USD oc adm policy \ remove-cluster-role-from-group self-provisioner \ system:authenticated:oauth Edit the self-provisioners cluster role binding to prevent automatic updates to the role. Automatic updates reset the cluster roles to the default state. 
To update the role binding using the CLI: Run the following command: USD oc edit clusterrolebinding.rbac self-provisioners In the displayed role binding, set the rbac.authorization.kubernetes.io/autoupdate parameter value to false , as shown in the following example: apiVersion: authorization.openshift.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "false" ... To update the role binding by using a single command: USD oc patch clusterrolebinding.rbac self-provisioners -p '{ "metadata": { "annotations": { "rbac.authorization.kubernetes.io/autoupdate": "false" } } }' Log in as an authenticated user and verify that it can no longer self-provision a project: USD oc new-project test Example output Error from server (Forbidden): You may not request a new project via this API. Consider customizing this project request message to provide more helpful instructions specific to your organization. 2.3.4. Customizing the project request message When a developer or a service account that is unable to self-provision projects makes a project creation request using the web console or CLI, the following error message is returned by default: You may not request a new project via this API. Cluster administrators can customize this message. Consider updating it to provide further instructions on how to request a new project specific to your organization. For example: To request a project, contact your system administrator at [email protected] . To request a new project, fill out the project request form located at https://internal.example.com/openshift-project-request . To customize the project request message: Procedure Edit the project configuration resource using the web console or CLI. Using the web console: Navigate to the Administration Cluster Settings page. Click Configuration to view all configuration resources. Find the entry for Project and click Edit YAML . Using the CLI: Log in as a user with cluster-admin privileges. Edit the project.config.openshift.io/cluster resource: USD oc edit project.config.openshift.io/cluster Update the spec section to include the projectRequestMessage parameter and set the value to your custom message: Project configuration resource with custom project request message apiVersion: config.openshift.io/v1 kind: Project metadata: ... spec: projectRequestMessage: <message_string> For example: apiVersion: config.openshift.io/v1 kind: Project metadata: ... spec: projectRequestMessage: To request a project, contact your system administrator at [email protected]. After you save your changes, attempt to create a new project as a developer or service account that is unable to self-provision projects to verify that your changes were successfully applied.
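The project access roles described in the Project Access tab can also be granted from the CLI. The following is a minimal sketch that creates a project and gives another user edit access to it; the project name team-sandbox and the user name developer1 are examples only, not values taken from this guide.
oc new-project team-sandbox --display-name="Team Sandbox"
oc adm policy add-role-to-user edit developer1 -n team-sandbox
oc get rolebindings -n team-sandbox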
|
[
"oc new-project <project_name> --description=\"<description>\" --display-name=\"<display_name>\"",
"oc new-project hello-openshift --description=\"This is an example project\" --display-name=\"Hello OpenShift\"",
"oc get projects",
"oc project <project_name>",
"spec: customization: projectAccess: availableClusterRoles: - admin - edit - view",
"oc status",
"oc delete project <project_name>",
"oc new-project <project> --as=<user> --as-group=system:authenticated --as-group=system:authenticated:oauth",
"oc adm create-bootstrap-project-template -o yaml > template.yaml",
"oc create -f template.yaml -n openshift-config",
"oc edit project.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>",
"oc describe clusterrolebinding.rbac self-provisioners",
"Name: self-provisioners Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate=true Role: Kind: ClusterRole Name: self-provisioner Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated:oauth",
"oc patch clusterrolebinding.rbac self-provisioners -p '{\"subjects\": null}'",
"oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth",
"oc edit clusterrolebinding.rbac self-provisioners",
"apiVersion: authorization.openshift.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"false\"",
"oc patch clusterrolebinding.rbac self-provisioners -p '{ \"metadata\": { \"annotations\": { \"rbac.authorization.kubernetes.io/autoupdate\": \"false\" } } }'",
"oc new-project test",
"Error from server (Forbidden): You may not request a new project via this API.",
"You may not request a new project via this API.",
"oc edit project.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestMessage: <message_string>",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestMessage: To request a project, contact your system administrator at [email protected]."
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/building_applications/projects
|
3.3. Allowed Hash Functions
|
3.3. Allowed Hash Functions The following hash functions are allowed for keyed-hash message authentication codes (HMAC): SHA-256 SHA-384 SHA-512 The following cryptographic hash functions are allowed: SHA-256 SHA-384 SHA-512
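For illustration only, and not as part of Certificate System configuration, the allowed functions can be exercised with the openssl command-line tool; the file name data.bin and the key value example-key are placeholders.
openssl dgst -sha256 data.bin
openssl dgst -sha512 -hmac example-key data.bin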
| null |
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/allowed_hash_functions
|
Chapter 24. Managing Certificates for Users, Hosts, and Services
|
Chapter 24. Managing Certificates for Users, Hosts, and Services Identity Management (IdM) supports two types of certificate authorities (CAs): Integrated IdM CA Integrated CAs can create, revoke, and issue certificates for users, hosts, and services. For more details, see Section 24.1, "Managing Certificates with the Integrated IdM CAs" . IdM supports creating lightweight sub-CAs. For more details, see Section 26.1, "Lightweight Sub-CAs" External CA An external CA is a CA other than the integrated IdM CA. Using IdM tools, you add certificates issued by these CAs to users, services, or hosts as well as remove them. For more details, see Section 24.2, "Managing Certificates Issued by External CAs" . Each user, host, or service can have multiple certificates assigned. Note For more details on the supported CA configurations of the IdM server, see Section 2.3.2, "Determining What CA Configuration to Use" . 24.1. Managing Certificates with the Integrated IdM CAs 24.1.1. Requesting New Certificates for a User, Host, or Service To request a certificate using: the IdM web UI, see the section called "Web UI: Requesting New Certificates" . the command line, see the section called "Command Line: Requesting New Certificates" . Note that you must generate the certificate request itself with a third-party tool. The following procedures use the certutil and openSSL utilities. Important Services typically run on dedicated service nodes on which the private keys are stored. Copying a service's private key to the IdM server is considered insecure. Therefore, when requesting a certificate for a service, create the CSR on the service node. Web UI: Requesting New Certificates Under the Identity tab, select the Users , Hosts , or Services subtab. Click the name of the user, host, or service to open its configuration page. Figure 24.1. List of Hosts Click Actions New Certificate . Optional: Select the issuing CA and profile ID. Follow the instructions on the screen for using certutil . Click Issue . Command Line: Requesting New Certificates Request a new certificate using certutil in standard situations - see Section 24.1.1.1, "Requesting New Certificates Using certutil" . Request a new certificate using openSSL to enable a Kerberos alias to use a host or service certificate - see Section 24.1.1.2, "Preparing a Certificate Request With Multiple SAN Fields Using OpenSSL" . 24.1.1.1. Requesting New Certificates Using certutil Create a temporary directory for the certificate database: Create a new temporary certificate database, for instance: Create the certificate signing request (CSR) and redirect the output to a file. For example, to create a CSR for a 4096 bit certificate and to set the subject to CN=server.example.com,O=EXAMPLE.COM : Submit the certificate request to the CA. For details, see Section 24.1.1.4, "Submitting a Certificate Request to the IdM CA" . 24.1.1.2. Preparing a Certificate Request With Multiple SAN Fields Using OpenSSL Create one or more aliases, for example test1/server.example.com , test2/server.example.com , for your Kerberos principal test/server.example.com . See Section 20.2.1, "Kerberos Principal Alias" for more details. In the CSR, add a subjectAltName for dnsName ( server.example.com ) and otherName ( test2/server.example.com ). To do this, configure the openssl.conf file so that it includes the following line specifying the UPN otherName and subjectAltName: Create a certificate request using openssl : Submit the certificate request to the CA. 
For details, see Section 24.1.1.4, "Submitting a Certificate Request to the IdM CA" . 24.1.1.3. Requesting New Certificates Using Certmonger You can use the certmonger service to request a certificate from an IdM CA. For details, see the Requesting a CA-signed Certificate Through SCEP section in the System-level Authentication Guide . 24.1.1.4. Submitting a Certificate Request to the IdM CA Submit the certificate request file to the CA running on the IdM server. Be sure to specify the Kerberos principal to associate with the newly-issued certificate: The ipa cert-request command in IdM uses the following defaults: Certificate profile: caIPAserviceCert To select a custom profile, use the --profile-id option with the ipa cert-request command. For further details about creating a custom certificate profile, see Section 24.4.1, "Creating a Certificate Profile" . Integrated CA: ipa (IdM root CA) To select a sub-CA, use the --ca option with the ipa cert-request command. For further details, see the output of the ipa cert-request --help command. 24.1.2. Revoking Certificates with the Integrated IdM CAs If you need to invalidate the certificate before its expiration date, you can revoke it. To revoke a certificate using: the IdM web UI, see the section called "Web UI: Revoking Certificates" the command line, see the section called "Command Line: Revoking Certificates" A revoked certificate is invalid and cannot be used for authentication. All revocations are permanent, except for reason 6: Certificate Hold. Table 24.1. Revocation Reasons ID Reason Explanation 0 Unspecified 1 Key Compromised The key that issued the certificate is no longer trusted. Possible causes: lost token, improperly accessed file. 2 CA Compromised The CA that issued the certificate is no longer trusted. 3 Affiliation Changed Possible causes: A person has left the company or moved to another department. A host or service is being retired. 4 Superseded A newer certificate has replaced the current certificate. 5 Cessation of Operation The host or service is being decommissioned. 6 Certificate Hold The certificate is temporarily revoked. You can restore the certificate later. 8 Remove from CRL The certificate is not included in the certificate revocation list (CRL). 9 Privilege Withdrawn The user, host, or service is no longer permitted to use the certificate. 10 Attribute Authority (AA) Compromise The AA certificate is no longer trusted. Web UI: Revoking Certificates To revoke a certificate: Open the Authentication tab, and select the Certificates subtab. Click the serial number of the certificate to open the certificate information page. Figure 24.2. List of Certificates Click Actions Revoke Certificate . Select the reason for revoking, and click Revoke . See Table 24.1, "Revocation Reasons" for details. Command Line: Revoking Certificates Use the ipa cert-revoke command, and specify: the certificate serial number a number that identifies the reason for the revocation; see Table 24.1, "Revocation Reasons" for details For example, to revoke the certificate with serial number 1032 because of reason 1: Key Compromised: 24.1.3. Restoring Certificates with the Integrated IdM CAs If you have revoked a certificate because of reason 6: Certificate Hold, you can restore it again. 
To restore a certificate using: the IdM web UI, see the section called "Web UI: Restoring Certificates" the command line, see the section called "Command Line: Restoring Certificates" Web UI: Restoring Certificates Open the Authentication tab, and select the Certificates subtab. Click the serial number of the certificate to open the certificate information page. Figure 24.3. List of Certificates Click Actions Restore Certificate . Command Line: Restoring Certificates Use the ipa cert-remove-hold command and specify the certificate serial number. For example:
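A minimal sketch of that restore step, reusing the 1032 serial number from the revocation example; the follow-up ipa cert-show call is an optional verification and is not part of the documented procedure:

ipa cert-remove-hold 1032
ipa cert-show 1032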
|
[
"mkdir ~/certdb/",
"certutil -N -d ~/certdb/",
"certutil -R -d ~/certdb/ -a -g 4096 -s \" CN=server.example.com,O=EXAMPLE.COM \" -8 server.example.com > certificate_request.csr",
"otherName= 1.3.6.1.4.1.311.20.2.3 ;UTF8: test2/[email protected] DNS.1 = server.example.com",
"openssl req -new -newkey rsa: 2048 -keyout test2service.key -sha256 -nodes -out certificate_request.csr -config openssl.conf",
"ipa cert-request certificate_request.csr --principal= host/server.example.com",
"ipa cert-revoke 1032 --revocation-reason=1",
"ipa cert-remove-hold 1032"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/certificates
|
3.17. Searching for Templates
|
3.17. Searching for Templates The following table describes all search options for templates. Table 3.13. Searching for Templates Property (of resource or resource-type) Type Description (Reference) Vms. Vms-prop String The property of the virtual machines associated with the template. Hosts. hosts-prop String The property of the hosts associated with the template. Events. events-prop String The property of the events associated with the template. Users. users-prop String The property of the users associated with the template. name String The name of the template. domain String The domain of the template. os String The type of operating system. creationdate Integer The date on which the template was created. Date format is mm/dd/yy . childcount Integer The number of virtual machines created from the template. mem Integer Defined memory. description String The description of the template. status String The status of the template. cluster String The cluster associated with the template. datacenter String The data center associated with the template. quota String The quota associated with the template. sortby List Sorts the returned results by one of the resource properties. page Integer The page number of results to display. Example Template: Events.severity >= normal and Vms.uptime > 0 This example returns a list of templates where events of normal or greater severity have occurred on virtual machines derived from the template, and the virtual machines are still running.
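A second, hedged illustration that combines properties from the table above; the name pattern and cluster value are placeholders: Template: name = web* and cluster = Default sortby creationdate This example would return templates whose names begin with web in the Default cluster, sorted by creation date.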
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/searching_for_templates
|
Chapter 1. Governance
|
Chapter 1. Governance Enterprises must meet internal standards for software engineering, secure engineering, resiliency, security, and regulatory compliance for workloads hosted on private, multi and hybrid clouds. Red Hat Advanced Cluster Management for Kubernetes governance provides an extensible policy framework for enterprises to introduce their own security policies. Continue reading the related topics of the Red Hat Advanced Cluster Management governance framework: Policy deployment Policy controllers Policy controller advanced configuration Configuring policy compliance history API (Technology Preview) Supported policies Policy dependencies Governance dashboard Securing the hub cluster Gatekeeper operator overview Integrating Policy Generator 1.1. Policy controllers Policy controllers monitor and report whether your cluster is compliant with a policy. Use the Red Hat Advanced Cluster Management for Kubernetes policy framework by using the supported policy templates to apply policies managed by these controllers. The policy controllers manage Kubernetes custom resource definition instances. Policy controllers check for policy violations, and can make the cluster status compliant if the controller supports the enforcement feature. View the following topics to learn more about the following Red Hat Advanced Cluster Management for Kubernetes policy controllers: Kubernetes configuration policy controller Certificate policy controller Policy set controller Operator policy controller Important: Only the configuration policy controller policies support the enforce feature. You must manually remediate policies, where the policy controller does not support the enforce feature. 1.1.1. Kubernetes configuration policy controller Use the configuration policy controller to configure any Kubernetes resource and apply security policies across your clusters. The configuration policy controller communicates with the local Kubernetes API server so that you can get a list of your configurations that are in your cluster. During installation, the configuration policy controller is created on the managed cluster. The configuration policy is provided in the policy-templates field of the policy on the hub cluster, and is propagated to the selected managed clusters by the governance framework. When the remediationAction for the configuration policy controller is set to InformOnly , the parent policy does not enforce the configuration policy, even if the remediationAction in the parent policy is set to enforce . If you have existing Kubernetes manifests that you want to put in a policy, the Policy Generator is a useful tool to accomplish this. 1.1.1.1. Configuration policy YAML structure You can find the description of a field on your managed cluster by running the oc explain --api-version=policy.open-cluster-management.io/v1 ConfigurationPolicy.<field-path> command. Replace <field-path> with the path to the field that you need. 
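As a quick sketch of that command, the field path below is one illustrative choice; any valid path under the ConfigurationPolicy specification works the same way:

oc explain --api-version=policy.open-cluster-management.io/v1 ConfigurationPolicy.spec.remediationAction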
apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-config spec: namespaceSelector: include: ["default"] exclude: [] matchExpressions: [] matchLabels: {} remediationAction: inform 1 customMessage: compliant: {} noncompliant: {} severity: low evaluationInterval: compliant: "" noncompliant: "" object-templates-raw: "" object-templates: 2 - complianceType: musthave metadataComplianceType: recordDiff: "" recreateOption: "" objectSelector: matchLabels: {} matchExpressions: [] objectDefinition: apiVersion: v1 kind: Pod metadata: name: pod spec: containers: - image: pod-image name: pod-name ports: - containerPort: 80 - complianceType: mustonlyhave objectDefinition: apiVersion: v1 kind: ConfigMap metadata: name: myconfig namespace: default data: testData: hello ... 1 Configuration policies that specify an object without a name can only be set to inform . When the remediationAction for the configuration policy is set to enforce , the controller applies the specified configuration to the target managed cluster. 2 A Kubernetes object is defined in the object-templates array in the configuration policy, where fields of the configuration policy controller are compared with objects on the managed cluster. You can also use templated values within configuration policies. For more advanced use cases, specify a string in object-templates-raw to create the object-templates that you want. For more information, see Template processing . 1.1.1.2. Configuration policy YAML table Table 1.1. Parameter table Field Optional or required Description apiVersion Required Set the value to policy.open-cluster-management.io/v1 . kind Required Set the value to ConfigurationPolicy to indicate the type of policy. metadata.name Required The name of the policy. spec.namespaceSelector Required for namespaced objects that do not have a namespace specified Determines namespaces in the managed cluster that the object is applied to. The include and exclude parameters accept file path expressions to include and exclude namespaces by name. The matchExpressions and matchLabels parameters specify namespaces to include by label. See the Kubernetes labels and selectors documentation. The resulting list is compiled by using the intersection of results from all parameters. spec.remediationAction Required Specifies the action to take when the policy is non-compliant. Use the following parameter values: inform , InformOnly , or enforce . spec.customMessage Optional Configure the compliance message sent by the configuration policy based on the current compliance. Each message configuration is a string that can contain Go templates. The .DefaultMessage and .Policy context variables are available for use in the templates. You can access the default message by using the .DefaultMessage parameter. The .Policy context variable contains the current policy object, including its status. For example, you can access the state of each related object by specifying the .Policy.status.relatedObjects[*].object field. If you set a value for the evaluationInterval field other than watch , only the kind, name, and namespace of the related objects are available. spec.customMessage.compliant Optional Configure custom messages for configuration policies that are compliant. Go templates and UTF-8 encoded characters, including emoji and foreign characters, are supported values.
spec.customMessage.noncompliant Optional Configure custom messages for configuration policies that are non-compliant. Go templates and UTF-8 encoded characters, including emoji and foreign characters, are supported values. spec.severity Required Specifies the severity when the policy is non-compliant. Use the following parameter values: low , medium , high , or critical . spec.evaluationInterval Optional Specifies the frequency for a policy to be evaluated when it is in a particular compliance state. Use the parameters compliant and noncompliant . The default value for the compliant and noncompliant parameters is watch to leverage Kubernetes API watches instead of polling the Kubernetes API server. When managed clusters have low resources, the evaluation interval can be set to long polling intervals to reduce CPU and memory usage on the Kubernetes API and policy controller. These are in the format of durations. For example, 1h25m3s represents 1 hour, 25 minutes, and 3 seconds. These can also be set to never to avoid evaluating the policy after it is in a particular compliance state. spec.evaluationInterval.compliant Optional Specifies the evaluation frequency for a compliant policy. To enable the polling behavior, set this parameter to 10s . spec.evaluationInterval.noncompliant Optional Specifies the evaluation frequency for a non-compliant policy. To enable the polling behavior, set this parameter to 10s . spec.object-templates Optional The array of Kubernetes objects (either fully defined or containing a subset of fields) for the controller to compare with objects on the managed cluster. Note: While spec.object-templates and spec.object-templates-raw are listed as optional, exactly one of the two parameter fields must be set. spec.object-templates-raw Optional Used to set object templates with a raw YAML string. Specify conditions for the object templates, where advanced functions like if-else statements and the range function are supported values. For example, add the following value to avoid duplication in your object-templates definition: {{- if eq .metadata.name "policy-grc-your-meta-data-name" }} replicas: 2 {{- else }} replicas: 1 {{- end }} Note: While spec.object-templates and spec.object-templates-raw are listed as optional, exactly one of the two parameter fields must be set. spec.object-templates[].complianceType Required Use this parameter to define the desired state of the Kubernetes object on your managed clusters. Use one of the following verbs as the parameter value: mustonlyhave : Indicates that an object must exist with the exact fields and values as defined in the objectDefinition . musthave : Indicates an object must exist with the same fields as specified in the objectDefinition . Any existing fields on the object that are not specified in the object-template are ignored. In general, array values are appended. The exception for the array to be patched is when the item contains a name key with a value that matches an existing item. Use a fully defined objectDefinition using the mustonlyhave compliance type, if you want to replace the array. mustnothave : Indicates that an object with the same fields as specified in the objectDefinition cannot exist. spec.object-templates[].metadataComplianceType Optional Overrides spec.object-templates[].complianceType when comparing the manifest's metadata section to objects on the cluster ("musthave", "mustonlyhave"). Default is unset to not override complianceType for metadata. 
spec.object-templates[].recordDiff Optional Use this parameter to specify if and where to display the difference between the object on the cluster and the objectDefinition in the policy. The following options are supported: Set to InStatus to store the difference in the ConfigurationPolicy status. Set to Log to log the difference in the controller logs. Set to None to not log the difference. By default, this parameter is set to InStatus if the controller does not detect sensitive data in the difference. Otherwise, the default is None . If sensitive data is detected, the ConfigurationPolicy status displays a message to set recordDiff to view the difference. spec.object-templates[].recreateOption Optional Describes when to delete and recreate an object when an update is required. When you set the object to IfRequired , the policy recreates the object when updating an immutable field. When you set the parameter to Always , the policy recreates the object on any update. When you set the remediationAction to inform , the parameter value, recreateOption , has no effect on the object. The IfRequired value has no effect on clusters without dry-run update support. The default value is None . spec.object-templates[].objectDefinition Required A Kubernetes object (either fully defined or containing a subset of fields) for the controller to compare with objects on the managed cluster. spec.pruneObjectBehavior Optional Determines whether to clean up resources related to the policy when the policy is removed from a managed cluster. 1.1.1.3. Additional resources See the following topics for more information: See Creating configuration policies . See the Hub cluster policy framework for more details on the hub cluster policy. See the policy samples that use NIST Special Publication 800-53 (Rev. 4) , and are supported by Red Hat Advanced Cluster Management from the CM-Configuration-Management folder . For information about dry-run support, see the Kubernetes documentation, Dry-run . Learn about how policies are applied on your hub cluster, see Supported policies for more details. Refer to Policy controllers for more details about controllers. Customize your policy controller configuration. See Policy controller advanced configuration . Learn about cleaning up resources and other topics in the Cleaning up resources that are created by policies documentation. Refer to Policy Generator . Learn about how to create and customize policies, see Governance dashboard . See Template processing . 1.1.2. Certificate policy controller You can use the certificate policy controller to detect certificates that are close to expiring, time durations (hours) that are too long, or contain DNS names that fail to match specified patterns. You can add the certificate policy to the policy-templates field of the policy on the hub cluster, which propagates to the selected managed clusters by using the governance framework. See the Hub cluster policy framework documentation for more details on the hub cluster policy. Configure and customize the certificate policy controller by updating the following parameters in your controller policy: minimumDuration minimumCADuration maximumDuration maximumCADuration allowedSANPattern disallowedSANPattern Your policy might become non-compliant due to either of the following scenarios: When a certificate expires in less than the minimum duration of time or exceeds the maximum time. When DNS names fail to match the specified pattern. 
The certificate policy controller is created on your managed cluster. The controller communicates with the local Kubernetes API server to get the list of secrets that contain certificates and determine all non-compliant certificates. Certificate policy controller does not support the enforce feature. Note: The certificate policy controller automatically looks for a certificate in a secret in only the tls.crt key. If a secret is stored under a different key, add a label named certificate_key_name with a value set to the key to let the certificate policy controller know to look in a different key. For example, if a secret contains a certificate stored in the key named sensor-cert.pem , add the following label to the secret: certificate_key_name: sensor-cert.pem . 1.1.2.1. Certificate policy controller YAML structure View the following example of a certificate policy and review the element in the YAML table: apiVersion: policy.open-cluster-management.io/v1 kind: CertificatePolicy metadata: name: certificate-policy-example spec: namespaceSelector: include: ["default"] exclude: [] matchExpressions: [] matchLabels: {} labelSelector: myLabelKey: myLabelValue remediationAction: severity: minimumDuration: minimumCADuration: maximumDuration: maximumCADuration: allowedSANPattern: disallowedSANPattern: 1.1.2.1.1. Certificate policy controller YAML table Table 1.2. Parameter table Field Optional or required Description apiVersion Required Set the value to policy.open-cluster-management.io/v1 . kind Required Set the value to CertificatePolicy to indicate the type of policy. metadata.name Required The name to identify the policy. metadata.labels Optional In a certificate policy, the category=system-and-information-integrity label categorizes the policy and facilitates querying the certificate policies. If there is a different value for the category key in your certificate policy, the value is overridden by the certificate controller. spec.namespaceSelector Required Determines namespaces in the managed cluster where secrets are monitored. The include and exclude parameters accept file path expressions to include and exclude namespaces by name. The matchExpressions and matchLabels parameters specify namespaces to be included by label. See the Kubernetes labels and selectors documentation. The resulting list is compiled by using the intersection of results from all parameters. Note: If the namespaceSelector for the certificate policy controller does not match any namespace, the policy is considered compliant. spec.labelSelector Optional Specifies identifying attributes of objects. See the Kubernetes labels and selectors documentation. spec.remediationAction Required Specifies the remediation of your policy. Set the parameter value to inform . Certificate policy controller only supports inform feature. spec.severity Optional Informs the user of the severity when the policy is non-compliant. Use the following parameter values: low , medium , high , or critical . spec.minimumDuration Required When a value is not specified, the default value is 100h . This parameter specifies the smallest duration (in hours) before a certificate is considered non-compliant. The parameter value uses the time duration format from Golang. See Golang Parse Duration for more information. spec.minimumCADuration Optional Set a value to identify signing certificates that might expire soon with a different value from other certificates. 
If the parameter value is not specified, the CA certificate expiration is the value used for the minimumDuration . See Golang Parse Duration for more information. spec.maximumDuration Optional Set a value to identify certificates that have been created with a duration that exceeds your desired limit. The parameter uses the time duration format from Golang. See Golang Parse Duration for more information. spec.maximumCADuration Optional Set a value to identify signing certificates that have been created with a duration that exceeds your defined limit. The parameter uses the time duration format from Golang. See Golang Parse Duration for more information. spec.allowedSANPattern Optional A regular expression that must match every SAN entry that you have defined in your certificates. This parameter checks DNS names against patterns. See the Golang Regular Expression syntax for more information. spec.disallowedSANPattern Optional A regular expression that must not match any SAN entries you have defined in your certificates. This parameter checks DNS names against patterns. Note: To detect wild-card certificate, use the following SAN pattern: disallowedSANPattern: "[\\*]" See the Golang Regular Expression syntax for more information. 1.1.2.2. Certificate policy sample When your certificate policy controller is created on your hub cluster, a replicated policy is created on your managed cluster. See policy-certificate.yaml to view the certificate policy sample. 1.1.2.3. Additional resources Learn how to manage a certificate policy, see Managing security policies for more details. Refer to Policy controllers introduction for more topics. Return to the Certificates introduction . 1.1.3. Policy set controller The policy set controller aggregates the policy status scoped to policies that are defined in the same namespace. Create a policy set ( PolicySet ) to group policies that are in the same namespace. All policies in the PolicySet are placed together in a selected cluster by creating a PlacementBinding to bind the PolicySet and Placement . The policy set is deployed to the hub cluster. Additionally, when a policy is a part of multiple policy sets, existing and new Placement resources remain in the policy. When a user removes a policy from the policy set, the policy is not applied to the cluster that is selected in the policy set, but the placements remain. The policy set controller only checks for violations in clusters that include the policy set placement. Notes: The Red Hat Advanced Cluster Management sample policy set uses cluster placement. If you use cluster placement, bind the namespace containing the policy to the managed cluster set. See Deploying policies to your cluster for more details on using cluster placement. In order to use a Placement resource, a ManagedClusterSet resource must be bound to the namespace of the Placement resource with a ManagedClusterSetBinding resource. Refer to Creating a ManagedClusterSetBinding resource for additional details. Learn more details about the policy set structure in the following sections: Policy set controller YAML structure Policy set controller YAML table Policy set sample 1.1.3.1. 
Policy set YAML structure Your policy set might resemble the following YAML file: apiVersion: policy.open-cluster-management.io/v1beta1 kind: PolicySet metadata: name: demo-policyset spec: policies: - policy-demo --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: demo-policyset-pb placementRef: apiGroup: cluster.open-cluster-management.io kind: Placement name: demo-policyset-pr subjects: - apiGroup: policy.open-cluster-management.io kind: PolicySet name: demo-policyset --- apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: demo-policyset-pr spec: predicates: - requiredClusterSelector: labelSelector: matchExpressions: - key: name operator: In values: - local-cluster tolerations: - key: cluster.open-cluster-management.io/unavailable operator: Exists - key: cluster.open-cluster-management.io/unreachable operator: Exists 1.1.3.2. Policy set table View the following parameter table for descriptions: Table 1.3. Parameter table Field Optional or required Description apiVersion Required Set the value to policy.open-cluster-management.io/v1beta1 . kind Required Set the value to PolicySet to indicate the type of policy. metadata.name Required The name for identifying the policy resource. spec Required Add configuration details for your policy. spec.policies Optional The list of policies that you want to group together in the policy set. 1.1.3.3. Policy set sample apiVersion: policy.open-cluster-management.io/v1beta1 kind: PolicySet metadata: name: pci namespace: default spec: description: Policies for PCI compliance policies: - policy-pod - policy-namespace status: compliant: NonCompliant placement: - placementBinding: binding1 placement: placement1 policySet: policyset-ps 1.1.3.4. Additional resources See Red Hat OpenShift Platform Plus policy set . See the Creating policy sets section in the Managing security policies topic. Also view the stable PolicySets , which require the Policy Generator for deployment, PolicySets-- Stable . 1.1.4. Operator policy controller The operator policy controller allows you to monitor and install Operator Lifecycle Manager operators across your clusters. Use the operator policy controller to monitor the health of various pieces of the operator and to specify how you want to automatically handle updates to the operator. You can also distribute an operator policy to managed clusters by using the governance framework and adding the policy to the policy-templates field of a policy on the hub cluster. You can also use template values within the operatorGroup and subscription fields of an operator policy. For more information, see Template processing . 1.1.4.1. Prerequisites Operator Lifecycle Manager must be available on your managed cluster. This is enabled by default on Red Hat OpenShift Container Platform. Required access: Cluster administrator 1.1.4.2. Operator policy YAML table Field Optional or required Description apiVersion Required Set the value to policy.open-cluster-management.io/v1beta1 . kind Required Set the value to OperatorPolicy to indicate the type of policy. metadata.name Required The name for identifying the policy resource. spec.remediationAction Required If the remediationAction for the operator policy is set to enforce , the controller creates resources on the target managed cluster to communicate to OLM to install the operator and approve updates based on the versions specified in the policy. 
If the remediationAction is set to inform , the controller only reports the status of the operator, including if any upgrades are available. spec.operatorGroup Optional By default, if the operatorGroup field is not specified, the controller generates an AllNamespaces type OperatorGroup in the same namespace as the subscription, if supported. This resource is generated by the operator policy controller. spec.complianceType Required Specifies the desired state of the operator on the cluster. If set to musthave , the policy is compliant when the operator is found. If set to mustnothave , the policy is compliant when the operator is not found. spec.removalBehavior Optional Determines which resource types need to be kept or removed when you enforce an OperatorPolicy resource with complianceType: mustnothave defined. There is no effect when complianceType is set to musthave . - operatorGroups can be set to Keep or DeleteIfUnused . The default value is DeleteIfUnused which only removes the OperatorGroup resource if it is not used by any other operators. - subscriptions can be set to Keep or Delete . The default value is Delete . - clusterServiceVersions can be set to Keep or Delete . The default value is Delete . - customResourceDefinitions can be set to Keep or Delete . The default value is Keep . If you set this to Delete , the CustomResourceDefinition resources on the managed cluster are removed and can cause data loss. spec.subscription Required Define the configurations to create an operator subscription. Add information in the following fields to create an operator subscription. Default options are selected for a few items if there is no entry: channel : If not specified, the default channel is selected from the operator catalog. The default can be different on different OpenShift Container Platform versions. name : Specify the package name of the operator. namespace : If not specified, the default namespace that is used for OpenShift Container Platform managed clusters is openshift-operators . source : If not specified, the catalog that contains the operator is chosen. sourceNamespace : If not specified, the namespace of the catalog that contains the operator is chosen. Note: If you define a value for installPlanApproval in the subscription, your policy becomes non-compliant, because upgrade approval is controlled by the upgradeApproval field. spec.complianceConfig Optional Use this parameter to define the compliance behavior for specific scenarios that are associated with operators. You can set each of the following options individually, where the supported values are Compliant and NonCompliant : catalogSourceUnhealthy : Define the compliance when the catalog source for the operator is unhealthy. The default value is Compliant because this only affects possible upgrades. deploymentsUnavailable : Define the compliance when at least one deployment of the operator is not fully available. The default value is NonCompliant because this reflects the availability of the service that the operator provides. upgradesAvailable : Define the compliance when there is an upgrade available for the operator. The default value is Compliant because the existing operator installation might be running correctly. spec.upgradeApproval Required If the upgradeApproval field is set to Automatic , version upgrades on the cluster are approved by the policy when the policy is set to enforce . If you set the field to None , version upgrades to the specific operator are not approved when the policy is set to enforce . spec.versions Optional Declare which versions of the operator are compliant.
If the field is empty, any version running on the cluster is considered compliant. If the field is not empty, the version on the managed cluster must match one of the versions in the list for the policy to be compliant. If the policy is set to enforce and the list is not empty, the versions listed here are approved by the controller on the cluster. 1.1.4.3. Additional resources See Template processing . See Installing an operator by using the OperatorPolicy resource for more details. See Managing operator policies in disconnected environments . See the Subscription topic in the OpenShift Container Platform documentation. See Operator Lifecycle Manager (OLM) for more details. See the Adding Operators to a cluster documentation for general information on OLM. 1.2. Template processing Configuration policies and operator policies support the inclusion of Golang text templates. These templates are resolved at runtime either on the hub cluster or the target managed cluster using configurations related to that cluster. This gives you the ability to define policies with dynamic content, and inform or enforce Kubernetes resources that are customized to the target cluster. A policy definition can contain both hub cluster and managed cluster templates. Hub cluster templates are processed first on the hub cluster, then the policy definition with resolved hub cluster templates is propagated to the target clusters. A controller on the managed cluster processes any managed cluster templates in the policy definition and then enforces or verifies the fully resolved object definition. The template must conform to the Golang template language specification, and the resource definition generated from the resolved template must be a valid YAML. See the Golang documentation about Package templates for more information. Any errors in template validation are recognized as policy violations. When you use a custom template function, the values are replaced at runtime. Important: If you use hub cluster templates to propagate secrets or other sensitive data, the sensitive data exists in the managed cluster namespace on the hub cluster and on the managed clusters where that policy is distributed. The template content is expanded in the policy, and policies are not encrypted by the OpenShift Container Platform ETCD encryption support. To address this, use fromSecret or copySecretData , which automatically encrypts the values from the secret, or protect to encrypt other values. When you add multiline string values such as, certificates, always add | toRawJson | toLiteral syntax at the end of the template pipeline to handle line breaks. For example, if you are copying a certificate from a Secret resource and including it in a ConfigMap resource, your template pipeline might be similar to the following syntax: The toRawJson template function converts the input value to a JSON string with new lines escaped to not affect the YAML structure. The toLiteral template function removes the outer single quotes from the output. For example, when templates are processed for the key: '{{ 'hello\nworld' | toRawJson }}' template pipeline, the output is key: '"hello\nworld"' . The output of the key: '{{ 'hello\nworld' | toRawJson | toLiteral }}' template pipeline is key: "hello\nworld" . See the following table for a comparison of hub cluster and managed cluster templates: 1.2.1. Comparison of hub cluster and managed cluster templates Table 1.4. 
Comparison table Templates Hub cluster Managed cluster Syntax Golang text template specification Golang text template specification Delimiter {{hub ... hub}} {{ ... }} Context A .ManagedClusterName variable is available, which at runtime resolves to the name of the target cluster where the policy is propagated. The .ManagedClusterLabels variable is also available, which resolves to a map of keys and values of the labels on the managed cluster where the policy is propagated. No context variables Access control By default, you can only reference namespaced Kubernetes resources that are in the same namespace as the Policy object and the ManagedCluster object of the cluster that the policy propagates to. Alternatively, you can specify the spec.hubTemplateOptions.serviceAccountName field in the Policy object to a service account in the same namespace as the Policy resource. When you specify the field, the service account is used for all hub cluster template lookups. Note: The service account must have list and watch permissions on any resource that is looked up in a hub cluster template. You can reference any resource on the cluster. Functions A set of template functions that support dynamic access to Kubernetes resources and string manipulation. See Template functions for more information. See the Access control row for lookup restrictions. The fromSecret template function on the hub cluster stores the resulting value as an encrypted string on the replicated policy, in the managed cluster namespace. The equivalent call might use the following syntax: {{hub (lookup "v1" "Secret" "default" "my-hub-secret").data.message | protect hub}} A set of template functions that support dynamic access to Kubernetes resources and string manipulation. See Template functions for more information. Function output storage The output of template functions is stored in Policy resource objects in each applicable managed cluster namespace on the hub cluster, before it is synced to the managed cluster. This means that any sensitive results from template functions are readable by anyone with read access to the Policy resource objects on the hub cluster, and anyone with read access to the ConfigurationPolicy or OperatorPolicy resource objects on the managed clusters. Additionally, if etcd encryption is enabled, the policy resource objects are not encrypted. It is best to carefully consider this when using template functions that return sensitive output (e.g. from a secret). The output of template functions is not stored in policy related resource objects. Policy metadata The .PolicyMetadata variable resolves to a map with the name , namespace , labels , and annotations keys with values from the root policy. No context variables Processing Processing occurs at runtime on the hub cluster during propagation of replicated policies to clusters. Policies and the hub cluster templates within the policies are processed on the hub cluster only when templates are created or updated. Processing occurs on the managed cluster. Configuration policies are processed periodically, which automatically updates the resolved object definition with data in the referenced resources. Operator policies automatically update whenever a referenced resource changes. Processing errors Errors from the hub cluster templates are displayed as violations on the managed clusters the policy applies to. Errors from the managed cluster templates are displayed as violations on the specific target cluster where the violation occurred.
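The following object-templates entry is a minimal sketch that combines both template types; the ConfigMap name and data keys are illustrative, while .ManagedClusterName, lookup, and the Infrastructure resource reference reuse pieces documented elsewhere in this section:

object-templates:
  - complianceType: musthave
    objectDefinition:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: cluster-details
        namespace: default
      data:
        # Resolved on the hub cluster while the policy is propagated.
        clusterName: '{{hub .ManagedClusterName hub}}'
        # Resolved on the managed cluster when the policy is evaluated.
        infraName: '{{ (lookup "config.openshift.io/v1" "Infrastructure" "" "cluster").status.infrastructureName }}'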
Continue reading the following topics: Template functions Advanced template processing in configuration policies 1.2.2. Template functions Reference Kubernetes resources such as resource-specific and generic template functions on your hub cluster by using the {{hub ... hub}} delimiters, or on your managed cluster by using the {{ ... }} delimiters. You can use resource-specific functions for convenience and to make the content of your resources more accessible. 1.2.2.1. Template function descriptions If you use the generic function, lookup , which is more advanced, familiarize yourself with the YAML structure of the resource that is being looked up. In addition to these functions, utility functions such as base64enc , base64dec , indent , autoindent , toInt , toBool , and more are available. To conform templates with YAML syntax, you must define templates in the policy resource as strings using quotes or a block character ( | or > ). This causes the resolved template value to also be a string. To override this, use toInt or toBool as the final function in the template to initiate further processing that forces the value to be interpreted as an integer or boolean respectively. Continue reading to view the descriptions and examples for some of the custom template functions that are supported: fromSecret fromConfigMap fromClusterClaim lookup base64enc base64dec indent autoindent toInt toBool protect toLiteral copySecretData copyConfigMapData getNodesWithExactRoles hasNodesWithExactRoles Sprig open source 1.2.2.1.1. fromSecret The fromSecret function returns the value of the given data key in the secret. View the following syntax for the function: When you use this function, enter the namespace, name, and data key of a Kubernetes Secret resource. You must use the same namespace that is used for the policy when using the function in a hub cluster template. See Template processing for more details. View the following configuration policy that enforces a Secret resource on the target cluster: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: demo-fromsecret namespace: test spec: namespaceSelector: exclude: - kube-* include: - default object-templates: - complianceType: musthave objectDefinition: apiVersion: v1 data: 1 USER_NAME: YWRtaW4= PASSWORD: '{{ fromSecret "default" "localsecret" "PASSWORD" }}' 2 kind: Secret 3 metadata: name: demosecret namespace: test type: Opaque remediationAction: enforce severity: low 1 When you use this function with hub cluster templates, the output is automatically encrypted using the protect function. 2 The value for the PASSWORD data key is a template that references the secret on the target cluster. 3 You receive a policy violation if the Kubernetes Secret resource does not exist on the target cluster. If the data key does not exist on the target cluster, the value becomes an empty string. Important: When you add multiline string values such as, certificates, always add | toRawJson | toLiteral syntax at the end of the template pipeline to handle line breaks. For example, if you are copying a certificate from a Secret resource and including it in a ConfigMap resource, your template pipeline might be similar to the following syntax: The toRawJson template function converts the input value to a JSON string with new lines escaped to not affect the YAML structure. The toLiteral template function removes the outer single quotes from the output. 
For example, when templates are processed for the key: '{{ 'hello\nworld' | toRawJson }}' template pipeline, the output is key: '"hello\nworld"' . The output of the key: '{{ 'hello\nworld' | toRawJson | toLiteral }}' template pipeline is key: "hello\nworld" . 1.2.2.1.2. fromConfigmap The fromConfigMap function returns the value of the given data key in the config map. When you use this function, enter the namespace, name, and data key of a Kubernetes ConfigMap resource. You must use the same namespace that is used for the policy when using the function in a hub cluster template. See Template processing for more details. View the following syntax for the function: View the following configuration policy that enforces a Kubernetes resource on the target managed cluster: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: demo-fromcm-lookup namespace: test-templates spec: namespaceSelector: exclude: - kube-* include: - default object-templates: - complianceType: musthave objectDefinition: kind: ConfigMap 1 apiVersion: v1 metadata: name: demo-app-config namespace: test data: 2 app-name: sampleApp app-description: "this is a sample app" log-file: '{{ fromConfigMap "default" "logs-config" "log-file" }}' 3 log-level: '{{ fromConfigMap "default" "logs-config" "log-level" }}' 4 remediationAction: enforce severity: low 1 You receive a policy violation if the Kubernetes ConfigMap resource does not exist on the target cluster. 2 If the data key does not exist on the target cluster, the value becomes an empty string. 3 The value for the log-file data key is a template that retrieves the value of the log-file from the logs-config config map in the default namespace. 4 The value for the log-level data key is a template that retrieves the value of the log-level key from the logs-config config map in the default namespace. 1.2.2.1.3. fromClusterClaim The fromClusterClaim function returns the value of the Spec.Value in the ClusterClaim resource. View the following syntax for the function: View the following example of the configuration policy that enforces a Kubernetes resource on the target managed cluster: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: demo-clusterclaims 1 namespace: default spec: namespaceSelector: exclude: - kube-* include: - default object-templates: - complianceType: musthave objectDefinition: kind: ConfigMap apiVersion: v1 metadata: name: sample-app-config namespace: default data: 2 platform: '{{ fromClusterClaim "platform.open-cluster-management.io" }}' 3 product: '{{ fromClusterClaim "product.open-cluster-management.io" }}' version: '{{ fromClusterClaim "version.openshift.io" }}' remediationAction: enforce severity: low 1 When you use this function, enter the name of a Kubernetes ClusterClaim resource. You receive a policy violation if the ClusterClaim resource does not exist. 2 Configuration values can be set as key-value properties. 3 The value for the platform data key is a template that retrieves the value of the platform.open-cluster-management.io cluster claim. Similarly, it retrieves values for product and version from the ClusterClaim resource. 1.2.2.1.4. lookup The lookup function returns the Kubernetes resource as a JSON compatible map. When you use this function, enter the API version, kind, namespace, name, and optional label selectors of the Kubernetes resource. You must use the same namespace that is used for the policy within the hub cluster template. See Template processing for more details.
If the requested resource does not exist, an empty map is returned. If the resource does not exist and the value is provided to another template function, you might get the following error: invalid value; expected string . Note: Use the default template function, so the correct type is provided to later template functions. See the Sprig open source section. View the following syntax for the function: For label selector examples, see the reference to the Kubernetes labels and selectors documentation, in the Additional resources section. View the following example of the configuration policy that enforces a Kubernetes resource on the target managed cluster: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: demo-lookup namespace: test-templates spec: namespaceSelector: exclude: - kube-* include: - default object-templates: - complianceType: musthave objectDefinition: kind: ConfigMap apiVersion: v1 metadata: name: demo-app-config namespace: test data: 1 app-name: sampleApp app-description: "this is a sample app" metrics-url: | 2 http://{{ (lookup "v1" "Service" "default" "metrics").spec.clusterIP }}:8080 remediationAction: enforce severity: low 1 Configuration values can be set as key-value properties. 2 The value for the metrics-url data key is a template that retrieves the v1/Service Kubernetes resource metrics from the default namespace, and is set to the value of the Spec.ClusterIP in the queried resource. 1.2.2.1.5. base64enc The base64enc function returns a base64 encoded value of the input data string . When you use this function, enter a string value. View the following syntax for the function: View the following example of the configuration policy that uses the base64enc function: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: demo-fromsecret namespace: test spec: namespaceSelector: exclude: - kube-* include: - default object-templates: - complianceType: musthave objectDefinition: ... data: USER_NAME: '{{ fromConfigMap "default" "myconfigmap" "admin-user" | base64enc }}' 1.2.2.1.6. base64dec The base64dec function returns a base64 decoded value of the input enc-data string . When you use this function, enter a string value. View the following syntax for the function: View the following example of the configuration policy that uses the base64dec function: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: demo-fromsecret namespace: test spec: namespaceSelector: exclude: - kube-* include: - default object-templates: - complianceType: musthave objectDefinition: ... data: app-name: | "{{ ( lookup "v1" "Secret" "testns" "mytestsecret") .data.appname ) | base64dec }}" 1.2.2.1.7. indent The indent function returns the padded data string . When you use this function, enter a data string with the specific number of spaces. View the following syntax for the function: View the following example of the configuration policy that uses the indent function: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: demo-fromsecret namespace: test spec: namespaceSelector: exclude: - kube-* include: - default object-templates: - complianceType: musthave objectDefinition: ... data: Ca-cert: | {{ ( index ( lookup "v1" "Secret" "default" "mycert-tls" ).data "ca.pem" ) | base64dec | indent 4 }} 1.2.2.1.8. 
autoindent The autoindent function acts like the indent function that automatically determines the number of leading spaces based on the number of spaces before the template. View the following example of the configuration policy that uses the autoindent function: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: demo-fromsecret namespace: test spec: namespaceSelector: exclude: - kube-* include: - default object-templates: - complianceType: musthave objectDefinition: ... data: Ca-cert: | {{ ( index ( lookup "v1" "Secret" "default" "mycert-tls" ).data "ca.pem" ) | base64dec | autoindent }} 1.2.2.1.9. toInt The toInt function casts and returns the integer value of the input value. When this is the last function in the template, there is further processing of the source content. This is to ensure that the value is interpreted as an integer by the YAML. When you use this function, enter the data that needs to be casted as an integer. View the following syntax for the function: View the following example of the configuration policy that uses the toInt function: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: demo-template-function namespace: test spec: namespaceSelector: exclude: - kube-* include: - default object-templates: - complianceType: musthave objectDefinition: ... spec: vlanid: | {{ (fromConfigMap "site-config" "site1" "vlan") | toInt }} 1.2.2.1.10. toBool The toBool function converts the input string into a boolean, and returns the boolean. When this is the last function in the template, there is further processing of the source content. This is to ensure that the value is interpreted as a boolean by the YAML. When you use this function, enter the string data that needs to be converted to a boolean. View the following syntax for the function: View the following example of the configuration policy that uses the toBool function: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: demo-template-function namespace: test spec: namespaceSelector: exclude: - kube-* include: - default object-templates: - complianceType: musthave objectDefinition: ... spec: enabled: | {{ (fromConfigMap "site-config" "site1" "enabled") | toBool }} 1.2.2.1.11. protect The protect function enables you to encrypt a string in a hub cluster policy template. It is automatically decrypted on the managed cluster when the policy is evaluated. View the following example of the configuration policy that uses the protect function: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: demo-template-function namespace: test spec: namespaceSelector: exclude: - kube-* include: - default object-templates: - complianceType: musthave objectDefinition: ... spec: enabled: | {{hub (lookup "v1" "Secret" "default" "my-hub-secret").data.message | protect hub}} In the YAML example, there is an existing hub cluster policy template that is defined to use the lookup function. On the replicated policy in the managed cluster namespace, the value might resemble the following syntax: USDocm_encrypted:okrrBqt72oI+3WT/0vxeI3vGa+wpLD7Z0ZxFMLvL204= Each encryption algorithm used is AES-CBC using 256-bit keys. Each encryption key is unique per managed cluster and is automatically rotated every 30 days. This ensures that your decrypted value is to never be stored in the policy on the managed cluster. 
To force an immediate rotation, delete the policy.open-cluster-management.io/last-rotated annotation on the policy-encryption-key Secret in the managed cluster namespace on the hub cluster. Policies are then reprocessed to use the new encryption key. 1.2.2.1.12. toLiteral The toLiteral function removes any quotation marks around the template string after it is processed. You can use this function to convert a JSON string from a config map field to a JSON value in the manifest. Run the following function to remove quotation marks from the key parameter value: After using the toLiteral function, the following update is displayed: 1.2.2.1.13. copySecretData The copySecretData function copies all of the data contents of the specified secret. View the following sample of the function: complianceType: musthave objectDefinition: apiVersion: v1 kind: Secret metadata: name: my-secret-copy data: '{{ copySecretData "default" "my-secret" }}' 1 1 When you use this function with hub cluster templates, the output is automatically encrypted using the protect function. 1.2.2.1.14. copyConfigMapData The copyConfigMapData function copies all of the data content of the specified config map. View the following sample of the function: complianceType: musthave objectDefinition: apiVersion: v1 kind: ConfigMap metadata: name: my-secret-copy data: '{{ copyConfigMapData "default" "my-configmap" }}' 1.2.2.1.15. getNodesWithExactRoles The getNodesWithExactRoles function returns a list of nodes with only the roles that you specify, and ignores nodes that have any additional roles except the node-role.kubernetes.io/worker role. View the following sample function where you are selecting "infra" nodes and ignoring the storage nodes: complianceType: musthave objectDefinition: apiVersion: v1 kind: ConfigMap metadata: name: my-configmap data: infraNode: | {{- range USDi,USDnd := (getNodesWithExactRoles "infra").items }} node{{ USDi }}: {{ USDnd.metadata.name }} {{- end }} replicas: {{ len ((getNodesWithExactRoles "infra").items) | toInt }} 1.2.2.1.16. hasNodesWithExactRoles The hasNodesWithExactRoles function returns the true value if the cluster contains nodes with only the roles that you specify, and ignores nodes that have any additional roles except the node-role.kubernetes.io/worker role. View the following sample of the function: complianceType: musthave objectDefinition: apiVersion: v1 kind: ConfigMap metadata: name: my-configmap data: key: '{{ hasNodesWithExactRoles "infra" }}' 1.2.2.1.17. Sprig open source Red Hat Advanced Cluster Management supports the following template functions that are included from the sprig open source project: Table 1.5. Table of supported, community Sprig functions Sprig library Functions Cryptographic and security htpasswd Date date , mustToDate , now , toDate Default default , empty , fromJson , mustFromJson , ternary , toJson , toRawJson Dictionaries and dict dict , dig , get , hasKey , merge , mustMerge , set , unset Integer math add , mul , div , round , sub Integer slice until , untilStep , Lists append , concat , has , list , mustAppend , mustHas , mustPrepend , mustSlice , prepend , slice String functions cat , contains , hasPrefix , hasSuffix , join , lower , mustRegexFind , mustRegexFindAll , mustRegexMatch , quote , regexFind , regexFindAll , regexMatch , regexQuoteMeta , replace , split , splitn , substr , trim , trimAll , trunc , upper Version comparison semver , semverCompare 1.2.2.2. Additional resources See Template processing for more details. 
See Advanced template processing in configuration policies for use-cases. For label selector examples, see the Kubernetes labels and selectors documentation. Refer to the Golang documentation - Package templates . See the Sprig Function Documentation for more details. 1.2.3. Advanced template processing in configuration policies Use both managed cluster and hub cluster templates to reduce the need to create separate policies for each target cluster or hardcode configuration values in the policy definitions. For security, both resource-specific and the generic lookup functions in hub cluster templates are restricted to the namespace of the policy on the hub cluster. Important: If you use hub cluster templates to propagate secrets or other sensitive data, the sensitive data is exposed in the managed cluster namespace on the hub cluster and on the managed clusters where that policy is distributed. The template content is expanded in the policy, and policies are not encrypted by the OpenShift Container Platform ETCD encryption support. To address this, use fromSecret or copySecretData , which automatically encrypt the values from the secret, or protect to encrypt other values. Continue reading for advanced template use-cases: Special annotation for reprocessing Object template processing Bypass template processing 1.2.3.1. Special annotation for reprocessing Hub cluster templates are resolved to the data in the referenced resources during policy creation, or when the referenced resources are updated. If you need to manually initiate an update, use the special annotation, policy.open-cluster-management.io/trigger-update , to indicate changes for the data referenced by the templates. Any change to the special annotation value automatically initiates template processing. Additionally, the latest contents of the referenced resource are read and updated in the policy definition that is propagated for processing on managed clusters. One way to use this annotation is to increment the value by one each time you want the templates to be reprocessed. 1.2.3.2. Object template processing Set object templates with a YAML string representation. The object-templates-raw parameter is an optional parameter that supports advanced templating use-cases, such as if-else and the range function. The following example is defined to add the species-category: mammal label to any ConfigMap in the default namespace that has a name key equal to Sea Otter : object-templates-raw: | {{- range (lookup "v1" "ConfigMap" "default" "").items }} {{- if eq .data.name "Sea Otter" }} - complianceType: musthave objectDefinition: kind: ConfigMap apiVersion: v1 metadata: name: {{ .metadata.name }} namespace: {{ .metadata.namespace }} labels: species-category: mammal {{- end }} {{- end }} Note: While spec.object-templates and spec.object-templates-raw are optional, exactly one of the two parameter fields must be set. View the following policy example that uses advanced templates to create and configure infrastructure MachineSet objects for your managed clusters. 
apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: create-infra-machineset spec: remediationAction: enforce severity: low object-templates-raw: | {{- /* Specify the parameters needed to create the MachineSet */ -}} {{- USDmachineset_role := "infra" }} {{- USDregion := "ap-southeast-1" }} {{- USDzones := list "ap-southeast-1a" "ap-southeast-1b" "ap-southeast-1c" }} {{- USDinfrastructure_id := (lookup "config.openshift.io/v1" "Infrastructure" "" "cluster").status.infrastructureName }} {{- USDworker_ms := (index (lookup "machine.openshift.io/v1beta1" "MachineSet" "openshift-machine-api" "").items 0) }} {{- /* Generate the MachineSet for each zone as specified */ -}} {{- range USDzone := USDzones }} - complianceType: musthave objectDefinition: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: {{ USDinfrastructure_id }} name: {{ USDinfrastructure_id }}-{{ USDmachineset_role }}-{{ USDzone }} namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: {{ USDinfrastructure_id }} machine.openshift.io/cluster-api-machineset: {{ USDinfrastructure_id }}-{{ USDmachineset_role }}-{{ USDzone }} template: metadata: labels: machine.openshift.io/cluster-api-cluster: {{ USDinfrastructure_id }} machine.openshift.io/cluster-api-machine-role: {{ USDmachineset_role }} machine.openshift.io/cluster-api-machine-type: {{ USDmachineset_role }} machine.openshift.io/cluster-api-machineset: {{ USDinfrastructure_id }}-{{ USDmachineset_role }}-{{ USDzone }} spec: metadata: labels: node-role.kubernetes.io/{{ USDmachineset_role }}: "" taints: - key: node-role.kubernetes.io/{{ USDmachineset_role }} effect: NoSchedule providerSpec: value: ami: id: {{ USDworker_ms.spec.template.spec.providerSpec.value.ami.id }} apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: encrypted: true iops: 2000 kmsKey: arn: '' volumeSize: 500 volumeType: io1 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 instanceType: {{ USDworker_ms.spec.template.spec.providerSpec.value.instanceType }} iamInstanceProfile: id: {{ USDinfrastructure_id }}-worker-profile kind: AWSMachineProviderConfig placement: availabilityZone: {{ USDzone }} region: {{ USDregion }} securityGroups: - filters: - name: tag:Name values: - {{ USDinfrastructure_id }}-worker-sg subnet: filters: - name: tag:Name values: - {{ USDinfrastructure_id }}-private-{{ USDzone }} tags: - name: kubernetes.io/cluster/{{ USDinfrastructure_id }} value: owned userDataSecret: name: worker-user-data {{- end }} 1.2.3.3. Bypass template processing You might create a policy that contains a template that is not intended to be processed by Red Hat Advanced Cluster Management. By default, Red Hat Advanced Cluster Management processes all templates. To bypass template processing for your hub cluster, you must change {{ template content }} to {{ `{{ template content }}` }} . Alternatively, you can add the following annotation in the ConfigurationPolicy section of your Policy : policy.open-cluster-management.io/disable-templates: "true" . When this annotation is included, the workaround is not necessary. Template processing is bypassed for the ConfigurationPolicy . 1.2.3.4. Additional resources See Template functions for more details. Return to Template processing . See Kubernetes configuration policy controller for more details. Also refer to the Backing up etcd data .
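As an illustration of the annotation approach described in the bypass section above, the following minimal sketch shows a ConfigurationPolicy that delivers a raw template string without resolving it. The policy name, ConfigMap name, and namespace are hypothetical, and the annotation placement assumes the standard metadata.annotations field of the ConfigurationPolicy:

apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: demo-disable-templates
  namespace: default
  annotations:
    policy.open-cluster-management.io/disable-templates: "true"
spec:
  remediationAction: inform
  severity: low
  object-templates:
    - complianceType: musthave
      objectDefinition:
        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: raw-template-config
          namespace: default
        data:
          # Delivered literally because template processing is disabled for this policy
          log-file: '{{ fromConfigMap "default" "logs-config" "log-file" }}'

When only the template escaping is needed, the {{ `{{ template content }}` }} form described above serves the same purpose without the annotation.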
|
[
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-config spec: namespaceSelector: include: [\"default\"] exclude: [] matchExpressions: [] matchLabels: {} remediationAction: inform 1 customMessage: compliant: {} noncompliant: {} severity: low evaluationInterval: compliant: \"\" noncompliant: \"\" object-templates-raw: \"\" object-templates: 2 - complianceType: musthave metadataComplianceType: recordDiff: \"\" recreateOption: \"\" objectSelector: matchLabels: {} matchExpressions: [] objectDefinition: apiVersion: v1 kind: Pod metadata: name: pod spec: containers: - image: pod-image name: pod-name ports: - containerPort: 80 - complianceType: mustonlyhave objectDefinition: apiVersion: v1 kind: ConfigMap metadata: name: myconfig namespace: default data: testData: hello",
"{{- if eq .metadata.name \"policy-grc-your-meta-data-name\" }} replicas: 2 {{- else }} replicas: 1 {{- end }}",
"apiVersion: policy.open-cluster-management.io/v1 kind: CertificatePolicy metadata: name: certificate-policy-example spec: namespaceSelector: include: [\"default\"] exclude: [] matchExpressions: [] matchLabels: {} labelSelector: myLabelKey: myLabelValue remediationAction: severity: minimumDuration: minimumCADuration: maximumDuration: maximumCADuration: allowedSANPattern: disallowedSANPattern:",
"apiVersion: policy.open-cluster-management.io/v1beta1 kind: PolicySet metadata: name: demo-policyset spec: policies: - policy-demo --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: demo-policyset-pb placementRef: apiGroup: cluster.open-cluster-management.io kind: Placement name: demo-policyset-pr subjects: - apiGroup: policy.open-cluster-management.io kind: PolicySet name: demo-policyset --- apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: demo-policyset-pr spec: predicates: - requiredClusterSelector: labelSelector: matchExpressions: - key: name operator: In values: - local-cluster tolerations: - key: cluster.open-cluster-management.io/unavailable operator: Exists - key: cluster.open-cluster-management.io/unreachable operator: Exists",
"apiVersion: policy.open-cluster-management.io/v1beta1 kind: PolicySet metadata: name: pci namespace: default spec: description: Policies for PCI compliance policies: - policy-pod - policy-namespace status: compliant: NonCompliant placement: - placementBinding: binding1 placement: placement1 policySet: policyset-ps",
"ca.crt: '{{ fromSecret \"openshift-config\" \"ca-config-map-secret\" \"ca.crt\" | base64dec | toRawJson | toLiteral }}'",
"func fromSecret (ns string, secretName string, datakey string) (dataValue string, err error)",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: demo-fromsecret namespace: test spec: namespaceSelector: exclude: - kube-* include: - default object-templates: - complianceType: musthave objectDefinition: apiVersion: v1 data: 1 USER_NAME: YWRtaW4= PASSWORD: '{{ fromSecret \"default\" \"localsecret\" \"PASSWORD\" }}' 2 kind: Secret 3 metadata: name: demosecret namespace: test type: Opaque remediationAction: enforce severity: low",
"ca.crt: '{{ fromSecret \"openshift-config\" \"ca-config-map-secret\" \"ca.crt\" | base64dec | toRawJson | toLiteral }}'",
"func fromConfigMap (ns string, configmapName string, datakey string) (dataValue string, err Error)",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: demo-fromcm-lookup namespace: test-templates spec: namespaceSelector: exclude: - kube-* include: - default object-templates: - complianceType: musthave objectDefinition: kind: ConfigMap 1 apiVersion: v1 metadata: name: demo-app-config namespace: test data: 2 app-name: sampleApp app-description: \"this is a sample app\" log-file: '{{ fromConfigMap \"default\" \"logs-config\" \"log-file\" }}' 3 log-level: '{{ fromConfigMap \"default\" \"logs-config\" \"log-level\" }}' 4 remediationAction: enforce severity: low",
"func fromClusterClaim (clusterclaimName string) (dataValue string, err Error)",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: demo-clusterclaims 1 namespace: default spec: namespaceSelector: exclude: - kube-* include: - default object-templates: - complianceType: musthave objectDefinition: kind: ConfigMap apiVersion: v1 metadata: name: sample-app-config namespace: default data: 2 platform: '{{ fromClusterClaim \"platform.open-cluster-management.io\" }}' 3 product: '{{ fromClusterClaim \"product.open-cluster-management.io\" }}' version: '{{ fromClusterClaim \"version.openshift.io\" }}' remediationAction: enforce severity: low",
"func lookup (apiversion string, kind string, namespace string, name string, labelselector ...string) (value string, err Error)",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: demo-lookup namespace: test-templates spec: namespaceSelector: exclude: - kube-* include: - default object-templates: - complianceType: musthave objectDefinition: kind: ConfigMap apiVersion: v1 metadata: name: demo-app-config namespace: test data: 1 app-name: sampleApp app-description: \"this is a sample app\" metrics-url: | 2 http://{{ (lookup \"v1\" \"Service\" \"default\" \"metrics\").spec.clusterIP }}:8080 remediationAction: enforce severity: low",
"func base64enc (data string) (enc-data string)",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: demo-fromsecret namespace: test spec: namespaceSelector: exclude: - kube-* include: - default object-templates: - complianceType: musthave objectDefinition: data: USER_NAME: '{{ fromConfigMap \"default\" \"myconfigmap\" \"admin-user\" | base64enc }}'",
"func base64dec (enc-data string) (data string)",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: demo-fromsecret namespace: test spec: namespaceSelector: exclude: - kube-* include: - default object-templates: - complianceType: musthave objectDefinition: data: app-name: | \"{{ ( lookup \"v1\" \"Secret\" \"testns\" \"mytestsecret\") .data.appname ) | base64dec }}\"",
"func indent (spaces int, data string) (padded-data string)",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: demo-fromsecret namespace: test spec: namespaceSelector: exclude: - kube-* include: - default object-templates: - complianceType: musthave objectDefinition: data: Ca-cert: | {{ ( index ( lookup \"v1\" \"Secret\" \"default\" \"mycert-tls\" ).data \"ca.pem\" ) | base64dec | indent 4 }}",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: demo-fromsecret namespace: test spec: namespaceSelector: exclude: - kube-* include: - default object-templates: - complianceType: musthave objectDefinition: data: Ca-cert: | {{ ( index ( lookup \"v1\" \"Secret\" \"default\" \"mycert-tls\" ).data \"ca.pem\" ) | base64dec | autoindent }}",
"func toInt (input interface{}) (output int)",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: demo-template-function namespace: test spec: namespaceSelector: exclude: - kube-* include: - default object-templates: - complianceType: musthave objectDefinition: spec: vlanid: | {{ (fromConfigMap \"site-config\" \"site1\" \"vlan\") | toInt }}",
"func toBool (input string) (output bool)",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: demo-template-function namespace: test spec: namespaceSelector: exclude: - kube-* include: - default object-templates: - complianceType: musthave objectDefinition: spec: enabled: | {{ (fromConfigMap \"site-config\" \"site1\" \"enabled\") | toBool }}",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: demo-template-function namespace: test spec: namespaceSelector: exclude: - kube-* include: - default object-templates: - complianceType: musthave objectDefinition: spec: enabled: | {{hub (lookup \"v1\" \"Secret\" \"default\" \"my-hub-secret\").data.message | protect hub}}",
"key: '{{ \"[\\\"10.10.10.10\\\", \\\"1.1.1.1\\\"]\" | toLiteral }}'",
"key: [\"10.10.10.10\", \"1.1.1.1\"]",
"complianceType: musthave objectDefinition: apiVersion: v1 kind: Secret metadata: name: my-secret-copy data: '{{ copySecretData \"default\" \"my-secret\" }}' 1",
"complianceType: musthave objectDefinition: apiVersion: v1 kind: ConfigMap metadata: name: my-secret-copy data: '{{ copyConfigMapData \"default\" \"my-configmap\" }}'",
"complianceType: musthave objectDefinition: apiVersion: v1 kind: ConfigMap metadata: name: my-configmap data: infraNode: | {{- range USDi,USDnd := (getNodesWithExactRoles \"infra\").items }} node{{ USDi }}: {{ USDnd.metadata.name }} {{- end }} replicas: {{ len ((getNodesWithExactRoles \"infra\").items) | toInt }}",
"complianceType: musthave objectDefinition: apiVersion: v1 kind: ConfigMap metadata: name: my-configmap data: key: '{{ hasNodesWithExactRoles \"infra\" }}'",
"object-templates-raw: | {{- range (lookup \"v1\" \"ConfigMap\" \"default\" \"\").items }} {{- if eq .data.name \"Sea Otter\" }} - complianceType: musthave objectDefinition: kind: ConfigMap apiVersion: v1 metadata: name: {{ .metadata.name }} namespace: {{ .metadata.namespace }} labels: species-category: mammal {{- end }} {{- end }}",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: create-infra-machineset spec: remediationAction: enforce severity: low object-templates-raw: | {{- /* Specify the parameters needed to create the MachineSet */ -}} {{- USDmachineset_role := \"infra\" }} {{- USDregion := \"ap-southeast-1\" }} {{- USDzones := list \"ap-southeast-1a\" \"ap-southeast-1b\" \"ap-southeast-1c\" }} {{- USDinfrastructure_id := (lookup \"config.openshift.io/v1\" \"Infrastructure\" \"\" \"cluster\").status.infrastructureName }} {{- USDworker_ms := (index (lookup \"machine.openshift.io/v1beta1\" \"MachineSet\" \"openshift-machine-api\" \"\").items 0) }} {{- /* Generate the MachineSet for each zone as specified */ -}} {{- range USDzone := USDzones }} - complianceType: musthave objectDefinition: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: {{ USDinfrastructure_id }} name: {{ USDinfrastructure_id }}-{{ USDmachineset_role }}-{{ USDzone }} namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: {{ USDinfrastructure_id }} machine.openshift.io/cluster-api-machineset: {{ USDinfrastructure_id }}-{{ USDmachineset_role }}-{{ USDzone }} template: metadata: labels: machine.openshift.io/cluster-api-cluster: {{ USDinfrastructure_id }} machine.openshift.io/cluster-api-machine-role: {{ USDmachineset_role }} machine.openshift.io/cluster-api-machine-type: {{ USDmachineset_role }} machine.openshift.io/cluster-api-machineset: {{ USDinfrastructure_id }}-{{ USDmachineset_role }}-{{ USDzone }} spec: metadata: labels: node-role.kubernetes.io/{{ USDmachineset_role }}: \"\" taints: - key: node-role.kubernetes.io/{{ USDmachineset_role }} effect: NoSchedule providerSpec: value: ami: id: {{ USDworker_ms.spec.template.spec.providerSpec.value.ami.id }} apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: encrypted: true iops: 2000 kmsKey: arn: '' volumeSize: 500 volumeType: io1 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 instanceType: {{ USDworker_ms.spec.template.spec.providerSpec.value.instanceType }} iamInstanceProfile: id: {{ USDinfrastructure_id }}-worker-profile kind: AWSMachineProviderConfig placement: availabilityZone: {{ USDzone }} region: {{ USDregion }} securityGroups: - filters: - name: tag:Name values: - {{ USDinfrastructure_id }}-worker-sg subnet: filters: - name: tag:Name values: - {{ USDinfrastructure_id }}-private-{{ USDzone }} tags: - name: kubernetes.io/cluster/{{ USDinfrastructure_id }} value: owned userDataSecret: name: worker-user-data {{- end }}"
] |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/governance/governance
|
8.2. Top Three Causes of Problems
|
8.2. Top Three Causes of Problems The following sections describe the top three causes of problems: labeling problems, configuring Booleans and ports for services, and evolving SELinux rules. 8.2.1. Labeling Problems On systems running SELinux, all processes and files are labeled with a label that contains security-relevant information. This information is called the SELinux context. If these labels are wrong, access may be denied. If an application is labeled incorrectly, the process it transitions to may not have the correct label, possibly causing SELinux to deny access and allowing the process to create mislabeled files. A common cause of labeling problems is when a non-standard directory is used for a service. For example, instead of using /var/www/html/ for a website, an administrator wants to use /srv/myweb/ . On Red Hat Enterprise Linux 6, the /srv/ directory is labeled with the var_t type. Files and directories created under /srv/ inherit this type. Also, newly-created top-level directories (such as /myserver/ ) may be labeled with the default_t type. SELinux prevents the Apache HTTP Server ( httpd ) from accessing both of these types. To allow access, SELinux must know that the files in /srv/myweb/ are to be accessible to httpd : This semanage command adds the context for the /srv/myweb/ directory (and all files and directories under it) to the SELinux file-context configuration [11] . The semanage command does not change the context. As the Linux root user, run the restorecon command to apply the changes: Refer to Section 5.6.2, "Persistent Changes: semanage fcontext" for further information about adding contexts to the file-context configuration. 8.2.1.1. What is the Correct Context? The matchpathcon command checks the context of a file path and compares it to the default label for that path. The following example demonstrates using matchpathcon on a directory that contains incorrectly labeled files: In this example, the index.html and page1.html files are labeled with the user_home_t type. This type is used for files in user home directories. Using the mv command to move files from your home directory may result in files being labeled with the user_home_t type. This type should not exist outside of home directories. Use the restorecon command to restore such files to their correct type: To restore the context for all files under a directory, use the -R option: Refer to Section 5.9.3, "Checking the Default SELinux Context" for a more detailed example of matchpathcon . [11] Files in /etc/selinux/targeted/contexts/files/ define contexts for files and directories. Files in this directory are read by the restorecon and setfiles commands to restore files and directories to their default contexts.
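To confirm the result of a relabeling operation, you can display the current context directly with the -Z option of the ls command. The following is a brief illustration using the /srv/myweb/ directory from the example above; the output is representative and may differ on your system:

~]USD ls -dZ /srv/myweb
drwxr-xr-x. root root unconfined_u:object_r:httpd_sys_content_t:s0 /srv/myweb

If the type shown does not match the output of matchpathcon for the same path, run the restorecon command again as described above.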
|
[
"~]# semanage fcontext -a -t httpd_sys_content_t \"/srv/myweb(/.*)?\"",
"~]# restorecon -R -v /srv/myweb",
"~]USD matchpathcon -V /var/www/html/* /var/www/html/index.html has context unconfined_u:object_r:user_home_t:s0, should be system_u:object_r:httpd_sys_content_t:s0 /var/www/html/page1.html has context unconfined_u:object_r:user_home_t:s0, should be system_u:object_r:httpd_sys_content_t:s0",
"~]# restorecon -v /var/www/html/index.html restorecon reset /var/www/html/index.html context unconfined_u:object_r:user_home_t:s0->system_u:object_r:httpd_sys_content_t:s0",
"~]# restorecon -R -v /var/www/html/ restorecon reset /var/www/html/page1.html context unconfined_u:object_r:samba_share_t:s0->system_u:object_r:httpd_sys_content_t:s0 restorecon reset /var/www/html/index.html context unconfined_u:object_r:samba_share_t:s0->system_u:object_r:httpd_sys_content_t:s0"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-troubleshooting-top_three_causes_of_problems
|
Chapter 10. Changing the cloud provider credentials configuration
|
Chapter 10. Changing the cloud provider credentials configuration For supported configurations, you can change how OpenShift Container Platform authenticates with your cloud provider. To determine which cloud credentials strategy your cluster uses, see Determining the Cloud Credential Operator mode . 10.1. Rotating cloud provider service keys with the Cloud Credential Operator utility Some organizations require the rotation of the service keys that authenticate the cluster. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to update keys for clusters installed on the following cloud providers: Amazon Web Services (AWS) with Security Token Service (STS) Google Cloud Platform (GCP) with GCP Workload Identity Microsoft Azure with Workload ID IBM Cloud 10.1.1. Rotating AWS OIDC bound service account signer keys If the Cloud Credential Operator (CCO) for your OpenShift Container Platform cluster on Amazon Web Services (AWS) is configured to operate in manual mode with STS, you can rotate the bound service account signer key. To rotate the key, you delete the existing key on your cluster, which causes the Kubernetes API server to create a new key. To reduce authentication failures during this process, you must immediately add the new public key to the existing issuer file. After the cluster is using the new key for authentication, you can remove any remaining keys. Important The process to rotate OIDC bound service account signer keys is disruptive and takes a significant amount of time. Some steps are time-sensitive. Before proceeding, observe the following considerations: Read the following steps and ensure that you understand and accept the time requirement. The exact time requirement varies depending on the individual cluster, but it is likely to require at least one hour. To reduce the risk of authentication failures, ensure that you understand and prepare for the time-sensitive steps. During this process, you must refresh all service accounts and restart all pods on the cluster. These actions are disruptive to workloads. To mitigate this impact, you can temporarily halt these services and then redeploy them when the cluster is ready. Prerequisites You have access to the OpenShift CLI ( oc ) as a user with the cluster-admin role. You have created an AWS account for the ccoctl utility to use with the following permissions: s3:GetObject s3:PutObject s3:PutObjectTagging For clusters that store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the cloudfront:ListDistributions permission. You have configured the ccoctl utility. Your cluster is in a stable state. You can confirm that the cluster is stable by running the following command: USD oc adm wait-for-stable-cluster --minimum-stable-period=5s Procedure Configure the following environment variables: INFRA_ID=USD(oc get infrastructures cluster -o jsonpath='{.status.infrastructureName}') CLUSTER_NAME=USD{INFRA_ID%-*} 1 1 1 This value should match the name of the cluster that was specified in the metadata.name field of the install-config.yaml file during installation. Note Your cluster might differ from this example, and the resource names might not be derived identically from the cluster name. Ensure that you specify the correct corresponding resource names for your cluster. 
For AWS clusters that store the OIDC configuration in a public S3 bucket, configure the following environment variable: AWS_BUCKET=USD(oc get authentication cluster -o jsonpath={'.spec.serviceAccountIssuer'} | awk -F'://' '{printUSD2}' |awk -F'.' '{printUSD1}') For AWS clusters that store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, complete the following steps: Extract the public CloudFront distribution URL by running the following command: USD basename USD(oc get authentication cluster -o jsonpath={'.spec.serviceAccountIssuer'} ) Example output <subdomain>.cloudfront.net where <subdomain> is an alphanumeric string. Determine the private S3 bucket name by running the following command: USD aws cloudfront list-distributions --query "DistributionList.Items[].{DomainName: DomainName, OriginDomainName: Origins.Items[0].DomainName}[?contains(DomainName, '<subdomain>.cloudfront.net')]" Example output [ { "DomainName": "<subdomain>.cloudfront.net", "OriginDomainName": "<s3_bucket>.s3.us-east-2.amazonaws.com" } ] where <s3_bucket> is the private S3 bucket name for your cluster. Configure the following environment variable: AWS_BUCKET=USD<s3_bucket> where <s3_bucket> is the private S3 bucket name for your cluster. Create a temporary directory to use and assign it an environment variable by running the following command: USD TEMPDIR=USD(mktemp -d) To cause the Kubernetes API server to create a new bound service account signing key, you delete the bound service account signing key. Important After you complete this step, the Kubernetes API server starts to roll out a new key. To reduce the risk of authentication failures, complete the remaining steps as quickly as possible. The remaining steps might be disruptive to workloads. When you are ready, delete the bound service account signing key by running the following command: USD oc delete secrets/-bound-service-account-signing-key \ -n openshift-kube-apiserver-operator Download the public key from the service account signing key secret that the Kubernetes API server created by running the following command: USD oc get secret/-bound-service-account-signing-key \ -n openshift-kube-apiserver-operator \ -ojsonpath='{ .data.service-account\.pub }' | base64 \ -d > USD{TEMPDIR}/serviceaccount-signer.public Use the public key to create a keys.json file by running the following command: USD ccoctl aws create-identity-provider \ --dry-run \ 1 --output-dir USD{TEMPDIR} \ --name fake \ 2 --region us-east-1 3 1 The --dry-run option outputs files, including the new keys.json file, to the disk without making API calls. 2 Because the --dry-run option does not make any API calls, some parameters do not require real values. 3 Specify any valid AWS region, such as us-east-1 . This value does not need to match the region the cluster is in. Rename the keys.json file by running the following command: USD cp USD{TEMPDIR}/<number>-keys.json USD{TEMPDIR}/jwks.new.json where <number> is a two-digit numerical value that varies depending on your environment. 
Download the existing keys.json file from the cloud provider by running the following command: USD aws s3api get-object \ --bucket USD{AWS_BUCKET} \ --key keys.json USD{TEMPDIR}/jwks.current.json Combine the two keys.json files by running the following command: USD jq -s '{ keys: map(.keys[])}' USD{TEMPDIR}/jwks.current.json USD{TEMPDIR}/jwks.new.json > USD{TEMPDIR}/jwks.combined.json To enable authentication for the old and new keys during the rotation, upload the combined keys.json file to the cloud provider by running the following command: USD aws s3api put-object \ --bucket USD{AWS_BUCKET} \ --tagging "openshift.io/cloud-credential-operator/USD{CLUSTER_NAME}=owned" \ --key keys.json \ --body USD{TEMPDIR}/jwks.combined.json Wait for the Kubernetes API server to update and use the new key. You can monitor the update progress by running the following command: USD oc adm wait-for-stable-cluster This process might take 15 minutes or longer. The following output indicates that the process is complete: All clusteroperators are stable To ensure that all pods on the cluster use the new key, you must restart them. Important This step maintains uptime for services that are configured for high availability across multiple nodes, but might cause downtime for any services that are not. Restart all of the pods in the cluster by running the following command: USD oc adm reboot-machine-config-pool mcp/worker mcp/master Monitor the restart and update process by running the following command: USD oc adm wait-for-node-reboot nodes --all This process might take 15 minutes or longer. The following output indicates that the process is complete: All nodes rebooted Monitor the update progress by running the following command: USD oc adm wait-for-stable-cluster This process might take 15 minutes or longer. The following output indicates that the process is complete: All clusteroperators are stable Replace the combined keys.json file with the updated keys.json file on the cloud provider by running the following command: USD aws s3api put-object \ --bucket USD{AWS_BUCKET} \ --tagging "openshift.io/cloud-credential-operator/USD{CLUSTER_NAME}=owned" \ --key keys.json \ --body USD{TEMPDIR}/jwks.new.json 10.1.2. Rotating GCP OIDC bound service account signer keys If the Cloud Credential Operator (CCO) for your OpenShift Container Platform cluster on Google Cloud Platform (GCP) is configured to operate in manual mode with GCP Workload Identity, you can rotate the bound service account signer key. To rotate the key, you delete the existing key on your cluster, which causes the Kubernetes API server to create a new key. To reduce authentication failures during this process, you must immediately add the new public key to the existing issuer file. After the cluster is using the new key for authentication, you can remove any remaining keys. Important The process to rotate OIDC bound service account signer keys is disruptive and takes a significant amount of time. Some steps are time-sensitive. Before proceeding, observe the following considerations: Read the following steps and ensure that you understand and accept the time requirement. The exact time requirement varies depending on the individual cluster, but it is likely to require at least one hour. To reduce the risk of authentication failures, ensure that you understand and prepare for the time-sensitive steps. During this process, you must refresh all service accounts and restart all pods on the cluster. These actions are disruptive to workloads. 
To mitigate this impact, you can temporarily halt these services and then redeploy them when the cluster is ready. Prerequisites You have access to the OpenShift CLI ( oc ) as a user with the cluster-admin role. You have added one of the following authentication options to the GCP account that the ccoctl utility uses: The IAM Workload Identity Pool Admin role The following granular permissions: storage.objects.create storage.objects.delete You have configured the ccoctl utility. Your cluster is in a stable state. You can confirm that the cluster is stable by running the following command: USD oc adm wait-for-stable-cluster --minimum-stable-period=5s Procedure Configure the following environment variables: CURRENT_ISSUER=USD(oc get authentication cluster -o jsonpath='{.spec.serviceAccountIssuer}') GCP_BUCKET=USD(echo USD{CURRENT_ISSUER} | cut -d "/" -f4) Note Your cluster might differ from this example, and the resource names might not be derived identically from the cluster name. Ensure that you specify the correct corresponding resource names for your cluster. Create a temporary directory to use and assign it an environment variable by running the following command: USD TEMPDIR=USD(mktemp -d) To cause the Kubernetes API server to create a new bound service account signing key, you delete the bound service account signing key. Important After you complete this step, the Kubernetes API server starts to roll out a new key. To reduce the risk of authentication failures, complete the remaining steps as quickly as possible. The remaining steps might be disruptive to workloads. When you are ready, delete the bound service account signing key by running the following command: USD oc delete secrets/-bound-service-account-signing-key \ -n openshift-kube-apiserver-operator Download the public key from the service account signing key secret that the Kubernetes API server created by running the following command: USD oc get secret/-bound-service-account-signing-key \ -n openshift-kube-apiserver-operator \ -ojsonpath='{ .data.service-account\.pub }' | base64 \ -d > USD{TEMPDIR}/serviceaccount-signer.public Use the public key to create a keys.json file by running the following command: USD ccoctl gcp create-workload-identity-provider \ --dry-run \ 1 --output-dir=USD{TEMPDIR} \ --name fake \ 2 --project fake \ --workload-identity-pool fake 1 The --dry-run option outputs files, including the new keys.json file, to the disk without making API calls. 2 Because the --dry-run option does not make any API calls, some parameters do not require real values. Rename the keys.json file by running the following command: USD cp USD{TEMPDIR}/<number>-keys.json USD{TEMPDIR}/jwks.new.json where <number> is a two-digit numerical value that varies depending on your environment. Download the existing keys.json file from the cloud provider by running the following command: USD gcloud storage cp gs://USD{GCP_BUCKET}/keys.json USD{TEMPDIR}/jwks.current.json Combine the two keys.json files by running the following command: USD jq -s '{ keys: map(.keys[])}' USD{TEMPDIR}/jwks.current.json USD{TEMPDIR}/jwks.new.json > USD{TEMPDIR}/jwks.combined.json To enable authentication for the old and new keys during the rotation, upload the combined keys.json file to the cloud provider by running the following command: USD gcloud storage cp USD{TEMPDIR}/jwks.combined.json gs://USD{GCP_BUCKET}/keys.json Wait for the Kubernetes API server to update and use the new key. 
You can monitor the update progress by running the following command: USD oc adm wait-for-stable-cluster This process might take 15 minutes or longer. The following output indicates that the process is complete: All clusteroperators are stable To ensure that all pods on the cluster use the new key, you must restart them. Important This step maintains uptime for services that are configured for high availability across multiple nodes, but might cause downtime for any services that are not. Restart all of the pods in the cluster by running the following command: USD oc adm reboot-machine-config-pool mcp/worker mcp/master Monitor the restart and update process by running the following command: USD oc adm wait-for-node-reboot nodes --all This process might take 15 minutes or longer. The following output indicates that the process is complete: All nodes rebooted Monitor the update progress by running the following command: USD oc adm wait-for-stable-cluster This process might take 15 minutes or longer. The following output indicates that the process is complete: All clusteroperators are stable Replace the combined keys.json file with the updated keys.json file on the cloud provider by running the following command: USD gcloud storage cp USD{TEMPDIR}/jwks.new.json gs://USD{GCP_BUCKET}/keys.json 10.1.3. Rotating Azure OIDC bound service account signer keys If the Cloud Credential Operator (CCO) for your OpenShift Container Platform cluster on Microsoft Azure is configured to operate in manual mode with Microsoft Entra Workload ID, you can rotate the bound service account signer key. To rotate the key, you delete the existing key on your cluster, which causes the Kubernetes API server to create a new key. To reduce authentication failures during this process, you must immediately add the new public key to the existing issuer file. After the cluster is using the new key for authentication, you can remove any remaining keys. Important The process to rotate OIDC bound service account signer keys is disruptive and takes a significant amount of time. Some steps are time-sensitive. Before proceeding, observe the following considerations: Read the following steps and ensure that you understand and accept the time requirement. The exact time requirement varies depending on the individual cluster, but it is likely to require at least one hour. To reduce the risk of authentication failures, ensure that you understand and prepare for the time-sensitive steps. During this process, you must refresh all service accounts and restart all pods on the cluster. These actions are disruptive to workloads. To mitigate this impact, you can temporarily halt these services and then redeploy them when the cluster is ready. Prerequisites You have access to the OpenShift CLI ( oc ) as a user with the cluster-admin role. You have created a global Azure account for the ccoctl utility to use with the following permissions: Microsoft.Storage/storageAccounts/listkeys/action Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Microsoft.Storage/storageAccounts/blobServices/containers/read Microsoft.Storage/storageAccounts/blobServices/containers/write You have configured the ccoctl utility. Your cluster is in a stable state. 
You can confirm that the cluster is stable by running the following command: USD oc adm wait-for-stable-cluster --minimum-stable-period=5s Procedure Configure the following environment variables: CURRENT_ISSUER=USD(oc get authentication cluster -o jsonpath='{.spec.serviceAccountIssuer}') AZURE_STORAGE_ACCOUNT=USD(echo USD{CURRENT_ISSUER} | cut -d "/" -f3 | cut -d "." -f1) AZURE_STORAGE_CONTAINER=USD(echo USD{CURRENT_ISSUER} | cut -d "/" -f4) Note Your cluster might differ from this example, and the resource names might not be derived identically from the cluster name. Ensure that you specify the correct corresponding resource names for your cluster. Create a temporary directory to use and assign it an environment variable by running the following command: USD TEMPDIR=USD(mktemp -d) To cause the Kubernetes API server to create a new bound service account signing key, you delete the bound service account signing key. Important After you complete this step, the Kubernetes API server starts to roll out a new key. To reduce the risk of authentication failures, complete the remaining steps as quickly as possible. The remaining steps might be disruptive to workloads. When you are ready, delete the bound service account signing key by running the following command: USD oc delete secrets/-bound-service-account-signing-key \ -n openshift-kube-apiserver-operator Download the public key from the service account signing key secret that the Kubernetes API server created by running the following command: USD oc get secret/-bound-service-account-signing-key \ -n openshift-kube-apiserver-operator \ -ojsonpath='{ .data.service-account\.pub }' | base64 \ -d > USD{TEMPDIR}/serviceaccount-signer.public Use the public key to create a keys.json file by running the following command: USD ccoctl aws create-identity-provider \ 1 --dry-run \ 2 --output-dir USD{TEMPDIR} \ --name fake \ 3 --region us-east-1 4 1 The ccoctl azure command does not include a --dry-run option. To use the --dry-run option, you must specify aws for an Azure cluster. 2 The --dry-run option outputs files, including the new keys.json file, to the disk without making API calls. 3 Because the --dry-run option does not make any API calls, some parameters do not require real values. 4 Specify any valid AWS region, such as us-east-1 . This value does not need to match the region the cluster is in. Rename the keys.json file by running the following command: USD cp USD{TEMPDIR}/<number>-keys.json USD{TEMPDIR}/jwks.new.json where <number> is a two-digit numerical value that varies depending on your environment. Download the existing keys.json file from the cloud provider by running the following command: USD az storage blob download \ --container-name USD{AZURE_STORAGE_CONTAINER} \ --account-name USD{AZURE_STORAGE_ACCOUNT} \ --name 'openid/v1/jwks' \ -f USD{TEMPDIR}/jwks.current.json Combine the two keys.json files by running the following command: USD jq -s '{ keys: map(.keys[])}' USD{TEMPDIR}/jwks.current.json USD{TEMPDIR}/jwks.new.json > USD{TEMPDIR}/jwks.combined.json To enable authentication for the old and new keys during the rotation, upload the combined keys.json file to the cloud provider by running the following command: USD az storage blob upload \ --overwrite \ --account-name USD{AZURE_STORAGE_ACCOUNT} \ --container-name USD{AZURE_STORAGE_CONTAINER} \ --name 'openid/v1/jwks' \ -f USD{TEMPDIR}/jwks.combined.json Wait for the Kubernetes API server to update and use the new key. 
You can monitor the update progress by running the following command: USD oc adm wait-for-stable-cluster This process might take 15 minutes or longer. The following output indicates that the process is complete: All clusteroperators are stable To ensure that all pods on the cluster use the new key, you must restart them. Important This step maintains uptime for services that are configured for high availability across multiple nodes, but might cause downtime for any services that are not. Restart all of the pods in the cluster by running the following command: USD oc adm reboot-machine-config-pool mcp/worker mcp/master Monitor the restart and update process by running the following command: USD oc adm wait-for-node-reboot nodes --all This process might take 15 minutes or longer. The following output indicates that the process is complete: All nodes rebooted Monitor the update progress by running the following command: USD oc adm wait-for-stable-cluster This process might take 15 minutes or longer. The following output indicates that the process is complete: All clusteroperators are stable Replace the combined keys.json file with the updated keys.json file on the cloud provider by running the following command: USD az storage blob upload \ --overwrite \ --account-name USD{AZURE_STORAGE_ACCOUNT} \ --container-name USD{AZURE_STORAGE_CONTAINER} \ --name 'openid/v1/jwks' \ -f USD{TEMPDIR}/jwks.new.json 10.1.4. Rotating IBM Cloud credentials You can rotate API keys for your existing service IDs and update the corresponding secrets. Prerequisites You have configured the ccoctl utility. You have existing service IDs in a live OpenShift Container Platform cluster installed. Procedure Use the ccoctl utility to rotate your API keys for the service IDs and update the secrets by running the following command: USD ccoctl <provider_name> refresh-keys \ 1 --kubeconfig <openshift_kubeconfig_file> \ 2 --credentials-requests-dir <path_to_credential_requests_directory> \ 3 --name <name> 4 1 The name of the provider. For example: ibmcloud or powervs . 2 The kubeconfig file associated with the cluster. For example, <installation_directory>/auth/kubeconfig . 3 The directory where the credential requests are stored. 4 The name of the OpenShift Container Platform cluster. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. 10.2. Rotating cloud provider credentials Some organizations require the rotation of the cloud provider credentials. To allow the cluster to use the new credentials, you must update the secrets that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials. 10.2.1. Rotating cloud provider credentials manually If your cloud provider credentials are changed for any reason, you must manually update the secret that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials. The process for rotating cloud credentials depends on the mode that the CCO is configured to use. After you rotate credentials for a cluster that is using mint mode, you must manually remove the component credentials that were created by the removed credential. Prerequisites Your cluster is installed on a platform that supports rotating cloud credentials manually with the CCO mode that you are using: For mint mode, Amazon Web Services (AWS) and Google Cloud Platform (GCP) are supported. 
For passthrough mode, Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Red Hat OpenStack Platform (RHOSP), and VMware vSphere are supported. You have changed the credentials that are used to interface with your cloud provider. The new credentials have sufficient permissions for the mode CCO is configured to use in your cluster. Procedure In the Administrator perspective of the web console, navigate to Workloads Secrets . In the table on the Secrets page, find the root secret for your cloud provider. Platform Secret name AWS aws-creds Azure azure-credentials GCP gcp-credentials RHOSP openstack-credentials VMware vSphere vsphere-creds Click the Options menu in the same row as the secret and select Edit Secret . Record the contents of the Value field or fields. You can use this information to verify that the value is different after updating the credentials. Update the text in the Value field or fields with the new authentication information for your cloud provider, and then click Save . If you are updating the credentials for a vSphere cluster that does not have the vSphere CSI Driver Operator enabled, you must force a rollout of the Kubernetes controller manager to apply the updated credentials. Note If the vSphere CSI Driver Operator is enabled, this step is not required. To apply the updated vSphere credentials, log in to the OpenShift Container Platform CLI as a user with the cluster-admin role and run the following command: USD oc patch kubecontrollermanager cluster \ -p='{"spec": {"forceRedeploymentReason": "recovery-'"USD( date )"'"}}' \ --type=merge While the credentials are rolling out, the status of the Kubernetes Controller Manager Operator reports Progressing=true . To view the status, run the following command: USD oc get co kube-controller-manager If the CCO for your cluster is configured to use mint mode, delete each component secret that is referenced by the individual CredentialsRequest objects. Log in to the OpenShift Container Platform CLI as a user with the cluster-admin role. Get the names and namespaces of all referenced component secrets: USD oc -n openshift-cloud-credential-operator get CredentialsRequest \ -o json | jq -r '.items[] | select (.spec.providerSpec.kind=="<provider_spec>") | .spec.secretRef' where <provider_spec> is the corresponding value for your cloud provider: AWS: AWSProviderSpec GCP: GCPProviderSpec Partial example output for AWS { "name": "ebs-cloud-credentials", "namespace": "openshift-cluster-csi-drivers" } { "name": "cloud-credential-operator-iam-ro-creds", "namespace": "openshift-cloud-credential-operator" } Delete each of the referenced component secrets: USD oc delete secret <secret_name> \ 1 -n <secret_namespace> 2 1 Specify the name of a secret. 2 Specify the namespace that contains the secret. Example deletion of an AWS secret USD oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers You do not need to manually delete the credentials from your provider console. Deleting the referenced component secrets will cause the CCO to delete the existing credentials from the platform and create new ones. Verification To verify that the credentials have changed: In the Administrator perspective of the web console, navigate to Workloads Secrets . Verify that the contents of the Value field or fields have changed. Additional resources The Cloud Credential Operator in mint mode The Cloud Credential Operator in passthrough mode vSphere CSI Driver Operator 10.3. 
Removing cloud provider credentials After installing OpenShift Container Platform, some organizations require the removal of the cloud provider credentials that were used during the initial installation. To allow the cluster to use the new credentials, you must update the secrets that the Cloud Credential Operator (CCO) uses to manage cloud provider credentials. 10.3.1. Removing cloud provider credentials For clusters that use the Cloud Credential Operator (CCO) in mint mode, the administrator-level credential is stored in the kube-system namespace. The CCO uses the admin credential to process the CredentialsRequest objects in the cluster and create users for components with limited permissions. After installing an OpenShift Container Platform cluster with the CCO in mint mode, you can remove the administrator-level credential secret from the kube-system namespace in the cluster. The CCO only requires the administrator-level credential during changes that require reconciling new or modified CredentialsRequest custom resources, such as minor cluster version updates. Note Before performing a minor version cluster update (for example, updating from OpenShift Container Platform 4.17 to 4.18), you must reinstate the credential secret with the administrator-level credential. If the credential is not present, the update might be blocked. Prerequisites Your cluster is installed on a platform that supports removing cloud credentials from the CCO. Supported platforms are AWS and GCP. Procedure In the Administrator perspective of the web console, navigate to Workloads Secrets . In the table on the Secrets page, find the root secret for your cloud provider. Platform Secret name AWS aws-creds GCP gcp-credentials Click the Options menu in the same row as the secret and select Delete Secret . Additional resources The Cloud Credential Operator in mint mode 10.4. Enabling token-based authentication After installing an Microsoft Azure OpenShift Container Platform cluster, you can enable Microsoft Entra Workload ID to use short-term credentials. 10.4.1. Configuring the Cloud Credential Operator utility To configure an existing cluster to create and manage cloud credentials from outside of the cluster, extract and prepare the Cloud Credential Operator utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(oc get clusterversion -o jsonpath={..desired.image}) Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. 
The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for {ibm-cloud-title} nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 10.4.2. Enabling Microsoft Entra Workload ID on an existing cluster If you did not configure your Microsoft Azure OpenShift Container Platform cluster to use Microsoft Entra Workload ID during installation, you can enable this authentication method on an existing cluster. Important The process to enable Workload ID on an existing cluster is disruptive and takes a significant amount of time. Before proceeding, observe the following considerations: Read the following steps and ensure that you understand and accept the time requirement. The exact time requirement varies depending on the individual cluster, but it is likely to require at least one hour. During this process, you must refresh all service accounts and restart all pods on the cluster. These actions are disruptive to workloads. To mitigate this impact, you can temporarily halt these services and then redeploy them when the cluster is ready. After starting this process, do not attempt to update the cluster until it is complete. If an update is triggered, the process to enable Workload ID on an existing cluster fails. Prerequisites You have installed an OpenShift Container Platform cluster on Microsoft Azure. You have access to the cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have extracted and prepared the Cloud Credential Operator utility ( ccoctl ) binary. You have access to your Azure account by using the Azure CLI ( az ). Procedure Create an output directory for the manifests that the ccoctl utility generates. This procedure uses ./output_dir as an example. Extract the service account public signing key for the cluster to the output directory by running the following command: USD oc get configmap \ --namespace openshift-kube-apiserver bound-sa-token-signing-certs \ --output 'go-template={{index .data "service-account-001.pub"}}' > ./output_dir/serviceaccount-signer.public 1 1 This procedure uses a file named serviceaccount-signer.public as an example. Use the extracted service account public signing key to create an OpenID Connect (OIDC) issuer and Azure blob storage container with OIDC configuration files by running the following command: USD ./ccoctl azure create-oidc-issuer \ --name <azure_infra_name> \ 1 --output-dir ./output_dir \ --region <azure_region> \ 2 --subscription-id <azure_subscription_id> \ 3 --tenant-id <azure_tenant_id> \ --public-key-file ./output_dir/serviceaccount-signer.public 4 1 The value of the name parameter is used to create an Azure resource group. 
To use an existing Azure resource group instead of creating a new one, specify the --oidc-resource-group-name argument with the existing group name as its value. 2 Specify the region of the existing cluster. 3 Specify the subscription ID of the existing cluster. 4 Specify the file that contains the service account public signing key for the cluster. Verify that the configuration file for the Azure pod identity webhook was created by running the following command: USD ll ./output_dir/manifests Example output total 8 -rw-------. 1 cloud-user cloud-user 193 May 22 02:29 azure-ad-pod-identity-webhook-config.yaml 1 -rw-------. 1 cloud-user cloud-user 165 May 22 02:29 cluster-authentication-02-config.yaml 1 The file azure-ad-pod-identity-webhook-config.yaml contains the Azure pod identity webhook configuration. Set an OIDC_ISSUER_URL variable with the OIDC issuer URL from the generated manifests in the output directory by running the following command: USD OIDC_ISSUER_URL=`awk '/serviceAccountIssuer/ { print USD2 }' ./output_dir/manifests/cluster-authentication-02-config.yaml` Update the spec.serviceAccountIssuer parameter of the cluster authentication configuration by running the following command: USD oc patch authentication cluster \ --type=merge \ -p "{\"spec\":{\"serviceAccountIssuer\":\"USD{OIDC_ISSUER_URL}\"}}" Monitor the configuration update progress by running the following command: USD oc adm wait-for-stable-cluster This process might take 15 minutes or longer. The following output indicates that the process is complete: All clusteroperators are stable Restart all of the pods in the cluster by running the following command: USD oc adm reboot-machine-config-pool mcp/worker mcp/master Restarting a pod updates the serviceAccountIssuer field and refreshes the service account public signing key. Monitor the restart and update process by running the following command: USD oc adm wait-for-node-reboot nodes --all This process might take 15 minutes or longer. The following output indicates that the process is complete: All nodes rebooted Update the Cloud Credential Operator spec.credentialsMode parameter to Manual by running the following command: USD oc patch cloudcredential cluster \ --type=merge \ --patch '{"spec":{"credentialsMode":"Manual"}}' Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --credentials-requests \ --included \ --to <path_to_directory_for_credentials_requests> \ --registry-config ~/.pull-secret Note This command might take a few moments to run. Set an AZURE_INSTALL_RG variable with the Azure resource group name by running the following command: USD AZURE_INSTALL_RG=`oc get infrastructure cluster -o jsonpath --template '{ .status.platformStatus.azure.resourceGroupName }'` Use the ccoctl utility to create managed identities for all CredentialsRequest objects by running the following command: USD ccoctl azure create-managed-identities \ --name <azure_infra_name> \ --output-dir ./output_dir \ --region <azure_region> \ --subscription-id <azure_subscription_id> \ --credentials-requests-dir <path_to_directory_for_credentials_requests> \ --issuer-url "USD{OIDC_ISSUER_URL}" \ --dnszone-resource-group-name <azure_dns_zone_resourcegroup_name> \ 1 --installation-resource-group-name "USD{AZURE_INSTALL_RG}" 1 Specify the name of the resource group that contains the DNS zone. 
Apply the Azure pod identity webhook configuration for Workload ID by running the following command: USD oc apply -f ./output_dir/manifests/azure-ad-pod-identity-webhook-config.yaml Apply the secrets generated by the ccoctl utility by running the following command: USD find ./output_dir/manifests -iname "openshift*yaml" -print0 | xargs -I {} -0 -t oc replace -f {} This process might take several minutes. Restart all of the pods in the cluster by running the following command: USD oc adm reboot-machine-config-pool mcp/worker mcp/master Restarting a pod updates the serviceAccountIssuer field and refreshes the service account public signing key. Monitor the restart and update process by running the following command: USD oc adm wait-for-node-reboot nodes --all This process might take 15 minutes or longer. The following output indicates that the process is complete: All nodes rebooted Monitor the configuration update progress by running the following command: USD oc adm wait-for-stable-cluster This process might take 15 minutes or longer. The following output indicates that the process is complete: All clusteroperators are stable Optional: Remove the Azure root credentials secret by running the following command: USD oc delete secret -n kube-system azure-credentials Additional resources Microsoft Entra Workload ID Configuring an Azure cluster to use short-term credentials 10.4.3. Verifying that a cluster uses short-term credentials You can verify that a cluster uses short-term security credentials for individual components by checking the Cloud Credential Operator (CCO) configuration and other values in the cluster. Prerequisites You deployed an OpenShift Container Platform cluster using the Cloud Credential Operator utility ( ccoctl ) to implement short-term credentials. You installed the OpenShift CLI ( oc ). You are logged in as a user with cluster-admin privileges. Procedure Verify that the CCO is configured to operate in manual mode by running the following command: USD oc get cloudcredentials cluster \ -o=jsonpath={.spec.credentialsMode} The following output confirms that the CCO is operating in manual mode: Example output Manual Verify that the cluster does not have root credentials by running the following command: USD oc get secrets \ -n kube-system <secret_name> where <secret_name> is the name of the root secret for your cloud provider. Platform Secret name Amazon Web Services (AWS) aws-creds Microsoft Azure azure-credentials Google Cloud Platform (GCP) gcp-credentials An error confirms that the root secret is not present on the cluster. Example output for an AWS cluster Error from server (NotFound): secrets "aws-creds" not found Verify that the components are using short-term security credentials for individual components by running the following command: USD oc get authentication cluster \ -o jsonpath \ --template='{ .spec.serviceAccountIssuer }' This command displays the value of the .spec.serviceAccountIssuer parameter in the cluster Authentication object. An output of a URL that is associated with your cloud provider indicates that the cluster is using manual mode with short-term credentials that are created and managed from outside of the cluster. 
Azure clusters: Verify that the components are assuming the Azure client ID that is specified in the secret manifests by running the following command: USD oc get secrets \ -n openshift-image-registry installer-cloud-credentials \ -o jsonpath='{.data}' An output that contains the azure_client_id and azure_federated_token_file fields confirms that the components are assuming the Azure client ID. Azure clusters: Verify that the pod identity webhook is running by running the following command: USD oc get pods \ -n openshift-cloud-credential-operator Example output NAME READY STATUS RESTARTS AGE cloud-credential-operator-59cf744f78-r8pbq 2/2 Running 2 71m pod-identity-webhook-548f977b4c-859lz 1/1 Running 1 70m 10.5. Additional resources About the Cloud Credential Operator
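For reference, the console-based secret removal in section 10.3.1 can also be performed from the OpenShift CLI. This is a minimal sketch for an AWS cluster in mint mode; it assumes the root secret name aws-creds from the table in that section (substitute gcp-credentials on GCP):
oc get secret aws-creds -n kube-system
oc delete secret aws-creds -n kube-system
The first command confirms that the administrator-level root secret is present; the second removes it. As noted above, reinstate the secret before a minor version update so that the CCO can reconcile new or modified CredentialsRequest resources.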
|
[
"oc adm wait-for-stable-cluster --minimum-stable-period=5s",
"INFRA_ID=USD(oc get infrastructures cluster -o jsonpath='{.status.infrastructureName}') CLUSTER_NAME=USD{INFRA_ID%-*} 1",
"AWS_BUCKET=USD(oc get authentication cluster -o jsonpath={'.spec.serviceAccountIssuer'} | awk -F'://' '{printUSD2}' |awk -F'.' '{printUSD1}')",
"basename USD(oc get authentication cluster -o jsonpath={'.spec.serviceAccountIssuer'} )",
"<subdomain>.cloudfront.net",
"aws cloudfront list-distributions --query \"DistributionList.Items[].{DomainName: DomainName, OriginDomainName: Origins.Items[0].DomainName}[?contains(DomainName, '<subdomain>.cloudfront.net')]\"",
"[ { \"DomainName\": \"<subdomain>.cloudfront.net\", \"OriginDomainName\": \"<s3_bucket>.s3.us-east-2.amazonaws.com\" } ]",
"AWS_BUCKET=USD<s3_bucket>",
"TEMPDIR=USD(mktemp -d)",
"oc delete secrets/next-bound-service-account-signing-key -n openshift-kube-apiserver-operator",
"oc get secret/next-bound-service-account-signing-key -n openshift-kube-apiserver-operator -ojsonpath='{ .data.service-account\\.pub }' | base64 -d > USD{TEMPDIR}/serviceaccount-signer.public",
"ccoctl aws create-identity-provider --dry-run \\ 1 --output-dir USD{TEMPDIR} --name fake \\ 2 --region us-east-1 3",
"cp USD{TEMPDIR}/<number>-keys.json USD{TEMPDIR}/jwks.new.json",
"aws s3api get-object --bucket USD{AWS_BUCKET} --key keys.json USD{TEMPDIR}/jwks.current.json",
"jq -s '{ keys: map(.keys[])}' USD{TEMPDIR}/jwks.current.json USD{TEMPDIR}/jwks.new.json > USD{TEMPDIR}/jwks.combined.json",
"aws s3api put-object --bucket USD{AWS_BUCKET} --tagging \"openshift.io/cloud-credential-operator/USD{CLUSTER_NAME}=owned\" --key keys.json --body USD{TEMPDIR}/jwks.combined.json",
"oc adm wait-for-stable-cluster",
"All clusteroperators are stable",
"oc adm reboot-machine-config-pool mcp/worker mcp/master",
"oc adm wait-for-node-reboot nodes --all",
"All nodes rebooted",
"oc adm wait-for-stable-cluster",
"All clusteroperators are stable",
"aws s3api put-object --bucket USD{AWS_BUCKET} --tagging \"openshift.io/cloud-credential-operator/USD{CLUSTER_NAME}=owned\" --key keys.json --body USD{TEMPDIR}/jwks.new.json",
"oc adm wait-for-stable-cluster --minimum-stable-period=5s",
"CURRENT_ISSUER=USD(oc get authentication cluster -o jsonpath='{.spec.serviceAccountIssuer}') GCP_BUCKET=USD(echo USD{CURRENT_ISSUER} | cut -d \"/\" -f4)",
"TEMPDIR=USD(mktemp -d)",
"oc delete secrets/next-bound-service-account-signing-key -n openshift-kube-apiserver-operator",
"oc get secret/next-bound-service-account-signing-key -n openshift-kube-apiserver-operator -ojsonpath='{ .data.service-account\\.pub }' | base64 -d > USD{TEMPDIR}/serviceaccount-signer.public",
"ccoctl gcp create-workload-identity-provider --dry-run \\ 1 --output-dir=USD{TEMPDIR} --name fake \\ 2 --project fake --workload-identity-pool fake",
"cp USD{TEMPDIR}/<number>-keys.json USD{TEMPDIR}/jwks.new.json",
"gcloud storage cp gs://USD{GCP_BUCKET}/keys.json USD{TEMPDIR}/jwks.current.json",
"jq -s '{ keys: map(.keys[])}' USD{TEMPDIR}/jwks.current.json USD{TEMPDIR}/jwks.new.json > USD{TEMPDIR}/jwks.combined.json",
"gcloud storage cp USD{TEMPDIR}/jwks.combined.json gs://USD{GCP_BUCKET}/keys.json",
"oc adm wait-for-stable-cluster",
"All clusteroperators are stable",
"oc adm reboot-machine-config-pool mcp/worker mcp/master",
"oc adm wait-for-node-reboot nodes --all",
"All nodes rebooted",
"oc adm wait-for-stable-cluster",
"All clusteroperators are stable",
"gcloud storage cp USD{TEMPDIR}/jwks.new.json gs://USD{GCP_BUCKET}/keys.json",
"oc adm wait-for-stable-cluster --minimum-stable-period=5s",
"CURRENT_ISSUER=USD(oc get authentication cluster -o jsonpath='{.spec.serviceAccountIssuer}') AZURE_STORAGE_ACCOUNT=USD(echo USD{CURRENT_ISSUER} | cut -d \"/\" -f3 | cut -d \".\" -f1) AZURE_STORAGE_CONTAINER=USD(echo USD{CURRENT_ISSUER} | cut -d \"/\" -f4)",
"TEMPDIR=USD(mktemp -d)",
"oc delete secrets/next-bound-service-account-signing-key -n openshift-kube-apiserver-operator",
"oc get secret/next-bound-service-account-signing-key -n openshift-kube-apiserver-operator -ojsonpath='{ .data.service-account\\.pub }' | base64 -d > USD{TEMPDIR}/serviceaccount-signer.public",
"ccoctl aws create-identity-provider \\ 1 --dry-run \\ 2 --output-dir USD{TEMPDIR} --name fake \\ 3 --region us-east-1 4",
"cp USD{TEMPDIR}/<number>-keys.json USD{TEMPDIR}/jwks.new.json",
"az storage blob download --container-name USD{AZURE_STORAGE_CONTAINER} --account-name USD{AZURE_STORAGE_ACCOUNT} --name 'openid/v1/jwks' -f USD{TEMPDIR}/jwks.current.json",
"jq -s '{ keys: map(.keys[])}' USD{TEMPDIR}/jwks.current.json USD{TEMPDIR}/jwks.new.json > USD{TEMPDIR}/jwks.combined.json",
"az storage blob upload --overwrite --account-name USD{AZURE_STORAGE_ACCOUNT} --container-name USD{AZURE_STORAGE_CONTAINER} --name 'openid/v1/jwks' -f USD{TEMPDIR}/jwks.combined.json",
"oc adm wait-for-stable-cluster",
"All clusteroperators are stable",
"oc adm reboot-machine-config-pool mcp/worker mcp/master",
"oc adm wait-for-node-reboot nodes --all",
"All nodes rebooted",
"oc adm wait-for-stable-cluster",
"All clusteroperators are stable",
"az storage blob upload --overwrite --account-name USD{AZURE_STORAGE_ACCOUNT} --container-name USD{AZURE_STORAGE_CONTAINER} --name 'openid/v1/jwks' -f USD{TEMPDIR}/jwks.new.json",
"ccoctl <provider_name> refresh-keys \\ 1 --kubeconfig <openshift_kubeconfig_file> \\ 2 --credentials-requests-dir <path_to_credential_requests_directory> \\ 3 --name <name> 4",
"oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date )\"'\"}}' --type=merge",
"oc get co kube-controller-manager",
"oc -n openshift-cloud-credential-operator get CredentialsRequest -o json | jq -r '.items[] | select (.spec.providerSpec.kind==\"<provider_spec>\") | .spec.secretRef'",
"{ \"name\": \"ebs-cloud-credentials\", \"namespace\": \"openshift-cluster-csi-drivers\" } { \"name\": \"cloud-credential-operator-iam-ro-creds\", \"namespace\": \"openshift-cloud-credential-operator\" }",
"oc delete secret <secret_name> \\ 1 -n <secret_namespace> 2",
"oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers",
"RELEASE_IMAGE=USD(oc get clusterversion -o jsonpath={..desired.image})",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret",
"chmod 775 ccoctl.<rhel_version>",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for {ibm-cloud-title} nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"oc get configmap --namespace openshift-kube-apiserver bound-sa-token-signing-certs --output 'go-template={{index .data \"service-account-001.pub\"}}' > ./output_dir/serviceaccount-signer.public 1",
"./ccoctl azure create-oidc-issuer --name <azure_infra_name> \\ 1 --output-dir ./output_dir --region <azure_region> \\ 2 --subscription-id <azure_subscription_id> \\ 3 --tenant-id <azure_tenant_id> --public-key-file ./output_dir/serviceaccount-signer.public 4",
"ll ./output_dir/manifests",
"total 8 -rw-------. 1 cloud-user cloud-user 193 May 22 02:29 azure-ad-pod-identity-webhook-config.yaml 1 -rw-------. 1 cloud-user cloud-user 165 May 22 02:29 cluster-authentication-02-config.yaml",
"OIDC_ISSUER_URL=`awk '/serviceAccountIssuer/ { print USD2 }' ./output_dir/manifests/cluster-authentication-02-config.yaml`",
"oc patch authentication cluster --type=merge -p \"{\\\"spec\\\":{\\\"serviceAccountIssuer\\\":\\\"USD{OIDC_ISSUER_URL}\\\"}}\"",
"oc adm wait-for-stable-cluster",
"All clusteroperators are stable",
"oc adm reboot-machine-config-pool mcp/worker mcp/master",
"oc adm wait-for-node-reboot nodes --all",
"All nodes rebooted",
"oc patch cloudcredential cluster --type=merge --patch '{\"spec\":{\"credentialsMode\":\"Manual\"}}'",
"oc adm release extract --credentials-requests --included --to <path_to_directory_for_credentials_requests> --registry-config ~/.pull-secret",
"AZURE_INSTALL_RG=`oc get infrastructure cluster -o jsonpath --template '{ .status.platformStatus.azure.resourceGroupName }'`",
"ccoctl azure create-managed-identities --name <azure_infra_name> --output-dir ./output_dir --region <azure_region> --subscription-id <azure_subscription_id> --credentials-requests-dir <path_to_directory_for_credentials_requests> --issuer-url \"USD{OIDC_ISSUER_URL}\" --dnszone-resource-group-name <azure_dns_zone_resourcegroup_name> \\ 1 --installation-resource-group-name \"USD{AZURE_INSTALL_RG}\"",
"oc apply -f ./output_dir/manifests/azure-ad-pod-identity-webhook-config.yaml",
"find ./output_dir/manifests -iname \"openshift*yaml\" -print0 | xargs -I {} -0 -t oc replace -f {}",
"oc adm reboot-machine-config-pool mcp/worker mcp/master",
"oc adm wait-for-node-reboot nodes --all",
"All nodes rebooted",
"oc adm wait-for-stable-cluster",
"All clusteroperators are stable",
"oc delete secret -n kube-system azure-credentials",
"oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}",
"Manual",
"oc get secrets -n kube-system <secret_name>",
"Error from server (NotFound): secrets \"aws-creds\" not found",
"oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'",
"oc get secrets -n openshift-image-registry installer-cloud-credentials -o jsonpath='{.data}'",
"oc get pods -n openshift-cloud-credential-operator",
"NAME READY STATUS RESTARTS AGE cloud-credential-operator-59cf744f78-r8pbq 2/2 Running 2 71m pod-identity-webhook-548f977b4c-859lz 1/1 Running 1 70m"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/postinstallation_configuration/changing-cloud-credentials-configuration
|
Appendix D. LVM Object Tags
|
Appendix D. LVM Object Tags An LVM tag is a word that can be used to group LVM2 objects of the same type together. Tags can be attached to objects such as physical volumes, volume groups, and logical volumes. Tags can be attached to hosts in a cluster configuration. Tags can be given on the command line in place of PV, VG or LV arguments. Tags should be prefixed with @ to avoid ambiguity. Each tag is expanded by replacing it with all objects possessing that tag which are of the type expected by its position on the command line. LVM tags are strings of up to 1024 characters. LVM tags cannot start with a hyphen. A valid tag can consist of a limited range of characters only. The allowed characters are [A-Za-z0-9_+.-]. As of the Red Hat Enterprise Linux 6.1 release, the list of allowed characters was extended, and tags can contain the /, =, !, :, #, and & characters. Only objects in a volume group can be tagged. Physical volumes lose their tags if they are removed from a volume group; this is because tags are stored as part of the volume group metadata and that is deleted when a physical volume is removed. The following command lists all the logical volumes with the database tag. The following command lists the currently active host tags. D.1. Adding and Removing Object Tags To add or delete tags from physical volumes, use the --addtag or --deltag option of the pvchange command. To add or delete tags from volume groups, use the --addtag or --deltag option of the vgchange or vgcreate commands. To add or delete tags from logical volumes, use the --addtag or --deltag option of the lvchange or lvcreate commands. You can specify multiple --addtag and --deltag arguments within a single pvchange , vgchange , or lvchange command. For example, the following command deletes the tags T9 and T10 and adds the tags T13 and T14 to the volume group grant .
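As a short end-to-end sketch of the commands described above, the following tags a logical volume and then uses the tag in place of an LV argument; the volume group vg_data and logical volume lv_db are hypothetical names, so substitute your own:
lvchange --addtag database vg_data/lv_db
lvs @database
The first command attaches the database tag to the logical volume; the second expands @database to every logical volume that carries the tag, as in the listing example earlier in this appendix.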
|
[
"lvs @database",
"lvm tags",
"vgchange --deltag T9 --deltag T10 --addtag T13 --addtag T14 grant"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/lvm_tags
|
Chapter 35. Storage
|
Chapter 35. Storage DM Multipath no longer crashes when adding a feature to an empty string Previously, the DM Multipath service terminated unexpectedly when it attempted to add a feature to the features string of a built-in device configuration that had no features string. With this update, DM Multipath first checks if the features string exists, and creates one if necessary. As a result, DM Multipath no longer crashes when trying to modify a nonexistent features string. (BZ# 1459370 ) I/O operations no longer hang with RAID1 Previously, the kernel did not handle Multiple Devices (MD) I/O errors properly in dm-raid . As a consequence, the I/O sometimes became unresponsive. With this update, dm-raid now handles I/O errors correctly, and I/O operations no longer hang with RAID1. (BZ#1506338)
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/bug_fixes_storage
|
Provisioning APIs
|
Provisioning APIs OpenShift Container Platform 4.17 Reference guide for provisioning APIs Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/provisioning_apis/index
|
Chapter 3. Removing disabled applications from the dashboard
|
Chapter 3. Removing disabled applications from the dashboard After your administrator has disabled your unused applications, you can manually remove them from the Red Hat OpenShift AI dashboard. Disabling and removing unused applications allows you to focus on the applications that you are most likely to use. Prerequisites You are logged in to Red Hat OpenShift AI. Your administrator has disabled the application that you want to remove, as described in Disabling applications connected to OpenShift AI . Procedure In the OpenShift AI interface, click Enabled . On the Enabled page, tiles for disabled applications are denoted with a Disabled label. Click Disabled on the tile for the application that you want to remove. Click the link to remove the application tile. Verification The tile for the disabled application no longer appears on the Enabled page.
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/working_with_connected_applications/removing-disabled-applications_connected-apps
|
Chapter 33. File
|
Chapter 33. File The File Expression Language is an extension to the Simple language, adding file-related capabilities. These capabilities are related to common use cases working with file paths and names. The goal is to allow expressions to be used with the File and FTP components for setting dynamic file patterns for both consumer and producer. Note The file language is merged with the simple language, which means you can use all the file syntax directly within the simple language. 33.1. Dependencies The File language is part of camel-core . When using file with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-core-starter</artifactId> </dependency> 33.2. File Language options The File language supports 2 options, which are listed below. Name Default Java Type Description resultType String Sets the class name of the result type (type from output). trim Boolean Whether to trim the value to remove leading and trailing whitespaces and line breaks. 33.3. Syntax This language is an extension to the Simple language, so the Simple syntax also applies; the table below only lists the additional file-related functions. All the file tokens use the same expression name as the method on the java.io.File object, for instance file:absolute refers to the java.io.File.getAbsolute() method. Notice that not all expressions are supported by the current Exchange. For instance the FTP component supports some options, whereas the File component supports all of them. Expression Type File Consumer File Producer FTP Consumer FTP Producer Description file:name String yes no yes no refers to the file name (is relative to the starting directory, see note below) file:name.ext String yes no yes no refers to the file extension only file:name.ext.single String yes no yes no refers to the file extension. If the file extension has multiple dots, then this expression returns only the last part. file:name.noext String yes no yes no refers to the file name with no extension (is relative to the starting directory, see note below) file:name.noext.single String yes no yes no refers to the file name with no extension (is relative to the starting directory, see note below). If the file extension has multiple dots, then this expression strips only the last part, and keeps the others. file:onlyname String yes no yes no refers to the file name only with no leading paths. file:onlyname.noext String yes no yes no refers to the file name only with no extension and with no leading paths. file:onlyname.noext.single String yes no yes no refers to the file name only with no extension and with no leading paths. If the file extension has multiple dots, then this expression strips only the last part, and keeps the others. file:ext String yes no yes no refers to the file extension only file:parent String yes no yes no refers to the file parent file:path String yes no yes no refers to the file path file:absolute Boolean yes no no no refers to whether the file is regarded as absolute or relative file:absolute.path String yes no no no refers to the absolute file path file:length Long yes no yes no refers to the file length returned as a Long type file:size Long yes no yes no refers to the file length returned as a Long type file:modified Date yes no yes no Refers to the file last modified returned as a Date type date:_command:pattern_ String yes yes yes yes for date formatting using the java.text.SimpleDateFormat patterns.
Is an extension to the Simple language. Additional command is: file (consumers only) for the last modified timestamp of the file. Notice: all the commands from the Simple language can also be used. 33.4. File token example 33.4.1. Relative paths We have a java.io.File handle for the file hello.txt in the following relative directory: .\filelanguage\test . And we configure our endpoint to use this starting directory .\filelanguage . The file tokens will return as: Expression Returns file:name test\hello.txt file:name.ext txt file:name.noext test\hello file:onlyname hello.txt file:onlyname.noext hello file:ext txt file:parent filelanguage\test file:path filelanguage\test\hello.txt file:absolute false file:absolute.path \workspace\camel\camel-core\target\filelanguage\test\hello.txt 33.4.2. Absolute paths We have a java.io.File handle for the file hello.txt in the following absolute directory: \workspace\camel\camel-core\target\filelanguage\test . And we configure our endpoint to use the absolute starting directory \workspace\camel\camel-core\target\filelanguage . The file tokens will return as: Expression Returns file:name test\hello.txt file:name.ext txt file:name.noext test\hello file:onlyname hello.txt file:onlyname.noext hello file:ext txt file:parent \workspace\camel\camel-core\target\filelanguage\test file:path \workspace\camel\camel-core\target\filelanguage\test\hello.txt file:absolute true file:absolute.path \workspace\camel\camel-core\target\filelanguage\test\hello.txt 33.5. Samples You can enter a fixed file name such as myfile.txt : fileName="myfile.txt" Let's assume we use the file consumer to read files and want to move the read files to a backup folder with the current date as a sub folder. This can be done using an expression like: fileName="backup/USD{date:now:yyyyMMdd}/USD{file:name.noext}.bak" Relative folder names are also supported, so if the backup folder should be a sibling folder, you can append .. as shown: fileName="../backup/USD{date:now:yyyyMMdd}/USD{file:name.noext}.bak" As this is an extension to the Simple language, we also have access to all the goodies from that language, so in this use case we want to use in.header.type as a parameter in the dynamic expression: fileName="../backup/USD{date:now:yyyyMMdd}/type-USD{in.header.type}/backup-of-USD{file:name.noext}.bak" If you have a custom date you want to use in the expression then Camel supports retrieving dates from the message header: fileName="orders/order-USD{in.header.customerId}-USD{date:in.header.orderDate:yyyyMMdd}.xml" And finally we can also use a bean expression to invoke a POJO class that generates some String output (or convertible to String) to be used: fileName="uniquefile-USD{bean:myguidgenerator.generateid}.txt" Of course all this can be combined in one expression where you can use the File and the Simple language in one combined expression. This is pretty powerful for those common file path patterns. 33.6. Spring Boot Auto-Configuration The component supports 147 options, which are listed below. Name Description Default Type camel.cloud.consul.service-discovery.acl-token Sets the ACL token to be used with Consul. String camel.cloud.consul.service-discovery.block-seconds The seconds to wait for a watch event, default 10 seconds. 10 Integer camel.cloud.consul.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.consul.service-discovery.connect-timeout-millis Connect timeout for OkHttpClient. Long camel.cloud.consul.service-discovery.datacenter The data center.
String camel.cloud.consul.service-discovery.enabled Enable the component. true Boolean camel.cloud.consul.service-discovery.password Sets the password to be used for basic authentication. String camel.cloud.consul.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.consul.service-discovery.read-timeout-millis Read timeout for OkHttpClient. Long camel.cloud.consul.service-discovery.url The Consul agent URL. String camel.cloud.consul.service-discovery.user-name Sets the username to be used for basic authentication. String camel.cloud.consul.service-discovery.write-timeout-millis Write timeout for OkHttpClient. Long camel.cloud.dns.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.dns.service-discovery.domain The domain name;. String camel.cloud.dns.service-discovery.enabled Enable the component. true Boolean camel.cloud.dns.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.dns.service-discovery.proto The transport protocol of the desired service. _tcp String camel.cloud.etcd.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.etcd.service-discovery.enabled Enable the component. true Boolean camel.cloud.etcd.service-discovery.password The password to use for basic authentication. String camel.cloud.etcd.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.etcd.service-discovery.service-path The path to look for for service discovery. /services/ String camel.cloud.etcd.service-discovery.timeout To set the maximum time an action could take to complete. Long camel.cloud.etcd.service-discovery.type To set the discovery type, valid values are on-demand and watch. on-demand String camel.cloud.etcd.service-discovery.uris The URIs the client can connect to. String camel.cloud.etcd.service-discovery.user-name The user name to use for basic authentication. String camel.cloud.kubernetes.service-discovery.api-version Sets the API version when using client lookup. String camel.cloud.kubernetes.service-discovery.ca-cert-data Sets the Certificate Authority data when using client lookup. String camel.cloud.kubernetes.service-discovery.ca-cert-file Sets the Certificate Authority data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-cert-data Sets the Client Certificate data when using client lookup. String camel.cloud.kubernetes.service-discovery.client-cert-file Sets the Client Certificate data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-algo Sets the Client Keystore algorithm, such as RSA when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-data Sets the Client Keystore data when using client lookup. 
String camel.cloud.kubernetes.service-discovery.client-key-file Sets the Client Keystore data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-passphrase Sets the Client Keystore passphrase when using client lookup. String camel.cloud.kubernetes.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.kubernetes.service-discovery.dns-domain Sets the DNS domain to use for DNS lookup. String camel.cloud.kubernetes.service-discovery.enabled Enable the component. true Boolean camel.cloud.kubernetes.service-discovery.lookup How to perform service lookup. Possible values: client, dns, environment. When using client, then the client queries the kubernetes master to obtain a list of active pods that provides the service, and then random (or round robin) select a pod. When using dns the service name is resolved as name.namespace.svc.dnsDomain. When using dnssrv the service name is resolved with SRV query for . ... svc... When using environment then environment variables are used to lookup the service. By default environment is used. environment String camel.cloud.kubernetes.service-discovery.master-url Sets the URL to the master when using client lookup. String camel.cloud.kubernetes.service-discovery.namespace Sets the namespace to use. Will by default use namespace from the ENV variable KUBERNETES_MASTER. String camel.cloud.kubernetes.service-discovery.oauth-token Sets the OAUTH token for authentication (instead of username/password) when using client lookup. String camel.cloud.kubernetes.service-discovery.password Sets the password for authentication when using client lookup. String camel.cloud.kubernetes.service-discovery.port-name Sets the Port Name to use for DNS/DNSSRV lookup. String camel.cloud.kubernetes.service-discovery.port-protocol Sets the Port Protocol to use for DNS/DNSSRV lookup. String camel.cloud.kubernetes.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.kubernetes.service-discovery.trust-certs Sets whether to turn on trust certificate check when using client lookup. false Boolean camel.cloud.kubernetes.service-discovery.username Sets the username for authentication when using client lookup. String camel.cloud.ribbon.load-balancer.client-name Sets the Ribbon client name. String camel.cloud.ribbon.load-balancer.configurations Define additional configuration definitions. Map camel.cloud.ribbon.load-balancer.enabled Enable the component. true Boolean camel.cloud.ribbon.load-balancer.namespace The namespace. String camel.cloud.ribbon.load-balancer.password The password. String camel.cloud.ribbon.load-balancer.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.ribbon.load-balancer.username The username. String camel.hystrix.allow-maximum-size-to-diverge-from-core-size Allows the configuration for maximumSize to take effect. That value can then be equal to, or higher, than coreSize. false Boolean camel.hystrix.circuit-breaker-enabled Whether to use a HystrixCircuitBreaker or not. If false no circuit-breaker logic will be used and all requests permitted. 
This is similar in effect to circuitBreakerForceClosed() except that continues tracking metrics and knowing whether it should be open/closed, this property results in not even instantiating a circuit-breaker. true Boolean camel.hystrix.circuit-breaker-error-threshold-percentage Error percentage threshold (as whole number such as 50) at which point the circuit breaker will trip open and reject requests. It will stay tripped for the duration defined in circuitBreakerSleepWindowInMilliseconds; The error percentage this is compared against comes from HystrixCommandMetrics.getHealthCounts(). 50 Integer camel.hystrix.circuit-breaker-force-closed If true the HystrixCircuitBreaker#allowRequest() will always return true to allow requests regardless of the error percentage from HystrixCommandMetrics.getHealthCounts(). The circuitBreakerForceOpen() property takes precedence so if it set to true this property does nothing. false Boolean camel.hystrix.circuit-breaker-force-open If true the HystrixCircuitBreaker.allowRequest() will always return false, causing the circuit to be open (tripped) and reject all requests. This property takes precedence over circuitBreakerForceClosed();. false Boolean camel.hystrix.circuit-breaker-request-volume-threshold Minimum number of requests in the metricsRollingStatisticalWindowInMilliseconds() that must exist before the HystrixCircuitBreaker will trip. If below this number the circuit will not trip regardless of error percentage. 20 Integer camel.hystrix.circuit-breaker-sleep-window-in-milliseconds The time in milliseconds after a HystrixCircuitBreaker trips open that it should wait before trying requests again. 5000 Integer camel.hystrix.configurations Define additional configuration definitions. Map camel.hystrix.core-pool-size Core thread-pool size that gets passed to java.util.concurrent.ThreadPoolExecutor#setCorePoolSize(int). 10 Integer camel.hystrix.enabled Enable the component. true Boolean camel.hystrix.execution-isolation-semaphore-max-concurrent-requests Number of concurrent requests permitted to HystrixCommand.run(). Requests beyond the concurrent limit will be rejected. Applicable only when executionIsolationStrategy == SEMAPHORE. 20 Integer camel.hystrix.execution-isolation-strategy What isolation strategy HystrixCommand.run() will be executed with. If THREAD then it will be executed on a separate thread and concurrent requests limited by the number of threads in the thread-pool. If SEMAPHORE then it will be executed on the calling thread and concurrent requests limited by the semaphore count. THREAD String camel.hystrix.execution-isolation-thread-interrupt-on-timeout Whether the execution thread should attempt an interrupt (using Future#cancel ) when a thread times out. Applicable only when executionIsolationStrategy() == THREAD. true Boolean camel.hystrix.execution-timeout-enabled Whether the timeout mechanism is enabled for this command. true Boolean camel.hystrix.execution-timeout-in-milliseconds Time in milliseconds at which point the command will timeout and halt execution. If executionIsolationThreadInterruptOnTimeout == true and the command is thread-isolated, the executing thread will be interrupted. If the command is semaphore-isolated and a HystrixObservableCommand, that command will get unsubscribed. 1000 Integer camel.hystrix.fallback-enabled Whether HystrixCommand.getFallback() should be attempted when failure occurs. 
true Boolean camel.hystrix.fallback-isolation-semaphore-max-concurrent-requests Number of concurrent requests permitted to HystrixCommand.getFallback(). Requests beyond the concurrent limit will fail-fast and not attempt retrieving a fallback. 10 Integer camel.hystrix.group-key Sets the group key to use. The default value is CamelHystrix. CamelHystrix String camel.hystrix.keep-alive-time Keep-alive time in minutes that gets passed to ThreadPoolExecutor#setKeepAliveTime(long,TimeUnit). 1 Integer camel.hystrix.max-queue-size Max queue size that gets passed to BlockingQueue in HystrixConcurrencyStrategy.getBlockingQueue(int) This should only affect the instantiation of a threadpool - it is not eliglible to change a queue size on the fly. For that, use queueSizeRejectionThreshold(). -1 Integer camel.hystrix.maximum-size Maximum thread-pool size that gets passed to ThreadPoolExecutor#setMaximumPoolSize(int) . This is the maximum amount of concurrency that can be supported without starting to reject HystrixCommands. Please note that this setting only takes effect if you also set allowMaximumSizeToDivergeFromCoreSize. 10 Integer camel.hystrix.metrics-health-snapshot-interval-in-milliseconds Time in milliseconds to wait between allowing health snapshots to be taken that calculate success and error percentages and affect HystrixCircuitBreaker.isOpen() status. On high-volume circuits the continual calculation of error percentage can become CPU intensive thus this controls how often it is calculated. 500 Integer camel.hystrix.metrics-rolling-percentile-bucket-size Maximum number of values stored in each bucket of the rolling percentile. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 10 Integer camel.hystrix.metrics-rolling-percentile-enabled Whether percentile metrics should be captured using HystrixRollingPercentile inside HystrixCommandMetrics. true Boolean camel.hystrix.metrics-rolling-percentile-window-buckets Number of buckets the rolling percentile window is broken into. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 6 Integer camel.hystrix.metrics-rolling-percentile-window-in-milliseconds Duration of percentile rolling window in milliseconds. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 10000 Integer camel.hystrix.metrics-rolling-statistical-window-buckets Number of buckets the rolling statistical window is broken into. This is passed into HystrixRollingNumber inside HystrixCommandMetrics. 10 Integer camel.hystrix.metrics-rolling-statistical-window-in-milliseconds This property sets the duration of the statistical rolling window, in milliseconds. This is how long metrics are kept for the thread pool. The window is divided into buckets and rolls by those increments. 10000 Integer camel.hystrix.queue-size-rejection-threshold Queue size rejection threshold is an artificial max size at which rejections will occur even if maxQueueSize has not been reached. This is done because the maxQueueSize of a BlockingQueue can not be dynamically changed and we want to support dynamically changing the queue size that affects rejections. This is used by HystrixCommand when queuing a thread for execution. 5 Integer camel.hystrix.request-log-enabled Whether HystrixCommand execution and events should be logged to HystrixRequestLog. true Boolean camel.hystrix.thread-pool-key Sets the thread pool key to use. Will by default use the same value as groupKey has been configured to use. 
CamelHystrix String camel.hystrix.thread-pool-rolling-number-statistical-window-buckets Number of buckets the rolling statistical window is broken into. This is passed into HystrixRollingNumber inside each HystrixThreadPoolMetrics instance. 10 Integer camel.hystrix.thread-pool-rolling-number-statistical-window-in-milliseconds Duration of statistical rolling window in milliseconds. This is passed into HystrixRollingNumber inside each HystrixThreadPoolMetrics instance. 10000 Integer camel.language.constant.enabled Whether to enable auto configuration of the constant language. This is enabled by default. Boolean camel.language.constant.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.csimple.enabled Whether to enable auto configuration of the csimple language. This is enabled by default. Boolean camel.language.csimple.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.exchangeproperty.enabled Whether to enable auto configuration of the exchangeProperty language. This is enabled by default. Boolean camel.language.exchangeproperty.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.file.enabled Whether to enable auto configuration of the file language. This is enabled by default. Boolean camel.language.file.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.header.enabled Whether to enable auto configuration of the header language. This is enabled by default. Boolean camel.language.header.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.ref.enabled Whether to enable auto configuration of the ref language. This is enabled by default. Boolean camel.language.ref.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.simple.enabled Whether to enable auto configuration of the simple language. This is enabled by default. Boolean camel.language.simple.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.tokenize.enabled Whether to enable auto configuration of the tokenize language. This is enabled by default. Boolean camel.language.tokenize.group-delimiter Sets the delimiter to use when grouping. If this has not been set then token will be used as the delimiter. String camel.language.tokenize.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.resilience4j.automatic-transition-from-open-to-half-open-enabled Enables automatic transition from OPEN to HALF_OPEN state once the waitDurationInOpenState has passed. false Boolean camel.resilience4j.circuit-breaker-ref Refers to an existing io.github.resilience4j.circuitbreaker.CircuitBreaker instance to lookup and use from the registry. When using this, then any other circuit breaker options are not in use. String camel.resilience4j.config-ref Refers to an existing io.github.resilience4j.circuitbreaker.CircuitBreakerConfig instance to lookup and use from the registry. String camel.resilience4j.configurations Define additional configuration definitions. Map camel.resilience4j.enabled Enable the component. true Boolean camel.resilience4j.failure-rate-threshold Configures the failure rate threshold in percentage. 
If the failure rate is equal or greater than the threshold the CircuitBreaker transitions to open and starts short-circuiting calls. The threshold must be greater than 0 and not greater than 100. Default value is 50 percentage. Float camel.resilience4j.minimum-number-of-calls Configures the minimum number of calls which are required (per sliding window period) before the CircuitBreaker can calculate the error rate. For example, if minimumNumberOfCalls is 10, then at least 10 calls must be recorded, before the failure rate can be calculated. If only 9 calls have been recorded the CircuitBreaker will not transition to open even if all 9 calls have failed. Default minimumNumberOfCalls is 100. 100 Integer camel.resilience4j.permitted-number-of-calls-in-half-open-state Configures the number of permitted calls when the CircuitBreaker is half open. The size must be greater than 0. Default size is 10. 10 Integer camel.resilience4j.sliding-window-size Configures the size of the sliding window which is used to record the outcome of calls when the CircuitBreaker is closed. slidingWindowSize configures the size of the sliding window. Sliding window can either be count-based or time-based. If slidingWindowType is COUNT_BASED, the last slidingWindowSize calls are recorded and aggregated. If slidingWindowType is TIME_BASED, the calls of the last slidingWindowSize seconds are recorded and aggregated. The slidingWindowSize must be greater than 0. The minimumNumberOfCalls must be greater than 0. If the slidingWindowType is COUNT_BASED, the minimumNumberOfCalls cannot be greater than slidingWindowSize . If the slidingWindowType is TIME_BASED, you can pick whatever you want. Default slidingWindowSize is 100. 100 Integer camel.resilience4j.sliding-window-type Configures the type of the sliding window which is used to record the outcome of calls when the CircuitBreaker is closed. Sliding window can either be count-based or time-based. If slidingWindowType is COUNT_BASED, the last slidingWindowSize calls are recorded and aggregated. If slidingWindowType is TIME_BASED, the calls of the last slidingWindowSize seconds are recorded and aggregated. Default slidingWindowType is COUNT_BASED. COUNT_BASED String camel.resilience4j.slow-call-duration-threshold Configures the duration threshold (seconds) above which calls are considered as slow and increase the slow calls percentage. Default value is 60 seconds. 60 Integer camel.resilience4j.slow-call-rate-threshold Configures a threshold in percentage. The CircuitBreaker considers a call as slow when the call duration is greater than slowCallDurationThreshold Duration. When the percentage of slow calls is equal or greater the threshold, the CircuitBreaker transitions to open and starts short-circuiting calls. The threshold must be greater than 0 and not greater than 100. Default value is 100 percentage which means that all recorded calls must be slower than slowCallDurationThreshold. Float camel.resilience4j.wait-duration-in-open-state Configures the wait duration (in seconds) which specifies how long the CircuitBreaker should stay open, before it switches to half open. Default value is 60 seconds. 60 Integer camel.resilience4j.writable-stack-trace-enabled Enables writable stack traces. When set to false, Exception.getStackTrace returns a zero length array. This may be used to reduce log spam when the circuit breaker is open as the cause of the exceptions is already known (the circuit breaker is short-circuiting calls). 
true Boolean camel.rest.api-component The name of the Camel component to use as the REST API (such as swagger) If no API Component has been explicit configured, then Camel will lookup if there is a Camel component responsible for servicing and generating the REST API documentation, or if a org.apache.camel.spi.RestApiProcessorFactory is registered in the registry. If either one is found, then that is being used. String camel.rest.api-context-path Sets a leading API context-path the REST API services will be using. This can be used when using components such as camel-servlet where the deployed web application is deployed using a context-path. String camel.rest.api-context-route-id Sets the route id to use for the route that services the REST API. The route will by default use an auto assigned route id. String camel.rest.api-host To use an specific hostname for the API documentation (eg swagger) This can be used to override the generated host with this configured hostname. String camel.rest.api-property Allows to configure as many additional properties for the api documentation (swagger). For example set property api.title to my cool stuff. Map camel.rest.api-vendor-extension Whether vendor extension is enabled in the Rest APIs. If enabled then Camel will include additional information as vendor extension (eg keys starting with x-) such as route ids, class names etc. Not all 3rd party API gateways and tools supports vendor-extensions when importing your API docs. false Boolean camel.rest.binding-mode Sets the binding mode to use. The default value is off. RestBindingMode camel.rest.client-request-validation Whether to enable validation of the client request to check whether the Content-Type and Accept headers from the client is supported by the Rest-DSL configuration of its consumes/produces settings. This can be turned on, to enable this check. In case of validation error, then HTTP Status codes 415 or 406 is returned. The default value is false. false Boolean camel.rest.component The Camel Rest component to use for the REST transport (consumer), such as netty-http, jetty, servlet, undertow. If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestConsumerFactory is registered in the registry. If either one is found, then that is being used. String camel.rest.component-property Allows to configure as many additional properties for the rest component in use. Map camel.rest.consumer-property Allows to configure as many additional properties for the rest consumer in use. Map camel.rest.context-path Sets a leading context-path the REST services will be using. This can be used when using components such as camel-servlet where the deployed web application is deployed using a context-path. Or for components such as camel-jetty or camel-netty-http that includes a HTTP server. String camel.rest.cors-headers Allows to configure custom CORS headers. Map camel.rest.data-format-property Allows to configure as many additional properties for the data formats in use. For example set property prettyPrint to true to have json outputted in pretty mode. The properties can be prefixed to denote the option is only for either JSON or XML and for either the IN or the OUT. The prefixes are: json.in. json.out. xml.in. xml.out. For example a key with value xml.out.mustBeJAXBElement is only for the XML data format for the outgoing. A key without a prefix is a common key for all situations. 
Map camel.rest.enable-cors Whether to enable CORS headers in the HTTP response. The default value is false. false Boolean camel.rest.endpoint-property Allows to configure as many additional properties for the rest endpoint in use. Map camel.rest.host The hostname to use for exposing the REST service. String camel.rest.host-name-resolver If no hostname has been explicit configured, then this resolver is used to compute the hostname the REST service will be using. RestHostNameResolver camel.rest.json-data-format Name of specific json data format to use. By default json-jackson will be used. Important: This option is only for setting a custom name of the data format, not to refer to an existing data format instance. String camel.rest.port The port number to use for exposing the REST service. Notice if you use servlet component then the port number configured here does not apply, as the port number in use is the actual port number the servlet component is using. eg if using Apache Tomcat its the tomcat http port, if using Apache Karaf its the HTTP service in Karaf that uses port 8181 by default etc. Though in those situations setting the port number here, allows tooling and JMX to know the port number, so its recommended to set the port number to the number that the servlet engine uses. String camel.rest.producer-api-doc Sets the location of the api document (swagger api) the REST producer will use to validate the REST uri and query parameters are valid accordingly to the api document. This requires adding camel-swagger-java to the classpath, and any miss configuration will let Camel fail on startup and report the error(s). The location of the api document is loaded from classpath by default, but you can use file: or http: to refer to resources to load from file or http url. String camel.rest.producer-component Sets the name of the Camel component to use as the REST producer. String camel.rest.scheme The scheme to use for exposing the REST service. Usually http or https is supported. The default value is http. String camel.rest.skip-binding-on-error-code Whether to skip binding on output if there is a custom HTTP error code header. This allows to build custom error messages that do not bind to json / xml etc, as success messages otherwise will do. false Boolean camel.rest.use-x-forward-headers Whether to use X-Forward headers for Host and related setting. The default value is true. true Boolean camel.rest.xml-data-format Name of specific XML data format to use. By default jaxb will be used. Important: This option is only for setting a custom name of the data format, not to refer to an existing data format instance. String camel.rest.api-context-id-pattern Deprecated Sets an CamelContext id pattern to only allow Rest APIs from rest services within CamelContext's which name matches the pattern. The pattern name refers to the CamelContext name, to match on the current CamelContext only. For any other value, the pattern uses the rules from PatternHelper#matchPattern(String,String). String camel.rest.api-context-listing Deprecated Sets whether listing of all available CamelContext's with REST services in the JVM is enabled. If enabled it allows to discover these contexts, if false then only the current CamelContext is in use. false Boolean
|
[
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-core-starter</artifactId> </dependency>",
"fileName=\"myfile.txt\"",
"fileName=\"backup/USD{date:now:yyyyMMdd}/USD{file:name.noext}.bak\"",
"fileName=\"../backup/USD{date:now:yyyyMMdd}/USD{file:name.noext}.bak\"",
"fileName=\"../backup/USD{date:now:yyyyMMdd}/type-USD{in.header.type}/backup-of-USD{file:name.noext}.bak\"",
"fileName=\"orders/order-USD{in.header.customerId}-USD{date:in.header.orderDate:yyyyMMdd}.xml\"",
"fileName=\"uniquefile-USD{bean:myguidgenerator.generateid}.txt\""
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-file-language-starter
|
Chapter 5. Migrating from internal Satellite databases to external databases
|
Chapter 5. Migrating from internal Satellite databases to external databases When you install Red Hat Satellite, the satellite-installer command installs PostgreSQL databases on the same server as Satellite. If you are using the default internal databases but want to start using external databases to help with the server load, you can migrate your internal databases to external databases. To confirm whether your Satellite Server has internal or external databases, you can query the status of your databases: For PostgreSQL, enter the following command: Red Hat does not provide support or tools for external database maintenance. This includes backups, upgrades, and database tuning. You must have your own database administrator to support and maintain external databases. To migrate from the default internal databases to external databases, you must complete the following procedures: Section 5.2, "Preparing a host for external databases" . Prepare a Red Hat Enterprise Linux 8 server to host the external databases. Section 5.3, "Installing PostgreSQL" . Prepare PostgreSQL with databases for Satellite, Pulp and Candlepin with dedicated users owning them. Section 5.4, "Migrating to external databases" . Edit the parameters of satellite-installer to point to the new databases, and run satellite-installer . 5.1. PostgreSQL as an external database considerations Foreman, Katello, and Candlepin use the PostgreSQL database. If you want to use PostgreSQL as an external database, the following information can help you decide if this option is right for your Satellite configuration. Satellite supports PostgreSQL version 12. Advantages of external PostgreSQL Increase in free memory and free CPU on Satellite Flexibility to set shared_buffers on the PostgreSQL database to a high number without the risk of interfering with other services on Satellite Flexibility to tune the PostgreSQL server's system without adversely affecting Satellite operations Disadvantages of external PostgreSQL Increase in deployment complexity that can make troubleshooting more difficult The external PostgreSQL server is an additional system to patch and maintain If either Satellite or the PostgreSQL database server suffers a hardware or storage failure, Satellite is not operational If there is latency between the Satellite server and database server, performance can suffer If you suspect that the PostgreSQL database on your Satellite is causing performance problems, use the information in Satellite 6: How to enable postgres query logging to detect slow running queries to determine if you have slow queries. Queries that take longer than one second are typically caused by performance issues with large installations, and moving to an external database might not help. If you have slow queries, contact Red Hat Support. 5.2. Preparing a host for external databases Install a freshly provisioned system with the latest Red Hat Enterprise Linux 8 to host the external databases. Subscriptions for Red Hat Enterprise Linux do not provide the correct service level agreement for using Satellite with external databases. You must also attach a Satellite subscription to the base operating system that you want to use for the external databases. Prerequisites The prepared host must meet Satellite's Storage Requirements . Procedure Use the instructions in Attaching the Satellite Infrastructure Subscription to attach a Satellite subscription to your server. 
Disable all repositories and enable only the following repositories: Enable the following module: Note Enablement of the module satellite:el8 warns about a conflict with postgresql:10 and ruby:2.5 as these modules are set to the default module versions on Red Hat Enterprise Linux 8. The module satellite:el8 has a dependency for the modules postgresql:12 and ruby:2.7 that will be enabled with the satellite:el8 module. These warnings do not cause the installation process to fail and can be safely ignored. For more information about modules and lifecycle streams on Red Hat Enterprise Linux 8, see Red Hat Enterprise Linux Application Streams Lifecycle . 5.3. Installing PostgreSQL You can install only the same version of PostgreSQL that is installed with the satellite-installer tool during an internal database installation. Satellite supports PostgreSQL version 12. Procedure To install PostgreSQL, enter the following command: To initialize PostgreSQL, enter the following command: Edit the /var/lib/pgsql/data/postgresql.conf file: Note that the default configuration of external PostgreSQL needs to be adjusted to work with Satellite. The base recommended external database configuration adjustments are as follows: checkpoint_completion_target: 0.9 max_connections: 500 shared_buffers: 512MB work_mem: 4MB Remove the # and edit to listen to inbound connections: Edit the /var/lib/pgsql/data/pg_hba.conf file: Add the following line to the file: To start and enable the PostgreSQL service, enter the following commands: Open the postgresql port on the external PostgreSQL server: Make the changes persistent: Switch to the postgres user and start the PostgreSQL client: Create three databases and dedicated roles: one for Satellite, one for Candlepin, and one for Pulp: Connect to the Pulp database: Create the hstore extension: Exit the postgres user: From Satellite Server, test that you can access the database. If the connection succeeds, the commands return 1 . 5.4. Migrating to external databases Back up and transfer existing data, then use the satellite-installer command to configure Satellite to connect to an external PostgreSQL database server. Prerequisites You have installed and configured a PostgreSQL server on a Red Hat Enterprise Linux server. Procedure On Satellite Server, stop Satellite services: Start the PostgreSQL services: Back up the internal databases: Transfer the data to the new external databases: Use the satellite-installer command to update Satellite to point to the new databases:
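Returning to the tuning step in Section 5.3, the base recommended adjustments would look roughly like the following excerpt of /var/lib/pgsql/data/postgresql.conf (an illustrative sketch; adjust the values to your own hardware and load):
checkpoint_completion_target = 0.9
max_connections = 500
shared_buffers = 512MB
work_mem = 4MB
listen_addresses = '*'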
|
[
"satellite-maintain service status --only postgresql",
"subscription-manager repos --disable '*' subscription-manager repos --enable=satellite-6.15-for-rhel-8-x86_64-rpms --enable=satellite-maintenance-6.15-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"dnf module enable satellite:el8",
"dnf install postgresql-server postgresql-evr postgresql-contrib",
"postgresql-setup initdb",
"vi /var/lib/pgsql/data/postgresql.conf",
"listen_addresses = '*'",
"vi /var/lib/pgsql/data/pg_hba.conf",
"host all all Satellite_ip /32 md5",
"systemctl enable --now postgresql",
"firewall-cmd --add-service=postgresql",
"firewall-cmd --runtime-to-permanent",
"su - postgres -c psql",
"CREATE USER \"foreman\" WITH PASSWORD ' Foreman_Password '; CREATE USER \"candlepin\" WITH PASSWORD ' Candlepin_Password '; CREATE USER \"pulp\" WITH PASSWORD ' Pulpcore_Password '; CREATE DATABASE foreman OWNER foreman; CREATE DATABASE candlepin OWNER candlepin; CREATE DATABASE pulpcore OWNER pulp;",
"postgres=# \\c pulpcore You are now connected to database \"pulpcore\" as user \"postgres\".",
"pulpcore=# CREATE EXTENSION IF NOT EXISTS \"hstore\"; CREATE EXTENSION",
"\\q",
"PGPASSWORD=' Foreman_Password ' psql -h postgres.example.com -p 5432 -U foreman -d foreman -c \"SELECT 1 as ping\" PGPASSWORD=' Candlepin_Password ' psql -h postgres.example.com -p 5432 -U candlepin -d candlepin -c \"SELECT 1 as ping\" PGPASSWORD=' Pulpcore_Password ' psql -h postgres.example.com -p 5432 -U pulp -d pulpcore -c \"SELECT 1 as ping\"",
"satellite-maintain service stop",
"systemctl start postgresql",
"satellite-maintain backup online --preserve-directory --skip-pulp-content /var/migration_backup",
"PGPASSWORD=' Foreman_Password ' pg_restore -h postgres.example.com -U foreman -d foreman < /var/migration_backup/foreman.dump PGPASSWORD=' Candlepin_Password ' pg_restore -h postgres.example.com -U candlepin -d candlepin < /var/migration_backup/candlepin.dump PGPASSWORD=' Pulpcore_Password ' pg_restore -h postgres.example.com -U pulp -d pulpcore < /var/migration_backup/pulpcore.dump",
"satellite-installer --foreman-db-database foreman --foreman-db-host postgres.example.com --foreman-db-manage false --foreman-db-password Foreman_Password --foreman-db-username foreman --foreman-proxy-content-pulpcore-manage-postgresql false --foreman-proxy-content-pulpcore-postgresql-db-name pulpcore --foreman-proxy-content-pulpcore-postgresql-host postgres.example.com --foreman-proxy-content-pulpcore-postgresql-password Pulpcore_Password --foreman-proxy-content-pulpcore-postgresql-user pulp --katello-candlepin-db-host postgres.example.com --katello-candlepin-db-name candlepin --katello-candlepin-db-password Candlepin_Password --katello-candlepin-db-user candlepin --katello-candlepin-manage-db false"
] |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/administering_red_hat_satellite/Migrating_from_Internal_Databases_to_External_Databases_admin
|
Chapter 2. Shutting down the cluster gracefully
|
Chapter 2. Shutting down the cluster gracefully This document describes the process to gracefully shut down your cluster. You might need to temporarily shut down your cluster for maintenance reasons, or to save on resource costs. 2.1. Prerequisites Take an etcd backup prior to shutting down the cluster. Important It is important to take an etcd backup before performing this procedure so that your cluster can be restored if you encounter any issues when restarting the cluster. For example, the following conditions can cause the restarted cluster to malfunction: etcd data corruption during shutdown Node failure due to hardware Network connectivity issues If your cluster fails to recover, follow the steps to restore to a previous cluster state . 2.2. Shutting down the cluster You can shut down your cluster in a graceful manner so that it can be restarted at a later date. Note You can shut down a cluster until a year from the installation date and expect it to restart gracefully. After a year from the installation date, the cluster certificates expire. However, you might need to manually approve the pending certificate signing requests (CSRs) to recover kubelet certificates when the cluster restarts. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have taken an etcd backup. Procedure If you are shutting the cluster down for an extended period, determine the date on which certificates expire and run the following command: USD oc -n openshift-kube-apiserver-operator get secret kube-apiserver-to-kubelet-signer -o jsonpath='{.metadata.annotations.auth\.openshift\.io/certificate-not-after}' Example output 2022-08-05T14:37:50Z 1 1 To ensure that the cluster can restart gracefully, plan to restart it on or before the specified date. As the cluster restarts, the process might require you to manually approve the pending certificate signing requests (CSRs) to recover kubelet certificates. Mark all the nodes in the cluster as unschedulable. You can do this from your cloud provider's web console, or by running the following loop: USD for node in USD(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do echo USD{node} ; oc adm cordon USD{node} ; done Example output ci-ln-mgdnf4b-72292-n547t-master-0 node/ci-ln-mgdnf4b-72292-n547t-master-0 cordoned ci-ln-mgdnf4b-72292-n547t-master-1 node/ci-ln-mgdnf4b-72292-n547t-master-1 cordoned ci-ln-mgdnf4b-72292-n547t-master-2 node/ci-ln-mgdnf4b-72292-n547t-master-2 cordoned ci-ln-mgdnf4b-72292-n547t-worker-a-s7ntl node/ci-ln-mgdnf4b-72292-n547t-worker-a-s7ntl cordoned ci-ln-mgdnf4b-72292-n547t-worker-b-cmc9k node/ci-ln-mgdnf4b-72292-n547t-worker-b-cmc9k cordoned ci-ln-mgdnf4b-72292-n547t-worker-c-vcmtn node/ci-ln-mgdnf4b-72292-n547t-worker-c-vcmtn cordoned Evacuate the pods using the following method: USD for node in USD(oc get nodes -l node-role.kubernetes.io/worker -o jsonpath='{.items[*].metadata.name}'); do echo USD{node} ; oc adm drain USD{node} --delete-emptydir-data --ignore-daemonsets=true --timeout=15s --force ; done Shut down all of the nodes in the cluster. You can do this from your cloud provider's web console, or by running the following loop. Shutting down the nodes by using one of these methods allows pods to terminate gracefully, which reduces the chance for data corruption. Note Ensure that the control plane node with the API VIP assigned is the last node processed in the loop. Otherwise, the shutdown command fails.
USD for node in USD(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do oc debug node/USD{node} -- chroot /host shutdown -h 1; done 1 1 -h 1 indicates how long, in minutes, this process lasts before the control plane nodes are shut down. For large-scale clusters with 10 nodes or more, set to -h 10 or longer to make sure all the compute nodes have time to shut down first. Example output Starting pod/ip-10-0-130-169us-east-2computeinternal-debug ... To use host binaries, run `chroot /host` Shutdown scheduled for Mon 2021-09-13 09:36:17 UTC, use 'shutdown -c' to cancel. Removing debug pod ... Starting pod/ip-10-0-150-116us-east-2computeinternal-debug ... To use host binaries, run `chroot /host` Shutdown scheduled for Mon 2021-09-13 09:36:29 UTC, use 'shutdown -c' to cancel. Note It is not necessary to drain control plane nodes of the standard pods that ship with OpenShift Container Platform prior to shutdown. Cluster administrators are responsible for ensuring a clean restart of their own workloads after the cluster is restarted. If you drained control plane nodes prior to shutdown because of custom workloads, you must mark the control plane nodes as schedulable before the cluster will be functional again after restart. Shut off any cluster dependencies that are no longer needed, such as external storage or an LDAP server. Be sure to consult your vendor's documentation before doing so. Important If you deployed your cluster on a cloud-provider platform, do not shut down, suspend, or delete the associated cloud resources. If you delete the cloud resources of a suspended virtual machine, OpenShift Container Platform might not restore successfully. 2.3. Additional resources Restarting the cluster gracefully
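Referring back to the note about certificate signing requests in Section 2.2, a hedged example of recovering kubelet certificates after the cluster restarts is to list and approve the pending CSRs (review each request before approving it in a real cluster):
oc get csr
oc get csr -o name | xargs oc adm certificate approve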
|
[
"oc -n openshift-kube-apiserver-operator get secret kube-apiserver-to-kubelet-signer -o jsonpath='{.metadata.annotations.auth\\.openshift\\.io/certificate-not-after}'",
"2022-08-05T14:37:50Zuser@user:~ USD 1",
"for node in USD(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do echo USD{node} ; oc adm cordon USD{node} ; done",
"ci-ln-mgdnf4b-72292-n547t-master-0 node/ci-ln-mgdnf4b-72292-n547t-master-0 cordoned ci-ln-mgdnf4b-72292-n547t-master-1 node/ci-ln-mgdnf4b-72292-n547t-master-1 cordoned ci-ln-mgdnf4b-72292-n547t-master-2 node/ci-ln-mgdnf4b-72292-n547t-master-2 cordoned ci-ln-mgdnf4b-72292-n547t-worker-a-s7ntl node/ci-ln-mgdnf4b-72292-n547t-worker-a-s7ntl cordoned ci-ln-mgdnf4b-72292-n547t-worker-b-cmc9k node/ci-ln-mgdnf4b-72292-n547t-worker-b-cmc9k cordoned ci-ln-mgdnf4b-72292-n547t-worker-c-vcmtn node/ci-ln-mgdnf4b-72292-n547t-worker-c-vcmtn cordoned",
"for node in USD(oc get nodes -l node-role.kubernetes.io/worker -o jsonpath='{.items[*].metadata.name}'); do echo USD{node} ; oc adm drain USD{node} --delete-emptydir-data --ignore-daemonsets=true --timeout=15s --force ; done",
"for node in USD(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do oc debug node/USD{node} -- chroot /host shutdown -h 1; done 1",
"Starting pod/ip-10-0-130-169us-east-2computeinternal-debug To use host binaries, run `chroot /host` Shutdown scheduled for Mon 2021-09-13 09:36:17 UTC, use 'shutdown -c' to cancel. Removing debug pod Starting pod/ip-10-0-150-116us-east-2computeinternal-debug To use host binaries, run `chroot /host` Shutdown scheduled for Mon 2021-09-13 09:36:29 UTC, use 'shutdown -c' to cancel."
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/backup_and_restore/graceful-shutdown-cluster
|
3.6. Appendix - Setting up Red Hat Gluster Storage in Microsoft Azure in ASM Mode
|
3.6. Appendix - Setting up Red Hat Gluster Storage in Microsoft Azure in ASM Mode This section provides step-by-step instructions to set up Red Hat Gluster Storage in Microsoft Azure. 3.6.1. Obtaining Red Hat Gluster Storage for Microsoft Azure To download the Red Hat Gluster Storage Server files using a Red Hat Subscription or a Red Hat Evaluation Subscription: Visit the Red Hat Customer Service Portal at https://access.redhat.com/login and enter your user name and password to log in. Click Downloads to visit the Software & Download Center . In the Red Hat Gluster Storage Server area, click Download Software to download the latest version of the VHD image. Navigate to the directory where the file was downloaded and execute the sha256sum command on the file. For example, The value generated by the sha256sum utility must match the value displayed on the Red Hat Customer Portal for the file. If they are not the same, your download is either incomplete or corrupt, and you will need to download the file again. If the checksum is not successfully validated after several attempted downloads, contact Red Hat Support for assistance. Unzip the downloaded file rhgs-azure-[version].zip to extract the archive contents. For example, 3.6.2. Define the Network Topology By default, deploying an instance into a cloud service will pick up a dynamically assigned, internal IP address. This address may change and vary from site to site. For some configurations, consider defining one or more virtual networks within your account for instances to connect to. That establishes a networking configuration similar to an on-premise environment. To create a simple network: Create the cloud service for the Gluster Storage nodes. For example, cloudapp.net will be appended to the service name, and the full service name will be exposed directly to the Internet. In this case, rhgs313-cluster.cloudapp.net. Create a virtual network for the Gluster Storage nodes to connect to. In this example, the network is created within the East US location. This defines a network within a single region. Features like geo-replication within Gluster Storage require a vnet-to-vnet configuration. A vnet-to-vnet configuration connects virtual networks through VPN gateways. Each virtual network can be within the same region or across regions to address disaster recovery scenarios. Joining VPNs together requires a shared key, and it is not possible to pass a shared key through the Microsoft Azure CLI. To define a vnet-to-vnet configuration, use the Windows Powershell or use the Microsoft Azure REST API. 3.6.3. Upload the Disk Image to Microsoft Azure The disk image can be uploaded and used as a template for creating Gluster Storage nodes. Note Microsoft Azure commands must be issued from the local account configured to use the xplat-cli. To upload the image to Microsoft Azure, navigate to the directory where the VHD image is stored and run the following command: For example, Once complete, confirm the image is available: Note The output of an instance image list will show public images as well as images specific to your account (User), so awk is used to display only the images added under the Microsoft Azure account. 3.6.4. Deploy the Gluster Storage Instances Individual Gluster Storage instances in Microsoft Azure can be configured into a cluster. You must first create the instances from the prepared image and then attach the data disks. To create instances from the prepared image For example, Adding 1023 GB data disk to each of the instances. 
For example: Perform the above steps of creating instances and attaching disks for all the instances. Confirm that the instances have been properly created: A Microsoft Azure availability set provides a level of fault tolerance to the instances it holds, protecting against system failure or planned outages. This is achieved by ensuring instances within the same availability set are deployed across different fault and upgrade domains within a Microsoft Azure datacenter. When Gluster Storage replicates data between bricks, associate the replica sets to a specific availability set. By using availability sets in the replication design, incidents within the Microsoft Azure infrastructure cannot affect all members of a replica set simultaneously. Each instance is assigned a static IP ( -S ) within the rhgs-vnet virtual network and an endpoint added to the cloud service to allow SSH access ( --ssh port ). There are single quotation marks (') around the password to prevent bash interpretation issues. Example The following is an example of creating four instances from the prepared image. They are named rhgs31-n . Their IP addresses are 10.18.0.11 to 10.18.0.14. As the instances are created ( azure vm create ), they can be added to the same availability set ( --availability-set ). Add four 1023 GB data disks to each of the instances. Confirm that the instances have been properly created: Note This example uses static IP addresses, but this is not required. If you're creating a single Gluster Storage cluster and do not need features like geo-replication, it is possible to use the dynamic IPs automatically assigned by Microsoft Azure. The only important thing is that the Gluster Storage cluster is defined by name. 3.6.5. Configure the Gluster Storage Cluster Configure these instances to form a trusted storage pool (cluster). Note If you are using Red Hat Enterprise Linux 7 machines, log in to the Microsoft Azure portal and reset the password for the VMs and also restart the VMs. On Red Hat Enterprise Linux 6 machines, password reset is not required. Log into each node. Register each node to Red Hat Network using the subscription-manager command, and attach the relevant Red Hat Storage subscriptions. For information on subscribing to the Red Hat Gluster Storage 3.5 channels, see the Installing Red Hat Gluster Storage chapter in the Red Hat Gluster Storage 3.5 Installation Guide . Update each node to ensure the latest enhancements and patches are in place. Follow the instructions in the Adding Servers to the Trusted Storage Pool chapter in the Red Hat Gluster Storage Administration Guide to create the trusted storage pool.
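As a minimal sketch of that final step (host names follow the rhgs31-n example above; run the commands from one node, such as rhgs31-1), the trusted storage pool is typically formed with the gluster command-line interface:
gluster peer probe rhgs31-2
gluster peer probe rhgs31-3
gluster peer probe rhgs31-4
gluster peer status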
|
[
"sha256sum rhgs-azure-3.5-rhel-7-x86_64.tar.gz 2d083222d6a3c531fa2fbbd21c9ea5b2c965d3b8f06eb8ff3b2b0efce173325d rhgs-azure-3.5-rhel-7-x86_64.tar.gz",
"tar -xvzf rhgs-azure-3.5-rhel-7-x86_64.tar.gz",
"azure service create --serviceName service_name --location location",
"azure service create --serviceName rhgs313-cluster --location \"East US\" info: Executing command service create + Creating cloud service data: Cloud service name rhgs313-cluster info: service create command OK",
"azure network vnet create --vnet \"rhgs313-vnet\" --location \"East US\" --address-space 10.18.0.0 --cidr 16 info: Executing command network vnet create info: Using default subnet start IP: 10.18.0.0 info: Using default subnet cidr: 19 + Looking up network configuration + Looking up locations + Setting network configuration info: network vnet create command OK",
"azure vm image create image_name --location location --os linux VHD_image_name",
"azure vm image create rhgs-3.1.3 --location \"East US\" --os linux rhgs313.vhd info: Executing command vm image create + Retrieving storage accounts info: VHD size : 20 GB info: Uploading 20973568.5 KB Requested:100.0% Completed:100.0% Running: 0 Time: 7m50s Speed: 3876 KB/s info: https://bauderhel7.blob.core.windows.net/vm-images/rhgs313.vhd was uploaded successfully info: vm image create command OK",
"azure vm image list | awk 'USD3 == \"User\" {print USD2;}'",
"azure vm create --vm-name vm_name --availability-set name_of_the_availability_set --vm-size size --virtual-network-name vnet_name --ssh port_number --connect cluster_name username_and_password",
"azure vm create --vm-name rhgs313-1 --availability-set AS1 -S 10.18.0.11 --vm-size Medium --virtual-network-name rhgs313-vnet --ssh 50001 --connect rhgs313-cluster rhgs-3.1.3 rhgsuser 'AzureAdm1n!' info: Executing command vm create + Looking up image rhgs-313 + Looking up virtual network + Looking up cloud service + Getting cloud service properties + Looking up deployment + Creating VM info: OK info: vm create command OK",
"azure vm disk attach-new VM_name 1023",
"azure vm disk attach-new rhgs313-1 1023 info: Executing command vm disk attach-new + Getting virtual machines + Adding Data-Disk info: vm disk attach-new command OK",
"azure vm list azure vm show vm-name",
"for i in 1 2 3 4; do as=USD((i/3)); azure vm create --vm-name rhgs31-USDi --availability-set ASUSDas -S 10.18.0.1USDi --vm-size Medium --virtual-network-name rhgs-vnet --ssh 5000USDi --connect rhgs-cluster rhgs3.1 rhgsuser 'AzureAdm1n!'; done",
"for node in 1 2 3 4; do for disk in 1 2 3 4; do azure vm disk attach-new rhgs31-USDnode 1023; done ; done",
"azure vm list azure vm show vm-name",
"ssh [email protected] -p 50001",
"yum update"
] |
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/deployment_guide_for_public_cloud/chap-Documentation-Deployment_Guide_for_Public_Cloud-Azure-Setting_up_RHGS_Azure_ASM
|
About
|
About OpenShift Container Platform 4.17 Introduction to OpenShift Container Platform Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/about/index
|
Chapter 18. Firewalls
|
Chapter 18. Firewalls Information security is commonly thought of as a process and not a product. However, standard security implementations usually employ some form of dedicated mechanism to control access privileges and restrict network resources to users who are authorized, identifiable, and traceable. Red Hat Enterprise Linux includes several tools to assist administrators and security engineers with network-level access control issues. Firewalls are one of the core components of a network security implementation. Several vendors market firewall solutions catering to all levels of the marketplace: from home users protecting one PC to data center solutions safeguarding vital enterprise information. Firewalls can be stand-alone hardware solutions, such as firewall appliances by Cisco, Nokia, and Sonicwall. Vendors such as Checkpoint, McAfee, and Symantec have also developed proprietary software firewall solutions for home and business markets. Apart from the differences between hardware and software firewalls, there are also differences in the way firewalls function that separate one solution from another. Table 18.1, "Firewall Types" details three common types of firewalls and how they function: Table 18.1. Firewall Types Method Description Advantages Disadvantages NAT Network Address Translation (NAT) places private IP subnetworks behind one or a small pool of public IP addresses, masquerading all requests to one source rather than several. The Linux kernel has built-in NAT functionality through the Netfilter kernel subsystem. · Can be configured transparently to machines on a LAN · Protection of many machines and services behind one or more external IP addresses simplifies administration duties · Restriction of user access to and from the LAN can be configured by opening and closing ports on the NAT firewall/gateway · Cannot prevent malicious activity once users connect to a service outside of the firewall Packet Filter A packet filtering firewall reads each data packet that passes through a LAN. It can read and process packets by header information and filters the packet based on sets of programmable rules implemented by the firewall administrator. The Linux kernel has built-in packet filtering functionality through the Netfilter kernel subsystem. · Customizable through the iptables front-end utility · Does not require any customization on the client side, as all network activity is filtered at the router level rather than the application level · Since packets are not transmitted through a proxy, network performance is faster due to direct connection from client to remote host · Cannot filter packets for content like proxy firewalls · Processes packets at the protocol layer, but cannot filter packets at an application layer · Complex network architectures can make establishing packet filtering rules difficult, especially if coupled with IP masquerading or local subnets and DMZ networks Proxy Proxy firewalls filter all requests of a certain protocol or type from LAN clients to a proxy machine, which then makes those requests to the Internet on behalf of the local client. A proxy machine acts as a buffer between malicious remote users and the internal network client machines. · Gives administrators control over what applications and protocols function outside of the LAN · Some proxy servers can cache frequently-accessed data locally rather than having to use the Internet connection to request it. 
This helps to reduce bandwidth consumption · Proxy services can be logged and monitored closely, allowing tighter control over resource utilization on the network · Proxies are often application-specific (HTTP, Telnet, etc.), or protocol-restricted (most proxies work with TCP-connected services only) · Application services cannot run behind a proxy, so your application servers must use a separate form of network security · Proxies can become a network bottleneck, as all requests and transmissions are passed through one source rather than directly from a client to a remote service 18.1. Netfilter and IPTables The Linux kernel features a powerful networking subsystem called Netfilter . The Netfilter subsystem provides stateful or stateless packet filtering as well as NAT and IP masquerading services. Netfilter also has the ability to mangle IP header information for advanced routing and connection state management. Netfilter is controlled using the iptables tool. 18.1.1. IPTables Overview The power and flexibility of Netfilter is implemented using the iptables administration tool, a command line tool similar in syntax to its predecessor, ipchains . A similar syntax does not mean similar implementation, however. ipchains requires intricate rule sets for: filtering source paths; filtering destination paths; and filtering both source and destination connection ports. By contrast, iptables uses the Netfilter subsystem to enhance network connection, inspection, and processing. iptables features advanced logging, pre- and post-routing actions, network address translation, and port forwarding, all in one command line interface. This section provides an overview of iptables .
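As a brief illustration of the packet filtering described above (an example rule set only, not a recommended production policy), a minimal stateful iptables configuration might accept established connections and SSH while dropping everything else:
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -P INPUT DROP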
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/ch-fw
|
Installing an on-premise cluster with the Agent-based Installer
|
Installing an on-premise cluster with the Agent-based Installer OpenShift Container Platform 4.15 Installing an on-premise OpenShift Container Platform cluster with the Agent-based Installer Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_an_on-premise_cluster_with_the_agent-based_installer/index
|
Chapter 32. Configuring a Fibre-Channel Over Ethernet Interface
|
Chapter 32. Configuring a Fibre-Channel Over Ethernet Interface Setting up and deploying a Fibre-channel over Ethernet (FCoE) interface requires two packages: fcoe-utils lldpad Once these packages are installed, perform the following procedure to enable FCoE over a virtual LAN (VLAN): Procedure 32.1. Configuring an Ethernet interface to use FCoE Configure a new VLAN by copying an existing network script (e.g. /etc/fcoe/cfg-eth0 ) to the name of the Ethernet device that supports FCoE. This will provide you with a default file to configure. Given that the FCoE device is eth X , run: Modify the contents of cfg-eth X as necessary. Of note, DCB_REQUIRED should be set to no for networking interfaces that implement a hardware DCBX client. If you want the device to automatically load during boot time, set ONBOOT=yes in the corresponding /etc/sysconfig/network-scripts/ifcfg-eth X file. For example, if the FCoE device is eth2, then edit /etc/sysconfig/network-scripts/ifcfg-eth2 accordingly. Start the data center bridging daemon ( dcbd ) using the following command: For networking interfaces that implement a hardware DCBX client, skip this step and move on to the next step. For interfaces that require a software DCBX client, enable data center bridging on the Ethernet interface using the following commands: Then, enable FCoE on the Ethernet interface by running: Note These commands will only work if the dcbd settings for the Ethernet interface were not changed. Load the FCoE device now using: Start FCoE using: The FCoE device should appear shortly, assuming all other settings on the fabric are correct. To view configured FCoE devices, run: After correctly configuring the Ethernet interface to use FCoE, Red Hat recommends that you set FCoE and lldpad to run at startup. To do so, use chkconfig , as in: Warning Do not run software-based DCB or LLDP on CNAs that implement DCB. Some Combined Network Adapters (CNAs) implement the Data Center Bridging (DCB) protocol in firmware. The DCB protocol assumes that there is just one originator of DCB on a particular network link. This means that any higher-level software implementation of DCB, or Link Layer Discovery Protocol (LLDP), must be disabled on CNAs that implement DCB. 32.1. Fibre-Channel over Ethernet (FCoE) Target Set up In addition to mounting LUNs over FCoE, as described in Chapter 32, Configuring a Fibre-Channel Over Ethernet Interface , exporting LUNs to other machines over FCoE is also supported. Important Before proceeding, refer to Chapter 32, Configuring a Fibre-Channel Over Ethernet Interface and verify that basic FCoE set up is completed, and that fcoeadm -i displays configured FCoE interfaces. Procedure 32.2. Configure FCoE target Setting up an FCoE target requires the installation of the fcoe-target-utils package, along with its dependencies. FCoE target support is based on the LIO kernel target and does not require a userspace daemon. However, it is still necessary to enable the fcoe-target service to load the needed kernel modules and maintain the configuration across reboots. Configuration of an FCoE target is performed using the targetcli utility, rather than by editing a .conf file, as may be expected. The settings are then saved so they may be restored if the system restarts. targetcli is a hierarchical configuration shell. Moving between nodes in the shell uses cd , and ls shows the contents at or below the current configuration node. To get more options, the command help is also available.
Define the file, block device, or pass-through SCSI device to export as a backstore. Example 32.1. Example 1 of defining a device This creates a backstore called example1 that maps to the /dev/sda4 block device. Example 32.2. Example 2 of defining a device This creates a backstore called example2 which maps to the given file. If the file does not exist, it will be created. File size may use K, M, or G abbreviations and is only needed when the backing file does not exist. Note If the global auto_cd_after_create option is on (the default), executing a create command will change the current configuration node to the newly created object. This can be disabled with set global auto_cd_after_create=false . Returning to the root node is possible with cd / . Create an FCoE target instance on an FCoE interface. If FCoE interfaces are present on the system, tab-completing after create will list available interfaces. If not, ensure fcoeadm -i shows active interfaces. Map a backstore to the target instance. Example 32.3. Example of mapping a backstore to the target instance Allow access to the LUN from an FCoE initiator. The LUN should now be accessible to that initiator. Exit targetcli by typing exit or entering ctrl + D . Exiting targetcli will save the configuration by default. However it may be explicitly saved with the saveconfig command. Refer to the targetcli manpage for more information.
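As an illustrative sketch of the cfg-eth X edit described in Procedure 32.1 (the variable names are those used by fcoe-utils; the eth2 interface name is only an example), the file typically contains entries such as:
# /etc/fcoe/cfg-eth2
FCOE_ENABLE="yes"
# Set to "no" only for interfaces whose CNA implements DCBX in hardware
DCB_REQUIRED="no"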
|
[
"cp /etc/fcoe/cfg-eth0 /etc/fcoe/cfg-eth X",
"/etc/init.d/lldpad start",
"dcbtool sc eth X dcb on",
"dcbtool sc eth X app:fcoe e:1",
"ifconfig eth X up",
"service fcoe start",
"fcoeadm -i",
"chkconfig lldpad on",
"chkconfig fcoe on",
"yum install fcoe-target-utils",
"service fcoe-target start",
"chkconfig fcoe-target on",
"targetcli",
"/> backstores/block create example1 /dev/sda4",
"/> backstores/fileio create example2 /srv/ example2.img 100M",
"/> tcm_fc/ create 00:11:22:33:44:55:66:77",
"/> cd tcm_fc/ 00:11:22:33:44:55:66:77",
"/> luns/ create /backstores/fileio/ example2",
"/> acls/ create 00:99:88:77:66:55:44:33"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/fcoe-config
|
probe::nfsd.proc.write
|
probe::nfsd.proc.write Name probe::nfsd.proc.write - NFS server writing data to file for client Synopsis nfsd.proc.write Values offset the offset of file gid requester's group id vlen read blocks fh file handle (the first part is the length of the file handle) size read bytes vec struct kvec, includes buf address in kernel address and length of each buffer stable argp->stable version nfs version uid requester's user id count read bytes client_ip the ip address of client proto transfer protocol
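A hedged one-line example of using this probe (the field names are those listed above, and the uid, count, and offset values are assumed to be numeric as described):
stap -e 'probe nfsd.proc.write { printf("uid %d wrote %d bytes at offset %d\n", uid, count, offset) }'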
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-nfsd-proc-write
|
function::start_stopwatch
|
function::start_stopwatch Name function::start_stopwatch - Start a stopwatch Synopsis Arguments name the stopwatch name Description Start stopwatch name . Creates stopwatch name if it does not currently exist.
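A hedged example of how this might be used from the command line (it assumes the companion read_stopwatch_us function from the same tapset, which is not described on this page):
stap -e 'probe begin { start_stopwatch("work") } probe timer.s(5) { printf("elapsed: %d us\n", read_stopwatch_us("work")); exit() }'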
|
[
"start_stopwatch(name:string)"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-start-stopwatch
|
5.171. lm_sensors
|
5.171. lm_sensors 5.171.1. RHBA-2012:1309 - lm_sensors bug fixes Updated lm_sensors packages that fix three bugs are now available for Red Hat Enterprise Linux 6. The lm_sensors packages provide a set of modules for general SMBus access and hardware monitoring. Bug Fixes BZ# 610000 , BZ# 623587 Prior to this update, the sensors-detect script did not detect all GenuineIntel CPUs. As a consequence, lm_sensors did not load the coretemp module automatically. This update uses a more generic detection for Intel CPUs. Now, the coretemp module is loaded as expected. BZ# 768365 Prior to this update, the sensors-detect script reported an error when running without user-defined input. This behavior had no impact on the function but could confuse users. This update modifies the underlying code to allow the sensors-detect script to run without user input. All users of lm_sensors are advised to upgrade to these updated packages, which fix these bugs.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/lm_sensors
|
Operator APIs
|
Operator APIs OpenShift Container Platform 4.16 Reference guide for Operator APIs Red Hat OpenShift Documentation Team
|
[
"More specifically, given an OperatorPKI with <name>, the CNO will manage:",
"More specifically, given an OperatorPKI with <name>, the CNO will manage:"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/operator_apis/index
|
Chapter 3. Starting JDK Flight Recorder
|
Chapter 3. Starting JDK Flight Recorder 3.1. Starting JDK Flight Recorder when JVM starts You can start the JDK Flight Recorder (JFR) when a Java process starts. You can modify the behavior of the JFR by adding optional parameters. Procedure Run the java command using the -XX option. USD java -XX:StartFlightRecording Demo where Demo is the name of the Java application. The JFR starts with the Java application. Example The following command starts a Java process ( Demo ) and with it initiates an hour-long flight recording, which is saved to a file called demorecording.jfr : USD java -XX:StartFlightRecording=duration=1h,filename=demorecording.jfr Demo Additional resources For a detailed list of JFR options, see Java tools reference . 3.2. Starting JDK Flight Recorder on a running JVM You can use the jcmd utility to send diagnostic command requests to a running JVM. jcmd includes commands for interacting with JFR, with the most basic commands being start , dump , and stop . To interact with a JVM, jcmd requires the process id (pid) of the JVM. You can retrieve the pid by using the jcmd -l command, which displays a list of the running JVM process ids, as well as other information such as the main class and command-line arguments that were used to launch the processes. The jcmd utility is located under USDJAVA_HOME/bin . Procedure Start a flight recording using the following command: USD jcmd <pid> JFR.start <options> For example, the following command starts a recording named demorecording , which keeps data from the last four hours, and has a size limit of 400 MB: USD jcmd <pid> JFR.start name=demorecording maxage=4h maxsize=400MB Additional resources For a detailed list of jcmd options, see jcmd Tools Reference . 3.3. Starting the JDK Flight Recorder on JVM by using the JDK Mission Control application The JDK Mission Control (JMC) application has a Flight Recording Wizard that allows for a streamlined experience of starting and configuring flight recordings. Procedure Open the JVM Browser. USD JAVA_HOME/bin/jmc Right-click a JVM in JVM Browser view and select Start Flight Recording . The Flight Recording Wizard opens. Figure 3.1. JMC JFR Wizard The JDK Flight Recording Wizard has three pages: The first page of the wizard contains general settings for the flight recording including: Name of the recording Path and filename to which the recording is saved Whether the recording is a fixed-time or continuous recording, which event template will be used Description of the recording The second page contains event options for the flight recording. You can configure the level of detail that Garbage Collections, Memory Profiling, and Method Sampling and other events record. The third page contains settings for the event details. You can turn events on or off, enable the recording of stack traces, and alter the time threshold required to record an event. Edit the settings for the recording. Click Finish . The wizard exits and the flight recording starts. 3.4. Defining and using the custom event API The JDK Flight Recorder (JFR) is an event recorder that includes the custom event API. The custom event API, stored in the jdk.jfr module, is the software interface that enables your application to communicate with the JFR. The JFR API includes classes that you can use to manage recordings and create custom events for your Java application, JVM, or operating system. Before you use the custom event API to monitor an event, you must define a name and metadata for your custom event type.
You can define a JFR base event, such as a Duration , Instant , Requestable , or Time event , by extending the Event class. Specifically, you can add fields, such as duration values, to the class that matches data types defined by the application payload attributes. After you define an Event class, you can create event objects. This procedure demonstrates how to use a custom event type with JFR and JDK Mission Control (JMC) to analyze the runtime performance of a simple example program. Procedure In your custom event type, in the Event class, use the @name annotation to name the custom event. This name displays in the JMC graphical user interface (GUI). Example of defining a custom event type name in the Event class @Name("SampleCustomEvent") public class SampleCustomEvent extends Event {...} Define the metadata for your Event class and its attributes, such as name, category, and labels. Labels display event types for a client, such as JMC. Note Large recording files might cause performance issues, and this might affect how you would like to interact with the files. Make sure you correctly define the number of event recording annotations you need. Defining unnecessary annotations might increase the size of your recording files. Example of defining annotations for a sample Event class @Name("SampleCustomEvent") 1 @Label("Sample Custom Event") @Category("Sample events") @Description("Custom Event to demonstrate the Custom Events API") @StackTrace(false) 2 public class SampleCustomEvent extends Event { @Label("Method") 3 public String method; @Label("Generated Number") public int number; @Label("Size") @DataAmount 4 public int size; } 1 Details annotations, such as @Name , that define metadata for how the custom event displays on the JMC GUI. 2 The @StackTrace annotation increases the size of a flight recording. By default, the JFR does not include the stackTrace of the location that was created for the event. 3 The @Label annotations define parameters for each method, such as resource methods for HTTP requests. 4 The @DataAmount annotation includes an attribute that defines the data amount in bits of bytes. JMC automatically renders the data amount in other units, such as megabytes (MB). Define contextual information in your Event class. This information sets the request handling behavior of your custom event type, so that you configure an event type to collect specific JFR data. Example of defining a simple main class and an event loop In the preceding example, the simple main class registers events, and the event loop populates the event fields and then emits the custom events. Examine an event type in the application of your choice, such as the JMC or the JFR tool. Figure 3.2. Example of examining an event type in JMC A JFR recording can include different event types. You can examine each event type in your application. Additional resources For more information about JMC, see Introduction to JDK Mission Control .
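Because Section 3.2 mentions the dump and stop diagnostic commands but only shows start, the following hedged sketch captures and then ends the recording started earlier (the recording name matches that example; the output file name is arbitrary):
jcmd <pid> JFR.dump name=demorecording filename=/tmp/demorecording-dump.jfr
jcmd <pid> JFR.stop name=demorecording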
|
[
"@Name(\"SampleCustomEvent\") public class SampleCustomEvent extends Event {...}",
"@Name(\"SampleCustomEvent\") 1 @Label(\"Sample Custom Event\") @Category(\"Sample events\") @Description(\"Custom Event to demonstrate the Custom Events API\") @StackTrace(false) 2 public class SampleCustomEvent extends Event { @Label(\"Method\") 3 public String method; @Label(\"Generated Number\") public int number; @Label(\"Size\") @DataAmount 4 public int size; }",
"public class Main { private static int requestsSent; public static void main(String[] args) { // Register the custom event FlightRecorder.register(SampleCustomEvent.class); // Do some work to generate the events while (requestsSent <= 1000) { try { eventLoopBody(); Thread.sleep(100); } catch (Exception e) { e.printStackTrace(); } } } private static void eventLoopBody() { // Create and begin the event SampleCustomEvent event = new SampleCustomEvent(); event.begin(); // Generate some data for the event Random r = new Random(); int someData = r.nextInt(1000000); // Set the event fields event.method = \"eventLoopBody\"; event.number = someData; event.size = 4; // End the event event.end(); event.commit(); requestsSent++; }"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/using_jdk_flight_recorder_with_red_hat_build_of_openjdk/starting-jdk-flight-recorder
|
Operating
|
Operating Red Hat Advanced Cluster Security for Kubernetes 4.5 Operating Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/operating/index
|
Chapter 65. registered
|
Chapter 65. registered This chapter describes the commands under the registered command. 65.1. registered limit create Create a registered limit Usage: Table 65.1. Positional arguments Value Summary <resource-name> The name of the resource to limit Table 65.2. Command arguments Value Summary -h, --help Show this help message and exit --description <description> Description of the registered limit --region <region> Region for the registered limit to affect --service <service> Service responsible for the resource to limit (required) --default-limit <default-limit> The default limit for the resources to assume (required) Table 65.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 65.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 65.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 65.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 65.2. registered limit delete Delete a registered limit Usage: Table 65.7. Positional arguments Value Summary <registered-limit-id> Registered limit to delete (id) Table 65.8. Command arguments Value Summary -h, --help Show this help message and exit 65.3. registered limit list List registered limits Usage: Table 65.9. Command arguments Value Summary -h, --help Show this help message and exit --service <service> Service responsible for the resource to limit --resource-name <resource-name> The name of the resource to limit --region <region> Region for the limit to affect. Table 65.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 65.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 65.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 65.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 65.4. registered limit set Update information about a registered limit Usage: Table 65.14. Positional arguments Value Summary <registered-limit-id> Registered limit to update (id) Table 65.15. 
Command arguments Value Summary -h, --help Show this help message and exit --service <service> Service to be updated responsible for the resource to limit. Either --service, --resource-name or --region must be different than existing value otherwise it will be duplicate entry --resource-name <resource-name> Resource to be updated responsible for the resource to limit. Either --service, --resource-name or --region must be different than existing value otherwise it will be duplicate entry --default-limit <default-limit> The default limit for the resources to assume --description <description> Description to update of the registered limit --region <region> Region for the registered limit to affect. either --service, --resource-name or --region must be different than existing value otherwise it will be duplicate entry Table 65.16. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 65.17. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 65.18. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 65.19. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 65.5. registered limit show Display registered limit details Usage: Table 65.20. Positional arguments Value Summary <registered-limit-id> Registered limit to display (id) Table 65.21. Command arguments Value Summary -h, --help Show this help message and exit Table 65.22. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 65.23. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 65.24. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 65.25. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
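As a hedged usage sketch (the nova service, the instances resource name, and the limit value are placeholders, not defaults), a registered limit is typically created and then listed as follows:
openstack registered limit create --service nova --default-limit 10 instances
openstack registered limit list --service nova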
|
[
"openstack registered limit create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description <description>] [--region <region>] --service <service> --default-limit <default-limit> <resource-name>",
"openstack registered limit delete [-h] <registered-limit-id> [<registered-limit-id> ...]",
"openstack registered limit list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--service <service>] [--resource-name <resource-name>] [--region <region>]",
"openstack registered limit set [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--service <service>] [--resource-name <resource-name>] [--default-limit <default-limit>] [--description <description>] [--region <region>] <registered-limit-id>",
"openstack registered limit show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <registered-limit-id>"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/registered
|
probe::signal.wakeup
|
probe::signal.wakeup Name probe::signal.wakeup - Sleeping process being wakened for signal Synopsis signal.wakeup Values pid_name Name of the process to wake resume Indicates whether to wake up a task in a STOPPED or TRACED state state_mask A string representation indicating the mask of task states to wake. Possible values are TASK_INTERRUPTIBLE, TASK_STOPPED, TASK_TRACED, TASK_WAKEKILL, and TASK_INTERRUPTIBLE. sig_pid The PID of the process to wake
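A hedged one-line example of tracing these wakeups for a single process name (the sshd name is only a placeholder):
stap -e 'probe signal.wakeup { if (pid_name == "sshd") printf("waking %s (pid %d)\n", pid_name, sig_pid) }'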
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-signal-wakeup
|
Chapter 1. Eclipse Temurin 11 - End of full support
|
Chapter 1. Eclipse Temurin 11 - End of full support Important The 11.0.25 release is the last release of Eclipse Temurin 11 that Red Hat plans to fully support. The full support for Eclipse Temurin 11 ends on 31 October 2024. See the Eclipse Temurin Life Cycle and Support Policy for details. Red Hat will provide extended life cycle support (ELS) phase 1 support for Eclipse Temurin 11 until 31 October 2027. For more information about product life cycle phases and available support levels, see Life Cycle Phases .
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.25/end_of_support
|
Chapter 1. Collecting diagnostic information for Support
|
Chapter 1. Collecting diagnostic information for Support Use the Red Hat OpenStack Services on OpenShift (RHOSO) must-gather tool to collect diagnostic information about your Red Hat OpenShift Container Platform (RHOCP) cluster, including the RHOSO control plane and the deployed RHOSO services. Use the RHOCP sosreport tool to collect diagnostic information about your RHOSO data plane. 1.1. Collecting data from the control plane or data plane You can use the Red Hat OpenStack Services on OpenShift (RHOSO) must-gather tool to create a local directory to store logs, configurations, and status of the RHOSO control plane or data plane services. You can use the must-gather tool to collect diagnostic information about your RHOSO deployment: Service logs retrieved by the output of the pods and operators associated with the deployed services. The configuration of RHOSO services, such as the Red Hat OpenShift Container Platform (RHOCP) Secrets and ConfigMaps . The status of the RHOSO services, such as the services that are deployed in the RHOSO control plane. The RHOSO Custom Resource Definitions (CRDs). The RHOSO applied Custom Resources (CRs). The openstack and openstack-operators namespaces. RHOCP Events that are related to the RHOSO namespaces. CSVs, pkgmanifests , subscriptions , installplans , operatorgroup . Pods, Deployments, Statefulsets , ReplicaSets , Service, Routes, ConfigMaps , relevant Secrets. Network information, such as IPAddressPool , L2Advertisements , NetConfig , IPSet . SOS reports for RHOCP nodes that are running RHOSO service pods. Prerequisites You can access the cluster as a user with cluster-admin privileges. You can access the registry directly to invoke podman commands, to allow RHOCP to pull images from the registry and run them with the oc adm command: Procedure Navigate to the directory where you want to store the must-gather data. Pass one or more images or image streams to the must-gather tool to specify the data to collect. For example, the following command gathers both the default cluster data and the information that is specific to the deployed RHOSO control plane: 1 The default RHOCP must-gather image that is used to gather RHOCP cluster information. 2 The RHOSO must-gather image. This command creates a local directory that stores the logs, services configuration, and the status of the RHOSO control plane services. Next steps You can provide SOS reports to Red Hat Support to help diagnose and troubleshoot issues in your deployment. For information on how to use the SOS report tool, see Getting the most from your Support experience . 1.2. Customizing the gathered data You can use environment variables to configure Red Hat OpenStack Services on OpenShift (RHOSO) must-gather collectors. For example, you can pass an empty SOS_SERVICES environment variable to disable SOS gathering. Procedure To provide environment variables, invoke the gathering command manually: The following is a list of available environment variables: OSP_NS: Namespace where the RHOSO services are running. The default value is openstack . OSP_OPERATORS_NS: Namespace where the RHOSO operators are running. The default value is openstack-operators . CONCURRENCY: Must-gather can run many operations, so for efficiency the operations run in parallel with a concurrency of 5 by default. SOS_SERVICES: Comma-separated list of services to gather the SOS reports from. You can set a value of an empty string to skip sos report gathering for that service. SOS_ONLY_PLUGINS: List of SOS report plugins to use.
You can set a value of an empty string to run all of the reports. The default value is: block,cifs,crio,devicemapper,devices,iscsi,lvm2, memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev . SOS_EDPM: Comma-separated list of edpm nodes to gather SOS reports from. You can set a value of an empty string to skip sos report gathering, or use the keyword all to gather from all the nodes, such as: edpm-compute-0 , edpm-compute-1 . SOS_EDPM_PROFILES: List of sos report profiles to use. You can set a value of an empty string to run all of the reports. The default value is: container,openstack_edpm,system,storage,virt . SOS_EDPM_PLUGINS: Optional list of sos report plugins to use. OPENSTACK_DATABASES: Comma-separated list of RHOSO databases that you want to dump. You can set the value to the keyword all to dump all databases. The default value is an empty string and the database dump is skipped. ADDITIONAL_NAMESPACES: Comma-separated list of additional namespaces where you want to gather the associated resources. COMPRESSED_PATH: Defines the path to store the compressed form of the gathered data. DELETE_AFTER_COMPRESSION: The default value is 0 . If you set the value to 1 , the uncompressed data is deleted after the archive is created. 1.3. Inspect the gathered data You can use the Red Hat OpenStack Services on OpenShift (RHOSO) must-gather tool to get the Kubernetes resources defined in the collection-scripts , and the sos-reports associated with both the CoreOS nodes and the EDPM ones. When the must-gather execution ends, it creates a directory containing all the gathered resources, such as: Global resources: You can use these to get some context about the status of the Red Hat OpenShift Container Platform (RHOCP) cluster and the RHOSO deployed resources. These resources include crds , apiservices , csvs , packagemanifests , webhooks , and network information such as nncp , nnce , IPAddressPool , and more. Namespaced resources: You need these to get the status of the RHOSO cluster and to troubleshoot any problems. sos-reports: You gather these from both the CoreOS nodes that are part of the RHOCP cluster, and the EDPM nodes that are part of the cluster. The information to connect to the EDPM nodes is retrieved from the OpenStackDataplaneNodeSets CR, and the resulting sos-report is retrieved from the remote nodes and then downloaded into the current must-gather directory. OpenStack Ctlplane Services: You can run commands through the openstack-cli to check the relevant resources generated within the RHOSO cluster, such as endpoint list, networks, subnets, registered services, and so on. The following is an example of must-gather output: When you are troubleshooting, it is critical to check and analyze not only Secrets and services config files, but also the CRs associated with each service and the Pod logs. These are namespaced resources that you can find in the CRs and Pods directories. Additionally, some generic information for each namespace is collected. You can use the must-gather tool to retrieve: Events recorded for the current namespace. Network Attachment Definitions. PVCs attached to the deployed Pods. all_resources.log , which is an outline of the namespaces in terms of deployed resources. As depicted in the schema, the same pattern applies to the Pod resources. You can use the must-gather tool to retrieve the description and the associated logs for each Pod, including the case where the Pod is in a CrashLoopBackOff status.
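Putting the pieces of this chapter together, a single invocation might look like the following sketch. The image reference matches the one shown earlier in this chapter; the node selection, database dump, and archive path are illustrative assumptions rather than recommended values:

    # Gather control plane data plus SOS reports from all EDPM nodes and all
    # RHOSO databases, then compress the result and remove the raw directory.
    oc adm must-gather \
      --image=registry.redhat.io/rhoso-operators/openstack-must-gather-rhel9:1.0 \
      -- SOS_EDPM=all OPENSTACK_DATABASES=all \
         COMPRESSED_PATH=/tmp/rhoso-must-gather.tar.xz \
         DELETE_AFTER_COMPRESSION=1 gather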
|
[
"podman login registry.redhat.io",
"oc adm must-gather --image-stream=openshift/must-gather \\ 1 --image=registry.redhat.io/rhoso-operators/openstack-must-gather-rhel9:1.0 2",
"oc adm must-gather --image=quay.io/openstack-k8s-operators/openstack-must-gather -- SOS_SERVICES= gather",
"+-----------------------------------+ | . | +-----------------------------+ | ├── apiservices | | ctlplane/neutron/ | | ├── crd | | ├── agent_list | | ├── csv | (control plane resources) | ├── extension_list | | ├── ctlplane |------------------------------------| ├── floating_ip_list | | │ ├── neutron | | ├── network_list | | │ ├── nova |----------------- | ├── port_list | | │ └── placement | | | ├── router_list | | ├── dbs | +---------------------------+ | ├── security_group_list | | ├── namespaces | | namespaces/openstack/ | | └── subnet_list | | │ ├── cert-manager | | ├── all_resources.log | +-----------------------------+ | │ ├── openshift-machine-api | | ├── buildconfig |----------------------------------- | │ ├── openshift-nmstate | | ├── configmaps | | | │ ├── openstack | | ├── cronjobs | +--------------------------------------------------------------------+ | │ └── openstack-operators | | ├── crs | | namespaces/openstack/secrets/glance/ | | ├── network | | ├── daemonset | | ├── cert-glance-default-public-route.yaml | | │ ├── ipaddresspools | | ├── deployments | | ├── glance-config-data.yaml | | │ ├── nnce | | ├── events.log | | ├── glance-config-data.yaml-00-config.conf | | │ └── nncp | | ├── installplans | | ├── glance-default-single-config-data.yaml | | ├── nodes | | ├── jobs | | ├── glance-default-single-config-data.yaml-00-config.conf | | ├── sos-reports | | ├── nad.log | | ├── glance-default-single-config-data.yaml-10-glance-httpd.conf | | │ ├── _all_nodes | | ├── pods | | ├── glance-default-single-config-data.yaml-httpd.conf | | │ ├── barbican | | ├── pvc.log | | ├── glance-default-single-config-data.yaml-ssl.conf | | │ ├── ceilometer | | ├── replicaset | | └── glance-scripts.yaml | | │ ├── glance | | ├── routes | +--------------------------------------------------------------------+ | │ ├── keystone | | ├── secrets | | | │ ├── neutron | | ├── services | +--------------------------------------------------------------------+ | │ ├── nova | | ├── statefulsets | | Note: if DO_NOT_MASK is passed in CI, secrets are dumped without | | │ ├── ovn | | └── subscriptions | | hiding any sensitive information. | | │ ├── ovs | +---------------------------+ +--------------------------------------------------------------------+ | │ ├── placement | | │ └── swift | | └── webhooks | | ├── mutating | | └── validating | +-----------------------------------+",
"+---------------------------+ | namespaces/openstack/ | ------------------------------------ | ├── buildconfig | | | ├── cronjobs | +--------------------------------------------------------+ | ├── crs | | namespaces/openstack/crs/ | | ├── daemonset | | ├── barbicanapis.barbican.openstack.org | | ├── deployments | | ├── barbicankeystonelisteners.barbican.openstack.org | | ├── events.log | | ├── barbicans.barbican.openstack.org | | ├── installplans | | ├── barbicanworkers.barbican.openstack.org | | ├── jobs | | ... | | ├── nad.log | | ... | | ├── pods | | ├── glanceapis.glance.openstack.org | | ├── all_resources.log | | └── glance-default-single.yaml | | ├── configmaps | | ├── glances.glance.openstack.org | | ├── pvc.log | | └── glance.yaml | | ├── replicaset | | ├── keystoneapis.keystone.openstack.org | | ├── routes | | ├── keystoneendpoints.keystone.openstack.org | | ├── secrets | | ├── keystoneservices.keystone.openstack.org | | ├── services | | ... | | ├── statefulsets | | ├── telemetries.telemetry.openstack.org | | └── subscriptions | | └── transporturls.rabbitmq.openstack.org | +---------------------------+ +--------------------------------------------------------+",
"+---------------------------+ | namespaces/openstack/ | ------------------------------------ | ├── buildconfig | | | ├── cronjobs | +-----------------------------------------------------------+ | ├── crs | | namespaces/openstack/pods/glance-dbpurge-28500481-f4jk9 | | ├── daemonset | | ├── glance-dbpurge-28500481-f4jk9-describe | | ├── deployments | | └── logs | | ├── events.log | | └── glance-dbpurge.log | | ├── installplans | | namespaces/openstack/pods/glance-default-single-0 | | ├── jobs | | ├── glance-default-single-0-describe | | ├── nad.log | | └── logs | | ├── pods | | ├── glance-api.log | | ├── all_resources.log | | ├── glance-httpd.log | | ├── configmaps | | └── glance-log.log | | ├── pvc.log | | namespaces/openstack/pods/glance-default-single-1 | | ├── replicaset | | ├── glance-default-single-1-describe | | ├── routes | | └── logs | | ├── secrets | | ├── glance-api.log | | ├── services | | ├── glance-httpd.log | | ├── statefulsets | | └── glance-log.log | | └── subscriptions | | namespaces/openstack/pods/glance-default-single-2 | +---------------------------+ | ├── glance-default-single-2-describe | | └── logs | | ├── glance-api.log | | ├── glance-httpd.log | | └── glance-log.log | +-----------------------------------------------------------+"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/validating_and_troubleshooting_the_deployed_cloud/assembly_collecting-diagnostic-information-for-support
|
Using Red Hat Software Collections Container Images
|
Using Red Hat Software Collections Container Images Red Hat Software Collections 3 Basic Usage Instructions for Red Hat Software Collections 3.8 Container images Lenka Spackova [email protected] Olga Tikhomirova Robert Kratky Vladimir Slavik Red Hat Software Collections Documentation Team [email protected]
| null |
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/using_red_hat_software_collections_container_images/index
|
Chapter 7. Updated Packages
|
Chapter 7. Updated Packages 7.1. 389-ds-base 7.1.1. RHBA-2015:1326 - 389-ds-base bug fix and enhancement update Updated 389-ds-base packages that fix multiple bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The 389 Directory Server is an LDAPv3 compliant server. The base packages include the LDAP server and command-line utilities for server administration. Bug Fixes BZ# 1193243 When a suffix-mapping tree entry was created without the corresponding back-end database, the server failed to start. This bug has been fixed. BZ# 1145072 If a value of a password policy attribute was deleted, it caused a null reference and an unexpected termination of the server. These crashes no longer occur. BZ# 1080185 , BZ# 1138745 This update fixes a memory leak caused by a patch for BZ#1080185. BZ# 1048987 If a Virtual List View search fails with the timelimit or adminlimit parameters exceeded, the allocated memory of the IDL no longer leaks. BZ# 1162704 If a search for "passwordAdminDN" in a "cn=config" entry returns a non-existing value, a memory leak no longer occurs. BZ# 1169975 Rebuilding the Class of Service (CoS) cache no longer causes a memory leak. BZ# 1115960 A bug in the nested CoS, when the closest above password policy was sometimes not selected as expected, has been fixed. BZ# 1169974 When a SASL bind operation fails and Account Lockout is enabled, the Root DSE entry no longer gets incorrectly updated with passwordRetryCount. BZ# 1145379 Password restrictions and syntax checks for Directory Manager and password administrators are now properly applied so that these roles are not affected by them. BZ# 1175868 , BZ# 1166313 Performance degradation with searches in large groups has been fixed by introducing normalized DN cache. BZ# 1153739 Due to a known vulnerability in SSLv3, this protocol is now disabled by default. BZ# 1207024 This update adds the flow control so that unbalanced process speed between a supplier and a consumer does not cause replication to become unresponsive. BZ# 1171308 A bug to replicate an "add: userPassword" operation has been fixed. BZ# 1145374 , BZ# 1183820 A bug in the Windows Sync plug-in code caused AD-only member values to be accidentally removed. Now, local and remote entries are handled properly, preventing data loss. BZ# 1144092 Performing a schema reload sometimes caused a running search to fail to return results. Now, the old schema is not removed until the reload is complete. The search results are no longer corrupted. BZ# 1203338 The Berkeley DB library terminated unexpectedly when the Directory Server simultaneously opened an index file and performed a search on the "cn=monitor" subtree. The two operations are now mutually exclusive, which prevents the crash. BZ# 1223068 , BZ# 1228402 When simple paged results requests were sent to the Directory Server asynchronously and then abandoned immediately, the search results could leak. Also, the implementation of simple paged results was not thread-safe. This update fixes the leak and modifies the code to be thread-safe. Enhancements BZ# 1167976 A new memberOf plug-in configuration attribute memberOfSkipNested has been added. This attribute allows you to skip the nested group check, which improves performance of delete operations. BZ# 1118285 The Directory Server now supports TLS versions supported by the NSS library. BZ# 1193241 The logconv.pl utility has been updated to include information about the SSL/TLS versions in the access log. 
Users of 389-ds-base are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. After installing this update, the 389 server service will be restarted automatically.
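The memberOfSkipNested enhancement noted above is a plug-in configuration attribute; a rough sketch of enabling it with ldapmodify follows. The plug-in DN and the on value are assumptions based on the usual MemberOf plug-in layout, so verify them against your own cn=config before applying:

    # Enable the nested-group skip on the MemberOf plug-in (restart may be required).
    ldapmodify -x -D "cn=Directory Manager" -W <<EOF
    dn: cn=MemberOf Plugin,cn=plugins,cn=config
    changetype: modify
    replace: memberOfSkipNested
    memberOfSkipNested: on
    EOF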
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/ch07
|
Chapter 6. View OpenShift Data Foundation Topology
|
Chapter 6. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements compose the Storage cluster altogether. Procedure On the OpenShift Web Console, navigate to Storage Data Foundation Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indication for alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close and return to the previous view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pod information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window.
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_ibm_z/viewing-odf-topology_mcg-verify
|
Chapter 3. Red Hat High Availability Add-On Resources
|
Chapter 3. Red Hat High Availability Add-On Resources This chapter provides a summary of Red Hat High Availability resources and their operation. 3.1. Red Hat High Availability Add-On Resource Overview A cluster resource is an instance of a program, data, or application to be managed by the cluster service. These resources are abstracted by agents that provide a standard interface for managing the resource in a cluster environment. This standardization is based on industry-approved frameworks and classes, which makes managing the availability of various cluster resources transparent to the cluster service itself.
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_overview/ch-resources-haao
|
7.69. gnome-packagekit
|
7.69. gnome-packagekit 7.69.1. RHBA-2013:0280 - gnome-packagekit bug fix update An updated gnome-packagekit package that fixes four bugs is now available. gnome-packagekit provides session applications for the PackageKit API. Bug Fixes BZ# 744980 If a package adds or removes a .repo file while updates are being installed, PackageKit (packagekitd) sends a RepoListChanged() message. If Software Update (/usr/bin/gpk-update-viewer) was being used to install these updates, it responded to the message by attempting to refresh the available updates list. This resulted in said list going blank. As of this update, gpk-update-viewer ignores such signals from packagekitd, leaving the available updates list visible and unchanged. BZ# 744906 When a 64-bit Red Hat Enterprise Linux instance had both 32-bit and 64-bit versions of a package installed, and an update for both packages was available and presented in the Software Update (/usr/bin/gpk-update-viewer) window, the summary and package name appeared for both architectures. Package size and the errata note, however, were only presented for the 32-bit version. For the 64-bit version, the size column remained blank. And, when the 64-bit version was selected in the Software list, the display pane below presented a 'Loading...' message rather than the errata note. With this update, gpk-update-viewer seeks out the exact package ID before falling back to the package name, ensuring both package versions are found and their associated metadata is displayed when more than one package architecture is installed. BZ# 694793 When an application is installed using the Add/Remove Software interface (/usr/bin/gpk-application), a dialogue box appears immediately post-install offering a Run button. Clicking this button launches the newly-installed program. Previously, under some circumstances, an improperly assigned pointer value meant clicking this Run button caused gpk-application to crash (segfault). With this update, the pointer is correctly assigned and gpk-application no longer crashes when launching a newly-installed application. BZ#669798 Previously, it was possible for an ordinary user to shut down their system or log out of a session while the PackageKit update tool was running. Depending on the transaction PackageKit was engaged in when the shutdown or logout was initiated, this could damage the RPM database and, consequently, damage the system. With this update, when an ordinary user attempts to shut down or log out while PackageKit is running an update, PackageKit inhibits the process and presents the user with an alert: Note: this update does not prevent a root user (or other user with equivalent administrative privileges) from shutting the system down or logging an ordinary user out of their session. All PackageKit users should install this update, which resolves these issues.
|
[
"A transaction that cannot be interrupted is running."
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/gnome-packagekit
|
Chapter 32. ListeningEndpointsService
|
Chapter 32. ListeningEndpointsService 32.1. GetListeningEndpoints GET /v1/listening_endpoints/deployment/{deploymentId} GetListeningEndpoints returns the listening endpoints and the processes that opened them for a given deployment 32.1.1. Description 32.1.2. Parameters 32.1.2.1. Path Parameters Name Description Required Default Pattern deploymentId X null 32.1.3. Return Type V1GetProcessesListeningOnPortsResponse 32.1.4. Content Type application/json 32.1.5. Responses Table 32.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetProcessesListeningOnPortsResponse 0 An unexpected error response. RuntimeError 32.1.6. Samples 32.1.7. Common object reference 32.1.7.1. ProcessListeningOnPortEndpoint Field Name Required Nullable Type Description Format port Long int64 protocol StorageL4Protocol L4_PROTOCOL_UNKNOWN, L4_PROTOCOL_TCP, L4_PROTOCOL_UDP, L4_PROTOCOL_ICMP, L4_PROTOCOL_RAW, L4_PROTOCOL_SCTP, L4_PROTOCOL_ANY, 32.1.7.2. ProcessSignalLineageInfo Field Name Required Nullable Type Description Format parentUid Long int64 parentExecFilePath String 32.1.7.3. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 32.1.7.3.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. 
As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 32.1.7.4. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 32.1.7.5. StorageL4Protocol Enum Values L4_PROTOCOL_UNKNOWN L4_PROTOCOL_TCP L4_PROTOCOL_UDP L4_PROTOCOL_ICMP L4_PROTOCOL_RAW L4_PROTOCOL_SCTP L4_PROTOCOL_ANY 32.1.7.6. StorageProcessListeningOnPort Field Name Required Nullable Type Description Format endpoint ProcessListeningOnPortEndpoint deploymentId String containerName String podId String podUid String signal StorageProcessSignal clusterId String namespace String containerStartTime Date date-time imageId String 32.1.7.7. StorageProcessSignal Field Name Required Nullable Type Description Format id String A unique UUID for identifying the message We have this here instead of at the top level because we want to have each message to be self contained. containerId String time Date date-time name String args String execFilePath String pid Long int64 uid Long int64 gid Long int64 lineage List of string scraped Boolean lineageInfo List of ProcessSignalLineageInfo 32.1.7.8. V1GetProcessesListeningOnPortsResponse Field Name Required Nullable Type Description Format listeningEndpoints List of StorageProcessListeningOnPort
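A hypothetical call to this endpoint is sketched below; the Central address, token variable, and deployment ID are placeholders, and it assumes API-token (Bearer) authentication is configured and that jq is available for readability:

    export ROX_ENDPOINT="central.example.com:443"
    export ROX_API_TOKEN="<api-token>"
    # Request the listening endpoints for one deployment.
    curl -sk -H "Authorization: Bearer ${ROX_API_TOKEN}" \
      "https://${ROX_ENDPOINT}/v1/listening_endpoints/deployment/<deploymentId>" | jq .

The response body corresponds to V1GetProcessesListeningOnPortsResponse, so the listeningEndpoints array described above is what jq renders.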
|
[
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"The API returns an array of these"
] |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/api_reference/listeningendpointsservice
|
Appendix B. Administration settings
|
Appendix B. Administration settings This section contains information about settings that you can edit in the Satellite web UI by navigating to Administer > Settings . B.1. General settings Setting Default Value Description Administrator email address The default administrator email address Satellite URL URL where your Satellite instance is reachable. See also Provisioning > Unattended URL . Entries per page 20 Number of records shown per page in Satellite Fix DB cache No Satellite maintains a cache of permissions and roles. When set to Yes , Satellite recreates this cache on the restart. DB pending seed No Should the foreman-rake db:seed be executed on the run of the installer modules? Capsule request timeout 60 Open and read timeout for HTTP requests from Satellite to Capsule (in seconds). Login page footer text Text to be shown in the login-page footer. HTTP(S) proxy Set a proxy for outgoing HTTP(S) connections from the Satellite product. System-wide proxies must be configured at the operating system level. HTTP(S) proxy except hosts [] Set hostnames to which requests are not to be proxied. Requests to the local host are excluded by default. Show Experimental Labs No Whether or not to show a menu to access experimental lab features (requires reload of page). Display FQDN for hosts Yes If set to Yes , Satellite displays names of hosts as fully qualified domain names (FQDNs). Out of sync interval 30 Hosts report periodically, and if the time between reports exceeds this duration in minutes, hosts are considered out of sync. You can override this on your hosts by adding the outofsync_interval parameter, per host, at Hosts > All hosts > USDhost > Edit > Parameters > Add Parameter . Satellite UUID Satellite instance ID. Uniquely identifies a Satellite instance. Default language The UI for new users uses this language. Default timezone The timezone to use for new users. Instance title The instance title is shown on the top navigation bar (requires a page reload). Saved audits interval Duration in days to preserve audit data. Leave empty to disable the audits cleanup. New host details UI Yes Satellite loads the new UI for host details. B.2. Satellite task settings Setting Default Value Description Sync task timeout 120 Number of seconds to wait for a synchronous task to finish before an exception is raised. Enable dynflow console Yes Enable the dynflow console ( /foreman_tasks/dynflow ) for debugging. Require auth for dynflow console Yes The user must be authenticated as having administrative rights before accessing the dynflow console. Capsule action retry count 4 Number of attempts permitted to start a task on the Capsule before failing. Capsule action retry interval 15 Time in seconds between retries. Allow Capsule batch tasks Yes Enable batch triggering of tasks on the Capsule. Capsule tasks batch size 100 Number of tasks included in one request to the Capsule if foreman_tasks_proxy_batch_trigger is enabled. Tasks troubleshooting URL URL pointing to the task troubleshooting documentation. It should contain a %{label} placeholder that is replaced with a normalized task label (restricted to only alphanumeric characters)). A %{version} placeholder is also available. Polling intervals multiplier 1 Polling multiplier used to multiply the default polling intervals. You can use this to prevent polling too frequently for long running tasks. B.3. Template sync settings Setting Default Value Description Associate New Associate templates with operating system, organization and location. 
Branch Default branch in Git repo. Commit message Templates export made by a Satellite user Custom commit message for exported templates. Dirname / The directory within the Git repo containing the templates. Filter Import or export of names matching this regex. Case-insensitive. Snippets are not filtered. Force import No If set to Yes , locked templates are overwritten during an import. Lock templates Keep, do not lock new How to handle lock for imported templates. Metadata export mode Refresh Default metadata export mode. Possible options: refresh re-renders metadata. keep keeps existing metadata. remove exports the template without metadata. Negate No Negate the filter for import or export. Prefix A string added as a prefix to imported templates. Repo Target path from where to import or export templates. Different protocols can be used, for example: /tmp/dir git://example.com https://example.com ssh://example.com When exporting to /tmp , note that production deployments may be configured to use private tmp . Verbosity No Choose verbosity for Rake task importing templates. B.4. Discovery settings Setting Default Value Description Discovery location Indicates the default location to place discovered hosts in. Discovery organization Indicates the default organization to which discovered hosts are added. Interface fact discovery_bootif Fact name to use for primary interface detection. Create bond interfaces No Automatically create a bond interface if another interface is detected on the same VLAN using LLDP. Clean all facts No Clean all reported facts (except discovery facts) during provisioning. Hostname facts discovery_bootif List of facts to use for the hostname (comma separated, first wins). Auto provisioning No Use the provisioning rules to automatically provision newly discovered hosts. Reboot Yes Automatically reboot or kexec discovered hosts during provisioning. Hostname prefix mac The default prefix to use for the hostname. Must start with a letter. Fact columns Extra facter columns to show in host lists (comma separated). Highlighted facts Regex to organize facts for highlights section - e.g. ^(abc|cde)USD . Storage facts Regex to organize facts for the storage section. Software facts Regex to organize facts for the software section. Hardware facts Regex to organize facts for the hardware section. Network facts Regex to organize facts for the network section. IPMI facts Regex to organize facts for the Intelligent Platform Management Interface (IPMI) section. Lock PXE No Automatically generate a Preboot Execution Environment (PXE) configuration to pin a newly discovered host to discovery. Locked PXELinux template name pxelinux_discovery PXELinux template to be used when pinning a host to discovery. Locked PXEGrub template name pxegrub_discovery PXEGrub template to be used when pinning a host to discovery. Locked PXEGrub2 template name pxegrub2_discovery PXEGrub2 template to be used when pinning a host to discovery. Force DNS Yes Force the creation of DNS entries when provisioning a discovered host. Error on existing NIC No Do not permit to discover an existing host matching the MAC of a provisioning Network Interface Card (NIC) (errors out early). Type of name generator Fact + prefix Discovery hostname naming pattern. Prefer IPv6 No Prefer IPv6 to IPv4 when calling discovered nodes. B.5. Boot disk settings Setting Default Value Description iPXE directory /usr/share/ipxe Path to directory containing iPXE images. 
ISOLINUX directory /usr/share/syslinux Path to directory containing ISOLINUX images. SYSLINUX directory /usr/share/syslinux Path to directory containing SYSLINUX images. Grub2 directory /var/lib/tftpboot/grub2 Path to directory containing grubx64.efi and shimx64.efi . Host image template Boot disk iPXE - host iPXE template to use for host-specific boot disks. Generic image template Boot disk iPXE - generic host iPXE template to use for generic host boot disks. Generic Grub2 EFI image template Boot disk Grub2 EFI - generic host Grub2 template to use for generic Extensible Firmware Interface (EFI) host boot disks. ISO generation command genisoimage Command to generate ISO image, use genisoimage or mkisofs . Installation media caching Yes Installation media files are cached for full host images. Allowed bootdisk types [generic, host, full_host, subnet] List of permitted bootdisk types. Leave blank to disable it. B.6. Red Hat Cloud settings Setting Default Value Description Automatic inventory upload Yes Enable automatic upload of your host inventory to the Red Hat cloud. Synchronize recommendations Automatically No Enable automatic synchronization of Insights recommendations from the Red Hat cloud. Obfuscate host names No Obfuscate hostnames sent to the Red Hat cloud. Obfuscate host ipv4 addresses No Obfuscate IPv4 addresses sent to the Red Hat cloud. ID of the RHC daemon ***** RHC daemon id. B.7. Content settings Setting Default Value Description Default HTTP Proxy Default HTTP Proxy for syncing content. Default synced OS provisioning template Kickstart default Default provisioning template for operating systems created from synced content. Default synced OS finish template Kickstart default finish Default finish template for new operating systems created from synced content. Default synced OS user-data Kickstart default user data Default user data for new operating systems created from synced content. Default synced OS PXELinux template Kickstart default PXELinux Default PXELinux template for new operating systems created from synced content. Default synced OS PXEGrub template Kickstart default PXEGrub Default PXEGrub template for new operating systems created from synced content. Default synced OS PXEGrub2 template Kickstart default PXEGrub2 Default PXEGrub2 template for new operating systems created from synced content. Default synced OS iPXE template Kickstart default iPXE Default iPXE template for new operating systems created from synced content. Default synced OS partition table Kickstart default Default partitioning table for new operating systems created from synced content. Default synced OS kexec template Discovery Red Hat kexec Default kexec template for new operating systems created from synced content. Default synced OS Atomic template Atomic Kickstart default Default provisioning template for new atomic operating systems created from synced content. Manifest refresh timeout 1200 Timeout when refreshing a manifest (in seconds). Subscription connection enabled Yes Can communicate with the Red Hat Portal for subscriptions. Installable errata from Content View No Calculate errata host status based only on errata in a host's content view and lifecycle environment. Restrict Composite Content View promotion No If this is enabled, a composite content view cannot be published or promoted, unless the content view versions that it includes exist in the target environment. 
Check services before actions Yes Check the status of backend services such as pulp and candlepin before performing actions? Batch size to sync repositories in 100 How many repositories should be synced concurrently on a Capsule. A smaller number may lead to longer sync times. A larger number will increase dynflow load. Sync Capsules after Content View promotion Yes Whether or not to auto sync Capsules after a content view promotion. Default Custom Repository download policy immediate Default download policy for custom repositories. Either immediate or on_demand . Default Red Hat Repository download policy on_demand Default download policy for enabled Red Hat repositories. Either immediate or on_demand . Default Capsule download policy on_demand Default download policy for Capsule syncs. Either inherit , immediate , or on_demand . Pulp export destination filepath /var/lib/pulp/katello-export On-disk location for exported repositories. Pulp 3 export destination filepath /var/lib/pulp/exports On-disk location for Pulp 3 exported repositories. Pulp client key /etc/pki/katello/private/pulp-client.key Path for SSL key used for Pulp server authentication. Pulp client cert /etc/pki/katello/certs/pulp-client.crt Path for SSL certificate used for Pulp server authentication. Sync Connection Timeout 300 Total timeout in seconds for connections when syncing. Delete Host upon unregister No When unregistering a host using subscription-manager, also delete the host record. Managed resources linked to the host such as virtual machines and DNS records might also be deleted. Subscription manager name registration fact When registering a host using subscription-manager, force use the specified fact for the host name (in the form of fact.fact ). Subscription manager name registration fact strict matching No If this is enabled, and register_hostname_fact is set and provided, registration looks for a new host by name only using that fact, and skips all hostname matching. Default Location subscribed hosts Default Location Default location where new subscribed hosts are stored after registration. Expire soon days 120 The number of days remaining in a subscription before you are reminded about renewing it. Content View Dependency Solving Default No The default dependency solving value for new content views. Host Duplicate DMI UUIDs [] If hosts fail to register because of duplicate Desktop Management Interface (DMI) UUIDs, add their comma-separated values here. Subsequent registrations generate a unique DMI UUID for the affected hosts. Host Profile Assume Yes Enable new host registrations to assume registered profiles with matching hostname if the registering DMI UUID is not used by another host. Host Profile Can Change In Build No Enable host registrations to bypass Host Profile Assume if the host is in build mode. Host Can Re-Register Only In Build No Enable hosts to re-register only when they are in build mode. Host Tasks Workers Pool Size 5 Number of workers in the pool to handle the execution of host-related tasks. When set to 0, the default queue is used. Restart of the dynflowd/foreman-tasks service is required. Applicability Batch Size 50 Number of host applicability calculations to process per task. Autosearch Yes For pages that support it, automatically perform the search while typing in search input. Autosearch delay 500 If Autosearch is enabled, delay in milliseconds before executing searches while typing. Pulp bulk load size 2000 The number of items fetched from a single paged Pulp API call. 
Upload profiles without Dynflow Yes Enable Katello to update host installed packages, enabled repositories, and module inventory directly instead of wrapped in Dynflow tasks (try turning off if Puma processes are using too much memory). Orphaned Content Protection Time 1440 Time in minutes to consider orphan content as orphaned. Prefer registered through Capsule for remote execution No Prefer using a proxy to which a host is registered when using remote execution. Allow deleting repositories in published content views Yes Enable removal of repositories that the user has previously published in one or more content view versions. B.8. Authentication settings Setting Default Value Description OAuth active Yes Satellite will use OAuth for API authorization. OAuth consumer key ***** OAuth consumer key. OAuth consumer secret ***** OAuth consumer secret. OAuth map users No Satellite maps users by username in the request-header. If this is disabled, OAuth requests have administrator rights. Failed login attempts limit 30 Satellite blocks user logins from an incoming IP address for 5 minutes after the specified number of failed login attempts. Set to 0 to disable brute force protection. Restrict registered Capsules Yes Only known Capsules can access features that use Capsule authentication. Require SSL for capsules Yes Client SSL certificates are used to identify Capsules ( :require_ssl should also be enabled). Trusted hosts [] List of hostnames, IPv4, IPv6 addresses or subnets to be trusted in addition to Capsules for access to fact/report importers and ENC output. SSL certificate /etc/foreman/client_cert.pem SSL Certificate path that Satellite uses to communicate with its proxies. SSL CA file /etc/foreman/proxy_ca.pem SSL CA file path that Satellite uses to communicate with its proxies. SSL private key /etc/foreman/client_key.pem SSL Private Key path that Satellite uses to communicate with its proxies. SSL client DN env HTTP_SSL_CLIENT_S_DN Environment variable containing the subject DN from a client SSL certificate. SSL client verify env HTTP_SSL_CLIENT_VERIFY Environment variable containing the verification status of a client SSL certificate. SSL client cert env HTTP_SSL_CLIENT_CERT Environment variable containing a client's SSL certificate. Server CA file SSL CA file path used in templates to verify the connection to Satellite. Websockets SSL key etc/pki/katello/private/katello-apache.key Private key file path that Satellite uses to encrypt websockets. Websockets SSL certificate /etc/pki/katello/certs/katello-apache.crt Certificate path that Satellite uses to encrypt websockets. Websockets encryption Yes VNC/SPICE websocket proxy console access encryption ( websockets_ssl_key/cert setting required). Login delegation logout URL Redirect your users to this URL on logout. Enable Authorize login delegation also. Authorize login delegation auth source user autocreate External Name of the external authentication source where unknown externally authenticated users (see Authorize login delegation ) are created. Empty means no autocreation. Authorize login delegation No Authorize login delegation with REMOTE_USER HTTP header. Authorize login delegation API No Authorize login delegation with REMOTE_USER HTTP header for API calls too. Idle timeout 60 Log out idle users after the specified number of minutes. BCrypt password cost 9 Cost value of bcrypt password hash function for internal auth-sources (4 - 30). 
A higher value is safer but verification is slower, particularly for stateless API calls and UI logins. A password change is needed to affect existing passwords. BMC credentials access Yes Permits access to BMC interface passwords through ENC YAML output and in templates. OIDC JWKs URL OpenID Connect JSON Web Key Set (JWKS) URL. Typically https://keycloak.example.com/auth/realms/<realm name>/protocol/openid-connect/certs when using Keycloak as an OpenID provider. OIDC Audience [] Name of the OpenID Connect Audience that is being used for authentication. In the case of Keycloak this is the Client ID. OIDC Issuer The issuer claim identifies the principal that issued the JSON Web tokens (JWT), which exists at a /.well-known/openid-configuration in case of most of the OpenID providers. OIDC Algorithm The algorithm used to encode the JWT in the OpenID provider. B.9. Email settings Setting Default Value Description Email reply address Email reply address for emails that Satellite is sending. Email subject prefix Prefix to add to all outgoing email. Send welcome email No Send a welcome email including username and URL to new users. Delivery method Sendmail Method used to deliver email. SMTP enable StartTLS auto Yes SMTP automatically enables StartTLS. SMTP OpenSSL verify mode Default verification mode When using TLS, you can set how OpenSSL checks the certificate. SMTP address SMTP address to connect to. SMTP port 25 SMTP port to connect to. SMTP HELO/EHLO domain HELO/EHLO domain. SMTP username Username to use to authenticate, if required. SMTP password ***** Password to use to authenticate, if required. SMTP authentication none Specify authentication type, if required. Sendmail arguments -i Specify additional options to sendmail. Only used when the delivery method is set to sendmail. Sendmail location /usr/sbin/sendmail The location of the sendmail executable. Only used when the delivery method is set to sendmail. B.10. Notifications settings Setting Default Value Description RSS enable Yes Pull RSS notifications. RSS URL https://www.redhat.com/en/rss/blog/channel/red-hat-satellite URL from which to fetch RSS notifications. B.11. Provisioning settings Setting Default Value Description Host owner Default owner on provisioned hosts, if empty Satellite uses the current user. Root password ***** Default encrypted root password on provisioned hosts. Unattended URL URL that hosts retrieve templates from during the build. When it starts with https, unattended, or userdata, controllers cannot be accessed using HTTP. Safemode rendering Yes Enables safe mode rendering of provisioning templates. The default and recommended option Yes denies access to variables and any object that is not listed in Satellite. When set to No , any object may be accessed by a user with permission to use templating features, either by editing templates, parameters or smart variables. This permits users full remote code execution on Satellite Server, effectively disabling all authorization. This is not a safe option, especially in larger companies. Access unattended without build No Enable access to unattended URLs without build mode being used. Query local nameservers No Satellite queries the locally configured resolver instead of the SOA/NS authorities. Installation token lifetime 360 Time in minutes that installation tokens should be valid for. Set to 0 to disable the token. SSH timeout 120 Time in seconds before SSH provisioning times out. 
Libvirt default console address 0.0.0.0 The IP address that should be used for the console listen address when provisioning new virtual machines using libvirt. Update IP from built request No Satellite updates the host IP with the IP that made the build request. Use short name for VMs No Satellite uses the short hostname instead of the FQDN for creating new virtual machines. DNS timeout [5, 10, 15, 20] List of timeouts (in seconds) for DNS lookup attempts such as the dns_lookup macro and DNS record conflict validation. Clean up failed deployment Yes Satellite deletes the virtual machine if the provisioning script ends with a non-zero exit code. Type of name generator Random-based Specifies the method used to generate a hostname when creating a new host. The default Random-based option generates a unique random hostname which you can but do not have to use. This is useful for users who create many hosts and do not know how to name them. The MAC-based option is for bare-metal hosts only. If you delete a host and create it later on, it receives the same hostname based on the MAC address. This can be useful for users who recycle servers and want them to always get the same hostname. The Off option disables the name generator function and leaves the hostname field blank. Default PXE global template entry Default PXE menu item in a global template - local , discovery or custom, use blank for template default. Default PXE local template entry Default PXE menu item in local template - local , local_chain_hd0 , or custom, use blank for template default. iPXE intermediate script iPXE intermediate script Intermediate iPXE script for unattended installations. Destroy associated VM on host delete No Destroy associated VM on host delete. When enabled, VMs linked to hosts are deleted on Compute Resource. When disabled, VMs are unlinked when the host is deleted, meaning they remain on Compute Resource and can be re-associated or imported back to Satellite again. This does not automatically power off the VM Maximum structured facts 100 Maximum number of keys in structured subtree, statistics stored in satellite::dropped_subtree_facts . Default Global registration template Global Registration Global Registration template. Default 'Host initial configuration' template Linux host_init_config default Default 'Host initial configuration' template, automatically assigned when a new operating system is created. Global default PXEGrub2 template PXEGrub2 global default Global default PXEGrub2 template. This template is deployed to all configured TFTP servers. It is not affected by upgrades. Global default PXELinux template PXELinux global default Global default PXELinux template. This template is deployed to all configured TFTP servers. It is not affected by upgrades. Global default PXEGrub template PXEGrub global default Global default PXEGrub template. This template is deployed to all configured TFTP servers. It is not affected by upgrades. Global default iPXE template iPXE global default Global default iPXE template. This template is deployed to all configured TFTP servers. It is not affected by upgrades. Local boot PXEGrub2 template PXEGrub2 default local boot Template that is selected as PXEGrub2 default for local boot. Local boot PXELinux template PXELinux default local boot Template that is selected as PXELinux default for local boot. Local boot PXEGrub template PXEGrub default local boot Template that is selected as PXEGrub default for local boot. 
Local boot iPXE template iPXE default local boot Template that is selected as iPXE default for local boot. Manage PuppetCA Yes Satellite automates certificate signing upon provision of a new host. Use UUID for certificates No Satellite uses random UUIDs for certificate signing instead of hostnames. Show unsupported provisioning templates No Show unsupported provisioning templates. When enabled, all the available templates are shown. When disabled, only Red Hat supported templates are shown. B.12. Facts settings Setting Default Value Description Create new host when facts are uploaded Yes Satellite creates the host when new facts are received. Location fact satellite_location Hosts created after a Puppet run are placed in the location specified by this fact. Organization fact satellite_organization Hosts created after a Puppet run are placed in the organization specified by this fact. The content of this fact should be the full label of the organization. Default location Default Location Hosts created after a Puppet run that did not send a location fact are placed in this location. Default organization Default Organization Hosts created after a Puppet run that did not send an organization fact are placed in this organization. Update hostgroup from facts Yes Satellite updates a host's hostgroup from its facts. Ignore facts for operating system No Stop updating operating system from facts. Ignore facts for domain No Stop updating domain values from facts. Update subnets from facts None Satellite updates a host's subnet from its facts. Ignore interfaces facts for provisioning No Stop updating IP and MAC address values from facts (affects all interfaces). Ignore interfaces with matching identifier [ lo , en*v* , usb* , vnet* , macvtap* , ;vdsmdummy; , veth* , tap* , qbr* , qvb* , qvo* , qr-* , qg-* , vlinuxbr* , vovsbr* , br-int ] Skip creating or updating host network interfaces objects with identifiers matching these values from incoming facts. You can use a * wildcard to match identifiers with indexes, e.g. macvtap* . The ignored interface raw facts are still stored in the database, see the Exclude pattern setting for more details. Exclude pattern for facts stored in Satellite [ lo , en*v* , usb* , vnet* , macvtap* , ;vdsmdummy; , veth* , tap* , qbr* , qvb* , qvo* , qr-* , qg-* , vlinuxbr* , vovsbr* , br-int , load_averages::* , memory::swap::available* , memory::swap::capacity , memory::swap::used* , memory::system::available* , memory::system::capacity , memory::system::used* , memoryfree , memoryfree_mb , swapfree , swapfree_mb , uptime_hours , uptime_days ] Exclude pattern for all types of imported facts (Puppet, Ansible, rhsm). Those facts are not stored in the satellite database. You can use a * wildcard to match names with indexes, e.g. ignore* filters out ignore, ignore123 as well as a::ignore or even a::ignore123::b. B.13. Configuration management settings Setting Default Value Description Create new host when report is uploaded Yes Satellite creates the host when a report is received. Matchers inheritance Yes Satellite matchers are inherited by children when evaluating smart class parameters for hostgroups, organizations, and locations. Default parameters lookup path [ fqdn , hostgroup , os , domain ] Satellite evaluates host smart class parameters in this order by default. Interpolate ERB in parameters Yes Satellite parses ERB in parameters value in the ENC output. Always show configuration status No All hosts show a configuration status even when a Puppet Capsule is not assigned. 
B.14. Remote execution settings Setting Default Value Description Fallback to Any Capsule No Search the host for any proxy with Remote Execution. This is useful when the host has no subnet or the subnet does not have an execution proxy. Enable Global Capsule Yes Search for Remote Execution proxy outside of the proxies assigned to the host. The search is limited to the host's organization and location. SSH User root Default user to use for SSH. You can override per host by setting the remote_execution_ssh_user parameter. Effective User root Default user to use for executing the script. If the user differs from the SSH user, su or sudo is used to switch the user. Effective User Method sudo The command used to switch to the effective user. One of [ sudo , dzdo , su ] Effective user password ***** Effective user password. See Effective User . Sync Job Templates Yes Whether to sync templates from disk when running db:seed . SSH Port 22 Port to use for SSH communication. Default port 22. You can override per host by setting the remote_execution_ssh_port parameter. Connect by IP No Whether the IP addresses on host interfaces are preferred over the FQDN. It is useful when the DNS is not resolving the FQDNs properly. You can override this per host by setting the remote_execution_connect_by_ip parameter. For dual-stacked hosts, consider the remote_execution_connect_by_ip_prefer_ipv6 setting. Prefer IPv6 over IPv4 No When connecting using an IP address, are IPv6 addresses preferred? If no IPv6 address is set, it falls back to IPv4 automatically. You can override this per host by setting the remote_execution_connect_by_ip_prefer_ipv6 parameter. By default and for compatibility, IPv4 is preferred over IPv6. Default SSH password ***** Default password to use for SSH. You can override per host by setting the remote_execution_ssh_password parameter. Default SSH key passphrase ***** Default key passphrase to use for SSH. You can override per host by setting the remote_execution_ssh_key_passphrase parameter. Workers pool size 5 Number of workers in the pool to handle the execution of the remote execution jobs. Restart of the dynflowd/satellite-tasks service is required. Cleanup working directories Yes Whether working directories are removed after task completion. You can override this per host by setting the remote_execution_cleanup_working_dirs parameter. Cockpit URL Where to find the Cockpit instance for the Web Console button. By default, no button is shown. Form Job Template Run Command - SSH Default Choose a job template that is pre-selected in job invocation form. Job Invocation Report Template Jobs - Invocation report template Select a report template used for generating a report for a particular remote execution job. Time to pickup 86400 Time in seconds within which the host has to pick up a job. If the job is not picked up within this limit, the job will be cancelled. Applies only to pull-mqtt based jobs. Defaults to one day. B.15. Ansible settings Setting Default Value Description Private Key Path Use this to supply a path to an SSH Private Key that Ansible uses instead of a password. Override with the ansible_ssh_private_key_file host parameter. Connection type ssh Use this connection type by default when running Ansible Playbooks. You can override this on hosts by adding the ansible_connection parameter. WinRM cert Validation validate Enable or disable WinRM server certificate validation when running Ansible Playbooks. 
You can override this on hosts by adding the ansible_winrm_server_cert_validation parameter. Default verbosity level Disabled Satellite adds this level of verbosity for additional debugging output when running Ansible Playbooks. Post-provision timeout 360 Timeout (in seconds) to set when Satellite triggers an Ansible roles task playbook after a host is fully provisioned. Set this to the maximum time you expect a host to take until it is ready after a reboot. Ansible report timeout 30 Timeout (in minutes) when hosts should have reported. Ansible out of sync disabled No Disable host configuration status turning to out of sync for Ansible after a report does not arrive within the configured interval. Default Ansible inventory report template Ansible - Ansible Inventory Satellite uses this template to schedule the report with Ansible inventory. Ansible roles to ignore [] The roles to exclude when importing roles from Capsule. The expected input is comma separated values and you can use * wildcard metacharacters. For example: foo* , *b* , *bar . Capsule tasks batch size for Ansible Number of tasks which should be sent to the Capsule in one request if satellite_tasks_proxy_batch_trigger is enabled. If set, it overrides satellite_tasks_proxy_batch_size setting for Ansible jobs.
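Most of the settings above can also be inspected and changed with the hammer CLI instead of the web UI. A minimal sketch, using the remote_execution_ssh_user setting named in the table above; the rex-svc value is only an illustrative placeholder:
hammer settings set --name remote_execution_ssh_user --value rex-svc
hammer settings list --search 'name = remote_execution_ssh_user'
Per-host overrides, such as the remote_execution_ssh_user host parameter mentioned above, still take precedence over the global setting.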
| null |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/administering_red_hat_satellite/administration_settings_admin
|
Chapter 4. ResourceAccessReview [authorization.openshift.io/v1]
|
Chapter 4. ResourceAccessReview [authorization.openshift.io/v1] Description ResourceAccessReview is a means to request a list of which users and groups are authorized to perform the action specified by spec Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required namespace verb resourceAPIGroup resourceAPIVersion resource resourceName path isNonResourceURL 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources content RawExtension Content is the actual content of the request for create and update isNonResourceURL boolean IsNonResourceURL is true if this is a request for a non-resource URL (outside of the resource hierarchy) kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds namespace string Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces path string Path is the path of a non resource URL resource string Resource is one of the existing resource types resourceAPIGroup string Group is the API group of the resource Serialized as resourceAPIGroup to avoid confusion with the 'groups' field when inlined resourceAPIVersion string Version is the API version of the resource Serialized as resourceAPIVersion to avoid confusion with TypeMeta.apiVersion and ObjectMeta.resourceVersion when inlined resourceName string ResourceName is the name of the resource being requested for a "get" or deleted for a "delete" verb string Verb is one of: get, list, watch, create, update, delete 4.2. API endpoints The following API endpoints are available: /apis/authorization.openshift.io/v1/resourceaccessreviews POST : create a ResourceAccessReview 4.2.1. /apis/authorization.openshift.io/v1/resourceaccessreviews Table 4.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create a ResourceAccessReview Table 4.2. Body parameters Parameter Type Description body ResourceAccessReview schema Table 4.3. HTTP responses HTTP code Response body 200 - OK ResourceAccessReview schema 201 - Created ResourceAccessReview schema 202 - Accepted ResourceAccessReview schema 401 - Unauthorized Empty
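As a rough illustration of the endpoint listed above, the review can be posted directly to the API server with curl. This is only a sketch; the token, API server URL, namespace, and resource values are placeholders you would replace with your own:
TOKEN=$(oc whoami -t)
API_SERVER=https://api.example.com:6443
curl -k -X POST -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" "$API_SERVER/apis/authorization.openshift.io/v1/resourceaccessreviews" -d '{"apiVersion":"authorization.openshift.io/v1","kind":"ResourceAccessReview","namespace":"my-project","verb":"get","resource":"pods","resourceAPIGroup":"","resourceAPIVersion":"v1","resourceName":"","path":"","isNonResourceURL":false}'
The response lists the users and groups that are authorized to perform the requested action, as described above.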
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/authorization_apis/resourceaccessreview-authorization-openshift-io-v1
|
Chapter 12. Multimap cache
|
Chapter 12. Multimap cache MultimapCache is a type of Data Grid Cache that maps keys to values in which each key can contain multiple values. 12.1. Multimap Cache MultimapCache is a type of Data Grid Cache that maps keys to values in which each key can contain multiple values. 12.1.1. Installation and configuration pom.xml <dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-multimap</artifactId> </dependency> 12.1.2. MultimapCache API MultimapCache API exposes several methods to interact with the Multimap Cache. These methods are non-blocking in most cases; see limitations for more information. public interface MultimapCache<K, V> { CompletableFuture<Optional<CacheEntry<K, Collection<V>>>> getEntry(K key); CompletableFuture<Void> remove(SerializablePredicate<? super V> p); CompletableFuture<Void> put(K key, V value); CompletableFuture<Collection<V>> get(K key); CompletableFuture<Boolean> remove(K key); CompletableFuture<Boolean> remove(K key, V value); CompletableFuture<Void> remove(Predicate<? super V> p); CompletableFuture<Boolean> containsKey(K key); CompletableFuture<Boolean> containsValue(V value); CompletableFuture<Boolean> containsEntry(K key, V value); CompletableFuture<Long> size(); boolean supportsDuplicates(); } CompletableFuture<Void> put(K key, V value) Puts a key-value pair in the multimap cache. MultimapCache<String, String> multimapCache = ...; multimapCache.put("girlNames", "marie") .thenCompose(r1 -> multimapCache.put("girlNames", "oihana")) .thenCompose(r3 -> multimapCache.get("girlNames")) .thenAccept(names -> { if(names.contains("marie")) System.out.println("Marie is a girl name"); if(names.contains("oihana")) System.out.println("Oihana is a girl name"); }); The output of this code is as follows: Marie is a girl name Oihana is a girl name CompletableFuture<Collection<V>> get(K key) Asynchronous method that returns a view collection of the values associated with key in this multimap cache, if any. Any changes to the retrieved collection won't change the values in this multimap cache. When this method returns an empty collection, it means the key was not found. CompletableFuture<Boolean> remove(K key) Asynchronous method that removes the entry associated with the key from the multimap cache, if such exists. CompletableFuture<Boolean> remove(K key, V value) Asynchronous method that removes a key-value pair from the multimap cache, if such exists. CompletableFuture<Void> remove(Predicate<? super V> p) Asynchronous method that removes every value that matches the given predicate. CompletableFuture<Boolean> containsKey(K key) Asynchronous method that returns true if this multimap contains the key. CompletableFuture<Boolean> containsValue(V value) Asynchronous method that returns true if this multimap contains the value in at least one key. CompletableFuture<Boolean> containsEntry(K key, V value) Asynchronous method that returns true if this multimap contains at least one key-value pair with the value. CompletableFuture<Long> size() Asynchronous method that returns the number of key-value pairs in the multimap cache. It doesn't return the distinct number of keys. boolean supportsDuplicates() Method that returns true if the multimap cache supports duplicates. This means that the content of the multimap can be 'a' ['1', '1', '2']. For now this method will always return false, as duplicates are not yet supported. The existence of a given value is determined by the 'equals' and 'hashCode' method contracts. 12.1.3. Creating a Multimap Cache Currently, the MultimapCache is configured as a regular cache. 
This can be done either by code or XML configuration. See how to configure a regular cache in Configuring Data Grid caches . 12.1.3.1. Embedded mode // create or obtain your EmbeddedCacheManager EmbeddedCacheManager cm = ... ; // create or obtain a MultimapCacheManager passing the EmbeddedCacheManager MultimapCacheManager multimapCacheManager = EmbeddedMultimapCacheManagerFactory.from(cm); // define the configuration for the multimap cache multimapCacheManager.defineConfiguration(multimapCacheName, c.build()); // get the multimap cache multimapCache = multimapCacheManager.get(multimapCacheName); 12.1.4. Limitations In almost every case the Multimap Cache will behave as a regular Cache, but some limitations exist in the current version, as follows: 12.1.4.1. Support for duplicates A multimap can be configured to store duplicate values for a single key. A duplicate is determined by the value's equals method. Whenever the put method is called, if multimap is configured to support duplicates, the key-value pair will be added to the collection. Invoking remove on the multimap will remove all duplicates if present. 12.1.4.2. Eviction For now, the eviction works per key, and not per key-value pair. This means that whenever a key is evicted, all the values associated with the key will be evicted too. 12.1.4.3. Transactions Implicit transactions are supported through the auto-commit and all the methods are non blocking. Explicit transactions work without blocking in most of the cases. Methods that will block are size , containsEntry and remove(Predicate<? super V> p)
|
[
"<dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-multimap</artifactId> </dependency>",
"public interface MultimapCache<K, V> { CompletableFuture<Optional<CacheEntry<K, Collection<V>>>> getEntry(K key); CompletableFuture<Void> remove(SerializablePredicate<? super V> p); CompletableFuture<Void> put(K key, V value); CompletableFuture<Collection<V>> get(K key); CompletableFuture<Boolean> remove(K key); CompletableFuture<Boolean> remove(K key, V value); CompletableFuture<Void> remove(Predicate<? super V> p); CompletableFuture<Boolean> containsKey(K key); CompletableFuture<Boolean> containsValue(V value); CompletableFuture<Boolean> containsEntry(K key, V value); CompletableFuture<Long> size(); boolean supportsDuplicates(); }",
"MultimapCache<String, String> multimapCache = ...; multimapCache.put(\"girlNames\", \"marie\") .thenCompose(r1 -> multimapCache.put(\"girlNames\", \"oihana\")) .thenCompose(r3 -> multimapCache.get(\"girlNames\")) .thenAccept(names -> { if(names.contains(\"marie\")) System.out.println(\"Marie is a girl name\"); if(names.contains(\"oihana\")) System.out.println(\"Oihana is a girl name\"); });",
"Marie is a girl name Oihana is a girl name",
"// create or obtain your EmbeddedCacheManager EmbeddedCacheManager cm = ... ; // create or obtain a MultimapCacheManager passing the EmbeddedCacheManager MultimapCacheManager multimapCacheManager = EmbeddedMultimapCacheManagerFactory.from(cm); // define the configuration for the multimap cache multimapCacheManager.defineConfiguration(multimapCacheName, c.build()); // get the multimap cache multimapCache = multimapCacheManager.get(multimapCacheName);"
] |
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/embedding_data_grid_in_java_applications/multimap-cache
|
Chapter 6. Post-deployment configuration
|
Chapter 6. Post-deployment configuration After the overcloud deployment finishes, complete the following steps to validate the functionality. Procedure Create a test instance in the availability zones. In this example, the new instance runs on the distributed compute node (DCN). The specific AZ is targeted using the --availability-zone parameter: Create a volume on the first availability zone. This volume uses the cinder active/active service running on the dcn0 nodes. Note This step depends on the cinder availability zone configuration, which is defined by CinderStorageAvailabilityZone . For more information, see Deploying availability zones in the Storage Guide . You now have two separate HCI stacks, with a Ceph cluster deployed by each one. For more information on HCI, see Hyperconverged Infrastructure Guide . 6.1. Checking container health Verify that the container is functioning correctly. Procedure Log in to the node that is running the Ceph MON service by using SSH. Run the following command to view container health: Replace CLUSTERNAME with the name of the cluster, for example, dcn0 . The default value is ceph . Confirm that the health status of the cluster is HEALTH_OK and that all of the OSDs are up .
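As an additional sanity check after the steps above, the placement of the new resources can be confirmed from the command line. A small sketch, assuming the dcn-instance and myvol names used in the examples; the column names follow the standard openstack client output:
openstack availability zone list
openstack server show dcn-instance -c name -c OS-EXT-AZ:availability_zone
openstack volume show myvol -c name -c availability_zone -c status
Both resources should report the dcn0 availability zone.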
|
[
"openstack server create --flavor m1.tiny --image cirros --network private --security-group basic dcn-instance --availability-zone dcn0",
"openstack volume create --size 1 --availability-zone dcn0 myvol",
"podman exec ceph-mon-USDHOSTNAME ceph -s --cluster CLUSTERNAME"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/deploying_distributed_compute_nodes_with_separate_heat_stacks/proc_post-deployment-configuration
|
Chapter 36. Virtualization
|
Chapter 36. Virtualization SeaBIOS recognizes SCSI devices with a non-zero LUN Previously, SeaBIOS only recognized SCSI devices when the logical unit number (LUN) was set to zero. Consequently, if a SCSI device was defined with a LUN other than zero, SeaBIOS failed to boot. With this update, SeaBIOS recognizes SCSI devices with LUNs other than zero. As a result, SeaBIOS boots successfully. (BZ# 1020622 ) The libguestfs tools now correctly handle guests where /usr/ is not on the same partition as root Previously, the libguestfs library did not recognize the guest operating system when the /usr/ directory was not located on the same partition as the root directory. As a consequence, multiple libguestfs tools, such as the virt-v2v utility, did not perform as expected when used on such guests. This update ensures that libguestfs recognizes guest operating systems when /usr/ is not on the same partition as root. As a result, the affected libguestfs tools perform as expected. (BZ# 1401474 ) virt-v2v can convert Windows guests with corrupted or damaged Windows registries Previously, the hivex library used by libguestfs to manipulate the Windows registry could not handle corrupted registries. Consequently, the virt-v2v utility was not able to convert Windows guests with corrupted or damaged Windows registries. With this update, libguestfs configures hivex to be less strict when reading the Windows registry. As a result, virt-v2v can now convert most Windows guests with corrupted or damaged Windows registries. (BZ# 1311890 , BZ# 1423436 ) Converting Windows guests with non-system dynamic disks using virt-v2v now works correctly Previously, using the virt-v2v utility to convert a Windows guest virtual machine with non-system dynamic disks did not work correctly, and the guests were not usable after the conversion. This update fixes the underlying code and thus prevents the described problem. Note that the conversion of Windows guests using dynamic disks on the system disk (C: drive) is still not supported. (BZ# 1265588 ) Guests can be converted to Glance images, regardless of the Glance client version Previously, if the Glance command-line client version 1.0.0 or greater was installed on the virt-v2v conversion server, using the virt-v2v utility to convert a guest virtual machine to a Glance image failed. With this release, when exporting images, virt-v2v directly sets all the properties of images. As a result, the conversion to Glance works regardless of the version of the Glance client installed on the virt-v2v conversion server. (BZ# 1374405 ) Red Hat Enterprise Linux 6.2 - 6.5 guest virtual machines can now be converted using virt-v2v Previously, an error in the SELinux file_contexts file in Red Hat Enterprise Linux versions 6.2 - 6.5 prevented conversion of these guests using the virt-v2v utility. With this update, virt-v2v automatically fixes the error in the SELinux file_contexts file. As a result, Red Hat Enterprise Linux 6.2-6.5 guest virtual machines can now be converted using virt-v2v . (BZ# 1374232 ) Btrfs entries in /etc/fstab are now parsed correctly by libguestfs Previously, Btrfs sub-volume entries with more than one comma-separated option in /etc/fstab were not parsed properly by libguestfs . Consequently, Linux guest virtual machines with these configurations could not be inspected, and the virt-v2v utility could not convert them. With this update, libguestfs parses Btrfs sub-volume entries with more than one comma-separated option in /etc/fstab correctly. 
As a result, these entries can be inspected and converted by virt-v2v . (BZ# 1383517 ) libguestfs can now correctly open libvirt domain disks that require authentication Previously, when adding disks from a libvirt domain, libguestfs did not read any disk secrets. Consequently, libguestfs could not open disks that required authentication. With this update, libguestfs reads secrets of disks in libvirt domains, if present. As a result, libguestfs can now correctly open disks of libvirt domains that require authentication. (BZ# 1392798 ) Converted Windows UEFI guests boot properly Previously, when converting Windows 8 UEFI guests, virtio drivers were not installed correctly. Consequently, the converted guests did not boot. With this update, virtio drivers are installed correctly in Windows UEFI guests. As a result, converted Windows UEFI guests boot properly. (BZ# 1431579 ) The virt-v2v utility now ignores proxy environment variables consistently Prior to this update, when using the virt-v2v utility to convert a VMware guest virtual machine, virt-v2v used the proxy environment variables for some connections to VMware, but not for others. This in some cases caused conversions to fail. Now, virt-v2v ignores all proxy environment settings during the conversion, which prevents the described problem. (BZ# 1354507 ) virt-v2v only copies rhev-apt.exe and rhsrvany.exe when needed Previously, virt-v2v always copied the rhev-apt.exe and rhsrvany.exe files when converting Windows guests. Consequently, they were present in the converted Windows guests, even when they were not needed. With this update, virt-v2v only copies these files when they are needed in the Windows guest. (BZ# 1161019 ) Guests with VLAN over a bonded interface no longer stop passing traffic after a failover Previously, on guest virtual machines with VLAN configured over a bonded interface that used ixgbe virtual functions (VFs), the bonded network interface stopped passing traffic when a failover occurred. The hypervisor console also logged this error as a "requested MACVLAN filter but is administratively denied" message. This update ensures that failovers are handled correctly and thus prevents the described problem. (BZ#1379787) virt-v2v imports OVAs that do not have the <ovf:Name> attribute Previously, the virt-v2v utility rejected the import of Open Virtual Appliances (OVAs) without the <ovf:Name> attribute. As a consequence, the virt-v2v utility did not import OVAs exported by Amazon Web Services (AWS). In this release, if the <ovf:Name> attribute is missing, virt-v2v uses the base name of the disk image file as the name of the virtual machine. As a result, the virt-v2v utility now imports OVAs exported by AWS. (BZ# 1402301 )
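Several of the notes above describe virt-v2v conversion behavior, including the OVA import fix. For orientation only, a minimal virt-v2v invocation that imports an OVA into local storage might look like the following sketch; the file name and output directory are placeholders, and the output options for your target environment (for example RHV or Glance) will differ:
virt-v2v -i ova exported-guest.ova -o local -os /var/tmp/converted -of qcow2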
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/bug_fixes_virtualization
|
11.8. Procedures and Functions
|
11.8. Procedures and Functions A user can define one of the following procedures and functions: Source Procedure ("CREATE FOREIGN PROCEDURE") - A stored procedure in the source. Source Function ("CREATE FOREIGN FUNCTION") - A function that is supported by the source; JBoss Data Virtualization pushes its evaluation down to the source instead of evaluating it in the JBoss Data Virtualization engine. Virtual Procedure ("CREATE VIRTUAL PROCEDURE") - Similar to a stored procedure; however, it is defined using the JBoss Data Virtualization Procedure language and evaluated in the JBoss Data Virtualization engine. Function/UDF ("CREATE VIRTUAL FUNCTION") - A user-defined function that can be defined using the Teiid procedure language or can have its implementation defined using a Java class. Here is an example procedure: Here is an example function:
|
[
"CREATE VIRTUAL PROCEDURE CustomerActivity(customerid integer) RETURNS (name varchar(25), activitydate date, amount decimal) AS BEGIN END",
"CREATE VIRTUAL FUNCTION CustomerRank(customerid integer) RETURNS integer AS BEGIN END"
] |
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/procedures_and_functions
|
Chapter 4. Monitoring model bias
|
Chapter 4. Monitoring model bias As a data scientist, you might want to monitor your machine learning models for bias. This means monitoring for algorithmic deficiencies that might skew the outcomes or decisions that the model produces. Importantly, this type of monitoring helps you to ensure that the model is not biased against particular protected groups or features. Red Hat OpenShift AI provides a set of metrics that help you to monitor your models for bias. You can use the OpenShift AI interface to choose an available metric and then configure model-specific details such as a protected attribute, the privileged and unprivileged groups, the outcome you want to monitor, and a threshold for bias. You then see a chart of the calculated values for a specified number of model inferences. For more information about the specific bias metrics, see Supported bias metrics . 4.1. Creating a bias metric To monitor a deployed model for bias, you must first create bias metrics. When you create a bias metric, you specify details relevant to your model such as a protected attribute, privileged and unprivileged groups, a model outcome and a value that you want to monitor, and the acceptable threshold for bias. For information about the specific bias metrics, see Supported bias metrics . For the complete list of TrustyAI metrics, see TrustyAI service API . You can create a bias metric for a model by using the OpenShift AI dashboard or by using the OpenShift command-line interface (CLI). 4.1.1. Creating a bias metric by using the dashboard You can use the OpenShift AI dashboard to create a bias metric for a model. Prerequisites You are familiar with the bias metrics that OpenShift AI supports and how to interpret them. You are familiar with the specific data set schema and understand the names and meanings of the inputs and outputs. Your OpenShift cluster administrator added you as a user to the OpenShift cluster and has installed the TrustyAI service for the data science project that contains the deployed models. You set up TrustyAI for your data science project, as described in Setting up TrustyAI for your project . Procedure Optional: To set the TRUSTY_ROUTE variable, follow these steps. In a terminal window, log in to the OpenShift cluster where OpenShift AI is deployed. Set the TRUSTY_ROUTE variable to the external route for the TrustyAI service pod. In the left menu of the OpenShift AI dashboard, click Model Serving . On the Deployed models page, select your project from the drop-down list. Click the name of the model that you want to configure bias metrics for. On the metrics page for the model, click the Model bias tab. Click Configure . In the Configure bias metrics dialog, complete the following steps to configure bias metrics: In the Metric name field, type a unique name for your bias metric. Note that you cannot change the name of this metric later. From the Metric type list, select one of the metrics types that are available in OpenShift AI. In the Protected attribute field, type the name of an attribute in your model that you want to monitor for bias. Tip You can use a curl command to query the metadata endpoint and view input attribute names and values. For example: curl -H "Authorization: Bearer USDTOKEN" USDTRUSTY_ROUTE/info | jq ".[0].data.inputSchema" In the Privileged value field, type the name of a privileged group for the protected attribute that you specified. In the Unprivileged value field, type the name of an unprivileged group for the protected attribute that you specified. 
In the Output field, type the name of the model outcome that you want to monitor for bias. Tip You can use a curl command to query the metadata endpoint and view output attribute names and values. For example: curl -H "Authorization: Bearer USDTOKEN" USDTRUSTY_ROUTE/info | jq ".[0].data.outputSchema" In the Output value field, type the value of the outcome that you want to monitor for bias. In the Violation threshold field, type the bias threshold for your selected metric type. This threshold value defines how far the specified metric can be from the fairness value for your metric, before the model is considered biased. In the Metric batch size field, type the number of model inferences that OpenShift AI includes each time it calculates the metric. Ensure that the values you entered are correct. Note You cannot edit a model bias metric configuration after you create it. Instead, you can duplicate a metric and then edit (configure) it; however, the history of the original metric is not applied to the copy. Click Configure . Verification The Bias metric configuration page shows the bias metrics that you configured for your model. step To view metrics, on the Bias metric configuration page, click View metrics in the upper-right corner. 4.1.2. Creating a bias metric by using the CLI You can use the OpenShift command-line interface (CLI) to create a bias metric for a model. Prerequisites You are familiar with the bias metrics that OpenShift AI supports and how to interpret them. You are familiar with the specific data set schema and understand the names and meanings of the inputs and outputs. Your OpenShift cluster administrator added you as a user to the OpenShift cluster and has installed the TrustyAI service for the data science project that contains the deployed models. You set up TrustyAI for your data science project, as described in Setting up TrustyAI for your project . Procedure In a terminal window, log in to the OpenShift cluster where OpenShift AI is deployed. Set the TRUSTY_ROUTE variable to the external route for the TrustyAI service pod. Optionally, get the full list of TrustyAI service endpoints and payloads. Use POST /metrics/group/fairness/spd/request to schedule a recurring bias monitoring metric with the following syntax and payload structure: Syntax : Payload structure : modelId The name of the model to query. protectedAttribute The name of the feature that distinguishes the groups that you are checking for fairness. privilegedAttribute The suspected favored (positively biased) class. unprivilegedAttribute The suspected unfavored (negatively biased) class. outcomeName The name of the output that provides the output you are examining for fairness. favorableOutcome The value of the outcomeName output that describes the favorable or desired model prediction. batchSize The number of inferences to include in the calculation. For example: Verification The bias metrics request should return output similar to the following: The specificDefinition field helps you understand the real-world interpretation of these metric values. For this example, the model is fair over the Is Male-Identifying? field, with the rate of positive outcome only differing by about -0.3%. 4.1.3. Duplicating a bias metric If you want to edit an existing metric, you can duplicate (copy) it in the OpenShift AI interface and then edit the values in the copy. However, note that the history of the original metric is not applied to the copy. 
Prerequisites You are familiar with the bias metrics that OpenShift AI supports and how to interpret them. You are familiar with the specific data set schema and understand the names and meanings of the inputs and outputs. There is an existing bias metric that you want to duplicate. Procedure In the left menu of the OpenShift AI dashboard, click Model Serving . On the Deployed models page, click the name of the model with the bias metric that you want to duplicate. On the metrics page for the model, click the Model bias tab. Click Configure . On the Bias metric configuration page, click the action menu (...) to the metric that you want to copy and then click Duplicate . In the Configure bias metric dialog, follow these steps: In the Metric name field, type a unique name for your bias metric. Note that you cannot change the name of this metric later. Change the values of the fields as needed. For a description of these fields, see Creating a bias metric by using the dashboard . Ensure that the values you entered are correct, and then click Configure . Verification The Bias metric configuration page shows the bias metrics that you configured for your model. step To view metrics, on the Bias metric configuration page, click View metrics in the upper-right corner. 4.2. Deleting a bias metric You can delete a bias metric for a model by using the OpenShift AI dashboard or by using the OpenShift command-line interface (CLI). 4.2.1. Deleting a bias metric by using the dashboard You can use the OpenShift AI dashboard to delete a bias metric for a model. Prerequisites You have logged in to Red Hat OpenShift AI. There is an existing bias metric that you want to delete. Procedure In the left menu of the OpenShift AI dashboard, click Model Serving . On the Deployed models page, click the name of the model with the bias metric that you want to delete. On the metrics page for the model, click the Model bias tab. Click Configure . Click the action menu (...) to the metric that you want to delete and then click Delete . In the Delete bias metric dialog, type the metric name to confirm the deletion. Note You cannot undo deleting a bias metric. Click Delete bias metric . Verification The Bias metric configuration page does not show the bias metric that you deleted. 4.2.2. Deleting a bias metric by using the CLI You can use the OpenShift command-line interface (CLI) to delete a bias metric for a model. Prerequisites You have installed the OpenShift CLI ( oc ). You have a user token for authentication as described in Authenticating the TrustyAI service . There is an existing bias metric that you want to delete. Procedure Open a new terminal window. Follow these steps to log in to your OpenShift cluster: In the upper-right corner of the OpenShift web console, click your user name and select Copy login command . After you have logged in, click Display token . Copy the Log in with this token command and paste it in the OpenShift command-line interface (CLI). In the OpenShift CLI, get the route to the TrustyAI service: Optional: To list all currently active requests for a metric, use GET /metrics/{{metric}}/requests . For example, to list all currently scheduled SPD metrics, type: Alternatively, to list all currently scheduled metric requests, use GET /metrics/all/requests . To delete a metric, send an HTTP DELETE request to the /metrics/USDMETRIC/request endpoint to stop the periodic calculation, including the id of periodic task that you want to cancel in the payload. 
For example: Verification Use GET /metrics/{{metric}}/requests to list all currently active requests for the metric and verify the metric that you deleted is not shown. For example: 4.3. Viewing bias metrics for a model After you create bias monitoring metrics, you can use the OpenShift AI dashboard to view and update the metrics that you configured. Prerequisite You configured bias metrics for your model as described in Creating a bias metric . Procedure In the OpenShift AI dashboard, click Model Serving . On the Deployed models page, click the name of a model that you want to view bias metrics for. On the metrics page for the model, click the Model bias tab. To update the metrics shown on the page, follow these steps: In the Metrics to display section, use the Select a metric list to select a metric to show on the page. Note Each time you select a metric to show on the page, an additional Select a metric list appears. This enables you to show multiple metrics on the page. From the Time range list in the upper-right corner, select a value. From the Refresh interval list in the upper-right corner, select a value. The metrics page shows the metrics that you selected. Optional: To remove one or more metrics from the page, in the Metrics to display section, perform one of the following actions: To remove an individual metric, click the cancel icon (✖) to the metric name. To remove all metrics, click the cancel icon (✖) in the Select a metric list. Optional: To return to configuring bias metrics for the model, on the metrics page, click Configure in the upper-right corner. Verification The metrics page shows the metrics selections that you made. 4.4. Supported bias metrics Red Hat OpenShift AI supports the following bias metrics: Statistical Parity Difference Statistical Parity Difference (SPD) is the difference in the probability of a favorable outcome prediction between unprivileged and privileged groups. The formal definition of SPD is the following: y = 1 is the favorable outcome. Du and Dp are the unprivileged and privileged group data. You can interpret SPD values as follows: A value of 0 means that the model is behaving fairly for a selected attribute (for example, race, gender). A value in the range -0.1 to 0.1 means that the model is reasonably fair for a selected attribute. Instead, you can attribute the difference in probability to other factors, such as the sample size. A value outside the range -0.1 to 0.1 indicates that the model is unfair for a selected attribute. A negative value indicates that the model has bias against the unprivileged group. A positive value indicates that the model has bias against the privileged group. Disparate Impact Ratio Disparate Impact Ratio (DIR) is the ratio of the probability of a favorable outcome prediction for unprivileged groups to that of privileged groups. The formal definition of DIR is the following: y = 1 is the favorable outcome. Du and Dp are the unprivileged and privileged group data. The threshold to identify bias depends on your own criteria and specific use case. For example, if your threshold for identifying bias is represented by a DIR value below 0.8 or above 1.2 , you can interpret the DIR values as follows: A value of 1 means that the model is fair for a selected attribute. A value of between 0.8 and 1.2 means that the model is reasonably fair for a selected attribute. A value below 0.8 or above 1.2 indicates bias.
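Written out explicitly, the two metrics above correspond to the following standard formulations, with y = 1 the favorable outcome as defined above and D_u and D_p the unprivileged and privileged groups (this notation is assumed here for clarity):
SPD = P(y = 1 \mid D_u) - P(y = 1 \mid D_p)
DIR = P(y = 1 \mid D_u) / P(y = 1 \mid D_p)
A model that treats both groups alike therefore gives an SPD near 0 and a DIR near 1, which matches the interpretation ranges listed above.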
|
[
"login",
"TRUSTY_ROUTE=https://USD(oc get route/trustyai-service --template={{.spec.host}})",
"login",
"TRUSTY_ROUTE=https://USD(oc get route/trustyai-service --template={{.spec.host}})",
"curl -H \"Authorization: Bearer USDTOKEN\" --location USDTRUSTY_ROUTE/q/openapi",
"curl -sk -H \"Authorization: Bearer USDTOKEN\" -X POST --location USDTRUSTY_ROUTE/metrics/group/fairness/spd/request --header 'Content-Type: application/json' --data <payload>",
"curl -sk -H \"Authorization: Bearer USDTOKEN\" -X POST --location USDTRUSTY_ROUTE /metrics/group/fairness/spd/request --header 'Content-Type: application/json' --data \"{ \\\"modelId\\\": \\\"demo-loan-nn-onnx-alpha\\\", \\\"protectedAttribute\\\": \\\"Is Male-Identifying?\\\", \\\"privilegedAttribute\\\": 1.0, \\\"unprivilegedAttribute\\\": 0.0, \\\"outcomeName\\\": \\\"Will Default?\\\", \\\"favorableOutcome\\\": 0, \\\"batchSize\\\": 5000 }\"",
"{ \"timestamp\":\"2023-10-24T12:06:04.586+00:00\", \"type\":\"metric\", \"value\":-0.0029676404469311524, \"namedValues\":null, \"specificDefinition\":\"The SPD of -0.002968 indicates that the likelihood of Group:Is Male-Identifying?=1.0 receiving Outcome:Will Default?=0 was -0.296764 percentage points lower than that of Group:Is Male-Identifying?=0.0.\", \"name\":\"SPD\", \"id\":\"d2707d5b-cae9-41aa-bcd3-d950176cbbaf\", \"thresholds\":{\"lowerBound\":-0.1,\"upperBound\":0.1,\"outsideBounds\":false} }",
"oc login --token= <token> --server= <openshift_cluster_url>",
"TRUSTY_ROUTE=https://USD(oc get route/trustyai-service --template={{.spec.host}})",
"curl -H \"Authorization: Bearer USDTOKEN\" -X GET --location \"USDTRUSTY_ROUTE/metrics/spd/requests\"",
"curl -H \"Authorization: Bearer USDTOKEN\" -X GET --location \"USDTRUSTY_ROUTE/metrics/all/requests\"",
"curl -H \"Authorization: Bearer USDTOKEN\" -X DELETE --location \"USDTRUSTY_ROUTE/metrics/spd/request\" -H \"Content-Type: application/json\" -d \"{ \\\"requestId\\\": \\\"3281c891-e2a5-4eb3-b05d-7f3831acbb56\\\" }\"",
"curl -H \"Authorization: Bearer USDTOKEN\" -X GET --location \"USDTRUSTY_ROUTE/metrics/spd/requests\""
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/monitoring_data_science_models/monitoring-model-bias_bias-monitoring
|
Chapter 2. Authenticating with Red Hat Single Sign-On (RHSSO)
|
Chapter 2. Authenticating with Red Hat Single Sign-On (RHSSO) To authenticate users with Red Hat Single Sign-On (RHSSO): Enable the OpenID Connect (OIDC) authentication provider in RHDH . Provision users from Red Hat Single-Sign On (RHSSO) to the software catalog . 2.1. Enabling authentication with Red Hat Single-Sign On (RHSSO) To authenticate users with Red Hat Single Sign-On (RHSSO), enable the OpenID Connect (OIDC) authentication provider in Red Hat Developer Hub. Prerequisites You added a custom Developer Hub application configuration , and have sufficient permissions to modify it. You have sufficient permissions in RHSSO to create and manage a realm. Procedure To allow Developer Hub to authenticate with RHSSO, complete the steps in RHSSO, to create a realm and a user and register the Developer Hub application : Use an existing realm, or create a realm , with a distinctive Name such as <my_realm> . Save the value for the step: RHSSO realm base URL , such as: <your_rhsso_URL> /auth/realms/ <your_realm> . To register your Developer Hub in RHSSO, in the created realm, create a Client ID , with: Client ID : A distinctive client ID, such as <RHDH> . Valid redirect URIs : Set to the OIDC handler URL: https:// <RHDH_URL> /api/auth/oidc/handler/frame . Navigate to the Credentials tab and copy the Client secret . Save the values for the step: Client ID Client Secret To prepare for the verification steps, in the same realm, get the credential information for an existing user or create a user . Save the user credential information for the verification steps. To add your RHSSO credentials to your Developer Hub secrets, edit your Developer Hub secrets, such as secrets-rhdh , and add the following key/value pairs: AUTH_OIDC_CLIENT_ID Enter the saved Client ID . AUTH_OIDC_CLIENT_SECRET Enter the saved Client Secret . AUTH_OIDC_METADATA_URL Enter the saved RHSSO realm base URL . To set up the RHSSO authentication provider in your Developer Hub custom configuration, edit your custom Developer Hub ConfigMap such as app-config-rhdh , and add the following lines to the app-config-rhdh.yaml content: app-config-rhdh.yaml fragment with mandatory fields to enable authentication with RHSSO auth: environment: production providers: oidc: production: metadataUrl: USD{AUTH_OIDC_METADATA_URL} clientId: USD{AUTH_OIDC_CLIENT_ID} clientSecret: USD{AUTH_OIDC_CLIENT_SECRET} signInPage: oidc environment: production Mark the environment as production to hide the Guest login in the Developer Hub home page. metadataUrl , clientId , clientSecret To configure the OIDC provider with your secrets. sigInPage: oidc To enable the OIDC provider as default sign-in provider. Optional: Consider adding the following optional fields: dangerouslyAllowSignInWithoutUserInCatalog: true To enable authentication without requiring to provision users in the Developer Hub software catalog. Warning Use this option to explore Developer Hub features, but do not use it in production. app-config-rhdh.yaml fragment with optional field to allow authenticating users absent from the software catalog auth: environment: production providers: oidc: production: metadataUrl: USD{AUTH_OIDC_METADATA_URL} clientId: USD{AUTH_OIDC_CLIENT_ID} clientSecret: USD{AUTH_OIDC_CLIENT_SECRET} signInPage: oidc dangerouslyAllowSignInWithoutUserInCatalog: true callbackUrl RHSSO callback URL. 
app-config-rhdh.yaml fragment with optional callbackURL field auth: providers: oidc: production: callbackUrl: USD{AUTH_OIDC_CALLBACK_URL} tokenEndpointAuthMethod Token endpoint authentication method. app-config-rhdh.yaml fragment with optional tokenEndpointAuthMethod field auth: providers: oidc: production: tokenEndpointAuthMethod: USD{AUTH_OIDC_TOKEN_ENDPOINT_METHOD} tokenSignedResponseAlg Token signed response algorithm. app-config-rhdh.yaml fragment with optional tokenSignedResponseAlg field auth: providers: oidc: production: tokenSignedResponseAlg: USD{AUTH_OIDC_SIGNED_RESPONSE_ALG} scope RHSSO scope. app-config-rhdh.yaml fragment with optional scope field auth: providers: oidc: production: scope: USD{AUTH_OIDC_SCOPE} signIn.resolvers Declarative resolvers to override the default resolver: emailLocalPartMatchingUserEntityName . The authentication provider tries each sign-in resolver until it succeeds, and fails if none succeed. app-config-rhdh.yaml fragment with optional callbackURL field auth: providers: oidc: production: signIn: resolvers: - resolver: preferredUsernameMatchingUserEntityName - resolver: emailMatchingUserEntityProfileEmail - resolver: emailLocalPartMatchingUserEntityName auth.backstageTokenExpiration To modify the Developer Hub token expiration from its default value of one hour, note that this refers to the validity of short-term cryptographic tokens, not the session duration. The expiration value must be set between 10 minutes and 24 hours. app-config-rhdh.yaml fragment with optional auth.backstageTokenExpiration field auth: backstageTokenExpiration: { minutes: <user_defined_value> } Security consideration If multiple valid refresh tokens are issued due to frequent refresh token requests, older tokens will remain valid until they expire. To enhance security and prevent potential misuse of older tokens, enable a refresh token rotation strategy in your RHBK realm. From the Configure section of the navigation menu, click Realm Settings . From the Realm Settings page, click the Tokens tab. From the Refresh tokens section of the Tokens tab, toggle the Revoke Refresh Token to the Enabled position. Verification Go to the Developer Hub login page. Your Developer Hub sign-in page displays Sign in using OIDC and the Guest user sign-in is disabled. Log in with OIDC by using the saved Username and Password values. 2.2. Provisioning users from Red Hat Single-Sign On (RHSSO) to the software catalog Prerequisites You enabled authentication with RHSSO . Procedure To enable RHSSO member discovery, edit your custom Developer Hub ConfigMap, such as app-config-rhdh , and add the following lines to the app-config-rhdh.yaml content: app-config.yaml fragment with mandatory keycloakOrg fields dangerouslyAllowSignInWithoutUserInCatalog: false catalog: providers: keycloakOrg: default: baseUrl: USD{AUTH_OIDC_METADATA_URL} clientId: USD{AUTH_OIDC_CLIENT_ID} clientSecret: USD{AUTH_OIDC_CLIENT_SECRET} dangerouslyAllowSignInWithoutUserInCatalog: false Allow authentication only for users present in the Developer Hub software catalog. baseUrl Your RHSSO server URL, defined when enabling authentication with RHSSO . clientId Your Developer Hub application client ID in RHSSO, defined when enabling authentication with RHSSO . clientSecret Your Developer Hub application client secret in RHSSO, defined when enabling authentication with RHSSO . Optional: Consider adding the following optional fields: realm Realm to synchronize. Default value: master . 
app-config.yaml fragment with optional realm field catalog: providers: keycloakOrg: default: realm: master loginRealm Realm used to authenticate. Default value: master . app-config.yaml fragment with optional loginRealm field catalog: providers: keycloakOrg: default: loginRealm: master userQuerySize User number to query simultaneously. Default value: 100 . app-config.yaml fragment with optional userQuerySize field catalog: providers: keycloakOrg: default: userQuerySize: 100 groupQuerySize Group number to query simultaneously. Default value: 100 . app-config.yaml fragment with optional groupQuerySize field catalog: providers: keycloakOrg: default: groupQuerySize: 100 schedule.frequency To specify custom schedule frequency. Supports cron, ISO duration, and "human duration" as used in code. app-config.yaml fragment with optional schedule.frequency field catalog: providers: keycloakOrg: default: schedule: frequency: { hours: 1 } schedule.timeout To specify custom timeout. Supports ISO duration and "human duration" as used in code. app-config.yaml fragment with optional schedule.timeout field catalog: providers: keycloakOrg: default: schedule: timeout: { minutes: 50 } schedule.initialDelay To specify custom initial delay. Supports ISO duration and "human duration" as used in code. app-config.yaml fragment with optional schedule.initialDelay field catalog: providers: keycloakOrg: default: schedule: initialDelay: { seconds: 15} Verification Check the console logs to verify that the synchronization is completed. Successful synchronization example: {"class":"KeycloakOrgEntityProvider","level":"info","message":"Read 3 Keycloak users and 2 Keycloak groups in 1.5 seconds. Committing...","plugin":"catalog","service":"backstage","taskId":"KeycloakOrgEntityProvider:default:refresh","taskInstanceId":"bf0467ff-8ac4-4702-911c-380270e44dea","timestamp":"2024-09-25 13:58:04"} {"class":"KeycloakOrgEntityProvider","level":"info","message":"Committed 3 Keycloak users and 2 Keycloak groups in 0.0 seconds.","plugin":"catalog","service":"backstage","taskId":"KeycloakOrgEntityProvider:default:refresh","taskInstanceId":"bf0467ff-8ac4-4702-911c-380270e44dea","timestamp":"2024-09-25 13:58:04"} Log in with an RHSSO account. 2.3. Creating a custom transformer to provision users from Red Hat Single-Sign On (RHSSO) to the software catalog To customize how RHSSO users and groups are mapped to Red Hat Developer Hub entities, you can create a backend module that uses the keycloakTransformerExtensionPoint to provide custom user and group transformers for the Keycloak backend. Prerequisites You have enabled provisioning users from Red Hat Single-Sign On (RHSSO) to the software catalog . Procedure Create a new backend module with the yarn new command. Add your custom user and group transformers to the keycloakTransformerExtensionPoint . 
The following is an example of how the backend module can be defined: plugins/ <module-name> /src/module.ts import { GroupTransformer, keycloakTransformerExtensionPoint, UserTransformer, } from '@janus-idp/backstage-plugin-keycloak-backend'; const customGroupTransformer: GroupTransformer = async ( entity, // entity output from default parser realm, // Keycloak realm name groups, // Keycloak group representation ) => { /* apply transformations */ return entity; }; const customUserTransformer: UserTransformer = async ( entity, // entity output from default parser user, // Keycloak user representation realm, // Keycloak realm name groups, // Keycloak group representation ) => { /* apply transformations */ return entity; }; export const keycloakBackendModuleTransformer = createBackendModule({ pluginId: 'catalog', moduleId: 'keycloak-transformer', register(reg) { reg.registerInit({ deps: { keycloak: keycloakTransformerExtensionPoint, }, async init({ keycloak }) { keycloak.setUserTransformer(customUserTransformer); keycloak.setGroupTransformer(customGroupTransformer); /* highlight-add-end */ }, }); }, }); Important The module's pluginId must be set to catalog to match the pluginId of the keycloak-backend ; otherwise, the module fails to initialize. Install this new backend module into your Developer Hub backend. backend.add(import(backstage-plugin-catalog-backend-module-keycloak-transformer)) Verification Developer Hub imports the users and groups each time when started. Check the console logs to verify that the synchronization is completed. Successful synchronization example: {"class":"KeycloakOrgEntityProvider","level":"info","message":"Read 3 Keycloak users and 2 Keycloak groups in 1.5 seconds. Committing...","plugin":"catalog","service":"backstage","taskId":"KeycloakOrgEntityProvider:default:refresh","taskInstanceId":"bf0467ff-8ac4-4702-911c-380270e44dea","timestamp":"2024-09-25 13:58:04"} {"class":"KeycloakOrgEntityProvider","level":"info","message":"Committed 3 Keycloak users and 2 Keycloak groups in 0.0 seconds.","plugin":"catalog","service":"backstage","taskId":"KeycloakOrgEntityProvider:default:refresh","taskInstanceId":"bf0467ff-8ac4-4702-911c-380270e44dea","timestamp":"2024-09-25 13:58:04"} After the first import is complete, navigate to the Catalog page and select User to view the list of users. When you select a user, you see the information imported from RHSSO. You can select a group, view the list, and access or review the information imported from RHSSO. You can log in with an RHSSO account.
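For reference, the AUTH_OIDC_* key/value pairs that Section 2.1 asks you to add to your Developer Hub secret can also be applied from the command line. This is only a sketch, assuming the secret is named secrets-rhdh as in the example above and lives in the namespace where Developer Hub is installed; the metadata URL, client ID, and client secret values are placeholders:
oc create secret generic secrets-rhdh -n <rhdh_namespace> --from-literal=AUTH_OIDC_METADATA_URL='<your_rhsso_URL>/auth/realms/<your_realm>' --from-literal=AUTH_OIDC_CLIENT_ID='<RHDH>' --from-literal=AUTH_OIDC_CLIENT_SECRET='<client_secret>' --dry-run=client -o yaml | oc apply -f -
Depending on how the secret is consumed, a restart of the Developer Hub pods may be needed before the new values take effect.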
|
[
"auth: environment: production providers: oidc: production: metadataUrl: USD{AUTH_OIDC_METADATA_URL} clientId: USD{AUTH_OIDC_CLIENT_ID} clientSecret: USD{AUTH_OIDC_CLIENT_SECRET} signInPage: oidc",
"auth: environment: production providers: oidc: production: metadataUrl: USD{AUTH_OIDC_METADATA_URL} clientId: USD{AUTH_OIDC_CLIENT_ID} clientSecret: USD{AUTH_OIDC_CLIENT_SECRET} signInPage: oidc dangerouslyAllowSignInWithoutUserInCatalog: true",
"auth: providers: oidc: production: callbackUrl: USD{AUTH_OIDC_CALLBACK_URL}",
"auth: providers: oidc: production: tokenEndpointAuthMethod: USD{AUTH_OIDC_TOKEN_ENDPOINT_METHOD}",
"auth: providers: oidc: production: tokenSignedResponseAlg: USD{AUTH_OIDC_SIGNED_RESPONSE_ALG}",
"auth: providers: oidc: production: scope: USD{AUTH_OIDC_SCOPE}",
"auth: providers: oidc: production: signIn: resolvers: - resolver: preferredUsernameMatchingUserEntityName - resolver: emailMatchingUserEntityProfileEmail - resolver: emailLocalPartMatchingUserEntityName",
"auth: backstageTokenExpiration: { minutes: <user_defined_value> }",
"dangerouslyAllowSignInWithoutUserInCatalog: false catalog: providers: keycloakOrg: default: baseUrl: USD{AUTH_OIDC_METADATA_URL} clientId: USD{AUTH_OIDC_CLIENT_ID} clientSecret: USD{AUTH_OIDC_CLIENT_SECRET}",
"catalog: providers: keycloakOrg: default: realm: master",
"catalog: providers: keycloakOrg: default: loginRealm: master",
"catalog: providers: keycloakOrg: default: userQuerySize: 100",
"catalog: providers: keycloakOrg: default: groupQuerySize: 100",
"catalog: providers: keycloakOrg: default: schedule: frequency: { hours: 1 }",
"catalog: providers: keycloakOrg: default: schedule: timeout: { minutes: 50 }",
"catalog: providers: keycloakOrg: default: schedule: initialDelay: { seconds: 15}",
"{\"class\":\"KeycloakOrgEntityProvider\",\"level\":\"info\",\"message\":\"Read 3 Keycloak users and 2 Keycloak groups in 1.5 seconds. Committing...\",\"plugin\":\"catalog\",\"service\":\"backstage\",\"taskId\":\"KeycloakOrgEntityProvider:default:refresh\",\"taskInstanceId\":\"bf0467ff-8ac4-4702-911c-380270e44dea\",\"timestamp\":\"2024-09-25 13:58:04\"} {\"class\":\"KeycloakOrgEntityProvider\",\"level\":\"info\",\"message\":\"Committed 3 Keycloak users and 2 Keycloak groups in 0.0 seconds.\",\"plugin\":\"catalog\",\"service\":\"backstage\",\"taskId\":\"KeycloakOrgEntityProvider:default:refresh\",\"taskInstanceId\":\"bf0467ff-8ac4-4702-911c-380270e44dea\",\"timestamp\":\"2024-09-25 13:58:04\"}",
"import { GroupTransformer, keycloakTransformerExtensionPoint, UserTransformer, } from '@janus-idp/backstage-plugin-keycloak-backend'; const customGroupTransformer: GroupTransformer = async ( entity, // entity output from default parser realm, // Keycloak realm name groups, // Keycloak group representation ) => { /* apply transformations */ return entity; }; const customUserTransformer: UserTransformer = async ( entity, // entity output from default parser user, // Keycloak user representation realm, // Keycloak realm name groups, // Keycloak group representation ) => { /* apply transformations */ return entity; }; export const keycloakBackendModuleTransformer = createBackendModule({ pluginId: 'catalog', moduleId: 'keycloak-transformer', register(reg) { reg.registerInit({ deps: { keycloak: keycloakTransformerExtensionPoint, }, async init({ keycloak }) { keycloak.setUserTransformer(customUserTransformer); keycloak.setGroupTransformer(customGroupTransformer); /* highlight-add-end */ }, }); }, });",
"backend.add(import(backstage-plugin-catalog-backend-module-keycloak-transformer))",
"{\"class\":\"KeycloakOrgEntityProvider\",\"level\":\"info\",\"message\":\"Read 3 Keycloak users and 2 Keycloak groups in 1.5 seconds. Committing...\",\"plugin\":\"catalog\",\"service\":\"backstage\",\"taskId\":\"KeycloakOrgEntityProvider:default:refresh\",\"taskInstanceId\":\"bf0467ff-8ac4-4702-911c-380270e44dea\",\"timestamp\":\"2024-09-25 13:58:04\"} {\"class\":\"KeycloakOrgEntityProvider\",\"level\":\"info\",\"message\":\"Committed 3 Keycloak users and 2 Keycloak groups in 0.0 seconds.\",\"plugin\":\"catalog\",\"service\":\"backstage\",\"taskId\":\"KeycloakOrgEntityProvider:default:refresh\",\"taskInstanceId\":\"bf0467ff-8ac4-4702-911c-380270e44dea\",\"timestamp\":\"2024-09-25 13:58:04\"}"
] |
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/authentication/assembly-authenticating-with-rhsso
|
21.8. virt-resize: Resizing Guest Virtual Machines Offline
|
21.8. virt-resize: Resizing Guest Virtual Machines Offline This section provides information about resizing offline guest virtual machines. 21.8.1. Introduction This section describes virt-resize , a tool for expanding or shrinking guest virtual machines. It only works for guest virtual machines that are offline (shut down). It works by copying the guest virtual machine image and leaving the original disk image untouched. This is ideal because you can use the original image as a backup, however there is a trade-off as you need twice the amount of disk space. 21.8.2. Expanding a Disk Image This section demonstrates a simple case of expanding a disk image: Locate the disk image to be resized. You can use the command virsh dumpxml GuestName for a libvirt guest virtual machine. Decide on how you wish to expand the guest virtual machine. Run virt-df -h and virt-filesystems on the guest virtual machine disk, as shown in the following output: The following example demonstrates how to: Increase the size of the first (boot) partition, from approximately 100MB to 500MB. Increase the total disk size from 8GB to 16GB. Expand the second partition to fill the remaining space. Expand /dev/VolGroup00/LogVol00 to fill the new space in the second partition. Make sure the guest virtual machine is shut down. Rename the original disk as the backup. How you do this depends on the host physical machine storage environment for the original disk. If it is stored as a file, use the mv command. For logical volumes (as demonstrated in this example), use lvrename : Create the new disk. The requirements in this example are to expand the total disk size up to 16GB. Since logical volumes are used here, the following command is used: The requirements from step 2 are expressed by this command: The first two arguments are the input disk and output disk. --resize /dev/sda1=500M resizes the first partition up to 500MB. --expand /dev/sda2 expands the second partition to fill all remaining space. --LV-expand /dev/VolGroup00/LogVol00 expands the guest virtual machine logical volume to fill the extra space in the second partition. virt-resize describes what it is doing in the output: Try to boot the virtual machine. If it works (and after testing it thoroughly) you can delete the backup disk. If it fails, shut down the virtual machine, delete the new disk, and rename the backup disk back to its original name. Use virt-df or virt-filesystems to show the new size: Note that resizing guest virtual machines in some cases may become problematic. If virt-resize fails, there are a number of tips that you can review and attempt in the virt-resize(1) man page. For some older Red Hat Enterprise Linux guest virtual machines, you may need to pay particular attention to the tip regarding GRUB.
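The worked example above uses logical volumes. When the original disk image is stored as a file, the flow is the same but uses the mv command mentioned for renaming the original disk; a rough sketch under that assumption, with the file names and the 16GB target size chosen for illustration:
mv /var/lib/libvirt/images/RHEL6.img /var/lib/libvirt/images/RHEL6.img.backup
truncate -s 16G /var/lib/libvirt/images/RHEL6.img
virt-resize /var/lib/libvirt/images/RHEL6.img.backup /var/lib/libvirt/images/RHEL6.img --resize /dev/sda1=500M --expand /dev/sda2 --LV-expand /dev/VolGroup00/LogVol00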
|
[
"virt-df -h -a /dev/vg_guests/RHEL6 Filesystem Size Used Available Use% RHEL6:/dev/sda1 98.7M 10.0M 83.6M 11% RHEL6:/dev/VolGroup00/LogVol00 6.8G 2.2G 4.3G 32% virt-filesystems -a disk.img --all --long -h /dev/sda1 ext3 101.9M /dev/sda2 pv 7.9G",
"lvrename /dev/vg_guests/RHEL6 /dev/vg_guests/RHEL6.backup",
"lvcreate -L 16G -n RHEL6 /dev/vg_guests Logical volume \"RHEL6\" created",
"virt-resize /dev/vg_guests/RHEL6.backup /dev/vg_guests/RHEL6 --resize /dev/sda1=500M --expand /dev/sda2 --LV-expand /dev/VolGroup00/LogVol00",
"Summary of changes: /dev/sda1: partition will be resized from 101.9M to 500.0M /dev/sda1: content will be expanded using the 'resize2fs' method /dev/sda2: partition will be resized from 7.9G to 15.5G /dev/sda2: content will be expanded using the 'pvresize' method /dev/VolGroup00/LogVol00: LV will be expanded to maximum size /dev/VolGroup00/LogVol00: content will be expanded using the 'resize2fs' method Copying /dev/sda1 [#####################################################] Copying /dev/sda2 [#####################################################] Expanding /dev/sda1 using the 'resize2fs' method Expanding /dev/sda2 using the 'pvresize' method Expanding /dev/VolGroup00/LogVol00 using the 'resize2fs' method",
"virt-df -h -a /dev/vg_pin/RHEL6 Filesystem Size Used Available Use% RHEL6:/dev/sda1 484.4M 10.8M 448.6M 3% RHEL6:/dev/VolGroup00/LogVol00 14.3G 2.2G 11.4G 16%"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Guest_virtual_machine_disk_access_with_offline_tools-virt_resize_resizing_guest_virtual_machines_offline
|
Chapter 6. ImageStreamLayers [image.openshift.io/v1]
|
Chapter 6. ImageStreamLayers [image.openshift.io/v1] Description ImageStreamLayers describes information about the layers referenced by images in this image stream. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required blobs images 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources blobs object blobs is a map of blob name to metadata about the blob. blobs{} object ImageLayerData contains metadata about an image layer. images object images is a map between an image name and the names of the blobs and config that comprise the image. images{} object ImageBlobReferences describes the blob references within an image. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 6.1.1. .blobs Description blobs is a map of blob name to metadata about the blob. Type object 6.1.2. .blobs{} Description ImageLayerData contains metadata about an image layer. Type object Required size mediaType Property Type Description mediaType string MediaType of the referenced object. size integer Size of the layer in bytes as defined by the underlying store. This field is optional if the necessary information about size is not available. 6.1.3. .images Description images is a map between an image name and the names of the blobs and config that comprise the image. Type object 6.1.4. .images{} Description ImageBlobReferences describes the blob references within an image. Type object Property Type Description config string config, if set, is the blob that contains the image config. Some images do not have separate config blobs and this field will be set to nil if so. imageMissing boolean imageMissing is true if the image is referenced by the image stream but the image object has been deleted from the API by an administrator. When this field is set, layers and config fields may be empty and callers that depend on the image metadata should consider the image to be unavailable for download or viewing. layers array (string) layers is the list of blobs that compose this image, from base layer to top layer. All layers referenced by this array will be defined in the blobs map. Some images may have zero layers. manifests array (string) manifests is the list of other image names that this image points to. For a single architecture image, it is empty. For a multi-arch image, it consists of the digests of single architecture images, such images shouldn't have layers nor config. 6.2. API endpoints The following API endpoints are available: /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreams/{name}/layers GET : read layers of the specified ImageStream 6.2.1. /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreams/{name}/layers Table 6.1. 
Global path parameters Parameter Type Description name string name of the ImageStreamLayers HTTP method GET Description read layers of the specified ImageStream Table 6.2. HTTP responses HTTP code Response body 200 - OK ImageStreamLayers schema 401 - Unauthorized Empty
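A brief, hedged example of reading this sub-resource from the command line through the raw API path documented above; the namespace my-project and image stream name my-app are placeholders:
oc get --raw /apis/image.openshift.io/v1/namespaces/my-project/imagestreams/my-app/layers
The response is a single ImageStreamLayers object whose blobs and images maps can then be inspected for layer sizes and media types.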
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/image_apis/imagestreamlayers-image-openshift-io-v1
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/how_to_configure_server_security/making-open-source-more-inclusive
|
Chapter 48. Next steps
|
Chapter 48. Next steps Testing a decision service using test scenarios Packaging and deploying a Red Hat Process Automation Manager project
| null |
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/next_steps_3
|
Chapter 10. Live migration
|
Chapter 10. Live migration 10.1. About live migration Live migration is the process of moving a running virtual machine (VM) to another node in the cluster without interrupting the virtual workload. Live migration enables smooth transitions during cluster upgrades or any time a node needs to be drained for maintenance or configuration changes. By default, live migration traffic is encrypted using Transport Layer Security (TLS). 10.1.1. Live migration requirements Live migration has the following requirements: The cluster must have shared storage with ReadWriteMany (RWX) access mode. The cluster must have sufficient RAM and network bandwidth. Note You must ensure that there is enough memory request capacity in the cluster to support node drains that result in live migrations. You can determine the approximate required spare memory by using the following calculation: The default number of migrations that can run in parallel in the cluster is 5. If a VM uses a host model CPU, the nodes must support the CPU. Configuring a dedicated Multus network for live migration is highly recommended. A dedicated network minimizes the effects of network saturation on tenant workloads during migration. 10.1.2. VM migration tuning You can adjust your cluster-wide live migration settings based on the type of workload and migration scenario. This enables you to control how many VMs migrate at the same time, the network bandwidth you want to use for each migration, and how long OpenShift Virtualization attempts to complete the migration before canceling the process. Configure these settings in the HyperConverged custom resource (CR). If you are migrating multiple VMs per node at the same time, set a bandwidthPerMigration limit to prevent a large or busy VM from using a large portion of the node's network bandwidth. By default, the bandwidthPerMigration value is 0 , which means unlimited. A large VM running a heavy workload (for example, database processing), with higher memory dirty rates, requires a higher bandwidth to complete the migration. Note Post copy mode, when enabled, triggers if the initial pre-copy phase does not complete within the defined timeout. During post copy, the VM CPUs pause on the source host while transferring the minimum required memory pages. Then the VM CPUs activate on the destination host, and the remaining memory pages transfer into the destination node at runtime. This can impact performance during the transfer. Post copy mode should not be used for critical data, or with unstable networks. 10.1.3. Common live migration tasks You can perform the following live migration tasks: Configure live migration settings Configure live migration for heavy workloads Initiate and cancel live migration Monitor the progress of all live migrations in the Migration tab of the OpenShift Container Platform web console. View VM migration metrics in the Metrics tab of the web console. 10.1.4. Additional resources Prometheus queries for live migration VM run strategies VM and cluster eviction strategies 10.2. Configuring live migration You can configure live migration settings to ensure that the migration processes do not overwhelm the cluster. You can configure live migration policies to apply different migration configurations to groups of virtual machines (VMs). 10.2.1. Configuring live migration limits and timeouts Configure live migration limits and timeouts for the cluster by updating the HyperConverged custom resource (CR), which is located in the openshift-cnv namespace. 
Procedure Edit the HyperConverged CR and add the necessary live migration parameters: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Example configuration file apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: bandwidthPerMigration: 64Mi 1 completionTimeoutPerGiB: 800 2 parallelMigrationsPerCluster: 5 3 parallelOutboundMigrationsPerNode: 2 4 progressTimeout: 150 5 allowPostCopy: false 6 1 Bandwidth limit of each migration, where the value is the quantity of bytes per second. For example, a value of 2048Mi means 2048 MiB/s. Default: 0 , which is unlimited. 2 The migration is canceled if it has not completed in this time, in seconds per GiB of memory. For example, a VM with 6GiB memory times out if it has not completed migration in 4800 seconds. If the Migration Method is BlockMigration , the size of the migrating disks is included in the calculation. 3 Number of migrations running in parallel in the cluster. Default: 5 . 4 Maximum number of outbound migrations per node. Default: 2 . 5 The migration is canceled if memory copy fails to make progress in this time, in seconds. Default: 150 . 6 If a VM is running a heavy workload and the memory dirty rate is too high, this can prevent the migration from one node to another from converging. To prevent this, you can enable post copy mode. By default, allowPostCopy is set to false . Note You can restore the default value for any spec.liveMigrationConfig field by deleting that key/value pair and saving the file. For example, delete progressTimeout: <value> to restore the default progressTimeout: 150 . 10.2.2. Configure live migration for heavy workloads When migrating a VM running a heavy workload (for example, database processing) with higher memory dirty rates, you need a higher bandwidth to complete the migration. If the dirty rate is too high, the migration from one node to another does not converge. To prevent this, enable post copy mode. Post copy mode triggers if the initial pre-copy phase does not complete within the defined timeout. During post copy, the VM CPUs pause on the source host while transferring the minimum required memory pages. Then the VM CPUs activate on the destination host, and the remaining memory pages transfer into the destination node at runtime. Configure live migration for heavy workloads by updating the HyperConverged custom resource (CR), which is located in the openshift-cnv namespace. Procedure Edit the HyperConverged CR and add the necessary parameters for migrating heavy workloads: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Example configuration file apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: bandwidthPerMigration: 0Mi 1 completionTimeoutPerGiB: 150 2 parallelMigrationsPerCluster: 5 3 parallelOutboundMigrationsPerNode: 1 4 progressTimeout: 150 5 allowPostCopy: true 6 1 Bandwidth limit of each migration, where the value is the quantity of bytes per second. The default is 0 , which is unlimited. 2 The migration is canceled if it is not completed in this time, and triggers post copy mode, when post copy is enabled. This value is measured in seconds per GiB of memory. You can lower completionTimeoutPerGiB to trigger post copy mode earlier in the migration process, or raise the completionTimeoutPerGiB to trigger post copy mode later in the migration process. 
3 Number of migrations running in parallel in the cluster. The default is 5 . Keeping the parallelMigrationsPerCluster setting low is better when migrating heavy workloads. 4 Maximum number of outbound migrations per node. Configure a single VM per node for heavy workloads. 5 The migration is canceled if memory copy fails to make progress in this time. This value is measured in seconds. Increase this parameter for large memory sizes running heavy workloads. 6 Use post copy mode when memory dirty rates are high to ensure the migration converges. Set allowPostCopy to true to enable post copy mode. Optional: If your main network is too busy for the migration, configure a secondary, dedicated migration network. Note Post copy mode can impact performance during the transfer, and should not be used for critical data, or with unstable networks. 10.2.3. Additional resources Configuring a dedicated network for live migration 10.2.4. Live migration policies You can create live migration policies to apply different migration configurations to groups of VMs that are defined by VM or project labels. Tip You can create live migration policies by using the OpenShift Container Platform web console. 10.2.4.1. Creating a live migration policy by using the command line You can create a live migration policy by using the command line. KubeVirt applies the live migration policy to selected virtual machines (VMs) by using any combination of labels: VM labels such as size , os , or gpu Project labels such as priority , bandwidth , or hpc-workload For the policy to apply to a specific group of VMs, all labels on the group of VMs must match the labels of the policy. Note If multiple live migration policies apply to a VM, the policy with the greatest number of matching labels takes precedence. If multiple policies meet this criteria, the policies are sorted by alphabetical order of the matching label keys, and the first one in that order takes precedence. Procedure Edit the VM object to which you want to apply a live migration policy, and add the corresponding VM labels. Open the YAML configuration of the resource: USD oc edit vm <vm_name> Adjust the required label values in the .spec.template.metadata.labels section of the configuration. For example, to mark the VM as a production VM for the purposes of migration policies, add the kubevirt.io/environment: production line: apiVersion: migrations.kubevirt.io/v1alpha1 kind: VirtualMachine metadata: name: <vm_name> namespace: default labels: app: my-app environment: production spec: template: metadata: labels: kubevirt.io/domain: <vm_name> kubevirt.io/size: large kubevirt.io/environment: production # ... Save and exit the configuration. Configure a MigrationPolicy object with the corresponding labels. The following example configures a policy that applies to all VMs that are labeled as production : apiVersion: migrations.kubevirt.io/v1alpha1 kind: MigrationPolicy metadata: name: <migration_policy> spec: selectors: namespaceSelector: 1 hpc-workloads: "True" xyz-workloads-type: "" virtualMachineInstanceSelector: 2 kubevirt.io/environment: "production" 1 Specify project labels. 2 Specify VM labels. Create the migration policy by running the following command: USD oc create -f <migration_policy>.yaml 10.2.5. Additional resources Configuring a dedicated Multus network for live migration 10.3. 
Initiating and canceling live migration You can initiate the live migration of a virtual machine (VM) to another node by using the OpenShift Container Platform web console or the command line . You can cancel a live migration by using the web console or the command line . The VM remains on its original node. Tip You can also initiate and cancel live migration by using the virtctl migrate <vm_name> and virtctl migrate-cancel <vm_name> commands. 10.3.1. Initiating live migration 10.3.1.1. Initiating live migration by using the web console You can live migrate a running virtual machine (VM) to a different node in the cluster by using the OpenShift Container Platform web console. Note The Migrate action is visible to all users but only cluster administrators can initiate a live migration. Prerequisites The VM must be migratable. If the VM is configured with a host model CPU, the cluster must have an available node that supports the CPU model. Procedure Navigate to Virtualization VirtualMachines in the web console. Select Migrate from the Options menu beside a VM. Click Migrate . 10.3.1.2. Initiating live migration by using the command line You can initiate the live migration of a running virtual machine (VM) by using the command line to create a VirtualMachineInstanceMigration object for the VM. Procedure Create a VirtualMachineInstanceMigration manifest for the VM that you want to migrate: apiVersion: kubevirt.io/v1 kind: VirtualMachineInstanceMigration metadata: name: <migration_name> spec: vmiName: <vm_name> Create the object by running the following command: USD oc create -f <migration_name>.yaml The VirtualMachineInstanceMigration object triggers a live migration of the VM. This object exists in the cluster for as long as the virtual machine instance is running, unless manually deleted. Verification Obtain the VM status by running the following command: USD oc describe vmi <vm_name> -n <namespace> Example output # ... Status: Conditions: Last Probe Time: <nil> Last Transition Time: <nil> Status: True Type: LiveMigratable Migration Method: LiveMigration Migration State: Completed: true End Timestamp: 2018-12-24T06:19:42Z Migration UID: d78c8962-0743-11e9-a540-fa163e0c69f1 Source Node: node2.example.com Start Timestamp: 2018-12-24T06:19:35Z Target Node: node1.example.com Target Node Address: 10.9.0.18:43891 Target Node Domain Detected: true 10.3.2. Canceling live migration 10.3.2.1. Canceling live migration by using the web console You can cancel the live migration of a virtual machine (VM) by using the OpenShift Container Platform web console. Procedure Navigate to Virtualization VirtualMachines in the web console. Select Cancel Migration on the Options menu beside a VM. 10.3.2.2. Canceling live migration by using the command line Cancel the live migration of a virtual machine by deleting the VirtualMachineInstanceMigration object associated with the migration. Procedure Delete the VirtualMachineInstanceMigration object that triggered the live migration, migration-job in this example: USD oc delete vmim migration-job
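As a worked instance of the spare-memory estimate in 10.1.1 (both figures below are assumptions for illustration, not values from this guide): if at most 2 nodes can drain in parallel and the busiest node carries 64 GiB of total VM memory requests, the cluster should keep roughly 2 x 64 GiB = 128 GiB of free memory request capacity available to absorb the resulting live migrations.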
|
[
"Product of (Maximum number of nodes that can drain in parallel) and (Highest total VM memory request allocations across nodes)",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: bandwidthPerMigration: 64Mi 1 completionTimeoutPerGiB: 800 2 parallelMigrationsPerCluster: 5 3 parallelOutboundMigrationsPerNode: 2 4 progressTimeout: 150 5 allowPostCopy: false 6",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: bandwidthPerMigration: 0Mi 1 completionTimeoutPerGiB: 150 2 parallelMigrationsPerCluster: 5 3 parallelOutboundMigrationsPerNode: 1 4 progressTimeout: 150 5 allowPostCopy: true 6",
"oc edit vm <vm_name>",
"apiVersion: migrations.kubevirt.io/v1alpha1 kind: VirtualMachine metadata: name: <vm_name> namespace: default labels: app: my-app environment: production spec: template: metadata: labels: kubevirt.io/domain: <vm_name> kubevirt.io/size: large kubevirt.io/environment: production",
"apiVersion: migrations.kubevirt.io/v1alpha1 kind: MigrationPolicy metadata: name: <migration_policy> spec: selectors: namespaceSelector: 1 hpc-workloads: \"True\" xyz-workloads-type: \"\" virtualMachineInstanceSelector: 2 kubevirt.io/environment: \"production\"",
"oc create -f <migration_policy>.yaml",
"apiVersion: kubevirt.io/v1 kind: VirtualMachineInstanceMigration metadata: name: <migration_name> spec: vmiName: <vm_name>",
"oc create -f <migration_name>.yaml",
"oc describe vmi <vm_name> -n <namespace>",
"Status: Conditions: Last Probe Time: <nil> Last Transition Time: <nil> Status: True Type: LiveMigratable Migration Method: LiveMigration Migration State: Completed: true End Timestamp: 2018-12-24T06:19:42Z Migration UID: d78c8962-0743-11e9-a540-fa163e0c69f1 Source Node: node2.example.com Start Timestamp: 2018-12-24T06:19:35Z Target Node: node1.example.com Target Node Address: 10.9.0.18:43891 Target Node Domain Detected: true",
"oc delete vmim migration-job"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/virtualization/live-migration
|
Performance considerations for operator environments
|
Performance considerations for operator environments Red Hat Ansible Automation Platform 2.5 Configure automation controller for improved performance on operator-based installations Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/performance_considerations_for_operator_environments/index
|
Chapter 1. Red Hat OpenShift support for Windows Containers overview
|
Chapter 1. Red Hat OpenShift support for Windows Containers overview Red Hat OpenShift support for Windows Containers is a feature providing the ability to run Windows compute nodes in an OpenShift Container Platform cluster. This is possible by using the Red Hat Windows Machine Config Operator (WMCO) to install and manage Windows nodes. With a Red Hat subscription, you can get support for running Windows workloads in OpenShift Container Platform. For more information, see the release notes . For workloads including both Linux and Windows, OpenShift Container Platform allows you to deploy Windows workloads running on Windows Server containers while also providing traditional Linux workloads hosted on Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL). For more information, see getting started with Windows container workloads . You need the WMCO to run Windows workloads in your cluster. The WMCO orchestrates the process of deploying and managing Windows workloads on a cluster. For more information, see how to enable Windows container workloads . You can create a Windows MachineSet object to create infrastructure Windows machine sets and related machines so that you can move supported Windows workloads to the new Windows machines. You can create a Windows MachineSet object on multiple platforms. You can schedule Windows workloads to Windows compute nodes. You can perform Windows Machine Config Operator upgrades to ensure that your Windows nodes have the latest updates. You can remove a Windows node by deleting a specific machine. You can use Bring-Your-Own-Host (BYOH) Windows instances to repurpose Windows Server VMs and bring them to OpenShift Container Platform. BYOH Windows instances benefit users who are looking to mitigate major disruptions in the event that a Windows server goes offline. You can use BYOH Windows instances as nodes on OpenShift Container Platform 4.8 and later versions. You can disable Windows container workloads by performing the following: Uninstalling the Windows Machine Config Operator Deleting the Windows Machine Config Operator namespace
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/windows_container_support_for_openshift/windows-container-overview
|
Chapter 2. Next steps for managing your costs
|
Chapter 2. steps for managing your costs After adding your OpenShift Container Platform and cloud infrastructure integrations, in addition to showing cost data by integration, cost management will automatically show AWS and Microsoft Azure cost and usage related to running your OpenShift Container Platform clusters on their platforms. On the cost management Overview page, your cost data is sorted into OpenShift and Infrastructure tabs. Select Perspective to toggle through different views of your cost data. You can also use the global navigation menu to view additional details about your costs by cloud provider. Additional resources Integrating Amazon Web Services (AWS) data into cost management Integrating Google Cloud data into cost management Integrating Microsoft Azure data into cost management Integrating Amazon Web Services (AWS) data into cost management 2.1. Limiting access to cost management resources After you add and configure integrations in cost management, you can limit access to cost data and resources. You might not want users to have access to all of your cost data. Instead, you can grant users access only to data that is specific to their projects or organizations. With role-based access control, you can limit the visibility of resources in cost management reports. For example, you can restrict a user's view to only AWS integrations, rather than the entire environment. To learn how to limit access, see the more in-depth guide Limiting access to cost management resources . 2.2. Configuring tagging for your integrations The cost management application tracks cloud and infrastructure costs with tags. Tags are also known as labels in OpenShift. You can refine tags in cost management to filter and attribute resources, organize your resources by cost, and allocate costs to different parts of your cloud infrastructure. Important You can only configure tags and labels directly on an integration. You can choose the tags that you activate in cost management, however, you cannot edit tags and labels in the cost management application. To learn more about the following topics, see Managing cost data using tagging : Planning your tagging strategy to organize your view of cost data Understanding how cost management associates tags Configuring tags and labels on your integrations 2.3. Configuring cost models to accurately report costs Now that you configured your integrations to collect cost and usage data in cost management, you can configure cost models to associate prices to metrics and usage. A cost model is a framework that uses raw costs and metrics to define calculations for the costs in cost management. You can record, categorize, and distribute the costs that the cost model generates to specific customers, business units, or projects. In Cost Models , you can complete the following tasks: Classifying your costs as infrastructure or supplementary costs Capturing monthly costs for OpenShift nodes and clusters Applying a markup to account for additional support costs To learn how to configure a cost model, see Using cost models . 2.4. Visualizing your costs with Cost Explorer Use cost management Cost Explorer to create custom graphs of time-scaled cost and usage information and ultimately better visualize and interpret your costs. 
To learn more about the following topics, see Visualizing your costs using Cost Explorer : Using Cost Explorer to identify abnormal events Understanding how your cost data changes over time Creating custom bar charts of your cost and usage data Exporting custom cost data tables
| null |
https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/integrating_openshift_container_platform_data_into_cost_management/assembly-cost-management-next-steps-ocp
|
Chapter 1. Initial Troubleshooting
|
Chapter 1. Initial Troubleshooting This chapter includes information on: How to start troubleshooting Ceph errors ( Identifying problems ) Most common ceph health error messages ( Understanding Ceph Health ) Most common Ceph log error messages ( Understanding Ceph log ) 1.1. Prerequisites A running Red Hat Ceph Storage cluster. 1.2. Identifying problems To determine possible causes of the error with the Red Hat Ceph Storage cluster, answer the questions in the Procedure section. Prerequisites A running Red Hat Ceph Storage cluster. Procedure Certain problems can arise when using unsupported configurations. Ensure that your configuration is supported. Do you know what Ceph component causes the problem? No. Follow the Diagnosing the health of a Ceph storage cluster procedure in the Red Hat Ceph Storage Troubleshooting Guide . Ceph Monitors. See the Troubleshooting Ceph Monitors section in the Red Hat Ceph Storage Troubleshooting Guide . Ceph OSDs. See the Troubleshooting Ceph OSDs section in the Red Hat Ceph Storage Troubleshooting Guide . Ceph placement groups. See the Troubleshooting Ceph placement groups section in the Red Hat Ceph Storage Troubleshooting Guide . Multi-site Ceph Object Gateway. See the Troubleshooting a multi-site Ceph Object Gateway section in the Red Hat Ceph Storage Troubleshooting Guide . Additional Resources See the Red Hat Ceph Storage: Supported configurations article for details. 1.2.1. Diagnosing the health of a storage cluster This procedure lists basic steps to diagnose the health of a Red Hat Ceph Storage cluster. Prerequisites A running Red Hat Ceph Storage cluster. Procedure Check the overall status of the storage cluster: If the command returns HEALTH_WARN or HEALTH_ERR, see Understanding Ceph health for details. Check the Ceph logs for any error messages listed in Understanding Ceph logs . The logs are located by default in the /var/log/ceph/ directory. If the logs do not include a sufficient amount of information, increase the debugging level and try to reproduce the action that failed. See Configuring logging for details. 1.3. Understanding Ceph health The ceph health command returns information about the status of the Red Hat Ceph Storage cluster: HEALTH_OK indicates that the cluster is healthy. HEALTH_WARN indicates a warning. In some cases, the Ceph status returns to HEALTH_OK automatically. For example, when the Red Hat Ceph Storage cluster finishes the rebalancing process. However, consider further troubleshooting if a cluster is in the HEALTH_WARN state for a longer time. HEALTH_ERR indicates a more serious problem that requires your immediate attention. Use the ceph health detail and ceph -s commands to get a more detailed output. Additional Resources See the Ceph Monitor error messages table in the Red Hat Ceph Storage Troubleshooting Guide . See the Ceph OSD error messages table in the Red Hat Ceph Storage Troubleshooting Guide . See the Placement group error messages table in the Red Hat Ceph Storage Troubleshooting Guide . 1.4. Understanding Ceph logs 1.4.1. Non-containerized deployment By default, Ceph stores its logs in the /var/log/ceph/ directory. The CLUSTER_NAME .log is the main storage cluster log file that includes global events. By default, the log file name is ceph.log . Only the Ceph Monitor nodes include the main storage cluster log. Each Ceph OSD and Monitor has its own log file, named CLUSTER_NAME -osd. NUMBER .log and CLUSTER_NAME -mon. HOSTNAME .log . 
When you increase the debugging level for Ceph subsystems, Ceph generates new log files for those subsystems as well. 1.4.2. Container-based deployment For container-based deployment, by default, Ceph logs to journald , accessible using the journalctl command. However, you can configure Ceph to log to files in /var/log/ceph in the configuration settings. To enable file logging for Ceph Monitors, Ceph Manager, Ceph Object Gateway, and any other daemons, set log_to_file to true under [global] settings. Example To enable logging for the Ceph Monitor cluster and audit logs, set mon_cluster_log_to_file to true . Example Note If you choose to log to files, it is recommended to disable logging to journald or else everything is logged twice. Run the following commands to disable logging to journald : Additional Resources For details about logging, see Configuring logging in the Red Hat Ceph Storage Troubleshooting Guide . See the Common Ceph Monitor error messages in the Ceph logs table in the Red Hat Ceph Storage Troubleshooting Guide . See the Common Ceph OSD error messages in the Ceph logs table in the Red Hat Ceph Storage Troubleshooting Guide . 1.5. Gathering logs from multiple hosts in a Ceph cluster using Ansible Starting with Red Hat Ceph Storage 4.2, you can use ceph-ansible to gather logs from multiple hosts in a Ceph cluster. It captures the /etc/ceph and /var/log/ceph directories from the Ceph nodes. This playbook can be used to collect logs for both bare-metal and containerized storage clusters. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the nodes. The ceph-ansible package is installed on the node. Procedure Log into the Ansible administration node as an ansible user. Note Ensure the node has adequate space to collect the logs from the hosts. Navigate to the /usr/share/ceph-ansible directory: Example Run the Ansible playbook to gather the logs: Example The logs are stored in the /tmp directory of the Ansible node.
|
[
"ceph health detail",
"ceph config set global log_to_file true",
"ceph config set mon mon_cluster_log_to_file true",
"ceph config set global log_to_journald false ceph config set global mon_cluster_log_to_journald false",
"cd /usr/share/ceph-ansible",
"ansible@admin ceph-ansible]USD ansible-playbook infrastructure-playbooks/gather-ceph-logs.yml -i hosts"
] |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/troubleshooting_guide/initial-troubleshooting
|