Chapter 1. Preparing to deploy OpenShift Data Foundation
Chapter 1. Preparing to deploy OpenShift Data Foundation

When you deploy OpenShift Data Foundation on OpenShift Container Platform using local storage devices, you can create internal cluster resources. This approach internally provisions base services so that all applications can access additional storage classes.

Before you begin the deployment of Red Hat OpenShift Data Foundation using local storage, ensure that your resource requirements are met. See Requirements for installing OpenShift Data Foundation using local storage devices.

If you use an external key management system (KMS) and select the Token authentication method for encryption, refer to Enabling cluster-wide encryption with the Token authentication using KMS. Ensure that you are using signed certificates on your Vault servers.

After you have addressed the above, follow these steps in the order given:
Install the Red Hat OpenShift Data Foundation Operator.
Install the Local Storage Operator.
Find the available storage devices.
Create the OpenShift Data Foundation cluster service on IBM Z.

1.1. Requirements for installing OpenShift Data Foundation using local storage devices

Node requirements
The cluster must consist of at least three OpenShift Container Platform worker or infrastructure nodes, each with locally attached storage devices. Each of the three selected nodes must have at least one raw block device available; OpenShift Data Foundation uses one or more of these available raw block devices. The devices that you use must be empty; the disks must not contain any Physical Volumes (PVs), Volume Groups (VGs), or Logical Volumes (LVs). For more information, see the Resource requirements section in the Planning guide.

1.2. Enabling cluster-wide encryption with KMS using the Token authentication method

You can enable the key value backend path and policy in the vault for token authentication.

Prerequisites
Administrator access to the vault.
A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions.
Carefully select a unique path name as the backend path that follows the naming convention, because you cannot change it later.

Procedure
Enable the Key/Value (KV) backend path in the vault.
For vault KV secret engine API, version 1:
For vault KV secret engine API, version 2:
Create a policy to restrict users to performing a write or delete operation on the secret:
Create a token that matches the above policy:
[ "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault token create -policy=odf -format json" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_openshift_data_foundation_using_ibm_z/preparing_to_deploy_openshift_data_foundation
Installation Guide
Installation Guide Red Hat Ceph Storage 8 Installing Red Hat Ceph Storage on Red Hat Enterprise Linux Red Hat Ceph Storage Documentation Team
[ "ceph soft nofile unlimited", "USER_NAME soft nproc unlimited", "subscription-manager register", "subscription-manager refresh", "subscription-manager list --available --matches ' Red Hat Ceph Storage '", "subscription-manager attach --pool= POOL_ID", "subscription-manager repos --disable=* subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms subscription-manager repos --enable=rhel-9-for-x86_64-appstream-rpms", "dnf update", "subscription-manager repos --enable=rhceph-8-tools-for-rhel-9-x86_64-rpms", "dnf install cephadm-ansible", "cd /usr/share/cephadm-ansible", "mkdir -p inventory/staging inventory/production", "[defaults] inventory = ./inventory/staging", "touch inventory/staging/hosts touch inventory/production/hosts", "NODE_NAME_1 NODE_NAME_2 [admin] ADMIN_NODE_NAME_1", "host02 host03 host04 [admin] host01", "ansible-playbook -i inventory/staging/hosts PLAYBOOK.yml", "ansible-playbook -i inventory/production/hosts PLAYBOOK.yml", "ssh root@myhostname root@myhostname password: Permission denied, please try again.", "echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config.d/01-permitrootlogin.conf", "systemctl restart sshd.service", "ssh root@ HOST_NAME", "ssh root@host01", "ssh root@ HOST_NAME", "ssh root@host01", "adduser USER_NAME", "adduser ceph-admin", "passwd USER_NAME", "passwd ceph-admin", "cat << EOF >/etc/sudoers.d/ USER_NAME USDUSER_NAME ALL = (root) NOPASSWD:ALL EOF", "cat << EOF >/etc/sudoers.d/ceph-admin ceph-admin ALL = (root) NOPASSWD:ALL EOF", "chmod 0440 /etc/sudoers.d/ USER_NAME", "chmod 0440 /etc/sudoers.d/ceph-admin", "[ceph-admin@admin ~]USD ssh-keygen", "ssh-copy-id USER_NAME @ HOST_NAME", "[ceph-admin@admin ~]USD ssh-copy-id ceph-admin@host01", "[ceph-admin@admin ~]USD touch ~/.ssh/config", "Host host01 Hostname HOST_NAME User USER_NAME Host host02 Hostname HOST_NAME User USER_NAME", "Host host01 Hostname host01 User ceph-admin Host host02 Hostname host02 User ceph-admin Host host03 Hostname host03 User ceph-admin", "[ceph-admin@admin ~]USD chmod 600 ~/.ssh/config", "host02 host03 host04 [admin] host01", "host02 host03 host04 [admin] host01", "ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\"", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\"", "ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit GROUP_NAME | NODE_NAME", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit clients [ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit host01", "cephadm bootstrap --cluster-network NETWORK_CIDR --mon-ip IP_ADDRESS --registry-url registry.redhat.io --registry-username USER_NAME --registry-password PASSWORD --yes-i-know", "cephadm bootstrap --cluster-network 10.10.128.0/24 --mon-ip 10.10.128.68 --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1 --yes-i-know", "Ceph Dashboard is now available at: URL: https://host01:8443/ User: admin Password: i8nhu7zham Enabling client.admin keyring and conf on hosts with \"admin\" label You can access the Ceph CLI with: sudo /usr/sbin/cephadm shell --fsid 266ee7a8-2a05-11eb-b846-5254002d4916 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring Please consider enabling telemetry to help improve Ceph: ceph telemetry on For more information see: 
https://docs.ceph.com/docs/master/mgr/telemetry/ Bootstrap complete.", "cephadm bootstrap --ssh-user USER_NAME --mon-ip IP_ADDRESS --allow-fqdn-hostname --registry-json REGISTRY_JSON", "cephadm bootstrap --ssh-user ceph --mon-ip 10.10.128.68 --allow-fqdn-hostname --registry-json /etc/mylogin.json", "{ \"url\":\" REGISTRY_URL \", \"username\":\" USER_NAME \", \"password\":\" PASSWORD \" }", "{ \"url\":\"registry.redhat.io\", \"username\":\"myuser1\", \"password\":\"mypassword1\" }", "cephadm bootstrap --mon-ip IP_ADDRESS --registry-json /etc/mylogin.json", "cephadm bootstrap --mon-ip 10.10.128.68 --registry-json /etc/mylogin.json", "service_type: host addr: host01 hostname: host01 --- service_type: host addr: host02 hostname: host02 --- service_type: host addr: host03 hostname: host03 --- service_type: host addr: host04 hostname: host04 --- service_type: mon placement: host_pattern: \"host[0-2]\" --- service_type: osd service_id: my_osds placement: host_pattern: \"host[1-3]\" data_devices: all: true", "cephadm bootstrap --apply-spec CONFIGURATION_FILE_NAME --mon-ip MONITOR_IP_ADDRESS --registry-url registry.redhat.io --registry-username USER_NAME --registry-password PASSWORD", "cephadm bootstrap --apply-spec initial-config.yaml --mon-ip 10.10.128.68 --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1", "su - SSH_USER_NAME", "su - ceph Last login: Tue Sep 14 12:00:29 EST 2021 on pts/0", "[ceph@host01 ~]USD ssh host01 Last login: Tue Sep 14 12:03:29 EST 2021 on pts/0", "sudo cephadm bootstrap --ssh-user USER_NAME --mon-ip IP_ADDRESS --ssh-private-key PRIVATE_KEY --ssh-public-key PUBLIC_KEY --registry-url registry.redhat.io --registry-username USER_NAME --registry-password PASSWORD", "sudo cephadm bootstrap --ssh-user ceph --mon-ip 10.10.128.68 --ssh-private-key /home/ceph/.ssh/id_rsa --ssh-public-key /home/ceph/.ssh/id_rsa.pub --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1", "subscription-manager register", "subscription-manager refresh", "subscription-manager list --available --all --matches=\"*Ceph*\"", "subscription-manager attach --pool= POOL_ID", "subscription-manager repos --disable=* subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms subscription-manager repos --enable=rhel-9-for-x86_64-appstream-rpms", "dnf install -y podman httpd-tools", "mkdir -p /opt/registry/{auth,certs,data}", "htpasswd -bBc /opt/registry/auth/htpasswd PRIVATE_REGISTRY_USERNAME PRIVATE_REGISTRY_PASSWORD", "htpasswd -bBc /opt/registry/auth/htpasswd myregistryusername myregistrypassword1", "openssl req -newkey rsa:4096 -nodes -sha256 -keyout /opt/registry/certs/domain.key -x509 -days 365 -out /opt/registry/certs/domain.crt -addext \"subjectAltName = DNS: LOCAL_NODE_FQDN \"", "openssl req -newkey rsa:4096 -nodes -sha256 -keyout /opt/registry/certs/domain.key -x509 -days 365 -out /opt/registry/certs/domain.crt -addext \"subjectAltName = DNS:admin.lab.redhat.com\"", "ln -s /opt/registry/certs/domain.crt /opt/registry/certs/domain.cert", "cp /opt/registry/certs/domain.crt /etc/pki/ca-trust/source/anchors/ update-ca-trust trust list | grep -i \" LOCAL_NODE_FQDN \"", "cp /opt/registry/certs/domain.crt /etc/pki/ca-trust/source/anchors/ update-ca-trust trust list | grep -i \"admin.lab.redhat.com\" label: admin.lab.redhat.com", "scp /opt/registry/certs/domain.crt root@host01:/etc/pki/ca-trust/source/anchors/ ssh root@host01 update-ca-trust trust list | grep -i \"admin.lab.redhat.com\" label: admin.lab.redhat.com", "run 
--restart=always --name NAME_OF_CONTAINER -p 5000:5000 -v /opt/registry/data:/var/lib/registry:z -v /opt/registry/auth:/auth:z -v /opt/registry/certs:/certs:z -e \"REGISTRY_AUTH=htpasswd\" -e \"REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm\" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd -e \"REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt\" -e \"REGISTRY_HTTP_TLS_KEY=/certs/domain.key\" -e REGISTRY_COMPATIBILITY_SCHEMA1_ENABLED=true -d registry:2", "podman run --restart=always --name myprivateregistry -p 5000:5000 -v /opt/registry/data:/var/lib/registry:z -v /opt/registry/auth:/auth:z -v /opt/registry/certs:/certs:z -e \"REGISTRY_AUTH=htpasswd\" -e \"REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm\" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd -e \"REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt\" -e \"REGISTRY_HTTP_TLS_KEY=/certs/domain.key\" -e REGISTRY_COMPATIBILITY_SCHEMA1_ENABLED=true -d registry:2", "unqualified-search-registries = [\"registry.redhat.io\", \"registry.access.redhat.com\", \"registry.fedoraproject.org\", \"registry.centos.org\", \"docker.io\"]", "login registry.redhat.io", "run -v / CERTIFICATE_DIRECTORY_PATH :/certs:Z -v / CERTIFICATE_DIRECTORY_PATH /domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel9/skopeo:8.5-8 skopeo copy --remove-signatures --src-creds RED_HAT_CUSTOMER_PORTAL_LOGIN : RED_HAT_CUSTOMER_PORTAL_PASSWORD --dest-cert-dir=./certs/ --dest-creds PRIVATE_REGISTRY_USERNAME : PRIVATE_REGISTRY_PASSWORD docker://registry.redhat.io/ SRC_IMAGE : SRC_TAG docker:// LOCAL_NODE_FQDN :5000/ DST_IMAGE : DST_TAG", "podman run -v /opt/registry/certs:/certs:Z -v /opt/registry/certs/domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel9/skopeo skopeo copy --remove-signatures --src-creds myusername:mypassword1 --dest-cert-dir=./certs/ --dest-creds myregistryusername:myregistrypassword1 docker://registry.redhat.io/rhceph/rhceph-8-rhel9:latest docker://admin.lab.redhat.com:5000/rhceph/rhceph-8-rhel9:latest podman run -v /opt/registry/certs:/certs:Z -v /opt/registry/certs/domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel9/skopeo skopeo copy --remove-signatures --src-creds myusername:mypassword1 --dest-cert-dir=./certs/ --dest-creds myregistryusername:myregistrypassword1 docker://registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.12 docker://admin.lab.redhat.com:5000/openshift4/ose-prometheus-node-exporter:v4.12 podman run -v /opt/registry/certs:/certs:Z -v /opt/registry/certs/domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel9/skopeo skopeo copy --remove-signatures --src-creds myusername:mypassword1 --dest-cert-dir=./certs/ --dest-creds myregistryusername:myregistrypassword1 docker://registry.redhat.io/rhceph/grafana-rhel9:latest docker://admin.lab.redhat.com:5000/rhceph/grafana-rhel9:latest podman run -v /opt/registry/certs:/certs:Z -v /opt/registry/certs/domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel9/skopeo skopeo copy --remove-signatures --src-creds myusername:mypassword1 --dest-cert-dir=./certs/ --dest-creds myregistryusername:myregistrypassword1 docker://registry.redhat.io/openshift4/ose-prometheus:v4.12 docker://admin.lab.redhat.com:5000/openshift4/ose-prometheus:v4.12 podman run -v /opt/registry/certs:/certs:Z -v /opt/registry/certs/domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel9/skopeo skopeo copy --remove-signatures --src-creds myusername:mypassword1 --dest-cert-dir=./certs/ --dest-creds myregistryusername:myregistrypassword1 
docker://registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.12 docker://admin.lab.redhat.com:5000/openshift4/ose-prometheus-alertmanager:v4.12", "curl -u PRIVATE_REGISTRY_USERNAME : PRIVATE_REGISTRY_PASSWORD https:// LOCAL_NODE_FQDN :5000/v2/_catalog", "curl -u myregistryusername:myregistrypassword1 https://admin.lab.redhat.com:5000/v2/_catalog {\"repositories\":[\"openshift4/ose-prometheus\",\"openshift4/ose-prometheus-alertmanager\",\"openshift4/ose-prometheus-node-exporter\",\"rhceph/rhceph-8-dashboard-rhel9\",\"rhceph/rhceph-8-rhel9\"]}", "host02 host03 host04 [admin] host01", "ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=custom\" -e \"custom_repo_url= CUSTOM_REPO_URL \"", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=custom\" -e \"custom_repo_url=http://mycustomrepo.lab.redhat.com/x86_64/os/\"", "ansible-playbook -vvv -i INVENTORY_HOST_FILE_ cephadm-set-container-insecure-registries.yml -e insecure_registry= REGISTRY_URL", "ansible-playbook -vvv -i hosts cephadm-set-container-insecure-registries.yml -e insecure_registry=host01:5050", "ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=custom\" -e \"custom_repo_url= CUSTOM_REPO_URL \" --limit GROUP_NAME | NODE_NAME", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=custom\" -e \"custom_repo_url=http://mycustomrepo.lab.redhat.com/x86_64/os/\" --limit clients [ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=custom\" -e \"custom_repo_url=http://mycustomrepo.lab.redhat.com/x86_64/os/\" --limit host02", "cephadm --image PRIVATE_REGISTRY_NODE_FQDN :5000/ CUSTOM_IMAGE_NAME : IMAGE_TAG bootstrap --mon-ip IP_ADDRESS --registry-url PRIVATE_REGISTRY_NODE_FQDN :5000 --registry-username PRIVATE_REGISTRY_USERNAME --registry-password PRIVATE_REGISTRY_PASSWORD", "cephadm --image admin.lab.redhat.com:5000/rhceph-8-rhel9:latest bootstrap --mon-ip 10.10.128.68 --registry-url admin.lab.redhat.com:5000 --registry-username myregistryusername --registry-password myregistrypassword1", "Ceph Dashboard is now available at: URL: https://host01:8443/ User: admin Password: i8nhu7zham Enabling client.admin keyring and conf on hosts with \"admin\" label You can access the Ceph CLI with: sudo /usr/sbin/cephadm shell --fsid 266ee7a8-2a05-11eb-b846-5254002d4916 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring Please consider enabling telemetry to help improve Ceph: ceph telemetry on For more information see: https://docs.ceph.com/docs/master/mgr/telemetry/ Bootstrap complete.", "ceph cephadm registry-login --registry-url CUSTOM_REGISTRY_NAME --registry_username REGISTRY_USERNAME --registry_password REGISTRY_PASSWORD", "ceph cephadm registry-login --registry-url myregistry --registry_username myregistryusername --registry_password myregistrypassword1", "ceph config set mgr mgr/cephadm/ OPTION_NAME CUSTOM_REGISTRY_NAME / CONTAINER_NAME", "container_image_prometheus container_image_grafana container_image_alertmanager container_image_node_exporter", "ceph config set mgr mgr/cephadm/container_image_prometheus myregistry/mycontainer ceph config set mgr mgr/cephadm/container_image_grafana myregistry/mycontainer ceph config set mgr mgr/cephadm/container_image_alertmanager myregistry/mycontainer ceph config set mgr mgr/cephadm/container_image_node_exporter myregistry/mycontainer", "ceph 
orch redeploy node-exporter", "ceph config rm mgr mgr/cephadm/ OPTION_NAME", "ceph config rm mgr mgr/cephadm/container_image_prometheus", "[ansible@admin ~]USD cd /usr/share/cephadm-ansible", "ansible-playbook -i INVENTORY_HOST_FILE cephadm-distribute-ssh-key.yml -e cephadm_ssh_user= USER_NAME -e cephadm_pubkey_path= home/cephadm/ceph.key -e admin_node= ADMIN_NODE_NAME_1", "[ansible@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-distribute-ssh-key.yml -e cephadm_ssh_user=ceph-admin -e cephadm_pubkey_path=/home/cephadm/ceph.key -e admin_node=host01 [ansible@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-distribute-ssh-key.yml -e cephadm_ssh_user=ceph-admin -e admin_node=host01", "cephadm shell ceph -s", "cephadm shell ceph -s", "exit", "podman ps", "cephadm shell ceph -s cluster: id: f64f341c-655d-11eb-8778-fa163e914bcc health: HEALTH_OK services: mon: 3 daemons, quorum host01,host02,host03 (age 94m) mgr: host01.lbnhug(active, since 59m), standbys: host02.rofgay, host03.ohipra mds: 1/1 daemons up, 1 standby osd: 18 osds: 18 up (since 10m), 18 in (since 10m) rgw: 4 daemons active (2 hosts, 1 zones) data: volumes: 1/1 healthy pools: 8 pools, 225 pgs objects: 230 objects, 9.9 KiB usage: 271 MiB used, 269 GiB / 270 GiB avail pgs: 225 active+clean io: client: 85 B/s rd, 0 op/s rd, 0 op/s wr", ".Syntax [source,subs=\"verbatim,quotes\"] ---- ceph cephadm registry-login --registry-url _CUSTOM_REGISTRY_NAME_ --registry_username _REGISTRY_USERNAME_ --registry_password _REGISTRY_PASSWORD_ ----", ".Example ---- ceph cephadm registry-login --registry-url myregistry --registry_username myregistryusername --registry_password myregistrypassword1 ----", "ssh-copy-id -f -i /etc/ceph/ceph.pub user@ NEWHOST", "ssh-copy-id -f -i /etc/ceph/ceph.pub root@host02 ssh-copy-id -f -i /etc/ceph/ceph.pub root@host03", "[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible", "[ceph-admin@admin ~]USD cat hosts host02 host03 host04 [admin] host01", "ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit NEWHOST", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit host02", "ceph orch host add NEWHOST", "ceph orch host add host02 Added host 'host02' with addr '10.10.128.69' ceph orch host add host03 Added host 'host03' with addr '10.10.128.70'", "ceph orch host add HOSTNAME IP_ADDRESS", "ceph orch host add host02 10.10.128.69 Added host 'host02' with addr '10.10.128.69'", "ceph orch host ls", "ceph orch host add HOSTNAME IP_ADDR", "ceph orch host add host01 10.10.128.68", "ceph orch host set-addr HOSTNAME IP_ADDR", "ceph orch host set-addr HOSTNAME IPV4_ADDRESS", "service_type: host addr: hostname: host02 labels: - mon - osd - mgr --- service_type: host addr: hostname: host03 labels: - mon - osd - mgr --- service_type: host addr: hostname: host04 labels: - mon - osd", "ceph orch apply -i hosts.yaml Added host 'host02' with addr '10.10.128.69' Added host 'host03' with addr '10.10.128.70' Added host 'host04' with addr '10.10.128.71'", "cephadm shell --mount hosts.yaml -- ceph orch apply -i /mnt/hosts.yaml", "ceph orch host ls HOST ADDR LABELS STATUS host02 host02 mon osd mgr host03 host03 mon osd mgr host04 host04 mon osd", "cephadm shell", "ceph orch host add HOST_NAME HOST_ADDRESS", "ceph orch host add host03 10.10.128.70", "cephadm shell", "ceph orch host ls", "ceph orch host drain HOSTNAME", "ceph orch host drain host02", "ceph orch osd rm status", "ceph orch ps HOSTNAME", 
"ceph orch ps host02", "ceph orch host rm HOSTNAME", "ceph orch host rm host02", "cephadm shell", "ceph orch host label add HOSTNAME LABEL", "ceph orch host label add host02 mon", "ceph orch host ls", "cephadm shell", "ceph orch host label rm HOSTNAME LABEL", "ceph orch host label rm host02 mon", "ceph orch host ls", "cephadm shell", "ceph orch host ls HOST ADDR LABELS STATUS host01 _admin mon osd mgr host02 mon osd mgr mylabel", "ceph orch apply DAEMON --placement=\"label: LABEL \"", "ceph orch apply prometheus --placement=\"label:mylabel\"", "vi placement.yml", "service_type: prometheus placement: label: \"mylabel\"", "ceph orch apply -i FILENAME", "ceph orch apply -i placement.yml Scheduled prometheus update...", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=prometheus NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID prometheus.host02 host02 *:9095 running (2h) 8m ago 2h 85.3M - 2.22.2 ac25aac5d567 ad8c7593d7c0", "ceph orch apply mon 5", "ceph orch apply mon --unmanaged", "ceph orch host label add HOSTNAME mon", "ceph orch host label add host01 mon", "ceph orch host ls", "ceph orch host label add host02 mon ceph orch host label add host03 mon ceph orch host ls HOST ADDR LABELS STATUS host01 mon host02 mon host03 mon host04 host05 host06", "ceph orch apply mon label:mon", "ceph orch apply mon HOSTNAME1 , HOSTNAME2 , HOSTNAME3", "ceph orch apply mon host01,host02,host03", "[ceph-admin@admin cephadm-ansible]USD ceph cephadm generate-key", "[ceph-admin@admin cephadm-ansible]USD ceph cephadm get-pub-key", "[ceph-admin@admin cephadm-ansible]USDceph cephadm clear-key", "[ceph-admin@admin cephadm-ansible]USD ceph mgr fail", "[ceph-admin@admin cephadm-ansible]USD ceph cephadm set-user <user>", "[ceph-admin@admin cephadm-ansible]USD ceph cephadm set-user user", "ceph cephadm get-pub-key > ~/ceph.pub", "[ceph-admin@admin cephadm-ansible]USD ceph cephadm get-pub-key > ~/ceph.pub", "ssh-copy-id -f -i ~/ceph.pub USER @ HOST", "[ceph-admin@admin cephadm-ansible]USD ssh-copy-id ceph-admin@host01", "ceph orch host ls HOST ADDR LABELS STATUS host01 mon,mgr,_admin host02 mon host03 mon,mgr host04 host05 host06", "ceph orch host label add HOSTNAME _admin", "ceph orch host label add host03 _admin", "ceph orch host ls HOST ADDR LABELS STATUS host01 mon,mgr,_admin host02 mon host03 mon,mgr,_admin host04 host05 host06", "ceph orch host label add HOSTNAME mon", "ceph orch host label add host02 mon ceph orch host label add host03 mon", "ceph orch host ls", "ceph orch host ls HOST ADDR LABELS STATUS host01 mon,mgr,_admin host02 mon host03 mon host04 host05 host06", "ceph orch apply mon label:mon", "ceph orch apply mon HOSTNAME1 , HOSTNAME2 , HOSTNAME3", "ceph orch apply mon host01,host02,host03", "ceph orch apply mon NODE:IP_ADDRESS_OR_NETWORK_NAME [ NODE:IP_ADDRESS_OR_NETWORK_NAME ...]", "ceph orch apply mon host02:10.10.128.69 host03:mynetwork", "ceph orch apply mgr NUMBER_OF_DAEMONS", "ceph orch apply mgr 3", "ceph orch apply mgr --placement \" HOSTNAME1 HOSTNAME2 HOSTNAME3 \"", "ceph orch apply mgr --placement \"host02 host03 host04\"", "ceph orch device ls [--hostname= HOSTNAME1 HOSTNAME2 ] [--wide] [--refresh]", "ceph orch device ls --wide --refresh", "ceph orch daemon add osd HOSTNAME : DEVICE_PATH", "ceph orch daemon add osd host02:/dev/sdb", "ceph orch apply osd --all-available-devices", "ansible-playbook -i hosts cephadm-clients.yml -extra-vars '{\"fsid\":\" FSID \", \"client_group\":\" ANSIBLE_GROUP_NAME \", \"keyring\":\" PATH_TO_KEYRING \", 
\"conf\":\" CONFIG_FILE \"}'", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-clients.yml --extra-vars '{\"fsid\":\"be3ca2b2-27db-11ec-892b-005056833d58\",\"client_group\":\"fs_clients\",\"keyring\":\"/etc/ceph/fs.keyring\", \"conf\": \"/etc/ceph/ceph.conf\"}'", "ceph mgr module disable cephadm", "ceph fsid", "exit", "cephadm rm-cluster --force --zap-osds --fsid FSID", "cephadm rm-cluster --force --zap-osds --fsid a6ca415a-cde7-11eb-a41a-002590fc2544", "[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible", "host02 host03 host04 [admin] host01 [clients] client01 client02 client03", "ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --limit CLIENT_GROUP_NAME | CLIENT_NODE_NAME", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --limit clients", "ansible-playbook -i INVENTORY_FILE cephadm-clients.yml --extra-vars '{\"fsid\":\" FSID \",\"keyring\":\" KEYRING_PATH \",\"client_group\":\" CLIENT_GROUP_NAME \",\"conf\":\" CEPH_CONFIGURATION_PATH \",\"keyring_dest\":\" KEYRING_DESTINATION_PATH \"}'", "[ceph-admin@host01 cephadm-ansible]USD ansible-playbook -i hosts cephadm-clients.yml --extra-vars '{\"fsid\":\"266ee7a8-2a05-11eb-b846-5254002d4916\",\"keyring\":\"/etc/ceph/ceph.client.admin.keyring\",\"client_group\":\"clients\",\"conf\":\"/etc/ceph/ceph.conf\",\"keyring_dest\":\"/etc/ceph/custom.name.ceph.keyring\"}'", "ansible-playbook -i INVENTORY_FILE cephadm-clients.yml --extra-vars '{\"fsid\":\" FSID \",\"keyring\":\" KEYRING_PATH \",\"conf\":\" CONF_PATH \"}'", "ls -l /etc/ceph/ -rw-------. 1 ceph ceph 151 Jul 11 12:23 custom.name.ceph.keyring -rw-------. 1 ceph ceph 151 Jul 11 12:23 ceph.keyring -rw-------. 1 ceph ceph 269 Jul 11 12:23 ceph.conf", "[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible", "sudo vi INVENTORY_FILE HOST1 labels=\"[' LABEL1 ', ' LABEL2 ']\" HOST2 labels=\"[' LABEL1 ', ' LABEL2 ']\" HOST3 labels=\"[' LABEL1 ']\" [admin] ADMIN_HOST monitor_address= MONITOR_IP_ADDRESS labels=\"[' ADMIN_LABEL ', ' LABEL1 ', ' LABEL2 ']\"", "[ceph-admin@admin cephadm-ansible]USD sudo vi hosts host02 labels=\"['mon', 'mgr']\" host03 labels=\"['mon', 'mgr']\" host04 labels=\"['osd']\" host05 labels=\"['osd']\" host06 labels=\"['osd']\" [admin] host01 monitor_address=10.10.128.68 labels=\"['_admin', 'mon', 'mgr']\"", "ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\"", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\"", "sudo vi PLAYBOOK_FILENAME .yml --- - name: NAME_OF_PLAY hosts: BOOTSTRAP_HOST become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: -name: NAME_OF_TASK cephadm_registry_login: state: STATE registry_url: REGISTRY_URL registry_username: REGISTRY_USER_NAME registry_password: REGISTRY_PASSWORD - name: NAME_OF_TASK cephadm_bootstrap: mon_ip: \"{{ monitor_address }}\" dashboard_user: DASHBOARD_USER dashboard_password: DASHBOARD_PASSWORD allow_fqdn_hostname: ALLOW_FQDN_HOSTNAME cluster_network: NETWORK_CIDR", "[ceph-admin@admin cephadm-ansible]USD sudo vi bootstrap.yml --- - name: bootstrap the cluster hosts: host01 become: true gather_facts: false tasks: - name: login to registry cephadm_registry_login: state: login registry_url: registry.redhat.io registry_username: user1 registry_password: mypassword1 - name: bootstrap initial cluster cephadm_bootstrap: mon_ip: \"{{ monitor_address }}\" dashboard_user: mydashboarduser dashboard_password: mydashboardpassword 
allow_fqdn_hostname: true cluster_network: 10.10.128.0/28", "ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME .yml -vvv", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts bootstrap.yml -vvv", "[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible", "sudo vi INVENTORY_FILE NEW_HOST1 labels=\"[' LABEL1 ', ' LABEL2 ']\" NEW_HOST2 labels=\"[' LABEL1 ', ' LABEL2 ']\" NEW_HOST3 labels=\"[' LABEL1 ']\" [admin] ADMIN_HOST monitor_address= MONITOR_IP_ADDRESS labels=\"[' ADMIN_LABEL ', ' LABEL1 ', ' LABEL2 ']\"", "[ceph-admin@admin cephadm-ansible]USD sudo vi hosts host02 labels=\"['mon', 'mgr']\" host03 labels=\"['mon', 'mgr']\" host04 labels=\"['osd']\" host05 labels=\"['osd']\" host06 labels=\"['osd']\" [admin] host01 monitor_address= 10.10.128.68 labels=\"['_admin', 'mon', 'mgr']\"", "ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit NEWHOST", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit host02", "sudo vi PLAYBOOK_FILENAME .yml --- - name: PLAY_NAME hosts: HOSTS_OR_HOST_GROUPS become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_orch_host: name: \"{{ ansible_facts['hostname'] }}\" address: \"{{ ansible_facts['default_ipv4']['address'] }}\" labels: \"{{ labels }}\" delegate_to: HOST_TO_DELEGATE_TASK_TO - name: NAME_OF_TASK when: inventory_hostname in groups['admin'] ansible.builtin.shell: cmd: CEPH_COMMAND_TO_RUN register: REGISTER_NAME - name: NAME_OF_TASK when: inventory_hostname in groups['admin'] debug: msg: \"{{ REGISTER_NAME .stdout }}\"", "[ceph-admin@admin cephadm-ansible]USD sudo vi add-hosts.yml --- - name: add additional hosts to the cluster hosts: all become: true gather_facts: true tasks: - name: add hosts to the cluster ceph_orch_host: name: \"{{ ansible_facts['hostname'] }}\" address: \"{{ ansible_facts['default_ipv4']['address'] }}\" labels: \"{{ labels }}\" delegate_to: host01 - name: list hosts in the cluster when: inventory_hostname in groups['admin'] ansible.builtin.shell: cmd: ceph orch host ls register: host_list - name: print current list of hosts when: inventory_hostname in groups['admin'] debug: msg: \"{{ host_list.stdout }}\"", "ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME .yml", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts add-hosts.yml", "[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible", "sudo vi PLAYBOOK_FILENAME .yml --- - name: NAME_OF_PLAY hosts: ADMIN_HOST become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_orch_host: name: HOST_TO_REMOVE state: STATE - name: NAME_OF_TASK ceph_orch_host: name: HOST_TO_REMOVE state: STATE retries: NUMBER_OF_RETRIES delay: DELAY until: CONTINUE_UNTIL register: REGISTER_NAME - name: NAME_OF_TASK ansible.builtin.shell: cmd: ceph orch host ls register: REGISTER_NAME - name: NAME_OF_TASK debug: msg: \"{{ REGISTER_NAME .stdout }}\"", "[ceph-admin@admin cephadm-ansible]USD sudo vi remove-hosts.yml --- - name: remove host hosts: host01 become: true gather_facts: true tasks: - name: drain host07 ceph_orch_host: name: host07 state: drain - name: remove host from the cluster ceph_orch_host: name: host07 state: absent retries: 20 delay: 1 until: result is succeeded register: result - name: list hosts in the cluster ansible.builtin.shell: cmd: ceph orch host ls register: host_list - name: print current list of hosts debug: msg: \"{{ 
host_list.stdout }}\"", "ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME .yml", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts remove-hosts.yml", "TASK [print current hosts] ****************************************************************************************************** Friday 24 June 2022 14:52:40 -0400 (0:00:03.365) 0:02:31.702 *********** ok: [host01] => msg: |- HOST ADDR LABELS STATUS host01 10.10.128.68 _admin mon mgr host02 10.10.128.69 mon mgr host03 10.10.128.70 mon mgr host04 10.10.128.71 osd host05 10.10.128.72 osd host06 10.10.128.73 osd", "[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible", "sudo vi PLAYBOOK_FILENAME .yml --- - name: PLAY_NAME hosts: ADMIN_HOST become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_config: action: GET_OR_SET who: DAEMON_TO_SET_CONFIGURATION_TO option: CEPH_CONFIGURATION_OPTION value: VALUE_OF_PARAMETER_TO_SET - name: NAME_OF_TASK ceph_config: action: GET_OR_SET who: DAEMON_TO_SET_CONFIGURATION_TO option: CEPH_CONFIGURATION_OPTION register: REGISTER_NAME - name: NAME_OF_TASK debug: msg: \" MESSAGE_TO_DISPLAY {{ REGISTER_NAME .stdout }}\"", "[ceph-admin@admin cephadm-ansible]USD sudo vi change_configuration.yml --- - name: set pool delete hosts: host01 become: true gather_facts: false tasks: - name: set the allow pool delete option ceph_config: action: set who: mon option: mon_allow_pool_delete value: true - name: get the allow pool delete setting ceph_config: action: get who: mon option: mon_allow_pool_delete register: verify_mon_allow_pool_delete - name: print current mon_allow_pool_delete setting debug: msg: \"the value of 'mon_allow_pool_delete' is {{ verify_mon_allow_pool_delete.stdout }}\"", "ansible-playbook -i INVENTORY_FILE _PLAYBOOK_FILENAME .yml", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts change_configuration.yml", "TASK [print current mon_allow_pool_delete setting] ************************************************************* Wednesday 29 June 2022 13:51:41 -0400 (0:00:05.523) 0:00:17.953 ******** ok: [host01] => msg: the value of 'mon_allow_pool_delete' is true", "[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible", "sudo vi PLAYBOOK_FILENAME .yml --- - name: PLAY_NAME hosts: HOSTS_OR_HOST_GROUPS become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_orch_apply: spec: | service_type: SERVICE_TYPE service_id: UNIQUE_NAME_OF_SERVICE placement: host_pattern: ' HOST_PATTERN_TO_SELECT_HOSTS ' label: LABEL spec: SPECIFICATION_OPTIONS :", "[ceph-admin@admin cephadm-ansible]USD sudo vi deploy_osd_service.yml --- - name: deploy osd service hosts: host01 become: true gather_facts: true tasks: - name: apply osd spec ceph_orch_apply: spec: | service_type: osd service_id: osd placement: host_pattern: '*' label: osd spec: data_devices: all: true", "ansible-playbook -i INVENTORY_FILE _PLAYBOOK_FILENAME .yml", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts deploy_osd_service.yml", "[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible", "sudo vi PLAYBOOK_FILENAME .yml --- - name: PLAY_NAME hosts: ADMIN_HOST become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_orch_daemon: state: STATE_OF_SERVICE daemon_id: DAEMON_ID daemon_type: TYPE_OF_SERVICE", "[ceph-admin@admin cephadm-ansible]USD sudo vi restart_services.yml --- - name: start and stop services hosts: host01 become: true gather_facts: false tasks: - 
name: start osd.0 ceph_orch_daemon: state: started daemon_id: 0 daemon_type: osd - name: stop mon.host02 ceph_orch_daemon: state: stopped daemon_id: host02 daemon_type: mon", "ansible-playbook -i INVENTORY_FILE _PLAYBOOK_FILENAME .yml", "[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts restart_services.yml", "cephadm adopt [-h] --name DAEMON_NAME --style STYLE [--cluster CLUSTER ] --legacy-dir [ LEGACY_DIR ] --config-json CONFIG_JSON ] [--skip-firewalld] [--skip-pull]", "cephadm adopt --style=legacy --name prometheus.host02", "cephadm ceph-volume inventory/simple/raw/lvm [-h] [--fsid FSID ] [--config-json CONFIG_JSON ] [--config CONFIG , -c CONFIG ] [--keyring KEYRING , -k KEYRING ]", "cephadm ceph-volume inventory --fsid f64f341c-655d-11eb-8778-fa163e914bcc", "cephadm check-host [--expect-hostname HOSTNAME ]", "cephadm check-host --expect-hostname host02", "cephadm shell deploy DAEMON_TYPE [-h] [--name DAEMON_NAME ] [--fsid FSID ] [--config CONFIG , -c CONFIG ] [--config-json CONFIG_JSON ] [--keyring KEYRING ] [--key KEY ] [--osd-fsid OSD_FSID ] [--skip-firewalld] [--tcp-ports TCP_PORTS ] [--reconfig] [--allow-ptrace] [--memory-request MEMORY_REQUEST ] [--memory-limit MEMORY_LIMIT ] [--meta-json META_JSON ]", "cephadm shell deploy mon --fsid f64f341c-655d-11eb-8778-fa163e914bcc", "cephadm enter [-h] [--fsid FSID ] --name NAME [command [command ...]]", "cephadm enter --name 52c611f2b1d9", "cephadm help", "cephadm help", "cephadm install PACKAGES", "cephadm install ceph-common ceph-osd", "cephadm --image IMAGE_ID inspect-image", "cephadm --image 13ea90216d0be03003d12d7869f72ad9de5cec9e54a27fd308e01e467c0d4a0a inspect-image", "cephadm list-networks", "cephadm list-networks", "cephadm ls [--no-detail] [--legacy-dir LEGACY_DIR ]", "cephadm ls --no-detail", "cephadm logs [--fsid FSID ] --name DAEMON_NAME cephadm logs [--fsid FSID ] --name DAEMON_NAME -- -n NUMBER # Last N lines cephadm logs [--fsid FSID ] --name DAEMON_NAME -- -f # Follow the logs", "cephadm logs --fsid 57bddb48-ee04-11eb-9962-001a4a000672 --name osd.8 cephadm logs --fsid 57bddb48-ee04-11eb-9962-001a4a000672 --name osd.8 -- -n 20 cephadm logs --fsid 57bddb48-ee04-11eb-9962-001a4a000672 --name osd.8 -- -f", "cephadm prepare-host [--expect-hostname HOSTNAME ]", "cephadm prepare-host cephadm prepare-host --expect-hostname host01", "cephadm [-h] [--image IMAGE_ID ] pull", "cephadm --image 13ea90216d0be03003d12d7869f72ad9de5cec9e54a27fd308e01e467c0d4a0a pull", "cephadm registry-login --registry-url [ REGISTRY_URL ] --registry-username [ USERNAME ] --registry-password [ PASSWORD ] [--fsid FSID ] [--registry-json JSON_FILE ]", "cephadm registry-login --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1", "cat REGISTRY_FILE { \"url\":\" REGISTRY_URL \", \"username\":\" REGISTRY_USERNAME \", \"password\":\" REGISTRY_PASSWORD \" }", "cat registry_file { \"url\":\"registry.redhat.io\", \"username\":\"myuser\", \"password\":\"mypass\" } cephadm registry-login -i registry_file", "cephadm rm-daemon [--fsid FSID ] [--name DAEMON_NAME ] [--force ] [--force-delete-data]", "cephadm rm-daemon --fsid f64f341c-655d-11eb-8778-fa163e914bcc --name osd.8", "cephadm rm-cluster [--fsid FSID ] [--force]", "cephadm rm-cluster --fsid f64f341c-655d-11eb-8778-fa163e914bcc", "ceph mgr module disable cephadm", "cephadm rm-repo [-h]", "cephadm rm-repo", "cephadm run [--fsid FSID ] --name DAEMON_NAME", "cephadm run --fsid f64f341c-655d-11eb-8778-fa163e914bcc --name osd.8", "cephadm shell [--fsid FSID ] 
[--name DAEMON_NAME , -n DAEMON_NAME ] [--config CONFIG , -c CONFIG ] [--mount MOUNT , -m MOUNT ] [--keyring KEYRING , -k KEYRING ] [--env ENV , -e ENV ]", "cephadm shell -- ceph orch ls cephadm shell", "cephadm unit [--fsid FSID ] --name DAEMON_NAME start/stop/restart/enable/disable", "cephadm unit --fsid f64f341c-655d-11eb-8778-fa163e914bcc --name osd.8 start", "cephadm version", "cephadm version" ]
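Because the command listing above mixes many alternative installation paths, the following short sketch strings together one common path using commands taken from that listing: run the preflight playbook from the Ansible administration node, bootstrap the cluster from the first host, add a second host, and deploy OSDs. The host names, inventory file, network addresses, and registry credentials are the example values from the listing, not requirements.

# On the Ansible administration node, run the preflight playbook against the inventory.
cd /usr/share/cephadm-ansible
ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs"

# Bootstrap the storage cluster from the first host.
cephadm bootstrap --cluster-network 10.10.128.0/24 --mon-ip 10.10.128.68 \
  --registry-url registry.redhat.io --registry-username myuser1 \
  --registry-password mypassword1 --yes-i-know

# Copy the cluster public key to a new host, then add it to the cluster.
# (Run the ceph commands from a host with the admin keyring, for example inside cephadm shell.)
ssh-copy-id -f -i /etc/ceph/ceph.pub root@host02
ceph orch host add host02 10.10.128.69

# List available devices and deploy OSDs on all unused devices.
ceph orch device ls --wide --refresh
ceph orch apply osd --all-available-devices

# Verify the cluster status.
cephadm shell ceph -s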
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html-single/installation_guide/minimum-hardware-considerations-for-red-hat-ceph-storage_install
Installing on IBM Cloud
Installing on IBM Cloud OpenShift Container Platform 4.16 Installing OpenShift Container Platform IBM Cloud Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_ibm_cloud/index
8.93. libpcap
8.93. libpcap

8.93.1. RHBA-2013:1727 - libpcap bug fix and enhancement update

Updated libpcap packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The Packet Capture library (pcap) provides a high-level interface to packet capture systems. All packets on the network, even those destined for other hosts, are accessible through this mechanism. It also supports saving captured packets to a 'savefile', and reading packets from a 'savefile'. libpcap provides implementation-independent access to the underlying packet capture facility provided by the operating system.

Note: The libpcap packages have been upgraded to upstream version 1.4.1, which provides a number of bug fixes and enhancements over the previous version. (BZ# 916749)

Bug Fixes

BZ# 723108
Previously, the libpcap library generated incorrect filtering code for the Berkeley Packet Filter (BPF) infrastructure. As a consequence, the in-kernel packet filter discarded some packets that should have been received by the userspace process. Moreover, the tcpdump utility produced incorrect output when an IPv6 packet was fragmented because of the link MTU. To fix this bug, the code that generates the BPF filter has been fixed to check for fragmentation headers in IPv6 PDUs before checking for the final protocol. As a result, the kernel filter no longer discards IPv6 fragments when source-site fragmentation occurs during IPv6 transmission, and tcpdump receives all packets.

BZ# 731789
Prior to this update, libpcap was unable to open a capture device with small values of SnapLen, which caused libpcap to return an error code and tcpdump to exit prematurely. The calculation of frames for the memory-mapped packet capture mechanism has been adjusted so that packets are not truncated to values smaller than the actual SnapLen, thus fixing the bug. As a result, libpcap no longer returns errors when trying to open a capture device with small values of SnapLen, and applications using libpcap are able to process packets.

Users of libpcap are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/libpcap
3.22. RHEA-2011:1757 - new package: virt-who
3.22. RHEA-2011:1757 - new package: virt-who A new virt-who package is now available for Red Hat Enterprise Linux 6. The virt-who package provides an agent that collects information about virtual guests present in the system and reports them to the Red Hat Subscription Manager tool. This enhancement update adds the virt-who package to Red Hat Enterprise Linux 6. (BZ# 725832 ) All users are advised to install this new package.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/virt-who
Chapter 1. Kubernetes overview
Chapter 1. Kubernetes overview

Kubernetes is an open source container orchestration tool developed by Google. You can run and manage container-based workloads by using Kubernetes. The most common Kubernetes use case is to deploy an array of interconnected microservices, building an application in a cloud native way. You can create Kubernetes clusters that can span hosts across on-premise, public, private, or hybrid clouds.

Traditionally, applications were deployed on top of a single operating system. With virtualization, you can split the physical host into several virtual hosts. Working on virtual instances on shared resources is not optimal for efficiency and scalability. Because a virtual machine (VM) consumes as many resources as a physical machine, providing resources to a VM such as CPU, RAM, and storage can be expensive. Also, you might see your application degrading in performance due to virtual instance usage on shared resources.

Figure 1.1. Evolution of container technologies for classical deployments

To solve this problem, you can use containerization technologies that segregate applications in a containerized environment. Similar to a VM, a container has its own filesystem, vCPU, memory, process space, dependencies, and more. Containers are decoupled from the underlying infrastructure, and are portable across clouds and OS distributions. Containers are inherently much lighter than a fully-featured OS, and are lightweight isolated processes that run on the operating system kernel. VMs are slower to boot, and are an abstraction of physical hardware. VMs run on a single machine with the help of a hypervisor.

You can perform the following actions by using Kubernetes:
Sharing resources
Orchestrating containers across multiple hosts
Installing new hardware configurations
Running health checks and self-healing applications
Scaling containerized applications

1.1. Kubernetes components

Table 1.1. Kubernetes components (Component: Purpose)
kube-proxy: Runs on every node in the cluster and maintains the network traffic between the Kubernetes resources.
kube-controller-manager: Governs the state of the cluster.
kube-scheduler: Allocates pods to nodes.
etcd: Stores cluster data.
kube-apiserver: Validates and configures data for the API objects.
kubelet: Runs on nodes and reads the container manifests. Ensures that the defined containers have started and are running.
kubectl: Allows you to define how you want to run workloads. Use the kubectl command to interact with the kube-apiserver.
Node: A physical machine or a VM in a Kubernetes cluster. The control plane manages every node and schedules pods across the nodes in the Kubernetes cluster.
container runtime: Runs containers on a host operating system. You must install a container runtime on each node so that pods can run on the node.
Persistent storage: Stores the data even after the device is shut down. Kubernetes uses persistent volumes to store the application data.
container-registry: Stores and accesses the container images.
Pod: The smallest logical unit in Kubernetes. A pod contains one or more containers to run in a worker node.

1.2. Kubernetes resources

A custom resource is an extension of the Kubernetes API. You can customize Kubernetes clusters by using custom resources. Operators are software extensions which manage applications and their components with the help of custom resources. Kubernetes uses a declarative model when you want a fixed desired result while dealing with cluster resources.
By using Operators, Kubernetes defines its states in a declarative way. You can modify the Kubernetes cluster resources by using imperative commands. An Operator acts as a control loop which continuously compares the desired state of resources with the actual state of resources and puts actions in place to bring reality in line with the desired state.

Figure 1.2. Kubernetes cluster overview

Table 1.2. Kubernetes resources (Resource: Purpose)
Service: Kubernetes uses services to expose a running application on a set of pods.
ReplicaSets: Kubernetes uses ReplicaSets to maintain a constant pod number.
Deployment: A resource object that maintains the life cycle of an application.

Kubernetes is a core component of OpenShift Container Platform. You can use OpenShift Container Platform for developing and running containerized applications. With its foundation in Kubernetes, OpenShift Container Platform incorporates the same technology that serves as the engine for massive telecommunications, streaming video, gaming, banking, and other applications. You can extend your containerized applications beyond a single cloud to on-premise and multi-cloud environments by using OpenShift Container Platform.

Figure 1.3. Architecture of Kubernetes

A cluster is a single computational unit consisting of multiple nodes in a cloud environment. A Kubernetes cluster includes a control plane and worker nodes. You can run Kubernetes containers across various machines and environments. The control plane node controls and maintains the state of a cluster. You can run Kubernetes applications by using worker nodes. You can use Kubernetes namespaces to differentiate cluster resources in a cluster. Namespace scoping is applicable to resource objects, such as deployments, services, and pods. You cannot use namespaces for cluster-wide resource objects such as storage classes, nodes, and persistent volumes.

1.3. Kubernetes conceptual guidelines

Before getting started with OpenShift Container Platform, consider these conceptual guidelines of Kubernetes:
Start with one or more worker nodes to run the container workloads.
Manage the deployment of those workloads from one or more control plane nodes.
Wrap containers in a deployment unit called a pod. Pods provide extra metadata with the container and offer the ability to group several containers in a single deployment entity.
Create special kinds of assets. For example, services are represented by a set of pods and a policy that defines how they are accessed. This policy allows containers to connect to the services that they need even if they do not have the specific IP addresses for the services. Replication controllers are another special asset that indicates how many pod replicas are required to run at a time. You can use this capability to automatically scale your application to adapt to its current demand.

The API to an OpenShift Container Platform cluster is 100% Kubernetes. Nothing changes between a container running on any other Kubernetes platform and running on OpenShift Container Platform, and no changes to the application are required. OpenShift Container Platform brings added-value features that provide enterprise-ready enhancements to Kubernetes. The OpenShift Container Platform CLI tool ( oc ) is compatible with kubectl. While the Kubernetes API is 100% accessible within OpenShift Container Platform, the kubectl command line lacks many features that could make it more user-friendly. OpenShift Container Platform offers a set of features and a command-line tool, oc .
Although Kubernetes excels at managing your applications, it does not specify or manage platform-level requirements or deployment processes. Powerful and flexible platform management tools and processes are important benefits that OpenShift Container Platform offers. You must add authentication, networking, security, monitoring, and logs management to your containerization platform.
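To make the declarative model and the Deployment, ReplicaSet, and pod relationship described above concrete, the following is a minimal sketch, not taken from this guide: the manifest file name, application name, label, image reference, and replica count are hypothetical, and the commands are the standard kubectl verbs (the OpenShift CLI oc accepts the same manifest).

# Write a minimal Deployment manifest (hypothetical names and image).
cat > hello-deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3                 # the Deployment maintains a ReplicaSet of 3 pods
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: quay.io/example/hello:latest   # hypothetical image reference
        ports:
        - containerPort: 8080
EOF

# Apply it with kubectl, or with the OpenShift CLI (oc), which accepts the same manifest.
kubectl apply -f hello-deployment.yaml
# oc apply -f hello-deployment.yaml

# Observe the declarative model: the controller creates a ReplicaSet and pods to match the spec.
kubectl get deployment,replicaset,pods -l app=hello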
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/getting_started/kubernetes-overview
Chapter 7. Paging Messages
Chapter 7. Paging Messages

AMQ Broker transparently supports huge queues containing millions of messages while the server is running with limited memory. In such a situation it's not possible to store all of the queues in memory at any one time, so AMQ Broker transparently pages messages into and out of memory as they are needed, thus allowing massive queues with a low memory footprint. Paging is done individually per address. AMQ Broker starts paging messages to disk when the size of all messages in memory for an address exceeds a configured maximum size. For more information about addresses, see Configuring addresses and queues. By default, AMQ Broker does not page messages. You must explicitly configure paging to enable it. See the paging example located under INSTALL_DIR /examples/standard/ for a working example showing how to use paging with AMQ Broker.

7.1. About Page Files

Messages are stored per address on the file system. Each address has an individual folder where messages are stored in multiple files (page files). Each file contains messages up to a maximum configured size ( page-size-bytes ). The system navigates the files as needed, and it removes a page file as soon as all of the messages up to that point are acknowledged. Browsers read through the page-cursor system. Consumers with selectors also navigate through the page files and ignore messages that do not match the criteria.

Note: When you have a queue whose consumers filter it with a very restrictive selector, you may get into a situation where you cannot read more data from paging until you consume messages from the queue. For example, if a consumer uses the selector 'color="red"' but the only red message sits one million blue messages deep, you cannot consume the red message until the blue messages are consumed. This is different from browsing, because a browser reads through the entire queue looking for messages, and messages are depaged as the queue is fed.

7.2. Configuring the Paging Directory Location

To configure the location of the paging directory, add the paging-directory configuration element to the broker's main configuration file BROKER_INSTANCE_DIR /etc/broker.xml , as in the example below. <configuration ...> ... <core ...> <paging-directory>/somewhere/paging-directory</paging-directory> ... </core> </configuration> AMQ Broker creates one directory for each address being paged under the configured location.

7.3. Configuring an Address for Paging

Configuration for paging is done at the address level by adding elements to a specific address-settings , as in the example below. <address-settings> <address-setting match="jms.paged.queue"> <max-size-bytes>104857600</max-size-bytes> <page-size-bytes>10485760</page-size-bytes> <address-full-policy>PAGE</address-full-policy> </address-setting> </address-settings> In the example above, when messages sent to the address jms.paged.queue exceed 104857600 bytes in memory, the broker begins paging.

Note: Paging is done individually per address. If you specify max-size-bytes for an address, each matching address does not exceed the maximum size that you specified. It does NOT mean that the total overall size of all matching addresses is limited to max-size-bytes .

The following parameters are available in the address settings.

Table 7.1. Paging Configuration Elements
max-size-bytes: The maximum size in memory allowed for the address before the broker enters page mode. Default: -1 (disabled). When this parameter is disabled, the broker uses global-max-size as a memory-usage limit for paging instead. For more information, see Section 7.4, "Configuring a Global Paging Size".
page-size-bytes: The size of each page file used on the paging system. Default: 10 MiB (10 * 1024 * 1024 bytes).
address-full-policy: Valid values are PAGE , DROP , BLOCK , and FAIL . If the value is PAGE, further messages are paged to disk. If the value is DROP, further messages are silently dropped. If the value is FAIL, the messages are dropped and the client message producers receive an exception. If the value is BLOCK, client message producers block when they try to send further messages. Default: PAGE.
page-max-cache-size: The system keeps up to this number of page files in memory to optimize IO during paging navigation. Default: 5.
page-sync-timeout: Time, in nanoseconds, between periodic page synchronizations. If you are using an asynchronous IO journal (that is, journal-type is set to ASYNCIO in the broker.xml configuration file), the default value is 3333333 nanoseconds (that is, 3.333333 milliseconds). If you are using a standard Java NIO journal (that is, journal-type is set to NIO ), the default value is the configured value of the journal-buffer-timeout parameter.

7.4. Configuring a Global Paging Size

Sometimes configuring a memory limit per address is not practical, such as when a broker manages many addresses that have different usage patterns. In these situations, use the global-max-size parameter to set a global limit to the amount of memory the broker can use before it enters into the page mode configured for the address associated with the incoming message. The default value for global-max-size is half of the maximum memory available to the Java virtual machine (JVM). You can specify your own value for this parameter by configuring it in the broker.xml configuration file. The value for global-max-size is in bytes, but you can use byte notation ("K", "Mb", "GB", for example) for convenience. The following procedure shows how to configure the global-max-size parameter in the broker.xml configuration file.

Configuring the global-max-size parameter
Procedure
Stop the broker. If the broker is running on Linux, run the following command: If the broker is running on Windows as a service, run the following command:
Open the broker.xml configuration file located under BROKER_INSTANCE_DIR /etc .
Add the global-max-size parameter to broker.xml to limit the amount of memory, in bytes, the broker can use. Note that you can also use byte notation ( K , Mb , GB ) for the value of global-max-size , as shown in the following example. <configuration> <core> ... <global-max-size>1GB</global-max-size> ... </core> </configuration> In the preceding example, the broker is configured to use a maximum of one gigabyte, 1GB , of available memory when processing messages. If the configured limit is exceeded, the broker enters the page mode configured for the address associated with the incoming message.
Start the broker. If the broker is running on Linux, run the following command: If the broker is running on Windows as a service, run the following command:

Related Information
See Section 7.3, "Configuring an Address for Paging" for information about setting the paging mode for an address.

7.5. Limiting Disk Usage when Paging

You can limit the amount of physical disk space the broker uses before it blocks incoming messages rather than paging them.
Add the max-disk-usage parameter to the broker.xml configuration file and provide a value for the percentage of disk space the broker is allowed to use when paging messages. The default value for max-disk-usage is 90 , which means the limit is set at 90 percent of disk space. Configuring the max-disk-usage Procedure Stop the broker. If the broker is running on Linux, run the following command: If the broker is running on Windows as a service, run the following command: Open the broker.xml configuration file located under BROKER_INSTANCE_DIR /etc . Add the max-disk-usage configuration element and set a limit to the amount of disk space to use when paging messages. <configuration> <core> ... <max-disk-usage>50</max-disk-usage> ... </core> </configuration> In the preceding example, the broker is limited to using 50 percent of disk space when paging messages. Messages are blocked and no longer paged after 50 percent of the disk is used. Start the broker. If the broker is running on Linux, run the following command: If the broker is running on Windows as a service, run the following command: 7.6. How to Drop Messages Instead of paging messages when the max size is reached, an address can also be configured to just drop messages when the address is full. To do this, set the address-full-policy to DROP in the address settings. 7.6.1. Dropping Messages and Throwing an Exception to Producers Instead of paging messages when the max size is reached, an address can also be configured to drop messages and also throw an exception on the client-side when the address is full. To do this, set the address-full-policy to FAIL in the address settings. 7.7. How to Block Producers Instead of paging messages when the max size is reached, an address can also be configured to block producers from sending further messages when the address is full, thus preventing the memory from being exhausted on the server. Note Blocking works only if the protocol being used supports it. For example, an AMQP producer will understand a Block packet when it is sent by the broker, but a STOMP producer will not. When memory is freed up on the server, producers will automatically unblock and be able to continue sending. To do this, set the address-full-policy to BLOCK in the address settings. In the default configuration, all addresses are configured to block producers after 10 MiB of data are in the address. 7.8. Caution with Addresses with Multicast Queues When a message is routed to an address that has multicast queues bound to it, for example, a JMS subscription in a Topic, there is only one copy of the message in memory. Each queue handles only a reference to it. Because of this, the memory is only freed up after all queues referencing the message have delivered it. If you have a single lazy subscription, the entire address will suffer an IO performance hit because all the queues will have their messages routed through the extra storage of the paging system. For example: An address has 10 queues. One of the queues does not deliver its messages (maybe because of a slow consumer). Messages continually arrive at the address and paging is started. The other 9 queues are empty even though messages have been sent. In this example, all the other 9 queues will be consuming messages from the page system. This may cause undesirable performance issues.
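Taken together, the elements described in this chapter can be combined in a single broker.xml . The following sketch is illustrative only: the address names, sizes, and policies are assumptions chosen to show how the settings fit together, not values prescribed by this guide.

<configuration>
  <core>
    <!-- Page files are written here; one subdirectory is created per paged address -->
    <paging-directory>/var/lib/broker/paging</paging-directory>
    <!-- Global memory limit applied before the per-address policy takes effect -->
    <global-max-size>512Mb</global-max-size>
    <!-- Block incoming messages once 70 percent of the disk is used -->
    <max-disk-usage>70</max-disk-usage>
    <address-settings>
      <!-- Hypothetical address that pages to disk after roughly 100 MiB in memory -->
      <address-setting match="orders.paged.queue">
        <max-size-bytes>104857600</max-size-bytes>
        <page-size-bytes>10485760</page-size-bytes>
        <address-full-policy>PAGE</address-full-policy>
      </address-setting>
      <!-- Hypothetical address that blocks producers instead of paging -->
      <address-setting match="audit.blocked.queue">
        <max-size-bytes>20971520</max-size-bytes>
        <address-full-policy>BLOCK</address-full-policy>
      </address-setting>
    </address-settings>
  </core>
</configuration>

With this configuration, messages sent to orders.paged.queue start paging once they occupy about 100 MiB of memory, while producers sending to audit.blocked.queue block once that address holds about 20 MiB.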
[ "<configuration ...> <core ...> <paging-directory>/somewhere/paging-directory</paging-directory> </core> </configuration>", "<address-settings> <address-setting match=\"jms.paged.queue\"> <max-size-bytes>104857600</max-size-bytes> <page-size-bytes>10485760</page-size-bytes> <address-full-policy>PAGE</address-full-policy> </address-setting> </address-settings>", "BROKER_INSTANCE_DIR /bin/artemis stop", "BROKER_INSTANCE_DIR \\bin\\artemis-service.exe stop", "<configuration> <core> <global-max-size>1GB</global-max-size> </core> </configuration>", "BROKER_INSTANCE_DIR /bin/artemis run", "BROKER_INSTANCE_DIR \\bin\\artemis-service.exe start", "BROKER_INSTANCE_DIR /bin/artemis stop", "BROKER_INSTANCE_DIR \\bin\\artemis-service.exe stop", "<configuration> <core> <max-disk-usage>50</max-disk-usage> </core> </configuration>", "BROKER_INSTANCE_DIR /bin/artemis run", "BROKER_INSTANCE_DIR \\bin\\artemis-service.exe start" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/configuring_amq_broker/paging
18.12.4. Usage of Variables in Filters
18.12.4. Usage of Variables in Filters There are two variables that have been reserved for usage by the network traffic filtering subsystem: MAC and IP. MAC is designated for the MAC address of the network interface. A filtering rule that references this variable will automatically be replaced with the MAC address of the interface. This works without the user having to explicitly provide the MAC parameter. Even though it is possible to specify the MAC parameter similar to the IP parameter above, it is discouraged since libvirt knows what MAC address an interface will be using. The parameter IP represents the IP address that the operating system inside the virtual machine is expected to use on the given interface. The IP parameter is special in so far as the libvirt daemon will try to determine the IP address (and thus the IP parameter's value) that is being used on an interface if the parameter is not explicitly provided but referenced. For current limitations on IP address detection, consult the section on limitations Section 18.12.12, "Limitations" on how to use this feature and what to expect when using it. The XML file shown in Section 18.12.2, "Filtering Chains" contains the filter no-arp-spoofing , which is an example of using a network filter XML to reference the MAC and IP variables. Note that referenced variables are always prefixed with the character USD . The format of the value of a variable must be of the type expected by the filter attribute identified in the XML. In the above example, the IP parameter must hold a legal IP address in standard format. Failure to provide the correct structure will result in the filter variable not being replaced with a value and will prevent a virtual machine from starting or will prevent an interface from attaching when hot plugging is being used. Some of the types that are expected for each XML attribute are shown in the example Example 18.4, "Sample variable types" . Example 18.4. Sample variable types As variables can contain lists of elements, (the variable IP can contain multiple IP addresses that are valid on a particular interface, for example), the notation for providing multiple elements for the IP variable is: This XML file creates filters to enable multiple IP addresses per interface. Each of the IP addresses will result in a separate filtering rule. Therefore using the XML above and the following rule, three individual filtering rules (one for each IP address) will be created: As it is possible to access individual elements of a variable holding a list of elements, a filtering rule like the following accesses the 2nd element of the variable DSTPORTS . Example 18.5. Using a variety of variables As it is possible to create filtering rules that represent all possible combinations of rules from different lists using the notation USDVARIABLE[@<iterator id="x">] . The following rule allows a virtual machine to receive traffic on a set of ports, which are specified in DSTPORTS , from the set of source IP address specified in SRCIPADDRESSES . The rule generates all combinations of elements of the variable DSTPORTS with those of SRCIPADDRESSES by using two independent iterators to access their elements. 
Assign concrete values to SRCIPADDRESSES and DSTPORTS as shown: Assigning values to the variables using USDSRCIPADDRESSES[@1] and USDDSTPORTS[@2] would then result in all combinations of addresses and ports being created as shown: 10.0.0.1, 80 10.0.0.1, 8080 11.1.2.3, 80 11.1.2.3, 8080 Accessing the same variables using a single iterator, for example by using the notation USDSRCIPADDRESSES[@1] and USDDSTPORTS[@1] , would result in parallel access to both lists and result in the following combination: 10.0.0.1, 80 11.1.2.3, 8080 Note USDVARIABLE is short-hand for USDVARIABLE[@0] . The former notation always assumes the role of iterator with iterator id="0" added as shown in the opening paragraph at the top of this section.
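As an illustrative sketch of how these pieces fit together (the filter name, MAC address, IP addresses, and ports below are hypothetical, not taken from this guide), a filter can reference the variables, written here with the literal $ character, and the domain's interface supplies the values:

<filter name='example-ip-ports' chain='root'>
  <!-- One rule instance is generated for each combination of IP and DSTPORTS values,
       because the two variables are accessed through different iterators -->
  <rule action='accept' direction='in' priority='500'>
    <tcp srcipaddr='$IP' dstportstart='$DSTPORTS[@1]'/>
  </rule>
</filter>

<interface type='bridge'>
  <mac address='52:54:00:aa:bb:cc'/>
  <filterref filter='example-ip-ports'>
    <parameter name='IP' value='10.0.0.1'/>
    <parameter name='IP' value='10.0.0.2'/>
    <parameter name='DSTPORTS' value='22'/>
    <parameter name='DSTPORTS' value='443'/>
  </filterref>
</interface>

With two IP values and two DSTPORTS values, four accept rules are generated, one for each address and port combination.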
[ "<devices> <interface type='bridge'> <mac address='00:16:3e:5d:c7:9e'/> <filterref filter='clean-traffic'> <parameter name='IP' value='10.0.0.1'/> <parameter name='IP' value='10.0.0.2'/> <parameter name='IP' value='10.0.0.3'/> </filterref> </interface> </devices>", "<rule action='accept' direction='in' priority='500'> <tcp srpipaddr='USDIP'/> </rule>", "<rule action='accept' direction='in' priority='500'> <udp dstportstart='USDDSTPORTS[1]'/> </rule>", "<rule action='accept' direction='in' priority='500'> <ip srcipaddr='USDSRCIPADDRESSES[@1]' dstportstart='USDDSTPORTS[@2]'/> </rule>", "SRCIPADDRESSES = [ 10.0.0.1, 11.1.2.3 ] DSTPORTS = [ 80, 8080 ]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-vars-in-filters
6.2. Mounting a btrfs file system
6.2. Mounting a btrfs file system To mount any device in the btrfs file system, use the following command: Other useful mount options include: device=/dev/name Appending this option to the mount command tells btrfs to scan the named device for a btrfs volume. This is used to ensure the mount will succeed, as attempting to mount devices that are not btrfs will cause the mount to fail. Note This does not mean all devices will be added to the file system, it only scans them. max_inline=number Use this option to set the maximum amount of space (in bytes) that can be used to inline data within a metadata B-tree leaf. The default is 8192 bytes. For 4k pages it is limited to 3900 bytes due to additional headers that need to fit into the leaf. alloc_start=number Use this option to set where in the disk allocations start. thread_pool=number Use this option to assign the number of worker threads allocated. discard Use this option to enable discard/TRIM on freed blocks. noacl Use this option to disable the use of ACLs. space_cache Use this option to store the free space data on disk to make caching a block group faster. This is a persistent change and is safe to boot into old kernels. nospace_cache Use this option to disable the above space_cache . clear_cache Use this option to clear all the free space caches during mount. This is a safe option but will trigger the space cache to be rebuilt. As such, leave the file system mounted in order to let the rebuild process finish. This mount option is intended to be used once and only after problems are apparent with the free space. enospc_debug This option is used to debug problems with "no space left". recovery Use this option to enable autorecovery upon mount.
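As an illustrative example (the device names and mount point are hypothetical, and /dev/sdb and /dev/sdc are assumed to already belong to the same btrfs volume), a two-device btrfs volume might be mounted using several of the options above:

# device= tells btrfs to scan both members of the volume so the mount succeeds;
# discard enables TRIM on freed blocks and space_cache persists the free-space data
mount -o device=/dev/sdb,device=/dev/sdc,discard,space_cache /dev/sdb /mnt/data

# If problems with the free-space cache are suspected, rebuild it on a later mount
mount -o clear_cache /dev/sdb /mnt/data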
[ "mount / dev / device / mount-point" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/btrfs-mount
Chapter 4. Modifying a machine set
Chapter 4. Modifying a machine set You can modify a machine set, such as adding labels, changing the instance type, or changing block storage. On Red Hat Virtualization (RHV), you can also change a machine set to provision new nodes on a different storage domain. Note If you need to scale a machine set without making other changes, see Manually scaling a machine set . 4.1. Modifying a machine set To make changes to a machine set, edit the MachineSet YAML. Then, remove all machines associated with the machine set by deleting each machine or scaling down the machine set to 0 replicas. Then, scale the replicas back to the desired number. Changes you make to a machine set do not affect existing machines. If you need to scale a machine set without making other changes, you do not need to delete the machines. Note By default, the OpenShift Container Platform router pods are deployed on workers. Because the router is required to access some cluster resources, including the web console, do not scale the worker machine set to 0 unless you first relocate the router pods. Prerequisites Install an OpenShift Container Platform cluster and the oc command line. Log in to oc as a user with cluster-admin permission. Procedure Edit the machine set: USD oc edit machineset <machineset> -n openshift-machine-api Scale down the machine set to 0 : USD oc scale --replicas=0 machineset <machineset> -n openshift-machine-api Or: USD oc edit machineset <machineset> -n openshift-machine-api Wait for the machines to be removed. Scale up the machine set as needed: USD oc scale --replicas=2 machineset <machineset> -n openshift-machine-api Or: USD oc edit machineset <machineset> -n openshift-machine-api Wait for the machines to start. The new machines contain changes you made to the machine set. 4.2. Migrating nodes to a different storage domain on RHV You can migrate the OpenShift Container Platform control plane and compute nodes to a different storage domain in a Red Hat Virtualization (RHV) cluster. 4.2.1. Migrating compute nodes to a different storage domain in RHV Prerequisites You are logged in to the Manager. You have the name of the target storage domain. Procedure Identify the virtual machine template: USD oc get -o jsonpath='{.items[0].spec.template.spec.providerSpec.value.template_name}{"\n"}' machineset -A Create a new virtual machine in the Manager, based on the template you identified. Leave all other settings unchanged. For details, see Creating a Virtual Machine Based on a Template in the Red Hat Virtualization Virtual Machine Management Guide . Tip You do not need to start the new virtual machine. Create a new template from the new virtual machine. Specify the target storage domain under Target . For details, see Creating a Template in the Red Hat Virtualization Virtual Machine Management Guide . Add a new machine set to the OpenShift Container Platform cluster with the new template. Get the details of the current machine set: USD oc get machineset -o yaml Use these details to create a machine set. For more information see Creating a machine set . Enter the new virtual machine template name in the template_name field. Use the same template name you used in the New template dialog in the Manager. Note the names of both the old and new machine sets. You need to refer to them in subsequent steps. Migrate the workloads. Scale up the new machine set. For details on manually scaling machine sets, see Scaling a machine set manually . 
OpenShift Container Platform moves the pods to an available worker when the old machine is removed. Scale down the old machine set. Remove the old machine set: USD oc delete machineset <machineset-name> Additional resources Creating a machine set . Scaling a machine set manually Controlling pod placement using the scheduler 4.2.2. Migrating control plane nodes to a different storage domain on RHV OpenShift Container Platform does not manage control plane nodes, so they are easier to migrate than compute nodes. You can migrate them like any other virtual machine on Red Hat Virtualization (RHV). Perform this procedure for each node separately. Prerequisites You are logged in to the Manager. You have identified the control plane nodes. They are labeled master in the Manager. Procedure Select the virtual machine labeled master . Shut down the virtual machine. Click the Disks tab. Click the virtual machine's disk. Click More Actions and select Move . Select the target storage domain and wait for the migration process to complete. Start the virtual machine. Verify that the OpenShift Container Platform cluster is stable: USD oc get nodes The output should display the node with the status Ready . Repeat this procedure for each control plane node.
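As a sketch of the edit-and-rescale workflow from the first procedure, the commands below use a placeholder machine set name and replica count; substitute your own values:

# Edit the machine set, for example to change the instance type or add labels
oc edit machineset my-machineset -n openshift-machine-api

# Remove the existing machines so that replacements pick up the change
oc scale --replicas=0 machineset my-machineset -n openshift-machine-api

# Watch until the old machines are removed
oc get machines -n openshift-machine-api -w

# Scale back up; the new machines reflect the modified machine set
oc scale --replicas=2 machineset my-machineset -n openshift-machine-api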
[ "oc edit machineset <machineset> -n openshift-machine-api", "oc scale --replicas=0 machineset <machineset> -n openshift-machine-api", "oc edit machineset <machineset> -n openshift-machine-api", "oc scale --replicas=2 machineset <machineset> -n openshift-machine-api", "oc edit machineset <machineset> -n openshift-machine-api", "oc get -o jsonpath='{.items[0].spec.template.spec.providerSpec.value.template_name}{\"\\n\"}' machineset -A", "oc get machineset -o yaml", "oc delete machineset <machineset-name>", "oc get nodes" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/machine_management/modifying-machineset
Chapter 7. Finalizing your GitHub application
Chapter 7. Finalizing your GitHub application After installing RHTAP, you must replace the placeholder values you previously entered in your GitHub application with values specific to your cluster. This allows anyone who installs the GitHub application to authenticate to Red Hat Developer Hub and use Red Hat Trusted Application Pipeline. Prerequisites: ClusterAdmin access to an OpenShift cluster via the web console Procedure: By opening the link generated at the end of the last procedure, you should be in the OpenShift console, using the Administrator view. If not, navigate to that view in the OpenShift Console for your cluster. Use the left navigation bar to go to Pipelines > Pipelines . In the Project field, below the banner of the page, select rhtap . Select the PipelineRun tab. Select the PipelineRun with a name that starts with rhtap-pe-info- . Navigate to the Logs tab. In a separate browser tab, return to the GitHub Apps page ( Settings > Developer settings > GitHub Apps ). Next to your new custom application, click Edit . Replace the placeholder values for the following fields with the new values found in the logs of the PipelineRun you executed in the OpenShift console: Homepage URL Callback URL Webhook URL Scroll to the bottom of the page and click Save . In a separate tab, navigate to the address you entered as the new homepage URL for your GitHub application. Choose GitHub as the sign-in method by clicking the SIGN IN button. In the popup window, authorize your custom GitHub application as requested. You should be immediately redirected to Red Hat Developer Hub (RHDH). Any developer who downloads your GitHub application can also authenticate by using that application, and by running the second PipelineRun generated in the third procedure. In RHDH, developers can then leverage the automated, customizable, and secure CI/CD functionality of Red Hat Trusted Application Pipeline.
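If you prefer the command line to the web console for locating the PipelineRun and its logs, something like the following should work; this assumes the oc and Tekton ( tkn ) CLIs are installed and logged in to the cluster, and uses the rhtap project from the procedure above. The <suffix> is a placeholder for the generated name of your PipelineRun.

# List PipelineRuns in the rhtap project and note the one whose name starts with rhtap-pe-info-
oc get pipelineruns -n rhtap

# Print that PipelineRun's logs to read the values for the Homepage, Callback, and Webhook URLs
tkn pipelinerun logs rhtap-pe-info-<suffix> -n rhtap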
null
https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.0/html/installing_red_hat_trusted_application_pipeline/accessing-rhtap-for-the-first-time
Chapter 5. PersistentVolume [v1]
Chapter 5. PersistentVolume [v1] Description PersistentVolume (PV) is a storage resource provisioned by an administrator. It is analogous to a node. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes Type object 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object PersistentVolumeSpec is the specification of a persistent volume. status object PersistentVolumeStatus is the current status of a persistent volume. 5.1.1. .spec Description PersistentVolumeSpec is the specification of a persistent volume. Type object Property Type Description accessModes array (string) accessModes contains all ways the volume can be mounted. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes awsElasticBlockStore object Represents a Persistent Disk resource in AWS. An AWS EBS disk must exist before mounting to a container. The disk must also be in the same AWS zone as the kubelet. An AWS EBS disk can only be mounted as read/write once. AWS EBS volumes support ownership management and SELinux relabeling. azureDisk object AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile object AzureFile represents an Azure File Service mount on the host and bind mount to the pod. capacity object (Quantity) capacity is the description of the persistent volume's resources and capacity. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#capacity cephfs object Represents a Ceph Filesystem mount that lasts the lifetime of a pod Cephfs volumes do not support ownership management or SELinux relabeling. cinder object Represents a cinder volume resource in Openstack. A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling. claimRef object ObjectReference contains enough information to let you inspect or modify the referred object. csi object Represents storage that is managed by an external CSI volume driver (Beta feature) fc object Represents a Fibre Channel volume. Fibre Channel volumes can only be mounted as read/write once. Fibre Channel volumes support ownership management and SELinux relabeling. flexVolume object FlexPersistentVolumeSource represents a generic persistent volume resource that is provisioned/attached using an exec based plugin. flocker object Represents a Flocker volume mounted by the Flocker agent. One and only one of datasetName and datasetUUID should be set. Flocker volumes do not support ownership management or SELinux relabeling. gcePersistentDisk object Represents a Persistent Disk resource in Google Compute Engine. A GCE PD must exist before mounting to a container. 
The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling. glusterfs object Represents a Glusterfs mount that lasts the lifetime of a pod. Glusterfs volumes do not support ownership management or SELinux relabeling. hostPath object Represents a host path mapped into a pod. Host path volumes do not support ownership management or SELinux relabeling. iscsi object ISCSIPersistentVolumeSource represents an ISCSI disk. ISCSI volumes can only be mounted as read/write once. ISCSI volumes support ownership management and SELinux relabeling. local object Local represents directly-attached storage with node affinity (Beta feature) mountOptions array (string) mountOptions is the list of mount options, e.g. ["ro", "soft"]. Not validated - mount will simply fail if one is invalid. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#mount-options nfs object Represents an NFS mount that lasts the lifetime of a pod. NFS volumes do not support ownership management or SELinux relabeling. nodeAffinity object VolumeNodeAffinity defines constraints that limit what nodes this volume can be accessed from. persistentVolumeReclaimPolicy string persistentVolumeReclaimPolicy defines what happens to a persistent volume when released from its claim. Valid options are Retain (default for manually created PersistentVolumes), Delete (default for dynamically provisioned PersistentVolumes), and Recycle (deprecated). Recycle must be supported by the volume plugin underlying this PersistentVolume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#reclaiming Possible enum values: - "Delete" means the volume will be deleted from Kubernetes on release from its claim. The volume plugin must support Deletion. - "Recycle" means the volume will be recycled back into the pool of unbound persistent volumes on release from its claim. The volume plugin must support Recycling. - "Retain" means the volume will be left in its current phase (Released) for manual reclamation by the administrator. The default policy is Retain. photonPersistentDisk object Represents a Photon Controller persistent disk resource. portworxVolume object PortworxVolumeSource represents a Portworx volume resource. quobyte object Represents a Quobyte mount that lasts the lifetime of a pod. Quobyte volumes do not support ownership management or SELinux relabeling. rbd object Represents a Rados Block Device mount that lasts the lifetime of a pod. RBD volumes support ownership management and SELinux relabeling. scaleIO object ScaleIOPersistentVolumeSource represents a persistent ScaleIO volume storageClassName string storageClassName is the name of StorageClass to which this persistent volume belongs. Empty value means that this volume does not belong to any StorageClass. storageos object Represents a StorageOS persistent volume resource. volumeAttributesClassName string Name of VolumeAttributesClass to which this persistent volume belongs. Empty value is not allowed. When this field is not set, it indicates that this volume does not belong to any VolumeAttributesClass. This field is mutable and can be changed by the CSI driver after a volume has been updated successfully to a new class. For an unbound PersistentVolume, the volumeAttributesClassName will be matched with unbound PersistentVolumeClaims during the binding process. 
This is an alpha field and requires enabling VolumeAttributesClass feature. volumeMode string volumeMode defines if a volume is intended to be used with a formatted filesystem or to remain in raw block state. Value of Filesystem is implied when not included in spec. Possible enum values: - "Block" means the volume will not be formatted with a filesystem and will remain a raw block device. - "Filesystem" means the volume will be or is formatted with a filesystem. vsphereVolume object Represents a vSphere volume resource. 5.1.2. .spec.awsElasticBlockStore Description Represents a Persistent Disk resource in AWS. An AWS EBS disk must exist before mounting to a container. The disk must also be in the same AWS zone as the kubelet. An AWS EBS disk can only be mounted as read/write once. AWS EBS volumes support ownership management and SELinux relabeling. Type object Required volumeID Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). readOnly boolean readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore 5.1.3. .spec.azureDisk Description AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. Type object Required diskName diskURI Property Type Description cachingMode string cachingMode is the Host Caching mode: None, Read Only, Read Write. Possible enum values: - "None" - "ReadOnly" - "ReadWrite" diskName string diskName is the Name of the data disk in the blob storage diskURI string diskURI is the URI of data disk in the blob storage fsType string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. kind string kind expected values are Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared Possible enum values: - "Dedicated" - "Managed" - "Shared" readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 5.1.4. .spec.azureFile Description AzureFile represents an Azure File Service mount on the host and bind mount to the pod. Type object Required secretName shareName Property Type Description readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 
secretName string secretName is the name of secret that contains Azure Storage Account Name and Key secretNamespace string secretNamespace is the namespace of the secret that contains Azure Storage Account Name and Key default is the same as the Pod shareName string shareName is the azure Share Name 5.1.5. .spec.cephfs Description Represents a Ceph Filesystem mount that lasts the lifetime of a pod Cephfs volumes do not support ownership management or SELinux relabeling. Type object Required monitors Property Type Description monitors array (string) monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace user string user is Optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it 5.1.6. .spec.cephfs.secretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.7. .spec.cinder Description Represents a cinder volume resource in Openstack. A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling. Type object Required volumeID Property Type Description fsType string fsType Filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace volumeID string volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md 5.1.8. .spec.cinder.secretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.9. .spec.claimRef Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. 
fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 5.1.10. .spec.csi Description Represents storage that is managed by an external CSI volume driver (Beta feature) Type object Required driver volumeHandle Property Type Description controllerExpandSecretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace controllerPublishSecretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace driver string driver is the name of the driver to use for this volume. Required. fsType string fsType to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". nodeExpandSecretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace nodePublishSecretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace nodeStageSecretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace readOnly boolean readOnly value to pass to ControllerPublishVolumeRequest. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes of the volume to publish. volumeHandle string volumeHandle is the unique volume name returned by the CSI volume plugin's CreateVolume to refer to the volume on all subsequent calls. Required. 5.1.11. .spec.csi.controllerExpandSecretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.12. .spec.csi.controllerPublishSecretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.13. 
.spec.csi.nodeExpandSecretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.14. .spec.csi.nodePublishSecretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.15. .spec.csi.nodeStageSecretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.16. .spec.fc Description Represents a Fibre Channel volume. Fibre Channel volumes can only be mounted as read/write once. Fibre Channel volumes support ownership management and SELinux relabeling. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. lun integer lun is Optional: FC target lun number readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. targetWWNs array (string) targetWWNs is Optional: FC target worldwide names (WWNs) wwids array (string) wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. 5.1.17. .spec.flexVolume Description FlexPersistentVolumeSource represents a generic persistent volume resource that is provisioned/attached using an exec based plugin. Type object Required driver Property Type Description driver string driver is the name of the driver to use for this volume. fsType string fsType is the Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. options object (string) options is Optional: this field holds extra command options if any. readOnly boolean readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace 5.1.18. .spec.flexVolume.secretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.19. .spec.flocker Description Represents a Flocker volume mounted by the Flocker agent. One and only one of datasetName and datasetUUID should be set. Flocker volumes do not support ownership management or SELinux relabeling. 
Type object Property Type Description datasetName string datasetName is Name of the dataset stored as metadata name on the dataset for Flocker should be considered as deprecated datasetUUID string datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset 5.1.20. .spec.gcePersistentDisk Description Represents a Persistent Disk resource in Google Compute Engine. A GCE PD must exist before mounting to a container. The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling. Type object Required pdName Property Type Description fsType string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk 5.1.21. .spec.glusterfs Description Represents a Glusterfs mount that lasts the lifetime of a pod. Glusterfs volumes do not support ownership management or SELinux relabeling. Type object Required endpoints path Property Type Description endpoints string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod endpointsNamespace string endpointsNamespace is the namespace that contains Glusterfs endpoint. If this field is empty, the EndpointNamespace defaults to the same namespace as the bound PVC. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod 5.1.22. .spec.hostPath Description Represents a host path mapped into a pod. Host path volumes do not support ownership management or SELinux relabeling. Type object Required path Property Type Description path string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string type for HostPath Volume Defaults to "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath Possible enum values: - "" For backwards compatible, leave it empty if unset - "BlockDevice" A block device must exist at the given path - "CharDevice" A character device must exist at the given path - "Directory" A directory must exist at the given path - "DirectoryOrCreate" If nothing exists at the given path, an empty directory will be created there as needed with file mode 0755, having the same group and ownership with Kubelet. - "File" A file must exist at the given path - "FileOrCreate" If nothing exists at the given path, an empty file will be created there as needed with file mode 0644, having the same group and ownership with Kubelet. - "Socket" A UNIX socket must exist at the given path 5.1.23. .spec.iscsi Description ISCSIPersistentVolumeSource represents an ISCSI disk. ISCSI volumes can only be mounted as read/write once. ISCSI volumes support ownership management and SELinux relabeling. Type object Required targetPortal iqn lun Property Type Description chapAuthDiscovery boolean chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication chapAuthSession boolean chapAuthSession defines whether support iSCSI Session CHAP authentication fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi initiatorName string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface <target portal>:<volume name> will be created for the connection. iqn string iqn is Target iSCSI Qualified Name. iscsiInterface string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer lun is iSCSI Target Lun number. portals array (string) portals is the iSCSI Target Portal List. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace targetPortal string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). 5.1.24. .spec.iscsi.secretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.25. .spec.local Description Local represents directly-attached storage with node affinity (Beta feature) Type object Required path Property Type Description fsType string fsType is the filesystem type to mount. It applies only when the Path is a block device. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default value is to auto-select a filesystem if unspecified. path string path of the full path to the volume on the node. 
It can be either a directory or block device (disk, partition, ... ). 5.1.26. .spec.nfs Description Represents an NFS mount that lasts the lifetime of a pod. NFS volumes do not support ownership management or SELinux relabeling. Type object Required server path Property Type Description path string path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs 5.1.27. .spec.nodeAffinity Description VolumeNodeAffinity defines constraints that limit what nodes this volume can be accessed from. Type object Property Type Description required object A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms. 5.1.28. .spec.nodeAffinity.required Description A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 5.1.29. .spec.nodeAffinity.required.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 5.1.30. .spec.nodeAffinity.required.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 5.1.31. .spec.nodeAffinity.required.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 5.1.32. .spec.nodeAffinity.required.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn" values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. 
This array is replaced during a strategic merge patch. 5.1.33. .spec.nodeAffinity.required.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 5.1.34. .spec.nodeAffinity.required.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn" values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 5.1.35. .spec.photonPersistentDisk Description Represents a Photon Controller persistent disk resource. Type object Required pdID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. pdID string pdID is the ID that identifies Photon Controller persistent disk 5.1.36. .spec.portworxVolume Description PortworxVolumeSource represents a Portworx volume resource. Type object Required volumeID Property Type Description fsType string fSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string volumeID uniquely identifies a Portworx volume 5.1.37. .spec.quobyte Description Represents a Quobyte mount that lasts the lifetime of a pod. Quobyte volumes do not support ownership management or SELinux relabeling. Type object Required registry volume Property Type Description group string group to map volume access to Default is no group readOnly boolean readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. registry string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string tenant owning the given Quobyte volume in the Backend Used with dynamically provisioned Quobyte volumes, value is set by the plugin user string user to map volume access to Defaults to serivceaccount user volume string volume is a string that references an already created Quobyte volume by name. 5.1.38. .spec.rbd Description Represents a Rados Block Device mount that lasts the lifetime of a pod. RBD volumes support ownership management and SELinux relabeling. Type object Required monitors image Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd image string image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors array (string) monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace user string user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it 5.1.39. .spec.rbd.secretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.40. .spec.scaleIO Description ScaleIOPersistentVolumeSource represents a persistent ScaleIO volume Type object Required gateway system secretRef Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs" gateway string gateway is the host address of the ScaleIO API Gateway. protectionDomain string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace sslEnabled boolean sslEnabled is the flag to enable/disable SSL communication with Gateway, default false storageMode string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string storagePool is the ScaleIO Storage Pool associated with the protection domain. system string system is the name of the storage system as configured in ScaleIO. volumeName string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. 5.1.41. .spec.scaleIO.secretRef Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 5.1.42. .spec.storageos Description Represents a StorageOS persistent volume resource. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 
secretRef object ObjectReference contains enough information to let you inspect or modify the referred object. volumeName string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. 5.1.43. .spec.storageos.secretRef Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 5.1.44. .spec.vsphereVolume Description Represents a vSphere volume resource. Type object Required volumePath Property Type Description fsType string fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. storagePolicyID string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string storagePolicyName is the storage Policy Based Management (SPBM) profile name. volumePath string volumePath is the path that identifies vSphere volume vmdk 5.1.45. .status Description PersistentVolumeStatus is the current status of a persistent volume. Type object Property Type Description lastPhaseTransitionTime Time lastPhaseTransitionTime is the time the phase transitioned from one to another and automatically resets to current time everytime a volume phase transitions. This is a beta field and requires the PersistentVolumeLastPhaseTransitionTime feature to be enabled (enabled by default). message string message is a human-readable message indicating details about why the volume is in this state. phase string phase indicates if a volume is available, bound to a claim, or released by a claim. 
More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#phase Possible enum values: - "Available" used for PersistentVolumes that are not yet bound Available volumes are held by the binder and matched to PersistentVolumeClaims - "Bound" used for PersistentVolumes that are bound - "Failed" used for PersistentVolumes that failed to be correctly recycled or deleted after being released from a claim - "Pending" used for PersistentVolumes that are not available - "Released" used for PersistentVolumes where the bound PersistentVolumeClaim was deleted released volumes must be recycled before becoming available again this phase is used by the persistent volume claim binder to signal to another process to reclaim the resource reason string reason is a brief CamelCase string that describes any failure and is meant for machine parsing and tidy display in the CLI. 5.2. API endpoints The following API endpoints are available: /api/v1/persistentvolumes DELETE : delete collection of PersistentVolume GET : list or watch objects of kind PersistentVolume POST : create a PersistentVolume /api/v1/watch/persistentvolumes GET : watch individual changes to a list of PersistentVolume. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/persistentvolumes/{name} DELETE : delete a PersistentVolume GET : read the specified PersistentVolume PATCH : partially update the specified PersistentVolume PUT : replace the specified PersistentVolume /api/v1/watch/persistentvolumes/{name} GET : watch changes to an object of kind PersistentVolume. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /api/v1/persistentvolumes/{name}/status GET : read status of the specified PersistentVolume PATCH : partially update status of the specified PersistentVolume PUT : replace status of the specified PersistentVolume 5.2.1. /api/v1/persistentvolumes HTTP method DELETE Description delete collection of PersistentVolume Table 5.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.2. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind PersistentVolume Table 5.3. HTTP responses HTTP code Response body 200 - OK PersistentVolumeList schema 401 - Unauthorized Empty HTTP method POST Description create a PersistentVolume Table 5.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23.
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.5. Body parameters Parameter Type Description body PersistentVolume schema Table 5.6. HTTP responses HTTP code Response body 200 - OK PersistentVolume schema 201 - Created PersistentVolume schema 202 - Accepted PersistentVolume schema 401 - Unauthorized Empty 5.2.2. /api/v1/watch/persistentvolumes HTTP method GET Description watch individual changes to a list of PersistentVolume. deprecated: use the 'watch' parameter with a list operation instead. Table 5.7. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.3. /api/v1/persistentvolumes/{name} Table 5.8. Global path parameters Parameter Type Description name string name of the PersistentVolume HTTP method DELETE Description delete a PersistentVolume Table 5.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.10. HTTP responses HTTP code Response body 200 - OK PersistentVolume schema 202 - Accepted PersistentVolume schema 401 - Unauthorized Empty HTTP method GET Description read the specified PersistentVolume Table 5.11. HTTP responses HTTP code Response body 200 - OK PersistentVolume schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PersistentVolume Table 5.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.13.
HTTP responses HTTP code Response body 200 - OK PersistentVolume schema 201 - Created PersistentVolume schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PersistentVolume Table 5.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.15. Body parameters Parameter Type Description body PersistentVolume schema Table 5.16. HTTP responses HTTP code Response body 200 - OK PersistentVolume schema 201 - Created PersistentVolume schema 401 - Unauthorized Empty 5.2.4. /api/v1/watch/persistentvolumes/{name} Table 5.17. Global path parameters Parameter Type Description name string name of the PersistentVolume HTTP method GET Description watch changes to an object of kind PersistentVolume. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 5.18. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.5. /api/v1/persistentvolumes/{name}/status Table 5.19. Global path parameters Parameter Type Description name string name of the PersistentVolume HTTP method GET Description read status of the specified PersistentVolume Table 5.20. HTTP responses HTTP code Response body 200 - OK PersistentVolume schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified PersistentVolume Table 5.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered.
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.22. HTTP responses HTTP code Response body 200 - OK PersistentVolume schema 201 - Created PersistentVolume schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified PersistentVolume Table 5.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.24. Body parameters Parameter Type Description body PersistentVolume schema Table 5.25. HTTP responses HTTP code Response body 200 - OK PersistentVolume schema 201 - Created PersistentVolume schema 401 - Unauthorized Empty
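To tie the schema and the endpoints together, the following is a minimal sketch of a PersistentVolume manifest that could be submitted with POST /api/v1/persistentvolumes (for example, oc create -f pv.yaml). The monitor address, pool, image, and secret names are illustrative assumptions, not values taken from this reference:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-rbd-pv              # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  rbd:
    monitors:                       # assumed Ceph monitor endpoint
      - 192.168.1.10:6789
    pool: rbd                       # default pool, per .spec.rbd above
    image: example-image            # hypothetical rados image name
    user: admin                     # default rados user
    secretRef:
      name: ceph-secret             # hypothetical secret holding the keyring
    fsType: ext4
    readOnly: false

The created object can then be read back with GET /api/v1/persistentvolumes/example-rbd-pv, and its phase can be inspected through the /status subresource described above.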
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/storage_apis/persistentvolume-v1
Installing
Installing Red Hat Enterprise Linux AI 1.1 Installation documentation on various platforms Red Hat RHEL AI Documentation Team
[ "use the embedded container image ostreecontainer --url=/run/install/repo/container --transport=oci --no-signature-verification switch bootc to point to Red Hat container image for upgrades %post bootc switch --mutate-in-place --transport registry registry.redhat.io/rhelai1/bootc-nvidia-rhel9:1.1 touch /etc/cloud/cloud-init.disabled %end ## user customizations follow customize this for your target system network environment network --bootproto=dhcp --device=link --activate customize this for your target system desired disk partitioning clearpart --all --initlabel --disklabel=gpt reqpart --add-boot part / --grow --fstype xfs services can also be customized via Kickstart firewall --disabled services --enabled=sshd optionally add a user user --name=cloud-user --groups=wheel --plaintext --password <password> sshkey --username cloud-user \"ssh-ed25519 AAAAC3Nza.....\" if desired, inject an SSH key for root rootpw --iscrypted locked sshkey --username root \"ssh-ed25519 AAAAC3Nza...\" reboot", "mkksiso rhelai-bootc.ks <downloaded-iso-image> rhelai-bootc-ks.iso", "customize this for your target system network environment network --bootproto=dhcp --device=link --activate customize this for your target system desired disk partitioning clearpart --all --initlabel --disklabel=gpt reqpart --add-boot part / --grow --fstype xfs customize this to include your own bootc container ostreecontainer --url quay.io/<your-user-name>/nvidia-bootc:latest services can also be customized via Kickstart firewall --disabled services --enabled=sshd optionally add a user user --name=cloud-user --groups=wheel --plaintext --password <password> sshkey --username cloud-user \"ssh-ed25519 AAAAC3Nza.....\" if desired, inject an SSH key for root rootpw --iscrypted locked sshkey --username root \"ssh-ed25519 AAAAC3Nza...\" reboot", "mkksiso rhelai-bootc.ks <downloaded-iso-image> rhelai-bootc-ks.iso", "ilab", "ilab Usage: ilab [OPTIONS] COMMAND [ARGS] CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/auser/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls taxonomy Command Group for Interacting with the Taxonomy of InstructLab. 
Aliases: chat model chat convert model convert diff taxonomy diff download model download evaluate model evaluate generate data generate init config init list model list serve model serve sysinfo system info test model test train model train", "export BUCKET=<custom_bucket_name> export RAW_AMI=nvidia-bootc.ami export AMI_NAME=\"rhel-ai\" export DEFAULT_VOLUME_SIZE=1000", "aws s3 mb s3://USDBUCKET", "printf '{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Service\": \"vmie.amazonaws.com\" }, \"Action\": \"sts:AssumeRole\", \"Condition\": { \"StringEquals\":{ \"sts:Externalid\": \"vmimport\" } } } ] }' > trust-policy.json", "aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json", "printf '{ \"Version\":\"2012-10-17\", \"Statement\":[ { \"Effect\":\"Allow\", \"Action\":[ \"s3:GetBucketLocation\", \"s3:GetObject\", \"s3:ListBucket\" ], \"Resource\":[ \"arn:aws:s3:::%s\", \"arn:aws:s3:::%s/*\" ] }, { \"Effect\":\"Allow\", \"Action\":[ \"ec2:ModifySnapshotAttribute\", \"ec2:CopySnapshot\", \"ec2:RegisterImage\", \"ec2:Describe*\" ], \"Resource\":\"*\" } ] }' USDBUCKET USDBUCKET > role-policy.json", "aws iam put-role-policy --role-name vmimport --policy-name vmimport-USDBUCKET --policy-document file://role-policy.json", "curl -Lo disk.raw <link-to-raw-file>", "aws s3 cp disk.raw s3://USDBUCKET/USDRAW_AMI", "printf '{ \"Description\": \"my-image\", \"Format\": \"raw\", \"UserBucket\": { \"S3Bucket\": \"%s\", \"S3Key\": \"%s\" } }' USDBUCKET USDRAW_AMI > containers.json", "task_id=USD(aws ec2 import-snapshot --disk-container file://containers.json | jq -r .ImportTaskId)", "aws ec2 describe-import-snapshot-tasks --filters Name=task-state,Values=active", "snapshot_id=USD(aws ec2 describe-snapshots | jq -r '.Snapshots[] | select(.Description | contains(\"'USD{task_id}'\")) | .SnapshotId')", "aws ec2 create-tags --resources USDsnapshot_id --tags Key=Name,Value=\"USDAMI_NAME\"", "ami_id=USD(aws ec2 register-image --name \"USDAMI_NAME\" --description \"USDAMI_NAME\" --architecture x86_64 --root-device-name /dev/sda1 --block-device-mappings \"DeviceName=/dev/sda1,Ebs={VolumeSize=USD{DEFAULT_VOLUME_SIZE},SnapshotId=USD{snapshot_id}}\" --virtualization-type hvm --ena-support | jq -r .ImageId)", "aws ec2 create-tags --resources USDami_id --tags Key=Name,Value=\"USDAMI_NAME\"", "aws ec2 describe-images --owners self", "aws ec2 describe-security-groups", "aws ec2 describe-subnets", "instance_name=rhel-ai-instance ami=<ami-id> instance_type=<instance-type-size> key_name=<key-pair-name> security_group=<sg-id> disk_size=<size-of-disk>", "aws ec2 run-instances --image-id USDami --instance-type USDinstance_type --key-name USDkey_name --security-group-ids USDsecurity_group --subnet-id USDsubnet --block-device-mappings DeviceName=/dev/sda1,Ebs='{VolumeSize='USDdisk_size'}' --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value='USDinstance_name'}]'", "ilab", "ilab Usage: ilab [OPTIONS] COMMAND [ARGS] CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/cloud--user/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. 
data Command Group for Interacting with the Data generated by model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls. taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat convert model convert diff taxonomy diff download model download evaluate model evaluate generate data generate init config init list model list serve model serve sysinfo system info test model test train model train", "ibmcloud login", "ibmcloud login API endpoint: https://cloud.ibm.com Region: us-east Get a one-time code from https://identity-1.eu-central.iam.cloud.ibm.com/identity/passcode to proceed. Open the URL in the default browser? [Y/n] > One-time code > Authenticating OK Select an account: 1. <account-name> 2. <account-name-2> API endpoint: https://cloud.ibm.com Region: us-east User: <user-name> Account: <selected-account> Resource group: No resource group targeted, use 'ibmcloud target -g RESOURCE_GROUP'", "ibmcloud plugin install cloud-object-storage infrastructure-service", "ibmcloud target -g Default", "ibmcloud target -r us-east", "ibmcloud catalog service cloud-object-storage --output json | jq -r '.[].children[] | select(.children != null) | .children[].name'", "cos_deploy_plan=premium-global-deployment", "cos_si_name=THE_NAME_OF_YOUR_SERVICE_INSTANCE", "ibmcloud resource service-instance-create USD{cos_si_name} cloud-object-storage standard global -d USD{cos_deploy_plan}", "cos_crn=USD(ibmcloud resource service-instance USD{cos_si_name} --output json| jq -r '.[] | select(.crn | contains(\"cloud-object-storage\")) | .crn')", "ibmcloud cos config crn --crn USD{cos_crn} --force", "bucket_name=NAME_OF_MY_BUCKET", "ibmcloud cos bucket-create --bucket USD{bucket_name}", "cos_si_guid=USD(ibmcloud resource service-instance USD{cos_si_name} --output json| jq -r '.[] | select(.crn | contains(\"cloud-object-storage\")) | .guid')", "ibmcloud iam authorization-policy-create is cloud-object-storage Reader --source-resource-type image --target-service-instance-id USD{cos_si_guid}", "curl -Lo disk.qcow2 \"PASTE_HERE_THE_LINK_OF_THE_QCOW2_FILE\"", "image_name=rhel-ai-20240703v0", "ibmcloud cos upload --bucket USD{bucket_name} --key USD{image_name}.qcow2 --file disk.qcow2 --region <region>", "ibmcloud is image-create USD{image_name} --file cos://<region>/USD{bucket_name}/USD{image_name}.qcow2 --os-name red-ai-9-amd64-nvidia-byol", "image_id=USD(ibmcloud is images --visibility private --output json | jq -r '.[] | select(.name==\"'USDimage_name'\") | .id')", "while ibmcloud is image --output json USD{image_id} | jq -r .status | grep -xq pending; do sleep 1; done", "ibmcloud is image USD{image_id}", "ibmcloud login -c <ACCOUNT_ID> -r <REGION> -g <RESOURCE_GROUP>", "ibmcloud plugin install infrastructure-service", "ssh-keygen -f ibmcloud -t ed25519", "ibmcloud is key-create my-ssh-key @ibmcloud.pub --key-type ed25519", "ibmcloud is floating-ip-reserve my-public-ip --zone <region>", "ibmcloud is instance-profiles", "name=my-rhelai-instance vpc=my-vpc-in-us-east zone=us-east-1 subnet=my-subnet-in-us-east-1 instance_profile=gx3-64x320x4l4 image=my-custom-rhelai-image sshkey=my-ssh-key floating_ip=my-public-ip disk_size=250", "ibmcloud is instance-create USDname USDvpc USDzone USDinstance_profile USDsubnet --image USDimage --keys USDsshkey --boot-volume '{\"name\": \"'USD{name}'-boot\", \"volume\": {\"name\": \"'USD{name}'-boot\", \"capacity\": 'USD{disk_size}', \"profile\": {\"name\": \"general-purpose\"}}}' 
--allow-ip-spoofing false", "ibmcloud is floating-ip-update USDfloating_ip --nic primary --in USDname", "ilab", "ilab Usage: ilab [OPTIONS] COMMAND [ARGS] CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/auser/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat convert model convert diff taxonomy diff download model download evaluate model evaluate generate data generate init config init list model model_list serve model serve sysinfo system info test model test train model train", "name=my-rhelai-instance", "data_volume_size=1000", "ibmcloud is instance-volume-attachment-add data USD{name} --new-volume-name USD{name}-data --profile general-purpose --capacity USD{data_volume_size}", "lsblk", "disk=/dev/vdb", "sgdisk -n 1:0:0 USDdisk", "mkfs.xfs -L ilab-data USD{disk}1", "echo LABEL=ilab-data /mnt xfs defaults 0 0 >> /etc/fstab", "systemctl daemon-reload", "mount -a", "chmod 1777 /mnt/", "echo 'export ILAB_HOME=/mnt' >> USDHOME/.bash_profile", "source USDHOME/.bash_profile" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.1/html-single/installing/installing_overview
3.6. Caching Limitations
3.6. Caching Limitations XML, BLOB, CLOB, and OBJECT types cannot be used as part of the cache key for prepared statement or procedure cache keys. The exact SQL string, including the cache hint if present, must match the cached entry for the results to be reused. This allows cache usage to skip parsing and resolving for faster responses. Result set caching is transactional by default using the NON_XA transaction mode. To use full XA support, change the configuration to use NON_DURABLE_XA. Clearing the results cache clears all cache entries for all VDBs.
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_5_caching_guide/caching_limitations
Chapter 5. Configuring network access
Chapter 5. Configuring network access Configure network access for your Data Grid deployment and find out about internal network services. 5.1. Exposing Data Grid clusters on the network Make Data Grid clusters available on the network so you can access Data Grid Console as well as REST and Hot Rod endpoints. By default, the Data Grid chart exposes deployments through a Route but you can configure it to expose clusters via Load Balancer or Node Port. You can also configure the Data Grid chart so that deployments are not exposed on the network and only available internally to the OpenShift cluster. Procedure Specify one of the following for the deploy.expose.type field: Option Description Route Exposes Data Grid through a route. This is the default value. LoadBalancer Exposes Data Grid through a load balancer service. NodePort Exposes Data Grid through a node port service. "" (empty value) Disables exposing Data Grid on the network. Optionally specify a hostname with the deploy.expose.host field if you expose Data Grid through a route. Optionally specify a port with the deploy.expose.nodePort field if you expose Data Grid through a node port service. Install or upgrade your Data Grid Helm release. 5.2. Retrieving network service details Get network service details so you can connect to Data Grid clusters. Prerequisites Expose your Data Grid cluster on the network. Have an oc client. Procedure Use one of the following commands to retrieve network service details: If you expose Data Grid through a route: If you expose Data Grid through a load balancer or node port service: 5.3. Network services The Data Grid chart creates default network services for internal access. Service Port Protocol Description <helm_release_name> 11222 TCP Provides access to Data Grid Hot Rod and REST endpoints. <helm_release_name> 11223 TCP Provides access to Data Grid metrics. <helm_release_name>-ping 8888 TCP Allows Data Grid pods to discover each other and form clusters. You can retrieve details about internal network services as follows:
[ "oc get routes", "oc get services", "oc get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) infinispan ClusterIP 192.0.2.0 <none> 11222/TCP,11223/TCP infinispan-ping ClusterIP None <none> 8888/TCP" ]
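For reference, a minimal sketch of the Helm values described in section 5.1 follows; the field names mirror the deploy.expose structure above, and the hostname is an assumption for illustration only:

deploy:
  expose:
    type: Route                     # or LoadBalancer, NodePort, "" to disable exposure
    host: datagrid.apps.example.com # optional, used only with Route
    # nodePort: 30222               # optional, used only with NodePort

Installing or upgrading the Data Grid Helm release with these values exposes the cluster through a route at the given hostname; leaving type empty keeps the deployment internal to the OpenShift cluster.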
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/building_and_deploying_data_grid_clusters_with_helm/network-access
Chapter 3. Reference design specifications
Chapter 3. Reference design specifications 3.1. Telco core and RAN DU reference design specifications The telco core reference design specification (RDS) describes OpenShift Container Platform 4.16 clusters running on commodity hardware that can support large scale telco applications including control plane and some centralized data plane functions. The telco RAN RDS describes the configuration for clusters running on commodity hardware to host 5G workloads in the Radio Access Network (RAN). 3.1.1. Reference design specifications for telco 5G deployments Red Hat and certified partners offer deep technical expertise and support for networking and operational capabilities required to run telco applications on OpenShift Container Platform 4.16 clusters. Red Hat's telco partners require a well-integrated, well-tested, and stable environment that can be replicated at scale for enterprise 5G solutions. The telco core and RAN DU reference design specifications (RDS) outline the recommended solution architecture based on a specific version of OpenShift Container Platform. Each RDS describes a tested and validated platform configuration for telco core and RAN DU use models. The RDS ensures an optimal experience when running your applications by defining the set of critical KPIs for telco 5G core and RAN DU. Following the RDS minimizes high severity escalations and improves application stability. 5G use cases are evolving and your workloads are continually changing. Red Hat is committed to iterating over the telco core and RAN DU RDS to support evolving requirements based on customer and partner feedback. 3.1.2. Reference design scope The telco core and telco RAN reference design specifications (RDS) capture the recommended, tested, and supported configurations to get reliable and repeatable performance for clusters running the telco core and telco RAN profiles. Each RDS includes the released features and supported configurations that are engineered and validated for clusters to run the individual profiles. The configurations provide a baseline OpenShift Container Platform installation that meets feature and KPI targets. Each RDS also describes expected variations for each individual configuration. Validation of each RDS includes many long duration and at-scale tests. Note The validated reference configurations are updated for each major Y-stream release of OpenShift Container Platform. Z-stream patch releases are periodically re-tested against the reference configurations. 3.1.3. Deviations from the reference design Deviating from the validated telco core and telco RAN DU reference design specifications (RDS) can have significant impact beyond the specific component or feature that you change. Deviations require analysis and engineering in the context of the complete solution. Important All deviations from the RDS should be analyzed and documented with clear action tracking information. Due diligence is expected from partners to understand how to bring deviations into line with the reference design. This might require partners to provide additional resources to engage with Red Hat to work towards enabling their use case to achieve a best in class outcome with the platform. This is critical for the supportability of the solution and ensuring alignment across Red Hat and with partners. Deviation from the RDS can have some or all of the following consequences: It can take longer to resolve issues. 
There is a risk of missing project service-level agreements (SLAs), project deadlines, end provider performance requirements, and so on. Unapproved deviations may require escalation at executive levels. Note Red Hat prioritizes the servicing of requests for deviations based on partner engagement priorities. 3.2. Telco RAN DU reference design specification 3.2.1. Telco RAN DU 4.16 reference design overview The Telco RAN distributed unit (DU) 4.16 reference design configures an OpenShift Container Platform 4.16 cluster running on commodity hardware to host telco RAN DU workloads. It captures the recommended, tested, and supported configurations to get reliable and repeatable performance for a cluster running the telco RAN DU profile. 3.2.1.1. Deployment architecture overview You deploy the telco RAN DU 4.16 reference configuration to managed clusters from a centrally managed RHACM hub cluster. The reference design specification (RDS) includes configuration of the managed clusters and the hub cluster components. Figure 3.1. Telco RAN DU deployment architecture overview 3.2.2. Telco RAN DU use model overview Use the following information to plan telco RAN DU workloads, cluster resources, and hardware specifications for the hub cluster and managed single-node OpenShift clusters. 3.2.2.1. Telco RAN DU application workloads DU worker nodes must have 3rd Generation Xeon (Ice Lake) 2.20 GHz or better CPUs with firmware tuned for maximum performance. 5G RAN DU user applications and workloads should conform to the following best practices and application limits: Develop cloud-native network functions (CNFs) that conform to the latest version of the CNF best practices guide . Use SR-IOV for high performance networking. Use exec probes sparingly and only when no other suitable options are available Do not use exec probes if a CNF uses CPU pinning. Use other probe implementations, for example, httpGet or tcpSocket (a sketch is shown below). When you need to use exec probes, limit the exec probe frequency and quantity. The maximum number of exec probes must be kept below 10, and frequency must not be set to less than 10 seconds. Avoid using exec probes unless there is absolutely no viable alternative. Note Startup probes require minimal resources during steady-state operation. The limitation on exec probes applies primarily to liveness and readiness probes. 3.2.2.2. Telco RAN DU representative reference application workload characteristics The representative reference application workload has the following characteristics: Has a maximum of 15 pods and 30 containers for the vRAN application including its management and control functions Uses a maximum of 2 ConfigMap and 4 Secret CRs per pod Uses a maximum of 10 exec probes with a frequency of not less than 10 seconds Incremental application load on the kube-apiserver is less than 10% of the cluster platform usage Note You can extract CPU load from the platform metrics. For example: query=avg_over_time(pod:container_cpu_usage:sum{namespace="openshift-kube-apiserver"}[30m]) Application logs are not collected by the platform log collector Aggregate traffic on the primary CNI is less than 1 MBps 3.2.2.3. Telco RAN DU worker node cluster resource utilization The maximum number of running pods in the system, inclusive of application workloads and OpenShift Container Platform pods, is 120.
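To illustrate the probe guidance from "Telco RAN DU application workloads" above, the following is a minimal sketch of a readiness probe that avoids exec; the container name, path, and port are assumptions rather than values from this specification:

containers:
  - name: example-cnf               # hypothetical CNF container
    readinessProbe:
      httpGet:                      # preferred over exec, especially with CPU pinning
        path: /healthz
        port: 8080
      periodSeconds: 10             # keep probe frequency at 10 seconds or more
      failureThreshold: 3

A tcpSocket probe follows the same pattern when the workload does not expose an HTTP health endpoint.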
Resource utilization OpenShift Container Platform resource utilization varies depending on many factors including application workload characteristics such as: Pod count Type and frequency of probes Messaging rates on primary CNI or secondary CNI with kernel networking API access rate Logging rates Storage IOPS Cluster resource requirements are applicable under the following conditions: The cluster is running the described representative application workload. The cluster is managed with the constraints described in "Telco RAN DU worker node cluster resource utilization". Components noted as optional in the RAN DU use model configuration are not applied. Important You will need to do additional analysis to determine the impact on resource utilization and ability to meet KPI targets for configurations outside the scope of the Telco RAN DU reference design. You might have to allocate additional resources in the cluster depending on your requirements. Additional resources Telco RAN DU 4.16 validated software components 3.2.2.4. Hub cluster management characteristics Red Hat Advanced Cluster Management (RHACM) is the recommended cluster management solution. Configure it to the following limits on the hub cluster: Configure a maximum of 5 RHACM policies with a compliant evaluation interval of at least 10 minutes. Use a maximum of 10 managed cluster templates in policies. Where possible, use hub-side templating. Disable all RHACM add-ons except for the policy-controller and observability-controller add-ons. Set Observability to the default configuration. Important Configuring optional components or enabling additional features will result in additional resource usage and can reduce overall system performance. For more information, see Reference design deployment components . Table 3.1. OpenShift platform resource utilization under reference application load Metric Limit Notes CPU usage Less than 4000 mc - 2 cores (4 hyperthreads) Platform CPU is pinned to reserved cores, including both hyperthreads in each reserved core. The system is engineered to use 3 CPUs (3000mc) at steady-state to allow for periodic system tasks and spikes. Memory used Less than 16G 3.2.2.5. Telco RAN DU RDS components The following sections describe the various OpenShift Container Platform components and configurations that you use to configure and deploy clusters to run telco RAN DU workloads. Figure 3.2. Telco RAN DU reference design components Note Ensure that components that are not included in the telco RAN DU profile do not affect the CPU resources allocated to workload applications. Important Out of tree drivers are not supported. Additional resources For details of the telco RAN RDS KPI test results, see Telco RAN DU reference design specification KPI test results . This information is only available to customers and partners. 3.2.3. Telco RAN DU 4.16 reference design components The following sections describe the various OpenShift Container Platform components and configurations that you use to configure and deploy clusters to run RAN DU workloads. 3.2.3.1. Host firmware tuning New in this release No reference design updates in this release Description Configure system level performance. See Configuring host firmware for low latency and high performance for recommended settings. If Ironic inspection is enabled, the firmware setting values are available from the per-cluster BareMetalHost CR on the hub cluster. 
You enable Ironic inspection with a label in the spec.clusters.nodes field in the SiteConfig CR that you use to install the cluster. For example: nodes: - hostName: "example-node1.example.com" ironicInspect: "enabled" Note The telco RAN DU reference SiteConfig does not enable the ironicInspect field by default. Limits and requirements Hyperthreading must be enabled Engineering considerations Tune all settings for maximum performance Note You can tune firmware selections for power savings at the expense of performance as required. 3.2.3.2. Node Tuning Operator New in this release With this release, the Node Tuning Operator supports setting CPU frequencies in the PerformanceProfile for reserved and isolated core CPUs. This is an optional feature that you can use to define specific frequencies. Use this feature to set specific frequencies by enabling the intel_pstate CPUFreq driver in the Intel hardware. You must follow Intel's recommendations on frequencies for FlexRAN-like applications, which requires the default CPU frequency to be set to a lower value than default running frequency. Previously, for the RAN DU-profile, setting the realTime workload hint to true in the PerformanceProfile always disabled the intel_pstate . With this release, the Node Tuning Operator detects the underlying Intel hardware using TuneD and appropriately sets the intel_pstate kernel parameter based on the processor's generation. In this release, OpenShift Container Platform deployments with a performance profile now default to using cgroups v2 as the underlying resource management layer. If you run workloads that are not ready for this change, you can still revert to using the older cgroups v1 mechanism. Description You tune the cluster performance by creating a performance profile. Settings that you configure with a performance profile include: Selecting the realtime or non-realtime kernel. Allocating cores to a reserved or isolated cpuset . OpenShift Container Platform processes allocated to the management workload partition are pinned to reserved set. Enabling kubelet features (CPU manager, topology manager, and memory manager). Configuring huge pages. Setting additional kernel arguments. Setting per-core power tuning and max CPU frequency. Reserved and isolated core frequency tuning. Limits and requirements The Node Tuning Operator uses the PerformanceProfile CR to configure the cluster. You need to configure the following settings in the RAN DU profile PerformanceProfile CR: Select reserved and isolated cores and ensure that you allocate at least 4 hyperthreads (equivalent to 2 cores) on Intel 3rd Generation Xeon (Ice Lake) 2.20 GHz CPUs or better with firmware tuned for maximum performance. Set the reserved cpuset to include both hyperthread siblings for each included core. Unreserved cores are available as allocatable CPU for scheduling workloads. Ensure that hyperthread siblings are not split across reserved and isolated cores. Configure reserved and isolated CPUs to include all threads in all cores based on what you have set as reserved and isolated CPUs. Set core 0 of each NUMA node to be included in the reserved CPU set. Set the huge page size to 1G. Note You should not add additional workloads to the management partition. Only those pods which are part of the OpenShift management platform should be annotated into the management partition. Engineering considerations You should use the RT kernel to meet performance requirements. Note You can use the non-RT kernel if required. 
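To make the PerformanceProfile settings listed above more concrete, a minimal sketch follows; the CPU ranges, hugepage count, and node selector are assumptions that depend on the target hardware and must be adapted per cluster:

apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: openshift-node-performance-profile
spec:
  cpu:
    reserved: "0-1,32-33"           # both hyperthread siblings of each reserved core
    isolated: "2-31,34-63"          # remaining threads available for workloads
  hugepages:
    defaultHugepagesSize: 1G
    pages:
      - size: 1G
        count: 32                   # workload dependent
  realTimeKernel:
    enabled: true                   # non-RT kernel can be used if required
  workloadHints:
    realTime: true
  nodeSelector:
    node-role.kubernetes.io/master: ""   # single-node OpenShift example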
The number of huge pages that you configure depends on the application workload requirements. Variation in this parameter is expected and allowed. Variation is expected in the configuration of reserved and isolated CPU sets based on selected hardware and additional components in use on the system. Variation must still meet the specified limits. Hardware without IRQ affinity support impacts isolated CPUs. To ensure that pods with guaranteed whole CPU QoS have full use of the allocated CPU, all hardware in the server must support IRQ affinity. For more information, see About support of IRQ affinity setting . Important cgroup v1 is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. 3.2.3.3. PTP Operator New in this release Configuring linuxptp services as grandmaster clock (T-GM) for dual Intel E810 Westport Channel NICs is now a generally available feature. You can configure the linuxptp services ptp4l and phc2sys as a highly available (HA) system clock for dual PTP boundary clocks (T-BC). Description See PTP timing for details of support and configuration of PTP in cluster nodes. The DU node can run in the following modes: As an ordinary clock (OC) synced to a grandmaster clock or boundary clock (T-BC) As a grandmaster clock synced from GPS with support for single or dual card E810 Westport Channel NICs. As dual boundary clocks (one per NIC) with support for E810 Westport Channel NICs Allow for High Availability of the system clock when there are multiple time sources on different NICs. Optional: as a boundary clock for radio units (RUs) Events and metrics for grandmaster clocks are a Tech Preview feature added in the 4.14 telco RAN DU RDS. For more information see Using the PTP hardware fast event notifications framework . You can subscribe applications to PTP events that happen on the node where the DU application is running. Limits and requirements Limited to two boundary clocks for dual NIC and HA Limited to two WPC card configuration for T-GM Engineering considerations Configurations are provided for ordinary clock, boundary clock, grandmaster clock, or PTP-HA PTP fast event notifications uses ConfigMap CRs to store PTP event subscriptions Use Intel E810-XXV-4T Westport Channel NICs for PTP grandmaster clocks with GPS timing, minimum firmware version 4.40 3.2.3.4. SR-IOV Operator New in this release With this release, you can use the SR-IOV Network Operator to configure QinQ (802.1ad and 802.1q) tagging. QinQ tagging provides efficient traffic management by enabling the use of both inner and outer VLAN tags. Outer VLAN tagging is hardware accelerated, leading to faster network performance. The update extends beyond the SR-IOV Network Operator itself. You can now configure QinQ on externally managed VFs by setting the outer VLAN tag using nmstate . QinQ support varies across different NICs. For a comprehensive list of known limitations for specific NIC models, see Configuring QinQ support for SR-IOV enabled workloads in the Additional resources section. 
With this release, you can configure the SR-IOV Network Operator to drain nodes in parallel during network policy updates, dramatically accelerating the setup process. This translates to significant time savings, especially for large cluster deployments that previously took hours or even days to complete. Description The SR-IOV Operator provisions and configures the SR-IOV CNI and device plugins. Both netdevice (kernel VFs) and vfio (DPDK) devices are supported. Limits and requirements Use OpenShift Container Platform supported devices SR-IOV and IOMMU enablement in BIOS: The SR-IOV Network Operator will automatically enable IOMMU on the kernel command line. SR-IOV VFs do not receive link state updates from the PF. If link down detection is needed you must configure this at the protocol level. You can apply multi-network policies on netdevice drivers types only. Multi-network policies require the iptables tool, which cannot manage vfio driver types. Engineering considerations SR-IOV interfaces with the vfio driver type are typically used to enable additional secondary networks for applications that require high throughput or low latency. Customer variation on the configuration and number of SriovNetwork and SriovNetworkNodePolicy custom resources (CRs) is expected. IOMMU kernel command line settings are applied with a MachineConfig CR at install time. This ensures that the SriovOperator CR does not cause a reboot of the node when adding them. SR-IOV support for draining nodes in parallel is not applicable in a single-node OpenShift cluster. If you exclude the SriovOperatorConfig CR from your deployment, the CR will not be created automatically. In scenarios where you pin or restrict workloads to specific nodes, the SR-IOV parallel node drain feature will not result in the rescheduling of pods. In these scenarios, the SR-IOV Operator disables the parallel node drain functionality. Additional resources Preparing the GitOps ZTP site configuration repository for version independence Configuring QinQ support for SR-IOV enabled workloads 3.2.3.5. Logging New in this release No reference design updates in this release Description Use logging to collect logs from the far edge node for remote analysis. The recommended log collector is Vector. Engineering considerations Handling logs beyond the infrastructure and audit logs, for example, from the application workload requires additional CPU and network bandwidth based on additional logging rate. As of OpenShift Container Platform 4.14, Vector is the reference log collector. Note Use of fluentd in the RAN use model is deprecated. 3.2.3.6. SRIOV-FEC Operator New in this release No reference design updates in this release Description SRIOV-FEC Operator is an optional 3rd party Certified Operator supporting FEC accelerator hardware. Limits and requirements Starting with FEC Operator v2.7.0: SecureBoot is supported The vfio driver for the PF requires the usage of vfio-token that is injected into Pods. Applications in the pod can pass the VF token to DPDK by using the EAL parameter --vfio-vf-token . Engineering considerations The SRIOV-FEC Operator uses CPU cores from the isolated CPU set. You can validate FEC readiness as part of the pre-checks for application deployment, for example, by extending the validation policy. 3.2.3.7. Local Storage Operator New in this release No reference design updates in this release Description You can create persistent volumes that can be used as PVC resources by applications with the Local Storage Operator. 
The number and type of PV resources that you create depends on your requirements. Engineering considerations Create backing storage for PV CRs before creating the PV . This can be a partition, a local volume, LVM volume, or full disk. Refer to the device listing in LocalVolume CRs by the hardware path used to access each device to ensure correct allocation of disks and partitions. Logical names (for example, /dev/sda ) are not guaranteed to be consistent across node reboots. For more information, see the RHEL 9 documentation on device identifiers . 3.2.3.8. LVMS Operator New in this release No reference design updates in this release Note LVMS Operator is an optional component. When you use the LVMS Operator as the storage solution, it replaces the Local Storage Operator, and the CPU required will be assigned to the management partition as platform overhead. The reference configuration must include one of these storage solutions but not both. Description The LVMS Operator provides dynamic provisioning of block and file storage. The LVMS Operator creates logical volumes from local devices that can be used as PVC resources by applications. Volume expansion and snapshots are also possible. The following example configuration creates a vg1 volume group that leverages all available disks on the node except the installation disk: StorageLVMCluster.yaml apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: storage-lvmcluster namespace: openshift-storage annotations: ran.openshift.io/ztp-deploy-wave: "10" spec: storage: deviceClasses: - name: vg1 thinPoolConfig: name: thin-pool-1 sizePercent: 90 overprovisionRatio: 10 Limits and requirements In single-node OpenShift clusters, persistent storage must be provided by either LVMS or local storage, not both. Engineering considerations Ensure that sufficient disks or partitions are available for storage requirements. 3.2.3.9. Workload partitioning New in this release No reference design updates in this release Description Workload partitioning pins OpenShift platform and Day 2 Operator pods that are part of the DU profile to the reserved cpuset and removes the reserved CPU from node accounting. This leaves all unreserved CPU cores available for user workloads. The method of enabling and configuring workload partitioning changed in OpenShift Container Platform 4.14. 4.14 and later Configure partitions by setting installation parameters: cpuPartitioningMode: AllNodes Configure management partition cores with the reserved CPU set in the PerformanceProfile CR 4.13 and earlier Configure partitions with extra MachineConfiguration CRs applied at install-time Limits and requirements Namespace and Pod CRs must be annotated to allow the pod to be applied to the management partition Pods with CPU limits cannot be allocated to the partition. This is because mutation can change the pod QoS. For more information about the minimum number of CPUs that can be allocated to the management partition, see Node Tuning Operator . Engineering considerations Workload Partitioning pins all management pods to reserved cores. A sufficient number of cores must be allocated to the reserved set to account for operating system, management pods, and expected spikes in CPU use that occur when the workload starts, the node reboots, or other system events happen. 3.2.3.10. 
Cluster tuning New in this release No reference design updates in this release Description See the section Cluster capabilities section for a full list of optional components that you enable or disable before installation. Limits and requirements Cluster capabilities are not available for installer-provisioned installation methods. You must apply all platform tuning configurations. The following table lists the required platform tuning configurations: Table 3.2. Cluster capabilities configurations Feature Description Remove optional cluster capabilities Reduce the OpenShift Container Platform footprint by disabling optional cluster Operators on single-node OpenShift clusters only. Remove all optional Operators except the Marketplace and Node Tuning Operators. Configure cluster monitoring Configure the monitoring stack for reduced footprint by doing the following: Disable the local alertmanager and telemeter components. If you use RHACM observability, the CR must be augmented with appropriate additionalAlertManagerConfigs CRs to forward alerts to the hub cluster. Reduce the Prometheus retention period to 24h. Note The RHACM hub cluster aggregates managed cluster metrics. Disable networking diagnostics Disable networking diagnostics for single-node OpenShift because they are not required. Configure a single OperatorHub catalog source Configure the cluster to use a single catalog source that contains only the Operators required for a RAN DU deployment. Each catalog source increases the CPU use on the cluster. Using a single CatalogSource fits within the platform CPU budget. Engineering considerations In this release, OpenShift Container Platform deployments use Control Groups version 2 (cgroup v2) by default. As a consequence, performance profiles in a cluster use cgroups v2 for the underlying resource management layer. If workloads running on the cluster require cgroups v1, you can configure nodes to use cgroups v1. You can make this configuration as part of the initial cluster deployment. 3.2.3.11. Machine configuration New in this release No reference design updates in this release Limits and requirements The CRI-O wipe disable MachineConfig assumes that images on disk are static other than during scheduled maintenance in defined maintenance windows. To ensure the images are static, do not set the pod imagePullPolicy field to Always . Table 3.3. Machine configuration options Feature Description Container runtime Sets the container runtime to crun for all node roles. kubelet config and container mount hiding Reduces the frequency of kubelet housekeeping and eviction monitoring to reduce CPU usage. Create a container mount namespace, visible to kubelet and CRI-O, to reduce system mount scanning resource usage. SCTP Optional configuration (enabled by default) Enables SCTP. SCTP is required by RAN applications but disabled by default in RHCOS. kdump Optional configuration (enabled by default) Enables kdump to capture debug information when a kernel panic occurs. CRI-O wipe disable Disables automatic wiping of the CRI-O image cache after unclean shutdown. SR-IOV-related kernel arguments Includes additional SR-IOV related arguments in the kernel command line. RCU Normal systemd service Sets rcu_normal after the system is fully started. One-shot time sync Runs a one-time system time synchronization job for control plane or worker nodes. 3.2.3.12. Lifecycle Agent New in this release Use the Lifecycle Agent to enable image-based upgrades for single-node OpenShift clusters. 
Description The Lifecycle Agent provides local lifecycle management services for single-node OpenShift clusters. Limits and requirements The Lifecycle Agent is not applicable in multi-node clusters or single-node OpenShift clusters with an additional worker. Requires a persistent volume. Additional resources Understanding the image-based upgrade for single-node OpenShift clusters 3.2.3.13. Reference design deployment components The following sections describe the various OpenShift Container Platform components and configurations that you use to configure the hub cluster with Red Hat Advanced Cluster Management (RHACM). 3.2.3.13.1. Red Hat Advanced Cluster Management (RHACM) New in this release You can now use PolicyGenerator resources and Red Hat Advanced Cluster Management (RHACM) to deploy polices for managed clusters with GitOps ZTP. This is a Technology Preview feature. Description RHACM provides Multi Cluster Engine (MCE) installation and ongoing lifecycle management functionality for deployed clusters. You declaratively specify configurations and upgrades with Policy CRs and apply the policies to clusters with the RHACM policy controller as managed by Topology Aware Lifecycle Manager. GitOps Zero Touch Provisioning (ZTP) uses the MCE feature of RHACM Configuration, upgrades, and cluster status are managed with the RHACM policy controller During installation RHACM can apply labels to individual nodes as configured in the SiteConfig custom resource (CR). Limits and requirements A single hub cluster supports up to 3500 deployed single-node OpenShift clusters with 5 Policy CRs bound to each cluster. Engineering considerations Use RHACM policy hub-side templating to better scale cluster configuration. You can significantly reduce the number of policies by using a single group policy or small number of general group policies where the group and per-cluster values are substituted into templates. Cluster specific configuration: managed clusters typically have some number of configuration values that are specific to the individual cluster. These configurations should be managed using RHACM policy hub-side templating with values pulled from ConfigMap CRs based on the cluster name. To save CPU resources on managed clusters, policies that apply static configurations should be unbound from managed clusters after GitOps ZTP installation of the cluster. 3.2.3.13.2. Topology Aware Lifecycle Manager (TALM) New in this release No reference design updates in this release Description Managed updates TALM is an Operator that runs only on the hub cluster for managing how changes (including cluster and Operator upgrades, configuration, and so on) are rolled out to the network. TALM does the following: Progressively applies updates to fleets of clusters in user-configurable batches by using Policy CRs. Adds ztp-done labels or other user configurable labels on a per-cluster basis Precaching for single-node OpenShift clusters TALM supports optional precaching of OpenShift Container Platform, OLM Operator, and additional user images to single-node OpenShift clusters before initiating an upgrade. A PreCachingConfig custom resource is available for specifying optional pre-caching configurations. 
For example:
apiVersion: ran.openshift.io/v1alpha1
kind: PreCachingConfig
metadata:
  name: example-config
  namespace: example-ns
spec:
  additionalImages:
    - quay.io/foobar/application1@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e
    - quay.io/foobar/application2@sha256:3d5800123dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47adf
    - quay.io/foobar/applicationN@sha256:4fe1334adfafadsf987123adfffdaf1243340adfafdedga0991234afdadfs
  spaceRequired: 45 GiB 1
  overrides:
    preCacheImage: quay.io/test_images/pre-cache:latest
    platformImage: quay.io/openshift-release-dev/ocp-release@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e
    operatorsIndexes:
      - registry.example.com:5000/custom-redhat-operators:1.0.0
    operatorsPackagesAndChannels:
      - local-storage-operator: stable
      - ptp-operator: stable
      - sriov-network-operator: stable
  excludePrecachePatterns: 2
    - aws
    - vsphere
1 The configurable spaceRequired parameter allows you to validate that sufficient storage space is available before and after pre-caching.
2 Configurable filtering allows exclusion of unused images.
Limits and requirements
TALM supports concurrent cluster deployment in batches of 400.
Precaching and backup features are for single-node OpenShift clusters only.
Engineering considerations
The PreCachingConfig CR is optional and does not need to be created if you only want to precache platform-related (OpenShift and OLM Operator) images.
The PreCachingConfig CR must be applied before referencing it in the ClusterGroupUpgrade CR.
3.2.3.13.3. GitOps and GitOps ZTP plugins
New in this release
No reference design updates in this release
Description
GitOps and GitOps ZTP plugins provide a GitOps-based infrastructure for managing cluster deployment and configuration. Cluster definitions and configurations are maintained as a declarative state in Git. ZTP plugins provide support for generating installation CRs from the SiteConfig CR and automatic wrapping of configuration CRs in policies based on PolicyGenTemplate CRs.
You can deploy and manage multiple versions of OpenShift Container Platform on managed clusters using the baseline reference configuration CRs. You can also use custom CRs alongside the baseline CRs.
Limits
300 SiteConfig CRs per ArgoCD application. You can use multiple applications to achieve the maximum number of clusters supported by a single hub cluster.
Content in the /source-crs folder in Git overrides content provided in the GitOps ZTP plugin container. Git takes precedence in the search path.
Add the /source-crs folder in the same directory as the kustomization.yaml file, which includes the PolicyGenTemplate as a generator.
Note: Alternative locations for the /source-crs directory are not supported in this context.
Engineering considerations
To avoid confusion or unintentional overwriting of files when updating content, use unique and distinguishable names for user-provided CRs in the /source-crs folder and extra manifests in Git.
The SiteConfig CR allows multiple extra-manifest paths. When files with the same name are found in multiple directory paths, the last file found takes precedence. This allows you to put the full set of version-specific Day 0 manifests (extra-manifests) in Git and reference them from the SiteConfig CR. With this feature, you can deploy multiple OpenShift Container Platform versions to managed clusters simultaneously.
The extraManifestPath field of the SiteConfig CR is deprecated from OpenShift Container Platform 4.15 and later. Use the new extraManifests.searchPaths field instead.
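The following is a minimal sketch of how the extraManifests.searchPaths field might be set in a SiteConfig CR. The directory names are hypothetical examples, not required values; only the fields relevant to search paths are shown.
apiVersion: ran.openshift.io/v1
kind: SiteConfig
metadata:
  name: "example-sno"
  namespace: "example-sno"
spec:
  # ... other SiteConfig fields omitted ...
  clusters:
    - clusterName: "example-sno"
      extraManifests:
        searchPaths:
          - sno-extra-manifest/   # hypothetical directory holding version-specific Day 0 manifests
          - custom-manifests/     # hypothetical overrides; for files with the same name, the last path listed takes precedence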
Additional resources Preparing the GitOps ZTP site configuration repository for version independence Adding custom content to the GitOps ZTP pipeline 3.2.3.13.4. Agent-based installer New in this release No reference design updates in this release Description Agent-based installer (ABI) provides installation capabilities without centralized infrastructure. The installation program creates an ISO image that you mount to the server. When the server boots it installs OpenShift Container Platform and supplied extra manifests. Note You can also use ABI to install OpenShift Container Platform clusters without a hub cluster. An image registry is still required when you use ABI in this manner. Agent-based installer (ABI) is an optional component. Limits and requirements You can supply a limited set of additional manifests at installation time. You must include MachineConfiguration CRs that are required by the RAN DU use case. Engineering considerations ABI provides a baseline OpenShift Container Platform installation. You install Day 2 Operators and the remainder of the RAN DU use case configurations after installation. 3.2.4. Telco RAN distributed unit (DU) reference configuration CRs Use the following custom resources (CRs) to configure and deploy OpenShift Container Platform clusters with the telco RAN DU profile. Some of the CRs are optional depending on your requirements. CR fields you can change are annotated in the CR with YAML comments. Note You can extract the complete set of RAN DU CRs from the ztp-site-generate container image. See Preparing the GitOps ZTP site configuration repository for more information. 3.2.4.1. Day 2 Operators reference CRs Table 3.4. Day 2 Operators CRs Component Reference CR Optional New in this release Cluster logging ClusterLogForwarder.yaml No No Cluster logging ClusterLogging.yaml No No Cluster logging ClusterLogNS.yaml No No Cluster logging ClusterLogOperGroup.yaml No No Cluster logging ClusterLogSubscription.yaml No No Lifecycle Agent ImageBasedUpgrade.yaml Yes Yes Lifecycle Agent LcaSubscription.yaml Yes Yes Lifecycle Agent LcaSubscriptionNS.yaml Yes Yes Lifecycle Agent LcaSubscriptionOperGroup.yaml Yes Yes Local Storage Operator StorageClass.yaml Yes No Local Storage Operator StorageLV.yaml Yes No Local Storage Operator StorageNS.yaml Yes No Local Storage Operator StorageOperGroup.yaml Yes No Local Storage Operator StorageSubscription.yaml Yes No LVM Storage LVMOperatorStatus.yaml No Yes LVM Storage StorageLVMCluster.yaml No Yes LVM Storage StorageLVMSubscription.yaml No Yes LVM Storage StorageLVMSubscriptionNS.yaml No Yes LVM Storage StorageLVMSubscriptionOperGroup.yaml No Yes Node Tuning Operator PerformanceProfile.yaml No No Node Tuning Operator TunedPerformancePatch.yaml No No PTP fast event notifications PtpConfigBoundaryForEvent.yaml Yes Yes PTP fast event notifications PtpConfigForHAForEvent.yaml Yes Yes PTP fast event notifications PtpConfigMasterForEvent.yaml Yes Yes PTP fast event notifications PtpConfigSlaveForEvent.yaml Yes Yes PTP fast event notifications PtpOperatorConfigForEvent.yaml Yes No PTP Operator PtpConfigBoundary.yaml No No PTP Operator PtpConfigDualCardGmWpc.yaml No No PTP Operator PtpConfigForHA.yaml No Yes PTP Operator PtpConfigGmWpc.yaml No No PTP Operator PtpConfigSlave.yaml No No PTP Operator PtpSubscription.yaml No No PTP Operator PtpSubscriptionNS.yaml No No PTP Operator PtpSubscriptionOperGroup.yaml No No SR-IOV FEC Operator AcceleratorsNS.yaml Yes No SR-IOV FEC Operator AcceleratorsOperGroup.yaml Yes No SR-IOV FEC Operator 
AcceleratorsSubscription.yaml Yes No SR-IOV FEC Operator SriovFecClusterConfig.yaml Yes No SR-IOV Operator SriovNetwork.yaml No No SR-IOV Operator SriovNetworkNodePolicy.yaml No No SR-IOV Operator SriovOperatorConfig.yaml No No SR-IOV Operator SriovOperatorConfigForSNO.yaml No Yes SR-IOV Operator SriovSubscription.yaml No No SR-IOV Operator SriovSubscriptionNS.yaml No No SR-IOV Operator SriovSubscriptionOperGroup.yaml No No 3.2.4.2. Cluster tuning reference CRs Table 3.5. Cluster tuning CRs Component Reference CR Optional New in this release Cluster capabilities example-sno.yaml No No Disabling network diagnostics DisableSnoNetworkDiag.yaml No No Monitoring configuration ReduceMonitoringFootprint.yaml No No OperatorHub 09-openshift-marketplace-ns.yaml No No OperatorHub DefaultCatsrc.yaml No No OperatorHub DisableOLMPprof.yaml No No OperatorHub DisconnectedICSP.yaml No No OperatorHub OperatorHub.yaml Yes No 3.2.4.3. Machine configuration reference CRs Table 3.6. Machine configuration CRs Component Reference CR Optional New in this release Container runtime (crun) enable-crun-master.yaml No No Container runtime (crun) enable-crun-worker.yaml No No Disabling CRI-O wipe 99-crio-disable-wipe-master.yaml No No Disabling CRI-O wipe 99-crio-disable-wipe-worker.yaml No No Enabling kdump 06-kdump-master.yaml No No Enabling kdump 06-kdump-worker.yaml No No Kubelet configuration and container mount hiding 01-container-mount-ns-and-kubelet-conf-master.yaml No No Kubelet configuration and container mount hiding 01-container-mount-ns-and-kubelet-conf-worker.yaml No No One-shot time sync 99-sync-time-once-master.yaml No No One-shot time sync 99-sync-time-once-worker.yaml No No SCTP 03-sctp-machine-config-master.yaml No No SCTP 03-sctp-machine-config-worker.yaml No No Set RCU Normal 08-set-rcu-normal-master.yaml No No Set RCU Normal 08-set-rcu-normal-worker.yaml No No SR-IOV related kernel arguments 07-sriov-related-kernel-args-master.yaml No Yes SR-IOV related kernel arguments 07-sriov-related-kernel-args-worker.yaml No No 3.2.4.4. YAML reference The following is a complete reference for all the custom resources (CRs) that make up the telco RAN DU 4.16 reference configuration. 3.2.4.4.1. 
Day 2 Operators reference YAML ClusterLogForwarder.yaml apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging annotations: {} spec: # outputs: USDoutputs # pipelines: USDpipelines #apiVersion: "logging.openshift.io/v1" #kind: ClusterLogForwarder #metadata: # name: instance # namespace: openshift-logging #spec: # outputs: # - type: "kafka" # name: kafka-open # url: tcp://10.46.55.190:9092/test # pipelines: # - inputRefs: # - audit # - infrastructure # labels: # label1: test1 # label2: test2 # label3: test3 # label4: test4 # name: all-to-default # outputRefs: # - kafka-open ClusterLogging.yaml apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging annotations: {} spec: managementState: "Managed" collection: type: "vector" ClusterLogNS.yaml --- apiVersion: v1 kind: Namespace metadata: name: openshift-logging annotations: workload.openshift.io/allowed: management ClusterLogOperGroup.yaml --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging annotations: {} spec: targetNamespaces: - openshift-logging ClusterLogSubscription.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging annotations: {} spec: channel: "stable" name: cluster-logging source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown ImageBasedUpgrade.yaml apiVersion: lca.openshift.io/v1 kind: ImageBasedUpgrade metadata: name: upgrade spec: stage: Idle # When setting `stage: Prep`, remember to add the seed image reference object below. # seedImageRef: # image: USDimage # version: USDversion LcaSubscription.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lifecycle-agent namespace: openshift-lifecycle-agent annotations: {} spec: channel: "stable" name: lifecycle-agent source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown LcaSubscriptionNS.yaml apiVersion: v1 kind: Namespace metadata: name: openshift-lifecycle-agent annotations: workload.openshift.io/allowed: management labels: kubernetes.io/metadata.name: openshift-lifecycle-agent LcaSubscriptionOperGroup.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: lifecycle-agent namespace: openshift-lifecycle-agent annotations: {} spec: targetNamespaces: - openshift-lifecycle-agent StorageClass.yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: {} name: example-storage-class provisioner: kubernetes.io/no-provisioner reclaimPolicy: Delete StorageLV.yaml apiVersion: "local.storage.openshift.io/v1" kind: "LocalVolume" metadata: name: "local-disks" namespace: "openshift-local-storage" annotations: {} spec: logLevel: Normal managementState: Managed storageClassDevices: # The list of storage classes and associated devicePaths need to be specified like this example: - storageClassName: "example-storage-class" volumeMode: Filesystem fsType: xfs # The below must be adjusted to the hardware. # For stability and reliability, it's recommended to use persistent # naming conventions for devicePaths, such as /dev/disk/by-path. devicePaths: - /dev/disk/by-path/pci-0000:05:00.0-nvme-1 #--- ## How to verify ## 1. 
Create a PVC # apiVersion: v1 # kind: PersistentVolumeClaim # metadata: # name: local-pvc-name # spec: # accessModes: # - ReadWriteOnce # volumeMode: Filesystem # resources: # requests: # storage: 100Gi # storageClassName: example-storage-class #--- ## 2. Create a pod that mounts it # apiVersion: v1 # kind: Pod # metadata: # labels: # run: busybox # name: busybox # spec: # containers: # - image: quay.io/quay/busybox:latest # name: busybox # resources: {} # command: ["/bin/sh", "-c", "sleep infinity"] # volumeMounts: # - name: local-pvc # mountPath: /data # volumes: # - name: local-pvc # persistentVolumeClaim: # claimName: local-pvc-name # dnsPolicy: ClusterFirst # restartPolicy: Always ## 3. Run the pod on the cluster and verify the size and access of the `/data` mount StorageNS.yaml apiVersion: v1 kind: Namespace metadata: name: openshift-local-storage annotations: workload.openshift.io/allowed: management StorageOperGroup.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-local-storage namespace: openshift-local-storage annotations: {} spec: targetNamespaces: - openshift-local-storage StorageSubscription.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage annotations: {} spec: channel: "stable" name: local-storage-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown LVMOperatorStatus.yaml # This CR verifies the installation/upgrade of the Sriov Network Operator apiVersion: operators.coreos.com/v1 kind: Operator metadata: name: lvms-operator.openshift-storage annotations: {} status: components: refs: - kind: Subscription namespace: openshift-storage conditions: - type: CatalogSourcesUnhealthy status: "False" - kind: InstallPlan namespace: openshift-storage conditions: - type: Installed status: "True" - kind: ClusterServiceVersion namespace: openshift-storage conditions: - type: Succeeded status: "True" reason: InstallSucceeded StorageLVMCluster.yaml apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: lvmcluster namespace: openshift-storage annotations: {} spec: {} #example: creating a vg1 volume group leveraging all available disks on the node # except the installation disk. 
# storage: # deviceClasses: # - name: vg1 # thinPoolConfig: # name: thin-pool-1 # sizePercent: 90 # overprovisionRatio: 10 StorageLVMSubscription.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lvms-operator namespace: openshift-storage annotations: {} spec: channel: "stable" name: lvms-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown StorageLVMSubscriptionNS.yaml apiVersion: v1 kind: Namespace metadata: name: openshift-storage labels: workload.openshift.io/allowed: "management" openshift.io/cluster-monitoring: "true" annotations: {} StorageLVMSubscriptionOperGroup.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: lvms-operator-operatorgroup namespace: openshift-storage annotations: {} spec: targetNamespaces: - openshift-storage PerformanceProfile.yaml apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: # if you change this name make sure the 'include' line in TunedPerformancePatch.yaml # matches this name: include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # Also in file 'validatorCRs/informDuValidator.yaml': # name: 50-performance-USD{PerformanceProfile.metadata.name} name: openshift-node-performance-profile annotations: ran.openshift.io/reference-configuration: "ran-du.redhat.com" spec: additionalKernelArgs: - "rcupdate.rcu_normal_after_boot=0" - "efi=runtime" - "vfio_pci.enable_sriov=1" - "vfio_pci.disable_idle_d3=1" - "module_blacklist=irdma" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: "" nodeSelector: node-role.kubernetes.io/USDmcp: '' numa: topologyPolicy: "restricted" # To use the standard (non-realtime) kernel, set enabled to false realTimeKernel: enabled: true workloadHints: # WorkloadHints defines the set of upper level flags for different type of workloads. # See https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/performance_profile.md#workloadhints # for detailed descriptions of each item. # The configuration below is set for a low latency, performance mode. realTime: true highPowerConsumption: false perPodPowerManagement: false TunedPerformancePatch.yaml apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: performance-patch namespace: openshift-cluster-node-tuning-operator annotations: {} spec: profile: - name: performance-patch # Please note: # - The 'include' line must match the associated PerformanceProfile name, following below pattern # include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # - When using the standard (non-realtime) kernel, remove the kernel.timer_migration override from # the [sysctl] section and remove the entire section if it is empty. 
data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* group.ice-gnss=0:f:10:*:ice-gnss.* group.ice-dplls=0:f:10:*:ice-dplls.* [service] service.stalld=start,enable service.chronyd=stop,disable recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: "USDmcp" priority: 19 profile: performance-patch PtpConfigBoundaryForEvent.yaml apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary namespace: openshift-ptp annotations: {} spec: profile: - name: "boundary" ptp4lOpts: "-2 --summary_interval -4" phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: "true" ptp4lConf: | # The interface name is hardware-specific [USDiface_slave] masterOnly 0 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 248 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 135 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: "boundary" priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" PtpConfigForHAForEvent.yaml apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-ha namespace: openshift-ptp annotations: {} spec: profile: - name: "boundary-ha" ptp4lOpts: " " phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: "true" haProfiles: "USDprofile1,USDprofile2" recommend: - profile: "boundary-ha" priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" PtpConfigMasterForEvent.yaml # The grandmaster profile is provided for testing only # It is not installed on production clusters apiVersion: ptp.openshift.io/v1 
kind: PtpConfig metadata: name: grandmaster namespace: openshift-ptp annotations: {} spec: profile: - name: "grandmaster" # The interface name is hardware-specific interface: USDinterface ptp4lOpts: "-2 --summary_interval -4" phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: "true" ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: "grandmaster" priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" PtpConfigSlaveForEvent.yaml apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: du-ptp-slave namespace: openshift-ptp annotations: {} spec: profile: - name: "slave" # The interface name is hardware-specific interface: USDinterface ptp4lOpts: "-2 -s --summary_interval -4" phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: "true" ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 
unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: "slave" priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" PtpOperatorConfigForEvent.yaml apiVersion: ptp.openshift.io/v1 kind: PtpOperatorConfig metadata: name: default namespace: openshift-ptp annotations: {} spec: daemonNodeSelector: node-role.kubernetes.io/USDmcp: "" ptpEventConfig: enableEventPublisher: true transportHost: "http://ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043" PtpConfigBoundary.yaml apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary namespace: openshift-ptp annotations: {} spec: profile: - name: "boundary" ptp4lOpts: "-2" phc2sysOpts: "-a -r -n 24" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: "true" ptp4lConf: | # The interface name is hardware-specific [USDiface_slave] masterOnly 0 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 248 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 135 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter 
moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: "boundary" priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" PtpConfigDualCardGmWpc.yaml # The grandmaster profile is provided for testing only # It is not installed on production clusters # In this example two cards USDiface_nic1 and USDiface_nic2 are connected via # SMA1 ports by a cable and USDiface_nic2 receives 1PPS signals from USDiface_nic1 apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: grandmaster namespace: openshift-ptp annotations: {} spec: profile: - name: "grandmaster" ptp4lOpts: "-2 --summary_interval -4" phc2sysOpts: -r -u 0 -m -w -N 8 -R 16 -s USDiface_nic1 -n 24 ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: "true" plugins: e810: enableDefaultConfig: false settings: LocalMaxHoldoverOffSet: 1500 LocalHoldoverTimeout: 14400 MaxInSpecOffset: 100 pins: USDe810_pins # "USDiface_nic1": # "U.FL2": "0 2" # "U.FL1": "0 1" # "SMA2": "0 2" # "SMA1": "2 1" # "USDiface_nic2": # "U.FL2": "0 2" # "U.FL1": "0 1" # "SMA2": "0 2" # "SMA1": "1 1" ublxCmds: - args: #ubxtool -P 29.20 -z CFG-HW-ANT_CFG_VOLTCTRL,1 - "-P" - "29.20" - "-z" - "CFG-HW-ANT_CFG_VOLTCTRL,1" reportOutput: false - args: #ubxtool -P 29.20 -e GPS - "-P" - "29.20" - "-e" - "GPS" reportOutput: false - args: #ubxtool -P 29.20 -d Galileo - "-P" - "29.20" - "-d" - "Galileo" reportOutput: false - args: #ubxtool -P 29.20 -d GLONASS - "-P" - "29.20" - "-d" - "GLONASS" reportOutput: false - args: #ubxtool -P 29.20 -d BeiDou - "-P" - "29.20" - "-d" - "BeiDou" reportOutput: false - args: #ubxtool -P 29.20 -d SBAS - "-P" - "29.20" - "-d" - "SBAS" reportOutput: false - args: #ubxtool -P 29.20 -t -w 5 -v 1 -e SURVEYIN,600,50000 - "-P" - "29.20" - "-t" - "-w" - "5" - "-v" - "1" - "-e" - "SURVEYIN,600,50000" reportOutput: true - args: #ubxtool -P 29.20 -p MON-HW - "-P" - "29.20" - "-p" - "MON-HW" reportOutput: true - args: #ubxtool -P 29.20 -p CFG-MSG,1,38,300 - "-P" - "29.20" - "-p" - "CFG-MSG,1,38,300" reportOutput: true ts2phcOpts: " " ts2phcConf: | [nmea] ts2phc.master 1 [global] use_syslog 0 verbose 1 logging_level 7 ts2phc.pulsewidth 100000000 #cat /dev/GNSS to find available serial port #example value of gnss_serialport is /dev/ttyGNSS_1700_0 ts2phc.nmea_serialport USDgnss_serialport leapfile /usr/share/zoneinfo/leap-seconds.list [USDiface_nic1] ts2phc.extts_polarity rising ts2phc.extts_correction 0 [USDiface_nic2] ts2phc.master 0 ts2phc.extts_polarity rising #this is a measured value in nanoseconds to compensate for SMA cable delay ts2phc.extts_correction -10 ptp4lConf: | [USDiface_nic1] masterOnly 1 [USDiface_nic1_1] masterOnly 1 [USDiface_nic1_2] masterOnly 1 [USDiface_nic1_3] masterOnly 1 [USDiface_nic2] masterOnly 1 [USDiface_nic2_1] masterOnly 1 [USDiface_nic2_2] masterOnly 1 [USDiface_nic2_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 6 clockAccuracy 0x27 offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval 0 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 
masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval -4 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 1 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0x20 recommend: - profile: "grandmaster" priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" PtpConfigForHA.yaml apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-ha namespace: openshift-ptp annotations: {} spec: profile: - name: "boundary-ha" ptp4lOpts: "" phc2sysOpts: "-a -r -n 24" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: "true" haProfiles: "USDprofile1,USDprofile2" recommend: - profile: "boundary-ha" priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" PtpConfigGmWpc.yaml # The grandmaster profile is provided for testing only # It is not installed on production clusters apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: grandmaster namespace: openshift-ptp annotations: {} spec: profile: - name: "grandmaster" ptp4lOpts: "-2 --summary_interval -4" phc2sysOpts: -r -u 0 -m -w -N 8 -R 16 -s USDiface_master -n 24 ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: "true" plugins: e810: enableDefaultConfig: false settings: LocalMaxHoldoverOffSet: 1500 LocalHoldoverTimeout: 14400 MaxInSpecOffset: 100 pins: USDe810_pins # "USDiface_master": # "U.FL2": "0 2" # "U.FL1": "0 1" # "SMA2": "0 2" # "SMA1": "0 1" ublxCmds: - args: #ubxtool -P 29.20 -z CFG-HW-ANT_CFG_VOLTCTRL,1 - "-P" - "29.20" - "-z" - "CFG-HW-ANT_CFG_VOLTCTRL,1" reportOutput: false - args: #ubxtool -P 29.20 -e GPS - "-P" - "29.20" - "-e" - "GPS" reportOutput: false - args: #ubxtool -P 29.20 -d Galileo - "-P" - "29.20" - "-d" - "Galileo" reportOutput: false - args: #ubxtool -P 29.20 -d GLONASS - "-P" - "29.20" - "-d" - "GLONASS" reportOutput: false - args: #ubxtool -P 29.20 -d BeiDou - "-P" - "29.20" - "-d" - "BeiDou" reportOutput: false - args: #ubxtool -P 29.20 -d SBAS - "-P" - "29.20" - "-d" - "SBAS" reportOutput: false - args: #ubxtool -P 29.20 -t -w 5 -v 1 -e SURVEYIN,600,50000 - "-P" - "29.20" - "-t" - "-w" - "5" - "-v" - "1" - "-e" - "SURVEYIN,600,50000" reportOutput: true - args: #ubxtool -P 29.20 -p MON-HW - "-P" - "29.20" - "-p" - "MON-HW" reportOutput: true - args: #ubxtool -P 29.20 -p CFG-MSG,1,38,300 - "-P" - "29.20" - "-p" - "CFG-MSG,1,38,300" reportOutput: true ts2phcOpts: " " ts2phcConf: | [nmea] ts2phc.master 1 [global] use_syslog 0 verbose 1 logging_level 7 
ts2phc.pulsewidth 100000000 #cat /dev/GNSS to find available serial port #example value of gnss_serialport is /dev/ttyGNSS_1700_0 ts2phc.nmea_serialport USDgnss_serialport leapfile /usr/share/zoneinfo/leap-seconds.list [USDiface_master] ts2phc.extts_polarity rising ts2phc.extts_correction 0 ptp4lConf: | [USDiface_master] masterOnly 1 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 6 clockAccuracy 0x27 offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval 0 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval -4 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0x20 recommend: - profile: "grandmaster" priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" PtpConfigSlave.yaml apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: ordinary namespace: openshift-ptp annotations: {} spec: profile: - name: "ordinary" # The interface name is hardware-specific interface: USDinterface ptp4lOpts: "-2 -s" phc2sysOpts: "-a -r -n 24" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: "true" ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 
3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: "ordinary" priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" PtpSubscription.yaml --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ptp-operator-subscription namespace: openshift-ptp annotations: {} spec: channel: "stable" name: ptp-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown PtpSubscriptionNS.yaml --- apiVersion: v1 kind: Namespace metadata: name: openshift-ptp annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: "true" PtpSubscriptionOperGroup.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ptp-operators namespace: openshift-ptp annotations: {} spec: targetNamespaces: - openshift-ptp AcceleratorsNS.yaml apiVersion: v1 kind: Namespace metadata: name: vran-acceleration-operators annotations: {} AcceleratorsOperGroup.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: vran-operators namespace: vran-acceleration-operators annotations: {} spec: targetNamespaces: - vran-acceleration-operators AcceleratorsSubscription.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-fec-subscription namespace: vran-acceleration-operators annotations: {} spec: channel: stable name: sriov-fec source: certified-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown SriovFecClusterConfig.yaml apiVersion: sriovfec.intel.com/v2 kind: SriovFecClusterConfig metadata: name: config namespace: vran-acceleration-operators annotations: {} spec: drainSkip: USDdrainSkip # true if SNO, false by default priority: 1 nodeSelector: node-role.kubernetes.io/master: "" acceleratorSelector: pciAddress: USDpciAddress physicalFunction: pfDriver: "vfio-pci" vfDriver: "vfio-pci" vfAmount: 16 bbDevConfig: USDbbDevConfig #Recommended configuration for Intel ACC100 (Mount Bryce) FPGA here: https://github.com/smart-edge-open/openshift-operator/blob/main/spec/openshift-sriov-fec-operator.md#sample-cr-for-wireless-fec-acc100 #Recommended configuration for Intel N3000 FPGA here: https://github.com/smart-edge-open/openshift-operator/blob/main/spec/openshift-sriov-fec-operator.md#sample-cr-for-wireless-fec-n3000 SriovNetwork.yaml apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: "" namespace: openshift-sriov-network-operator annotations: {} spec: # resourceName: "" networkNamespace: openshift-sriov-network-operator 
# vlan: "" # spoofChk: "" # ipam: "" # linkState: "" # maxTxRate: "" # minTxRate: "" # vlanQoS: "" # trust: "" # capabilities: "" SriovNetworkNodePolicy.yaml apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: USDname namespace: openshift-sriov-network-operator annotations: {} spec: # The attributes for Mellanox/Intel based NICs as below. # deviceType: netdevice/vfio-pci # isRdma: true/false deviceType: USDdeviceType isRdma: USDisRdma nicSelector: # The exact physical function name must match the hardware used pfNames: [USDpfNames] nodeSelector: node-role.kubernetes.io/USDmcp: "" numVfs: USDnumVfs priority: USDpriority resourceName: USDresourceName SriovOperatorConfig.yaml apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator annotations: {} spec: configDaemonNodeSelector: "node-role.kubernetes.io/USDmcp": "" # Injector and OperatorWebhook pods can be disabled (set to "false") below # to reduce the number of management pods. It is recommended to start with the # webhook and injector pods enabled, and only disable them after verifying the # correctness of user manifests. # If the injector is disabled, containers using sr-iov resources must explicitly assign # them in the "requests"/"limits" section of the container spec, for example: # containers: # - name: my-sriov-workload-container # resources: # limits: # openshift.io/<resource_name>: "1" # requests: # openshift.io/<resource_name>: "1" enableInjector: false enableOperatorWebhook: false logLevel: 0 SriovOperatorConfigForSNO.yaml apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator annotations: {} spec: configDaemonNodeSelector: "node-role.kubernetes.io/USDmcp": "" # Injector and OperatorWebhook pods can be disabled (set to "false") below # to reduce the number of management pods. It is recommended to start with the # webhook and injector pods enabled, and only disable them after verifying the # correctness of user manifests. # If the injector is disabled, containers using sr-iov resources must explicitly assign # them in the "requests"/"limits" section of the container spec, for example: # containers: # - name: my-sriov-workload-container # resources: # limits: # openshift.io/<resource_name>: "1" # requests: # openshift.io/<resource_name>: "1" enableInjector: false enableOperatorWebhook: false # Disable drain is needed for Single Node Openshift disableDrain: true logLevel: 0 SriovSubscription.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator annotations: {} spec: channel: "stable" name: sriov-network-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown SriovSubscriptionNS.yaml apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator annotations: workload.openshift.io/allowed: management SriovSubscriptionOperGroup.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator annotations: {} spec: targetNamespaces: - openshift-sriov-network-operator 3.2.4.4.2. 
Cluster tuning reference YAML example-sno.yaml # example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-sno --- apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: "example-sno" namespace: "example-sno" spec: baseDomain: "example.com" pullSecretRef: name: "assisted-deployment-pull-secret" clusterImageSetNameRef: "openshift-4.16" sshPublicKey: "ssh-rsa AAAA..." clusters: - clusterName: "example-sno" networkType: "OVNKubernetes" # installConfigOverrides is a generic way of passing install-config # parameters through the siteConfig. The 'capabilities' field configures # the composable openshift feature. In this 'capabilities' setting, we # remove all the optional set of components. # Notes: # - OperatorLifecycleManager is needed for 4.15 and later # - NodeTuning is needed for 4.13 and later, not for 4.12 and earlier # - Ingress is needed for 4.16 and later installConfigOverrides: | { "capabilities": { "baselineCapabilitySet": "None", "additionalEnabledCapabilities": [ "NodeTuning", "OperatorLifecycleManager", "Ingress" ] } } # It is strongly recommended to include crun manifests as part of the additional install-time manifests for 4.13+. # The crun manifests can be obtained from source-crs/optional-extra-manifest/ and added to the git repo ie.sno-extra-manifest. # extraManifestPath: sno-extra-manifest clusterLabels: # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples du-profile: "latest" # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples in ../policygentemplates: # ../policygentemplates/common-ranGen.yaml will apply to all clusters with 'common: true' common: true # ../policygentemplates/group-du-sno-ranGen.yaml will apply to all clusters with 'group-du-sno: ""' group-du-sno: "" # ../policygentemplates/example-sno-site.yaml will apply to all clusters with 'sites: "example-sno"' # Normally this should match or contain the cluster name so it only applies to a single cluster sites: "example-sno" clusterNetwork: - cidr: 1001:1::/48 hostPrefix: 64 machineNetwork: - cidr: 1111:2222:3333:4444::/64 serviceNetwork: - 1001:2::/112 additionalNTPSources: - 1111:2222:3333:4444::2 # Initiates the cluster for workload partitioning. Setting specific reserved/isolated CPUSets is done via PolicyTemplate # please see Workload Partitioning Feature for a complete guide. cpuPartitioningMode: AllNodes # Optionally; This can be used to override the KlusterletAddonConfig that is created for this cluster: #crTemplates: # KlusterletAddonConfig: "KlusterletAddonConfigOverride.yaml" nodes: - hostName: "example-node1.example.com" role: "master" # Optionally; This can be used to configure desired BIOS setting on a host: #biosConfigRef: # filePath: "example-hw.profile" bmcAddress: "idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1" bmcCredentialsName: name: "example-node1-bmh-secret" bootMACAddress: "AA:BB:CC:DD:EE:11" # Use UEFISecureBoot to enable secure boot bootMode: "UEFI" rootDeviceHints: deviceName: "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0" # disk partition at `/var/lib/containers` with ignitionConfigOverride. Some values must be updated. 
See DiskPartitionContainer.md for more details ignitionConfigOverride: | { "ignition": { "version": "3.2.0" }, "storage": { "disks": [ { "device": "/dev/disk/by-id/wwn-0x6b07b250ebb9d0002a33509f24af1f62", "partitions": [ { "label": "var-lib-containers", "sizeMiB": 0, "startMiB": 250000 } ], "wipeTable": false } ], "filesystems": [ { "device": "/dev/disk/by-partlabel/var-lib-containers", "format": "xfs", "mountOptions": [ "defaults", "prjquota" ], "path": "/var/lib/containers", "wipeFilesystem": true } ] }, "systemd": { "units": [ { "contents": "# Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target", "enabled": true, "name": "var-lib-containers.mount" } ] } } nodeNetwork: interfaces: - name: eno1 macAddress: "AA:BB:CC:DD:EE:11" config: interfaces: - name: eno1 type: ethernet state: up ipv4: enabled: false ipv6: enabled: true address: # For SNO sites with static IP addresses, the node-specific, # API and Ingress IPs should all be the same and configured on # the interface - ip: 1111:2222:3333:4444::aaaa:1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 -hop-interface: eno1 -hop-address: 1111:2222:3333:4444::1 table-id: 254 DisableSnoNetworkDiag.yaml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster annotations: {} spec: disableNetworkDiagnostics: true ReduceMonitoringFootprint.yaml apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring annotations: {} data: config.yaml: | alertmanagerMain: enabled: false telemeterClient: enabled: false prometheusK8s: retention: 24h 09-openshift-marketplace-ns.yaml # Taken from https://github.com/operator-framework/operator-marketplace/blob/53c124a3f0edfd151652e1f23c87dd39ed7646bb/manifests/01_namespace.yaml # Update it as the source evolves. 
apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: "" workload.openshift.io/allowed: "management" labels: openshift.io/cluster-monitoring: "true" pod-security.kubernetes.io/enforce: baseline pod-security.kubernetes.io/enforce-version: v1.25 pod-security.kubernetes.io/audit: baseline pod-security.kubernetes.io/audit-version: v1.25 pod-security.kubernetes.io/warn: baseline pod-security.kubernetes.io/warn-version: v1.25 name: "openshift-marketplace" DefaultCatsrc.yaml apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: default-cat-source namespace: openshift-marketplace annotations: target.workload.openshift.io/management: '{"effect": "PreferredDuringScheduling"}' spec: displayName: default-cat-source image: USDimageUrl publisher: Red Hat sourceType: grpc updateStrategy: registryPoll: interval: 1h status: connectionState: lastObservedState: READY DisableOLMPprof.yaml apiVersion: v1 kind: ConfigMap metadata: name: collect-profiles-config namespace: openshift-operator-lifecycle-manager annotations: {} data: pprof-config.yaml: | disabled: True DisconnectedICSP.yaml apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: disconnected-internal-icsp annotations: {} spec: # repositoryDigestMirrors: # - USDmirrors OperatorHub.yaml apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster annotations: {} spec: disableAllDefaultSources: true 3.2.4.4.3. Machine configuration reference YAML enable-crun-master.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-master spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/master: "" containerRuntimeConfig: defaultRuntime: crun enable-crun-worker.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-worker spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" containerRuntimeConfig: defaultRuntime: crun 99-crio-disable-wipe-master.yaml # Automatically generated by extra-manifests-builder # Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-crio-disable-wipe-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW9dCmNsZWFuX3NodXRkb3duX2ZpbGUgPSAiIgo= mode: 420 path: /etc/crio/crio.conf.d/99-crio-disable-wipe.toml 99-crio-disable-wipe-worker.yaml # Automatically generated by extra-manifests-builder # Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-crio-disable-wipe-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW9dCmNsZWFuX3NodXRkb3duX2ZpbGUgPSAiIgo= mode: 420 path: /etc/crio/crio.conf.d/99-crio-disable-wipe.toml 06-kdump-master.yaml # Automatically generated by extra-manifests-builder # Do not make changes directly. 
apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 06-kdump-enable-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M 06-kdump-worker.yaml # Automatically generated by extra-manifests-builder # Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 06-kdump-enable-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M 01-container-mount-ns-and-kubelet-conf-master.yaml # Automatically generated by extra-manifests-builder # Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: container-mount-namespace-and-kubelet-conf-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo= mode: 493 path: /usr/local/bin/extractExecStart - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo= mode: 493 path: /usr/local/bin/nsenterCmns systemd: units: - contents: | [Unit] Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts [Service] Type=oneshot RemainAfterExit=yes RuntimeDirectory=container-mount-namespace Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace Environment=BIND_POINT=%t/container-mount-namespace/mnt ExecStartPre=bash -c "findmnt USD{RUNTIME_DIRECTORY} || mount --make-unbindable --bind USD{RUNTIME_DIRECTORY} USD{RUNTIME_DIRECTORY}" ExecStartPre=touch USD{BIND_POINT} ExecStart=unshare --mount=USD{BIND_POINT} --propagation slave mount --make-rshared / ExecStop=umount -R USD{RUNTIME_DIRECTORY} name: container-mount-namespace.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n 
/%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c "nsenter --mount=%t/container-mount-namespace/mnt \ USD{ORIG_EXECSTART}" name: 90-container-mount-namespace.conf name: crio.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c "nsenter --mount=%t/container-mount-namespace/mnt \ USD{ORIG_EXECSTART} --housekeeping-interval=30s" name: 90-container-mount-namespace.conf - contents: | [Service] Environment="OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s" Environment="OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s" name: 30-kubelet-interval-tuning.conf name: kubelet.service 01-container-mount-ns-and-kubelet-conf-worker.yaml # Automatically generated by extra-manifests-builder # Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: container-mount-namespace-and-kubelet-conf-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo= mode: 493 path: /usr/local/bin/extractExecStart - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo= mode: 493 path: /usr/local/bin/nsenterCmns systemd: units: - contents: | [Unit] Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts [Service] Type=oneshot RemainAfterExit=yes RuntimeDirectory=container-mount-namespace Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace Environment=BIND_POINT=%t/container-mount-namespace/mnt ExecStartPre=bash -c "findmnt USD{RUNTIME_DIRECTORY} || mount --make-unbindable --bind USD{RUNTIME_DIRECTORY} USD{RUNTIME_DIRECTORY}" ExecStartPre=touch USD{BIND_POINT} ExecStart=unshare --mount=USD{BIND_POINT} --propagation slave mount --make-rshared / ExecStop=umount -R USD{RUNTIME_DIRECTORY} name: container-mount-namespace.service - 
dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c "nsenter --mount=%t/container-mount-namespace/mnt \ USD{ORIG_EXECSTART}" name: 90-container-mount-namespace.conf name: crio.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c "nsenter --mount=%t/container-mount-namespace/mnt \ USD{ORIG_EXECSTART} --housekeeping-interval=30s" name: 90-container-mount-namespace.conf - contents: | [Service] Environment="OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s" Environment="OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s" name: 30-kubelet-interval-tuning.conf name: kubelet.service 99-sync-time-once-master.yaml # Automatically generated by extra-manifests-builder # Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-sync-time-once-master spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Sync time once After=network-online.target Wants=network-online.target [Service] Type=oneshot TimeoutStartSec=300 ExecCondition=/bin/bash -c 'systemctl is-enabled chronyd.service --quiet && exit 1 || exit 0' ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q RemainAfterExit=yes [Install] WantedBy=multi-user.target enabled: true name: sync-time-once.service 99-sync-time-once-worker.yaml # Automatically generated by extra-manifests-builder # Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-sync-time-once-worker spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Sync time once After=network-online.target Wants=network-online.target [Service] Type=oneshot TimeoutStartSec=300 ExecCondition=/bin/bash -c 'systemctl is-enabled chronyd.service --quiet && exit 1 || exit 0' ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q RemainAfterExit=yes [Install] WantedBy=multi-user.target enabled: true name: sync-time-once.service 03-sctp-machine-config-master.yaml # Automatically generated by extra-manifests-builder # Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: load-sctp-module-master spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8,sctp filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf 03-sctp-machine-config-worker.yaml # Automatically generated by extra-manifests-builder # Do not make changes directly. 
apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: load-sctp-module-worker spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8,sctp filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf 08-set-rcu-normal-master.yaml # Automatically generated by extra-manifests-builder # Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 08-set-rcu-normal-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKIwojIERpc2FibGUgcmN1X2V4cGVkaXRlZCBhZnRlciBub2RlIGhhcyBmaW5pc2hlZCBib290aW5nCiMKIyBUaGUgZGVmYXVsdHMgYmVsb3cgY2FuIGJlIG92ZXJyaWRkZW4gdmlhIGVudmlyb25tZW50IHZhcmlhYmxlcwojCgojIERlZmF1bHQgd2FpdCB0aW1lIGlzIDYwMHMgPSAxMG06Ck1BWElNVU1fV0FJVF9USU1FPSR7TUFYSU1VTV9XQUlUX1RJTUU6LTYwMH0KCiMgRGVmYXVsdCBzdGVhZHktc3RhdGUgdGhyZXNob2xkID0gMiUKIyBBbGxvd2VkIHZhbHVlczoKIyAgNCAgLSBhYnNvbHV0ZSBwb2QgY291bnQgKCsvLSkKIyAgNCUgLSBwZXJjZW50IGNoYW5nZSAoKy8tKQojICAtMSAtIGRpc2FibGUgdGhlIHN0ZWFkeS1zdGF0ZSBjaGVjawpTVEVBRFlfU1RBVEVfVEhSRVNIT0xEPSR7U1RFQURZX1NUQVRFX1RIUkVTSE9MRDotMiV9CgojIERlZmF1bHQgc3RlYWR5LXN0YXRlIHdpbmRvdyA9IDYwcwojIElmIHRoZSBydW5uaW5nIHBvZCBjb3VudCBzdGF5cyB3aXRoaW4gdGhlIGdpdmVuIHRocmVzaG9sZCBmb3IgdGhpcyB0aW1lCiMgcGVyaW9kLCByZXR1cm4gQ1BVIHV0aWxpemF0aW9uIHRvIG5vcm1hbCBiZWZvcmUgdGhlIG1heGltdW0gd2FpdCB0aW1lIGhhcwojIGV4cGlyZXMKU1RFQURZX1NUQVRFX1dJTkRPVz0ke1NURUFEWV9TVEFURV9XSU5ET1c6LTYwfQoKIyBEZWZhdWx0IHN0ZWFkeS1zdGF0ZSBhbGxvd3MgYW55IHBvZCBjb3VudCB0byBiZSAic3RlYWR5IHN0YXRlIgojIEluY3JlYXNpbmcgdGhpcyB3aWxsIHNraXAgYW55IHN0ZWFkeS1zdGF0ZSBjaGVja3MgdW50aWwgdGhlIGNvdW50IHJpc2VzIGFib3ZlCiMgdGhpcyBudW1iZXIgdG8gYXZvaWQgZmFsc2UgcG9zaXRpdmVzIGlmIHRoZXJlIGFyZSBzb21lIHBlcmlvZHMgd2hlcmUgdGhlCiMgY291bnQgZG9lc24ndCBpbmNyZWFzZSBidXQgd2Uga25vdyB3ZSBjYW4ndCBiZSBhdCBzdGVhZHktc3RhdGUgeWV0LgpTVEVBRFlfU1RBVEVfTUlOSU1VTT0ke1NURUFEWV9TVEFURV9NSU5JTVVNOi0wfQoKIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIwoKd2l0aGluKCkgewogIGxvY2FsIGxhc3Q9JDEgY3VycmVudD0kMiB0aHJlc2hvbGQ9JDMKICBsb2NhbCBkZWx0YT0wIHBjaGFuZ2UKICBkZWx0YT0kKCggY3VycmVudCAtIGxhc3QgKSkKICBpZiBbWyAkY3VycmVudCAtZXEgJGxhc3QgXV07IHRoZW4KICAgIHBjaGFuZ2U9MAogIGVsaWYgW1sgJGxhc3QgLWVxIDAgXV07IHRoZW4KICAgIHBjaGFuZ2U9MTAwMDAwMAogIGVsc2UKICAgIHBjaGFuZ2U9JCgoICggIiRkZWx0YSIgKiAxMDApIC8gbGFzdCApKQogIGZpCiAgZWNobyAtbiAibGFzdDokbGFzdCBjdXJyZW50OiRjdXJyZW50IGRlbHRhOiRkZWx0YSBwY2hhbmdlOiR7cGNoYW5nZX0lOiAiCiAgbG9jYWwgYWJzb2x1dGUgbGltaXQKICBjYXNlICR0aHJlc2hvbGQgaW4KICAgIColKQogICAgICBhYnNvbHV0ZT0ke3BjaGFuZ2UjIy19ICMgYWJzb2x1dGUgdmFsdWUKICAgICAgbGltaXQ9JHt0aHJlc2hvbGQlJSV9CiAgICAgIDs7CiAgICAqKQogICAgICBhYnNvbHV0ZT0ke2RlbHRhIyMtfSAjIGFic29sdXRlIHZhbHVlCiAgICAgIGxpbWl0PSR0aHJlc2hvbGQKICAgICAgOzsKICBlc2FjCiAgaWYgW1sgJGFic29sdXRlIC1sZSAkbGltaXQgXV07IHRoZW4KICAgIGVjaG8gIndpdGhpbiAoKy8tKSR0aHJlc2hvbGQiCiAgICByZXR1cm4gMAogIGVsc2UKICAgIGVjaG8gIm91dHNpZGUgKCsvLSkkdGhyZXNob2xkIgogICAgcmV0dXJuIDEKICBmaQp9CgpzdGVhZHlzdGF0ZSgpIHsKICBsb2NhbCBsYXN0PSQxIGN1cnJlbnQ9JDIKICBpZiBbWyAkbGFzdCAtbHQgJFNURUFEWV9TVEFURV9NSU5JTVVNIF1dOyB0aGVuCiAgICBlY2hvICJsYXN0OiRsYXN0IGN1cnJlbnQ6JGN1cnJlbnQgV2FpdGluZyB0byByZWFjaCAkU1RFQURZX1NUQVRFX01JTklNVU0gYmVmb3JlIGNoZWNraW5nIGZvciBzdGVhZHktc3RhdGUiCiAgICByZXR1cm4gMQogIGZpCiAgd2l0aGluICIkbGFzdCIgIiRjdXJyZW50IiAi
JFNURUFEWV9TVEFURV9USFJFU0hPTEQiCn0KCndhaXRGb3JSZWFkeSgpIHsKICBsb2dnZXIgIlJlY292ZXJ5OiBXYWl0aW5nICR7TUFYSU1VTV9XQUlUX1RJTUV9cyBmb3IgdGhlIGluaXRpYWxpemF0aW9uIHRvIGNvbXBsZXRlIgogIGxvY2FsIHQ9MCBzPTEwCiAgbG9jYWwgbGFzdENjb3VudD0wIGNjb3VudD0wIHN0ZWFkeVN0YXRlVGltZT0wCiAgd2hpbGUgW1sgJHQgLWx0ICRNQVhJTVVNX1dBSVRfVElNRSBdXTsgZG8KICAgIHNsZWVwICRzCiAgICAoKHQgKz0gcykpCiAgICAjIERldGVjdCBzdGVhZHktc3RhdGUgcG9kIGNvdW50CiAgICBjY291bnQ9JChjcmljdGwgcHMgMj4vZGV2L251bGwgfCB3YyAtbCkKICAgIGlmIFtbICRjY291bnQgLWd0IDAgXV0gJiYgc3RlYWR5c3RhdGUgIiRsYXN0Q2NvdW50IiAiJGNjb3VudCI7IHRoZW4KICAgICAgKChzdGVhZHlTdGF0ZVRpbWUgKz0gcykpCiAgICAgIGVjaG8gIlN0ZWFkeS1zdGF0ZSBmb3IgJHtzdGVhZHlTdGF0ZVRpbWV9cy8ke1NURUFEWV9TVEFURV9XSU5ET1d9cyIKICAgICAgaWYgW1sgJHN0ZWFkeVN0YXRlVGltZSAtZ2UgJFNURUFEWV9TVEFURV9XSU5ET1cgXV07IHRoZW4KICAgICAgICBsb2dnZXIgIlJlY292ZXJ5OiBTdGVhZHktc3RhdGUgKCsvLSAkU1RFQURZX1NUQVRFX1RIUkVTSE9MRCkgZm9yICR7U1RFQURZX1NUQVRFX1dJTkRPV31zOiBEb25lIgogICAgICAgIHJldHVybiAwCiAgICAgIGZpCiAgICBlbHNlCiAgICAgIGlmIFtbICRzdGVhZHlTdGF0ZVRpbWUgLWd0IDAgXV07IHRoZW4KICAgICAgICBlY2hvICJSZXNldHRpbmcgc3RlYWR5LXN0YXRlIHRpbWVyIgogICAgICAgIHN0ZWFkeVN0YXRlVGltZT0wCiAgICAgIGZpCiAgICBmaQogICAgbGFzdENjb3VudD0kY2NvdW50CiAgZG9uZQogIGxvZ2dlciAiUmVjb3Zlcnk6IFJlY292ZXJ5IENvbXBsZXRlIFRpbWVvdXQiCn0KCnNldFJjdU5vcm1hbCgpIHsKICBlY2hvICJTZXR0aW5nIHJjdV9ub3JtYWwgdG8gMSIKICBlY2hvIDEgPiAvc3lzL2tlcm5lbC9yY3Vfbm9ybWFsCn0KCm1haW4oKSB7CiAgd2FpdEZvclJlYWR5CiAgZWNobyAiV2FpdGluZyBmb3Igc3RlYWR5IHN0YXRlIHRvb2s6ICQoYXdrICd7cHJpbnQgaW50KCQxLzM2MDApImgiLCBpbnQoKCQxJTM2MDApLzYwKSJtIiwgaW50KCQxJTYwKSJzIn0nIC9wcm9jL3VwdGltZSkiCiAgc2V0UmN1Tm9ybWFsCn0KCmlmIFtbICIke0JBU0hfU09VUkNFWzBdfSIgPSAiJHswfSIgXV07IHRoZW4KICBtYWluICIke0B9IgogIGV4aXQgJD8KZmkK mode: 493 path: /usr/local/bin/set-rcu-normal.sh systemd: units: - contents: | [Unit] Description=Disable rcu_expedited after node has finished booting by setting rcu_normal to 1 [Service] Type=simple ExecStart=/usr/local/bin/set-rcu-normal.sh # Maximum wait time is 600s = 10m: Environment=MAXIMUM_WAIT_TIME=600 # Steady-state threshold = 2% # Allowed values: # 4 - absolute pod count (+/-) # 4% - percent change (+/-) # -1 - disable the steady-state check # Note: '%' must be escaped as '%%' in systemd unit files Environment=STEADY_STATE_THRESHOLD=2%% # Steady-state window = 120s # If the running pod count stays within the given threshold for this time # period, return CPU utilization to normal before the maximum wait time has # expires Environment=STEADY_STATE_WINDOW=120 # Steady-state minimum = 40 # Increasing this will skip any steady-state checks until the count rises above # this number to avoid false positives if there are some periods where the # count doesn't increase but we know we can't be at steady-state yet. Environment=STEADY_STATE_MINIMUM=40 [Install] WantedBy=multi-user.target enabled: true name: set-rcu-normal.service 08-set-rcu-normal-worker.yaml # Automatically generated by extra-manifests-builder # Do not make changes directly. 
apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 08-set-rcu-normal-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKIwojIERpc2FibGUgcmN1X2V4cGVkaXRlZCBhZnRlciBub2RlIGhhcyBmaW5pc2hlZCBib290aW5nCiMKIyBUaGUgZGVmYXVsdHMgYmVsb3cgY2FuIGJlIG92ZXJyaWRkZW4gdmlhIGVudmlyb25tZW50IHZhcmlhYmxlcwojCgojIERlZmF1bHQgd2FpdCB0aW1lIGlzIDYwMHMgPSAxMG06Ck1BWElNVU1fV0FJVF9USU1FPSR7TUFYSU1VTV9XQUlUX1RJTUU6LTYwMH0KCiMgRGVmYXVsdCBzdGVhZHktc3RhdGUgdGhyZXNob2xkID0gMiUKIyBBbGxvd2VkIHZhbHVlczoKIyAgNCAgLSBhYnNvbHV0ZSBwb2QgY291bnQgKCsvLSkKIyAgNCUgLSBwZXJjZW50IGNoYW5nZSAoKy8tKQojICAtMSAtIGRpc2FibGUgdGhlIHN0ZWFkeS1zdGF0ZSBjaGVjawpTVEVBRFlfU1RBVEVfVEhSRVNIT0xEPSR7U1RFQURZX1NUQVRFX1RIUkVTSE9MRDotMiV9CgojIERlZmF1bHQgc3RlYWR5LXN0YXRlIHdpbmRvdyA9IDYwcwojIElmIHRoZSBydW5uaW5nIHBvZCBjb3VudCBzdGF5cyB3aXRoaW4gdGhlIGdpdmVuIHRocmVzaG9sZCBmb3IgdGhpcyB0aW1lCiMgcGVyaW9kLCByZXR1cm4gQ1BVIHV0aWxpemF0aW9uIHRvIG5vcm1hbCBiZWZvcmUgdGhlIG1heGltdW0gd2FpdCB0aW1lIGhhcwojIGV4cGlyZXMKU1RFQURZX1NUQVRFX1dJTkRPVz0ke1NURUFEWV9TVEFURV9XSU5ET1c6LTYwfQoKIyBEZWZhdWx0IHN0ZWFkeS1zdGF0ZSBhbGxvd3MgYW55IHBvZCBjb3VudCB0byBiZSAic3RlYWR5IHN0YXRlIgojIEluY3JlYXNpbmcgdGhpcyB3aWxsIHNraXAgYW55IHN0ZWFkeS1zdGF0ZSBjaGVja3MgdW50aWwgdGhlIGNvdW50IHJpc2VzIGFib3ZlCiMgdGhpcyBudW1iZXIgdG8gYXZvaWQgZmFsc2UgcG9zaXRpdmVzIGlmIHRoZXJlIGFyZSBzb21lIHBlcmlvZHMgd2hlcmUgdGhlCiMgY291bnQgZG9lc24ndCBpbmNyZWFzZSBidXQgd2Uga25vdyB3ZSBjYW4ndCBiZSBhdCBzdGVhZHktc3RhdGUgeWV0LgpTVEVBRFlfU1RBVEVfTUlOSU1VTT0ke1NURUFEWV9TVEFURV9NSU5JTVVNOi0wfQoKIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIwoKd2l0aGluKCkgewogIGxvY2FsIGxhc3Q9JDEgY3VycmVudD0kMiB0aHJlc2hvbGQ9JDMKICBsb2NhbCBkZWx0YT0wIHBjaGFuZ2UKICBkZWx0YT0kKCggY3VycmVudCAtIGxhc3QgKSkKICBpZiBbWyAkY3VycmVudCAtZXEgJGxhc3QgXV07IHRoZW4KICAgIHBjaGFuZ2U9MAogIGVsaWYgW1sgJGxhc3QgLWVxIDAgXV07IHRoZW4KICAgIHBjaGFuZ2U9MTAwMDAwMAogIGVsc2UKICAgIHBjaGFuZ2U9JCgoICggIiRkZWx0YSIgKiAxMDApIC8gbGFzdCApKQogIGZpCiAgZWNobyAtbiAibGFzdDokbGFzdCBjdXJyZW50OiRjdXJyZW50IGRlbHRhOiRkZWx0YSBwY2hhbmdlOiR7cGNoYW5nZX0lOiAiCiAgbG9jYWwgYWJzb2x1dGUgbGltaXQKICBjYXNlICR0aHJlc2hvbGQgaW4KICAgIColKQogICAgICBhYnNvbHV0ZT0ke3BjaGFuZ2UjIy19ICMgYWJzb2x1dGUgdmFsdWUKICAgICAgbGltaXQ9JHt0aHJlc2hvbGQlJSV9CiAgICAgIDs7CiAgICAqKQogICAgICBhYnNvbHV0ZT0ke2RlbHRhIyMtfSAjIGFic29sdXRlIHZhbHVlCiAgICAgIGxpbWl0PSR0aHJlc2hvbGQKICAgICAgOzsKICBlc2FjCiAgaWYgW1sgJGFic29sdXRlIC1sZSAkbGltaXQgXV07IHRoZW4KICAgIGVjaG8gIndpdGhpbiAoKy8tKSR0aHJlc2hvbGQiCiAgICByZXR1cm4gMAogIGVsc2UKICAgIGVjaG8gIm91dHNpZGUgKCsvLSkkdGhyZXNob2xkIgogICAgcmV0dXJuIDEKICBmaQp9CgpzdGVhZHlzdGF0ZSgpIHsKICBsb2NhbCBsYXN0PSQxIGN1cnJlbnQ9JDIKICBpZiBbWyAkbGFzdCAtbHQgJFNURUFEWV9TVEFURV9NSU5JTVVNIF1dOyB0aGVuCiAgICBlY2hvICJsYXN0OiRsYXN0IGN1cnJlbnQ6JGN1cnJlbnQgV2FpdGluZyB0byByZWFjaCAkU1RFQURZX1NUQVRFX01JTklNVU0gYmVmb3JlIGNoZWNraW5nIGZvciBzdGVhZHktc3RhdGUiCiAgICByZXR1cm4gMQogIGZpCiAgd2l0aGluICIkbGFzdCIgIiRjdXJyZW50IiAiJFNURUFEWV9TVEFURV9USFJFU0hPTEQiCn0KCndhaXRGb3JSZWFkeSgpIHsKICBsb2dnZXIgIlJlY292ZXJ5OiBXYWl0aW5nICR7TUFYSU1VTV9XQUlUX1RJTUV9cyBmb3IgdGhlIGluaXRpYWxpemF0aW9uIHRvIGNvbXBsZXRlIgogIGxvY2FsIHQ9MCBzPTEwCiAgbG9jYWwgbGFzdENjb3VudD0wIGNjb3VudD0wIHN0ZWFkeVN0YXRlVGltZT0wCiAgd2hpbGUgW1sgJHQgLWx0ICRNQVhJTVVNX1dBSVRfVElNRSBdXTsgZG8KICAgIHNsZWVwICRzCiAgICAoKHQgKz0gcykpCiAgICAjIERldGVjdCBzdGVhZHktc3RhdGUgcG9kIGNvdW50CiAgICBjY291bnQ9JChjcmljdGwgcHMgMj4vZGV2L251bGwgfCB3YyAtbCkKICAgIGlmIFtbICRjY291bnQgLWd0IDAgXV0gJiYgc3RlYWR5c3RhdGUgIiRsYXN0Q2NvdW50IiAiJGNjb3VudCI7IHRoZW4KICAgICAgKC
hzdGVhZHlTdGF0ZVRpbWUgKz0gcykpCiAgICAgIGVjaG8gIlN0ZWFkeS1zdGF0ZSBmb3IgJHtzdGVhZHlTdGF0ZVRpbWV9cy8ke1NURUFEWV9TVEFURV9XSU5ET1d9cyIKICAgICAgaWYgW1sgJHN0ZWFkeVN0YXRlVGltZSAtZ2UgJFNURUFEWV9TVEFURV9XSU5ET1cgXV07IHRoZW4KICAgICAgICBsb2dnZXIgIlJlY292ZXJ5OiBTdGVhZHktc3RhdGUgKCsvLSAkU1RFQURZX1NUQVRFX1RIUkVTSE9MRCkgZm9yICR7U1RFQURZX1NUQVRFX1dJTkRPV31zOiBEb25lIgogICAgICAgIHJldHVybiAwCiAgICAgIGZpCiAgICBlbHNlCiAgICAgIGlmIFtbICRzdGVhZHlTdGF0ZVRpbWUgLWd0IDAgXV07IHRoZW4KICAgICAgICBlY2hvICJSZXNldHRpbmcgc3RlYWR5LXN0YXRlIHRpbWVyIgogICAgICAgIHN0ZWFkeVN0YXRlVGltZT0wCiAgICAgIGZpCiAgICBmaQogICAgbGFzdENjb3VudD0kY2NvdW50CiAgZG9uZQogIGxvZ2dlciAiUmVjb3Zlcnk6IFJlY292ZXJ5IENvbXBsZXRlIFRpbWVvdXQiCn0KCnNldFJjdU5vcm1hbCgpIHsKICBlY2hvICJTZXR0aW5nIHJjdV9ub3JtYWwgdG8gMSIKICBlY2hvIDEgPiAvc3lzL2tlcm5lbC9yY3Vfbm9ybWFsCn0KCm1haW4oKSB7CiAgd2FpdEZvclJlYWR5CiAgZWNobyAiV2FpdGluZyBmb3Igc3RlYWR5IHN0YXRlIHRvb2s6ICQoYXdrICd7cHJpbnQgaW50KCQxLzM2MDApImgiLCBpbnQoKCQxJTM2MDApLzYwKSJtIiwgaW50KCQxJTYwKSJzIn0nIC9wcm9jL3VwdGltZSkiCiAgc2V0UmN1Tm9ybWFsCn0KCmlmIFtbICIke0JBU0hfU09VUkNFWzBdfSIgPSAiJHswfSIgXV07IHRoZW4KICBtYWluICIke0B9IgogIGV4aXQgJD8KZmkK mode: 493 path: /usr/local/bin/set-rcu-normal.sh systemd: units: - contents: | [Unit] Description=Disable rcu_expedited after node has finished booting by setting rcu_normal to 1 [Service] Type=simple ExecStart=/usr/local/bin/set-rcu-normal.sh # Maximum wait time is 600s = 10m: Environment=MAXIMUM_WAIT_TIME=600 # Steady-state threshold = 2% # Allowed values: # 4 - absolute pod count (+/-) # 4% - percent change (+/-) # -1 - disable the steady-state check # Note: '%' must be escaped as '%%' in systemd unit files Environment=STEADY_STATE_THRESHOLD=2%% # Steady-state window = 120s # If the running pod count stays within the given threshold for this time # period, return CPU utilization to normal before the maximum wait time has # expires Environment=STEADY_STATE_WINDOW=120 # Steady-state minimum = 40 # Increasing this will skip any steady-state checks until the count rises above # this number to avoid false positives if there are some periods where the # count doesn't increase but we know we can't be at steady-state yet. Environment=STEADY_STATE_MINIMUM=40 [Install] WantedBy=multi-user.target enabled: true name: set-rcu-normal.service 07-sriov-related-kernel-args-master.yaml # Automatically generated by extra-manifests-builder # Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 07-sriov-related-kernel-args-master spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on - iommu=pt 07-sriov-related-kernel-args-worker.yaml # Automatically generated by extra-manifests-builder # Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 07-sriov-related-kernel-args-worker spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on - iommu=pt 3.2.5. Telco RAN DU reference configuration software specifications The following information describes the telco RAN DU reference design specification (RDS) validated software versions. 3.2.5.1. Telco RAN DU 4.16 validated software components The Red Hat telco RAN DU 4.16 solution has been validated using the following Red Hat software products for OpenShift Container Platform managed clusters and hub clusters. Table 3.7. 
Telco RAN DU managed cluster validated software components

Component | Software version
Managed cluster version | 4.16
Cluster Logging Operator | 5.9
Local Storage Operator | 4.16
PTP Operator | 4.16
SRIOV Operator | 4.16
Node Tuning Operator | 4.16
Logging Operator | 4.16
SRIOV-FEC Operator | 2.9

Table 3.8. Hub cluster validated software components

Component | Software version
Hub cluster version | 4.16
GitOps ZTP plugin | 4.16
Red Hat Advanced Cluster Management (RHACM) | 2.10, 2.11
Red Hat OpenShift GitOps | 1.12
Topology Aware Lifecycle Manager (TALM) | 4.16

3.3. Telco core reference design specification

3.3.1. Telco core 4.16 reference design overview
The telco core reference design specification (RDS) configures an OpenShift Container Platform cluster running on commodity hardware to host telco core workloads.

3.3.2. Telco core 4.16 use model overview
The telco core reference design specification (RDS) describes a platform that supports large-scale telco applications, including control plane functions such as signaling and aggregation. It also includes some centralized data plane functions, for example, user plane functions (UPF). These functions generally require scalability, complex networking support, and resilient software-defined storage, and have performance requirements that are less stringent and constrained than far-edge deployments such as RAN.

Telco core use model architecture
The networking prerequisites for telco core functions are diverse and encompass an array of networking attributes and performance benchmarks. IPv6 is mandatory, with dual-stack configurations being prevalent. Certain functions demand maximum throughput and transaction rates, necessitating user plane networking support such as DPDK. Other functions adhere to conventional cloud-native patterns and can use solutions such as OVN-Kubernetes, kernel networking, and load balancing.
Telco core clusters are configured as standard clusters with a three-node control plane and worker nodes running the stock non-real-time (RT) kernel. To support workloads with varying networking and performance requirements, worker nodes are segmented using MachineConfigPool CRs, for example, to separate non-user data plane nodes from high-throughput nodes. To support the required telco operational features, the clusters have a standard set of Operator Lifecycle Manager (OLM) Day 2 Operators installed.

3.3.2.1. Common baseline model
The following configurations and use model description are applicable to all telco core use cases.
Cluster
The cluster conforms to these requirements:
High-availability (3+ supervisor nodes) control plane
Non-schedulable supervisor nodes
Multiple MachineConfigPool resources
Storage
Core use cases require persistent storage as provided by external OpenShift Data Foundation. For more information, see the "Storage" subsection in "Reference core design components".
Networking
Telco core cluster networking conforms to these requirements:
Dual stack IPv4/IPv6
Fully disconnected: Clusters do not have access to public networking at any point in their lifecycle.
Multiple networks: Segmented networking provides isolation between OAM, signaling, and storage traffic.
Cluster network type: OVN-Kubernetes is required for IPv6 support.
Core clusters have multiple layers of networking supported by underlying RHCOS, SR-IOV Operator, Load Balancer, and other components detailed in the following "Networking" section.
At a high level these layers include: Cluster networking: The cluster network configuration is defined and applied through the installation configuration. Updates to the configuration can be done at day-2 through the NMState Operator. Initial configuration can be used to establish: Host interface configuration Active/Active Bonding (Link Aggregation Control Protocol (LACP)) Secondary or additional networks: OpenShift CNI is configured through the Network additionalNetworks or NetworkAttachmentDefinition CRs. MACVLAN Application Workload: User plane networking is running in cloud-native network functions (CNFs). Service Mesh Use of Service Mesh by telco CNFs is very common. It is expected that all core clusters will include a Service Mesh implementation. Service Mesh implementation and configuration is outside the scope of this specification. 3.3.2.1.1. Engineering Considerations common use model The following engineering considerations are relevant for the common use model. Worker nodes Worker nodes run on Intel 3rd Generation Xeon (IceLake) processors or newer. Alternatively, if using Skylake or earlier processors, the mitigations for silicon security vulnerabilities such as Spectre must be disabled; failure to do so may result in a significant 40 percent decrease in transaction performance. IRQ Balancing is enabled on worker nodes. The PerformanceProfile sets globallyDisableIrqLoadBalancing: false . Guaranteed QoS Pods are annotated to ensure isolation as described in "CPU partitioning and performance tuning" subsection in "Reference core design components" section. All nodes Hyper-Threading is enabled on all nodes CPU architecture is x86_64 only Nodes are running the stock (non-RT) kernel Nodes are not configured for workload partitioning The balance of node configuration between power management and maximum performance varies between MachineConfigPools in the cluster. This configuration is consistent for all nodes within a MachineConfigPool . CPU partitioning CPU partitioning is configured using the PerformanceProfile and applied on a per MachineConfigPool basis. See the "CPU partitioning and performance tuning" subsection in "Reference core design components". 3.3.2.1.2. Application workloads Application workloads running on core clusters might include a mix of high-performance networking CNFs and traditional best-effort or burstable pod workloads. Guaranteed QoS scheduling is available to pods that require exclusive or dedicated use of CPUs due to performance or security requirements. Typically pods hosting high-performance and low-latency-sensitive Cloud Native Functions (CNFs) utilizing user plane networking with DPDK necessitate the exclusive utilization of entire CPUs. This is accomplished through node tuning and guaranteed Quality of Service (QoS) scheduling. For pods that require exclusive use of CPUs, be aware of the potential implications of hyperthreaded systems and configure them to request multiples of 2 CPUs when the entire core (2 hyperthreads) must be allocated to the pod. Pods running network functions that do not require the high throughput and low latency networking are typically scheduled with best-effort or burstable QoS and do not require dedicated or isolated CPU cores. Description of limits CNF applications should conform to the latest version of the Red Hat Best Practices for Kubernetes guide. For a mix of best-effort and burstable QoS pods. 
Guaranteed QoS pods might be used but require correct configuration of reserved and isolated CPUs in the PerformanceProfile .
Guaranteed QoS pods must include annotations for fully isolating CPUs.
Best effort and burstable pods are not guaranteed exclusive use of a CPU. Workloads might be preempted by other workloads, operating system daemons, or kernel tasks.
Exec probes should be avoided unless there is no viable alternative. Do not use exec probes if a CNF is using CPU pinning. Other probe implementations, for example httpGet/tcpSocket , should be used.
Note
Startup probes require minimal resources during steady-state operation. The limitation on exec probes applies primarily to liveness and readiness probes.
Signaling workload
Signaling workloads typically use SCTP, REST, gRPC, or similar TCP or UDP protocols. The transaction rate is on the order of hundreds of thousands of transactions per second (TPS), using a secondary CNI (Multus) configured as MACVLAN or SR-IOV. Signaling workloads run in pods with either guaranteed or burstable QoS.

3.3.3. Telco core reference design components
The following sections describe the various OpenShift Container Platform components and configurations that you use to configure and deploy clusters to run telco core workloads.

3.3.3.1. CPU partitioning and performance tuning
New in this release
In this release, OpenShift Container Platform deployments use Control Groups version 2 (cgroup v2) by default. As a consequence, performance profiles in a cluster use cgroups v2 for the underlying resource management layer.
Description
CPU partitioning allows sensitive workloads to be separated from general-purpose tasks, auxiliary processes, interrupts, and driver work queues to achieve improved performance and latency. The CPUs allocated to those auxiliary processes are referred to as reserved in the following sections. In hyperthreaded systems, a CPU is one hyperthread.
Limits and requirements
The operating system needs a certain amount of CPU to perform all the support tasks, including kernel networking. A system with just user plane networking applications (DPDK) needs at least one core (2 hyperthreads when Hyper-Threading is enabled) reserved for the operating system and the infrastructure components.
A system with Hyper-Threading enabled must always place all sibling threads of a core in the same pool of CPUs.
The set of reserved and isolated cores must include all CPU cores.
Core 0 of each NUMA node must be included in the reserved CPU set.
Isolated cores might be impacted by interrupts. The following annotations must be attached to the pod if guaranteed QoS pods require full use of the CPU:
When per-pod power management is enabled with PerformanceProfile.workloadHints.perPodPowerManagement , the following annotations must also be attached to the pod if guaranteed QoS pods require full use of the CPU:
Engineering considerations
The minimum reserved capacity ( systemReserved ) required can be found by following the guidance in "Which amount of CPU and memory are recommended to reserve for the system in OpenShift 4 nodes?" The actual required reserved CPU capacity depends on the cluster configuration and workload attributes.
This reserved CPU value must be rounded up to a full core (2 hyper-thread) alignment.
Changes to the CPU partitioning will drain and reboot the nodes in the MCP.
The reserved CPUs reduce the pod density, as the reserved CPUs are removed from the allocatable capacity of the OpenShift node.
The real-time workload hint should be enabled if the workload is real-time capable.
Hardware without Interrupt Request (IRQ) affinity support will impact isolated CPUs. To ensure that pods with guaranteed CPU QoS have full use of allocated CPU, all hardware in the server must support IRQ affinity. OVS dynamically manages its cpuset configuration to adapt to network traffic needs. You do not need to reserve additional CPUs for handling high network throughput on the primary CNI. If workloads running on the cluster require cgroups v1, you can configure nodes to use cgroups v1. You can make this configuration as part of initial cluster deployment. For more information, see Enabling Linux cgroup v1 during installation in the Additional resources section. Additional resources Creating a performance profile Configuring host firmware for low latency and high performance Enabling Linux cgroup v1 during installation 3.3.3.2. Service Mesh Description Telco core CNFs typically require a service mesh implementation. The specific features and performance required are dependent on the application. The selection of service mesh implementation and configuration is outside the scope of this documentation. The impact of service mesh on cluster resource utilization and performance, including additional latency introduced into pod networking, must be accounted for in the overall solution engineering. Additional resources About OpenShift Service Mesh 3.3.3.3. Networking OpenShift Container Platform networking is an ecosystem of features, plugins, and advanced networking capabilities that extend Kubernetes networking with the advanced networking-related features that your cluster needs to manage its network traffic for one or multiple hybrid clusters. Additional resources Understanding networking 3.3.3.3.1. Cluster Network Operator (CNO) New in this release No reference design updates in this release Description The CNO deploys and manages the cluster network components including the default OVN-Kubernetes network plugin during OpenShift Container Platform cluster installation. It allows configuring primary interface MTU settings, OVN gateway modes to use node routing tables for pod egress, and additional secondary networks such as MACVLAN. In support of network traffic separation, multiple network interfaces are configured through the CNO. Traffic steering to these interfaces is configured through static routes applied by using the NMState Operator. To ensure that pod traffic is properly routed, OVN-K is configured with the routingViaHost option enabled. This setting uses the kernel routing table and the applied static routes rather than OVN for pod egress traffic. The Whereabouts CNI plugin is used to provide dynamic IPv4 and IPv6 addressing for additional pod network interfaces without the use of a DHCP server. Limits and requirements OVN-Kubernetes is required for IPv6 support. Large MTU cluster support requires connected network equipment to be set to the same or larger value. Engineering considerations Pod egress traffic is handled by kernel routing table with the routingViaHost option. Appropriate static routes must be configured in the host. Additional resources Cluster Network Operator 3.3.3.3.2. Load Balancer New in this release No reference design updates in this release Description MetalLB is a load-balancer implementation for bare metal Kubernetes clusters using standard routing protocols. It enables a Kubernetes service to get an external IP address which is also added to the host network for the cluster. 
Some use cases might require features not available in MetalLB, for example stateful load balancing. Where necessary, you can use an external third party load balancer. Selection and configuration of an external load balancer is outside the scope of this specification. When an external third party load balancer is used, the integration effort must include enough analysis to ensure all performance and resource utilization requirements are met. Limits and requirements Stateful load balancing is not supported by MetalLB. An alternate load balancer implementation must be used if this is a requirement for workload CNFs. The networking infrastructure must ensure that the external IP address is routable from clients to the host network for the cluster. Engineering considerations MetalLB is used in BGP mode only for core use case models. For core use models, MetalLB is supported with only the OVN-Kubernetes network provider used in local gateway mode. See routingViaHost in the "Cluster Network Operator" section. BGP configuration in MetalLB varies depending on the requirements of the network and peers. Address pools can be configured as needed, allowing variation in addresses, aggregation length, auto assignment, and other relevant parameters. The values of parameters in the Bi-Directional Forwarding Detection (BFD) profile should remain close to the defaults. Shorter values might lead to false negatives and impact performance. Additional resources When to use MetalLB 3.3.3.3.3. SR-IOV New in this release With this release, you can use the SR-IOV Network Operator to configure QinQ (802.1ad and 802.1q) tagging. QinQ tagging provides efficient traffic management by enabling the use of both inner and outer VLAN tags. Outer VLAN tagging is hardware accelerated, leading to faster network performance. The update extends beyond the SR-IOV Network Operator itself. You can now configure QinQ on externally managed VFs by setting the outer VLAN tag using nmstate . QinQ support varies across different NICs. For a comprehensive list of known limitations for specific NIC models, see the official documentation. With this release, you can configure the SR-IOV Network Operator to drain nodes in parallel during network policy updates, dramatically accelerating the setup process. This translates to significant time savings, especially for large cluster deployments that previously took hours or even days to complete. Description SR-IOV enables physical network interfaces (PFs) to be divided into multiple virtual functions (VFs). VFs can then be assigned to multiple pods to achieve higher throughput performance while keeping the pods isolated. The SR-IOV Network Operator provisions and manages SR-IOV CNI, network device plugin, and other components of the SR-IOV stack. Limits and requirements The network interface controllers supported are listed in Supported devices SR-IOV and IOMMU enablement in BIOS: The SR-IOV Network Operator automatically enables IOMMU on the kernel command line. SR-IOV VFs do not receive link state updates from PF. If link down detection is needed, it must be done at the protocol level. MultiNetworkPolicy CRs can be applied to netdevice networks only. This is because the implementation uses the iptables tool, which cannot manage vfio interfaces. Engineering considerations SR-IOV interfaces in vfio mode are typically used to enable additional secondary networks for applications that require high throughput or low latency. 
If you exclude the SriovOperatorConfig CR from your deployment, the CR will not be created automatically. Additional resources About Single Root I/O Virtualization (SR-IOV) hardware networks 3.3.3.3.4. NMState Operator New in this release No reference design updates in this release Description The NMState Operator provides a Kubernetes API for performing network configurations across the cluster's nodes. It enables network interface configurations, static IPs and DNS, VLANs, trunks, bonding, static routes, MTU, and enabling promiscuous mode on the secondary interfaces. The cluster nodes periodically report on the state of each node's network interfaces to the API server. Limits and requirements Not applicable Engineering considerations The initial networking configuration is applied using NMStateConfig content in the installation CRs. The NMState Operator is used only when needed for network updates. When SR-IOV virtual functions are used for host networking, the NMState Operator using NodeNetworkConfigurationPolicy is used to configure those VF interfaces, for example, VLANs and the MTU. Additional resources Kubernetes NMState Operator 3.3.3.4. Logging New in this release No reference design updates in this release Description The Cluster Logging Operator enables collection and shipping of logs off the node for remote archival and analysis. The reference configuration ships audit and infrastructure logs to a remote archive by using Kafka. Limits and requirements Not applicable Engineering considerations The impact of cluster CPU use is based on the number or size of logs generated and the amount of log filtering configured. The reference configuration does not include shipping of application logs. Inclusion of application logs in the configuration requires evaluation of the application logging rate and sufficient additional CPU resources allocated to the reserved set. Additional resources About logging 3.3.3.5. Power Management New in this release No reference design updates in this release Description The Performance Profile can be used to configure a cluster in a high power, low power, or mixed mode. The choice of power mode depends on the characteristics of the workloads running on the cluster, particularly how sensitive they are to latency. Configure the maximum latency for a low-latency pod by using the per-pod power management C-states feature. For more information, see Configuring power saving for nodes . Limits and requirements Power configuration relies on appropriate BIOS configuration, for example, enabling C-states and P-states. Configuration varies between hardware vendors. Engineering considerations Latency: To ensure that latency-sensitive workloads meet their requirements, you will need either a high-power configuration or a per-pod power management configuration. Per-pod power management is only available for Guaranteed QoS Pods with dedicated pinned CPUs. Additional resources Configuring power saving for nodes that run colocated high and low priority workloads 3.3.3.6. Storage Overview Cloud native storage services can be provided by multiple solutions including OpenShift Data Foundation from Red Hat or third parties. OpenShift Data Foundation is a Ceph based software-defined storage solution for containers. It provides block storage, file system storage, and on-premises object storage, which can be dynamically provisioned for both persistent and non-persistent data requirements. Telco core applications require persistent storage. 
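As a minimal sketch of how a telco core workload consumes this storage, the following PersistentVolumeClaim dynamically provisions a volume from an ODF-backed storage class. The claim name, namespace, size, and the ocs-external-storagecluster-ceph-rbd storage class name are assumptions for illustration; the storage classes actually available depend on the ODF deployment described in the "Storage" reference CRs.

# Example only: a PVC that dynamically provisions block storage from external ODF.
# The names, size, and storage class are illustrative assumptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-core-app-data       # hypothetical claim name
  namespace: example-app-ns         # hypothetical application namespace
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block                 # use Filesystem instead for file-based access
  resources:
    requests:
      storage: 100Gi                # illustrative size
  storageClassName: ocs-external-storagecluster-ceph-rbd   # assumed ODF RBD storage class name

Claims like this are served by the external StorageCluster defined in 02-ocs-external-storagecluster.yaml in the reference CRs for this profile.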
Note All storage data may not be encrypted in flight. To reduce risk, isolate the storage network from other cluster networks. The storage network must not be reachable, or routable, from other cluster networks. Only nodes directly attached to the storage network should be allowed to gain access to it. 3.3.3.6.1. OpenShift Data Foundation New in this release No reference design updates in this release Description Red Hat OpenShift Data Foundation is a software-defined storage service for containers. For Telco core clusters, storage support is provided by OpenShift Data Foundation storage services running externally to the application workload cluster. OpenShift Data Foundation supports separation of storage traffic using secondary CNI networks. Limits and requirements In an IPv4/IPv6 dual-stack networking environment, OpenShift Data Foundation uses IPv4 addressing. For more information, see Support OpenShift dual stack with OpenShift Data Foundation using IPv4 . Engineering considerations OpenShift Data Foundation network traffic should be isolated from other traffic on a dedicated network, for example, by using VLAN isolation. 3.3.3.6.2. Other Storage Other storage solutions can be used to provide persistent storage for core clusters. The configuration and integration of these solutions is outside the scope of the telco core RDS. Integration of the storage solution into the core cluster must include correct sizing and performance analysis to ensure the storage meets overall performance and resource utilization requirements. Additional resources Red Hat OpenShift Data Foundation 3.3.3.7. Monitoring New in this release No reference design updates in this release Description The Cluster Monitoring Operator (CMO) is included by default on all OpenShift clusters and provides monitoring (metrics, dashboards, and alerting) for the platform components and optionally user projects as well. Configuration of the monitoring operator allows for customization, including: Default retention period Custom alert rules The default handling of pod CPU and memory metrics is based on upstream Kubernetes cAdvisor and makes a tradeoff that prefers handling of stale data over metric accuracy. This leads to spiky data that will create false triggers of alerts over user-specified thresholds. OpenShift supports an opt-in dedicated service monitor feature creating an additional set of pod CPU and memory metrics that do not suffer from the spiky behavior. For additional information, see this solution guide . In addition to default configuration, the following metrics are expected to be configured for telco core clusters: Pod CPU and memory metrics and alerts for user workloads Limits and requirements Monitoring configuration must enable the dedicated service monitor feature for accurate representation of pod metrics Engineering considerations The Prometheus retention period is specified by the user. The value used is a tradeoff between operational requirements for maintaining historical data on the cluster against CPU and storage resources. Longer retention periods increase the need for storage and require additional CPU to manage the indexing of data. Additional resources About OpenShift Container Platform monitoring 3.3.3.8. Scheduling New in this release No reference design updates in this release Description The scheduler is a cluster-wide component responsible for selecting the right node for a given workload. 
It is a core part of the platform and does not require any specific configuration in the common deployment scenarios. However, there are few specific use cases described in the following section. NUMA-aware scheduling can be enabled through the NUMA Resources Operator. For more information, see Scheduling NUMA-aware workloads . Limits and requirements The default scheduler does not understand the NUMA locality of workloads. It only knows about the sum of all free resources on a worker node. This might cause workloads to be rejected when scheduled to a node with Topology manager policy set to single-numa-node or restricted . For example, consider a pod requesting 6 CPUs and being scheduled to an empty node that has 4 CPUs per NUMA node. The total allocatable capacity of the node is 8 CPUs and the scheduler will place the pod there. The node local admission will fail, however, as there are only 4 CPUs available in each of the NUMA nodes. All clusters with multi-NUMA nodes are required to use the NUMA Resources Operator . The machineConfigPoolSelector of the NUMA Resources Operator must select all nodes where NUMA aligned scheduling is needed. All machine config pools must have consistent hardware configuration for example all nodes are expected to have the same NUMA zone count. Engineering considerations Pods might require annotations for correct scheduling and isolation. For more information on annotations, see CPU partitioning and performance tuning . You can configure SR-IOV virtual function NUMA affinity to be ignored during scheduling by using the excludeTopology field in SriovNetworkNodePolicy CR. Additional resources See Controlling pod placement using the scheduler Scheduling NUMA-aware workloads 3.3.3.9. Installation New in this release No reference design updates in this release Description Telco core clusters can be installed using the Agent Based Installer (ABI). This method allows users to install OpenShift Container Platform on bare metal servers without requiring additional servers or VMs for managing the installation. The ABI installer can be run on any system for example a laptop to generate an ISO installation image. This ISO is used as the installation media for the cluster supervisor nodes. Progress can be monitored using the ABI tool from any system with network connectivity to the supervisor node's API interfaces. Installation from declarative CRs Does not require additional servers to support installation Supports install in disconnected environment Limits and requirements Disconnected installation requires a reachable registry with all required content mirrored. Engineering considerations Networking configuration should be applied as NMState configuration during installation in preference to day-2 configuration by using the NMState Operator. Additional resources Installing an OpenShift Container Platform cluster with the Agent-based Installer 3.3.3.10. Security New in this release No reference design updates in this release Description Telco operators are security conscious and require clusters to be hardened against multiple attack vectors. Within OpenShift Container Platform, there is no single component or feature responsible for securing a cluster. This section provides details of security-oriented features and configuration for the use models covered in this specification. SecurityContextConstraints : All workload pods should be run with restricted-v2 or restricted SCC. Seccomp : All pods should be run with the RuntimeDefault (or stronger) seccomp profile. 
Rootless DPDK pods : Many user-plane networking (DPDK) CNFs require pods to run with root privileges. With this feature, a conformant DPDK pod can be run without requiring root privileges. Rootless DPDK pods create a tap device in a rootless pod that injects traffic from a DPDK application to the kernel.
Storage : The storage network should be isolated and non-routable to other cluster networks. See the "Storage" section for additional details.
Limits and requirements
Rootless DPDK pods require the following additional configuration steps:
Configure the TAP plugin with the container_t SELinux context.
Enable the container_use_devices SELinux boolean on the hosts.
Engineering considerations
For rootless DPDK pod support, the SELinux boolean container_use_devices must be enabled on the host for the TAP device to be created. This introduces a security risk that is acceptable for short- to mid-term use. Other solutions will be explored.
Additional resources
Managing security context constraints

3.3.3.11. Scalability
New in this release
No reference design updates in this release
Description
Clusters will scale to the sizing listed in the limits and requirements section. Scaling of workloads is described in the use model section.
Limits and requirements
Clusters scale to at least 120 nodes.
Engineering considerations
Not applicable

3.3.3.12. Additional configuration
3.3.3.12.1. Disconnected environment
Description
Telco core clusters are expected to be installed in networks without direct access to the internet. All container images needed to install, configure, and operate the cluster must be available in a disconnected registry. This includes OpenShift Container Platform images, day-2 Operator Lifecycle Manager (OLM) Operator images, and application workload images. The use of a disconnected environment provides multiple benefits, for example:
Limiting access to the cluster for security
Curated content: The registry is populated based on curated and approved updates for the clusters
Limits and requirements
A unique name is required for all custom CatalogSources. Do not reuse the default catalog names.
A valid time source must be configured as part of cluster installation.
Engineering considerations
Not applicable
Additional resources
About cluster updates in a disconnected environment

3.3.3.12.2. Kernel
New in this release
No reference design updates in this release
Description
The user can install the following kernel modules by using MachineConfig to provide extended kernel functionality to CNFs: sctp ip_gre ip6_tables ip6t_REJECT ip6table_filter ip6table_mangle iptable_filter iptable_mangle iptable_nat xt_multiport xt_owner xt_REDIRECT xt_statistic xt_TCPMSS
Limits and requirements
Use of functionality available through these kernel modules must be analyzed by the user to determine the impact on CPU load, system performance, and the ability to sustain KPIs.
Note
Out-of-tree drivers are not supported.
Engineering considerations
Not applicable

3.3.4. Telco core 4.16 reference configuration CRs
Use the following custom resources (CRs) to configure and deploy OpenShift Container Platform clusters with the telco core profile. Use the CRs to form the common baseline used in all the specific use models unless otherwise indicated.

3.3.4.1. Extracting the telco core reference design configuration CRs
You can extract the complete set of custom resources (CRs) for the telco core profile from the telco-core-rds-rhel9 container image.
The container image has both the required CRs, and the optional CRs, for the telco core profile. Prerequisites You have installed podman . Procedure Extract the content from the telco-core-rds-rhel9 container image by running the following commands: USD mkdir -p ./out USD podman run -it registry.redhat.io/openshift4/openshift-telco-core-rds-rhel9:v4.16 | base64 -d | tar xv -C out Verification The out directory has the following folder structure. You can view the telco core CRs in the out/telco-core-rds/ directory. Example output out/ └── telco-core-rds ├── configuration │ └── reference-crs │ ├── optional │ │ ├── logging │ │ ├── networking │ │ │ └── multus │ │ │ └── tap_cni │ │ ├── other │ │ └── tuning │ └── required │ ├── networking │ │ ├── metallb │ │ ├── multinetworkpolicy │ │ └── sriov │ ├── other │ ├── performance │ ├── scheduling │ └── storage │ └── odf-external └── install 3.3.4.2. Resource Tuning reference CRs Table 3.9. Resource Tuning CRs Component Reference CR Optional New in this release System reserved capacity control-plane-system-reserved.yaml Yes No 3.3.4.3. Storage reference CRs Table 3.10. Storage CRs Component Reference CR Optional New in this release External ODF configuration 01-rook-ceph-external-cluster-details.secret.yaml No No External ODF configuration 02-ocs-external-storagecluster.yaml No No External ODF configuration odfNS.yaml No No External ODF configuration odfOperGroup.yaml No No External ODF configuration odfSubscription.yaml No No 3.3.4.4. Networking reference CRs Table 3.11. Networking CRs Component Reference CR Optional New in this release Baseline Network.yaml No No Baseline networkAttachmentDefinition.yaml Yes Yes Load balancer addr-pool.yaml No No Load balancer bfd-profile.yaml No No Load balancer bgp-advr.yaml No No Load balancer bgp-peer.yaml No No Load balancer community.yaml No Yes Load balancer metallb.yaml No No Load balancer metallbNS.yaml Yes No Load balancer metallbOperGroup.yaml Yes No Load balancer metallbSubscription.yaml No No Multus - Tap CNI for rootless DPDK pod mc_rootless_pods_selinux.yaml No No NMState Operator NMState.yaml No Yes NMState Operator NMStateNS.yaml No Yes NMState Operator NMStateOperGroup.yaml No Yes NMState Operator NMStateSubscription.yaml No Yes SR-IOV Network Operator sriovNetwork.yaml Yes No SR-IOV Network Operator sriovNetworkNodePolicy.yaml No No SR-IOV Network Operator SriovOperatorConfig.yaml No No SR-IOV Network Operator SriovSubscription.yaml No No SR-IOV Network Operator SriovSubscriptionNS.yaml No No SR-IOV Network Operator SriovSubscriptionOperGroup.yaml No No 3.3.4.5. Scheduling reference CRs Table 3.12. Scheduling CRs Component Reference CR Optional New in this release NUMA-aware scheduler nrop.yaml No No NUMA-aware scheduler NROPSubscription.yaml No No NUMA-aware scheduler NROPSubscriptionNS.yaml No No NUMA-aware scheduler NROPSubscriptionOperGroup.yaml No No NUMA-aware scheduler sched.yaml No No NUMA-aware scheduler Scheduler.yaml No Yes 3.3.4.6. Other reference CRs Table 3.13. 
Other CRs Component Reference CR Optional New in this release Additional kernel modules control-plane-load-kernel-modules.yaml Yes No Additional kernel modules sctp_module_mc.yaml Yes No Additional kernel modules worker-load-kernel-modules.yaml Yes No Cluster logging ClusterLogForwarder.yaml No No Cluster logging ClusterLogging.yaml No No Cluster logging ClusterLogNS.yaml No No Cluster logging ClusterLogOperGroup.yaml No No Cluster logging ClusterLogSubscription.yaml No No Disconnected configuration catalog-source.yaml No No Disconnected configuration icsp.yaml No No Disconnected configuration operator-hub.yaml No No Monitoring and observability monitoring-config-cm.yaml Yes No Power management PerformanceProfile.yaml No No 3.3.4.7. YAML reference 3.3.4.7.1. Resource Tuning reference YAML control-plane-system-reserved.yaml # optional # count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: autosizing-master spec: autoSizingReserved: true machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/master: "" 3.3.4.7.2. Storage reference YAML 01-rook-ceph-external-cluster-details.secret.yaml # required # count: 1 --- apiVersion: v1 kind: Secret metadata: name: rook-ceph-external-cluster-details namespace: openshift-storage type: Opaque data: # encoded content has been made generic external_cluster_details: eyJuYW1lIjoicm9vay1jZXBoLW1vbi1lbmRwb2ludHMiLCJraW5kIjoiQ29uZmlnTWFwIiwiZGF0YSI6eyJkYXRhIjoiY2VwaHVzYTE9MS4yLjMuNDo2Nzg5IiwibWF4TW9uSWQiOiIwIiwibWFwcGluZyI6Int9In19LHsibmFtZSI6InJvb2stY2VwaC1tb24iLCJraW5kIjoiU2VjcmV0IiwiZGF0YSI6eyJhZG1pbi1zZWNyZXQiOiJhZG1pbi1zZWNyZXQiLCJmc2lkIjoiMTExMTExMTEtMTExMS0xMTExLTExMTEtMTExMTExMTExMTExIiwibW9uLXNlY3JldCI6Im1vbi1zZWNyZXQifX0seyJuYW1lIjoicm9vay1jZXBoLW9wZXJhdG9yLWNyZWRzIiwia2luZCI6IlNlY3JldCIsImRhdGEiOnsidXNlcklEIjoiY2xpZW50LmhlYWx0aGNoZWNrZXIiLCJ1c2VyS2V5IjoiYzJWamNtVjAifX0seyJuYW1lIjoibW9uaXRvcmluZy1lbmRwb2ludCIsImtpbmQiOiJDZXBoQ2x1c3RlciIsImRhdGEiOnsiTW9uaXRvcmluZ0VuZHBvaW50IjoiMS4yLjMuNCwxLjIuMy4zLDEuMi4zLjIiLCJNb25pdG9yaW5nUG9ydCI6IjkyODMifX0seyJuYW1lIjoiY2VwaC1yYmQiLCJraW5kIjoiU3RvcmFnZUNsYXNzIiwiZGF0YSI6eyJwb29sIjoib2RmX3Bvb2wifX0seyJuYW1lIjoicm9vay1jc2ktcmJkLW5vZGUiLCJraW5kIjoiU2VjcmV0IiwiZGF0YSI6eyJ1c2VySUQiOiJjc2ktcmJkLW5vZGUiLCJ1c2VyS2V5IjoiIn19LHsibmFtZSI6InJvb2stY3NpLXJiZC1wcm92aXNpb25lciIsImtpbmQiOiJTZWNyZXQiLCJkYXRhIjp7InVzZXJJRCI6ImNzaS1yYmQtcHJvdmlzaW9uZXIiLCJ1c2VyS2V5IjoiYzJWamNtVjAifX0seyJuYW1lIjoicm9vay1jc2ktY2VwaGZzLXByb3Zpc2lvbmVyIiwia2luZCI6IlNlY3JldCIsImRhdGEiOnsiYWRtaW5JRCI6ImNzaS1jZXBoZnMtcHJvdmlzaW9uZXIiLCJhZG1pbktleSI6IiJ9fSx7Im5hbWUiOiJyb29rLWNzaS1jZXBoZnMtbm9kZSIsImtpbmQiOiJTZWNyZXQiLCJkYXRhIjp7ImFkbWluSUQiOiJjc2ktY2VwaGZzLW5vZGUiLCJhZG1pbktleSI6ImMyVmpjbVYwIn19LHsibmFtZSI6ImNlcGhmcyIsImtpbmQiOiJTdG9yYWdlQ2xhc3MiLCJkYXRhIjp7ImZzTmFtZSI6ImNlcGhmcyIsInBvb2wiOiJtYW5pbGFfZGF0YSJ9fQ== 02-ocs-external-storagecluster.yaml # required # count: 1 --- apiVersion: ocs.openshift.io/v1 kind: StorageCluster metadata: name: ocs-external-storagecluster namespace: openshift-storage spec: externalStorage: enable: true labelSelector: {} status: phase: Ready odfNS.yaml # required: yes # count: 1 --- apiVersion: v1 kind: Namespace metadata: name: openshift-storage annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: "true" odfOperGroup.yaml # required: yes # count: 1 --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - 
openshift-storage odfSubscription.yaml # required: yes # count: 1 --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: odf-operator namespace: openshift-storage spec: channel: "stable-4.14" name: odf-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic status: state: AtLatestKnown 3.3.4.7.3. Networking reference YAML Network.yaml # required # count: 1 apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: gatewayConfig: routingViaHost: true # additional networks are optional and may alternatively be specified using NetworkAttachmentDefinition CRs additionalNetworks: [USDadditionalNetworks] # eg #- name: add-net-1 # namespace: app-ns-1 # rawCNIConfig: '{ "cniVersion": "0.3.1", "name": "add-net-1", "plugins": [{"type": "macvlan", "master": "bond1", "ipam": {}}] }' # type: Raw #- name: add-net-2 # namespace: app-ns-1 # rawCNIConfig: '{ "cniVersion": "0.4.0", "name": "add-net-2", "plugins": [ {"type": "macvlan", "master": "bond1", "mode": "private" },{ "type": "tuning", "name": "tuning-arp" }] }' # type: Raw # Enable to use MultiNetworkPolicy CRs useMultiNetworkPolicy: true networkAttachmentDefinition.yaml # optional # copies: 0-N apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: USDname namespace: USDns spec: nodeSelector: kubernetes.io/hostname: USDnodeName config: USDconfig #eg #config: '{ # "cniVersion": "0.3.1", # "name": "external-169", # "type": "vlan", # "master": "ens8f0", # "mode": "bridge", # "vlanid": 169, # "ipam": { # "type": "static", # } #}' addr-pool.yaml # required # count: 1-N apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: USDname # eg addresspool3 namespace: metallb-system annotations: metallb.universe.tf/address-pool: USDname # eg addresspool3 spec: ############## # Expected variation in this configuration addresses: [USDpools] #- 3.3.3.0/24 autoAssign: true ############## bfd-profile.yaml # required # count: 1-N apiVersion: metallb.io/v1beta1 kind: BFDProfile metadata: name: bfdprofile namespace: metallb-system spec: ################ # These values may vary. 
Recommended values are included as default receiveInterval: 150 # default 300ms transmitInterval: 150 # default 300ms #echoInterval: 300 # default 50ms detectMultiplier: 10 # default 3 echoMode: true passiveMode: true minimumTtl: 5 # default 254 # ################ bgp-advr.yaml # required # count: 1-N apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: USDname # eg bgpadvertisement-1 namespace: metallb-system spec: ipAddressPools: [USDpool] # eg: # - addresspool3 peers: [USDpeers] # eg: # - peer-one # communities: [USDcommunities] # Note correlation with address pool, or Community # eg: # - bgpcommunity # - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100 bgp-peer.yaml # required # count: 1-N apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: USDname namespace: metallb-system spec: peerAddress: USDip # eg 192.168.1.2 peerASN: USDpeerasn # eg 64501 myASN: USDmyasn # eg 64500 routerID: USDid # eg 10.10.10.10 bfdProfile: bfdprofile passwordSecret: {} community.yaml --- apiVersion: metallb.io/v1beta1 kind: Community metadata: name: bgpcommunity namespace: metallb-system spec: communities: [USDcomm] metallb.yaml # required # count: 1 apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: nodeSelector: node-role.kubernetes.io/worker: "" metallbNS.yaml # required: yes # count: 1 --- apiVersion: v1 kind: Namespace metadata: name: metallb-system annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: "true" metallbOperGroup.yaml # required: yes # count: 1 --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: metallb-operator namespace: metallb-system metallbSubscription.yaml # required: yes # count: 1 --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: metallb-operator-sub namespace: metallb-system spec: channel: stable name: metallb-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic status: state: AtLatestKnown mc_rootless_pods_selinux.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-setsebool spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Set SELinux boolean for tap cni plugin Before=kubelet.service [Service] Type=oneshot ExecStart=/sbin/setsebool container_use_devices=on RemainAfterExit=true [Install] WantedBy=multi-user.target graphical.target enabled: true name: setsebool.service NMState.yaml apiVersion: nmstate.io/v1 kind: NMState metadata: name: nmstate spec: {} NMStateNS.yaml apiVersion: v1 kind: Namespace metadata: name: openshift-nmstate annotations: workload.openshift.io/allowed: management NMStateOperGroup.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-nmstate namespace: openshift-nmstate spec: targetNamespaces: - openshift-nmstate NMStateSubscription.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kubernetes-nmstate-operator namespace: openshift-nmstate spec: channel: "stable" name: kubernetes-nmstate-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic status: state: AtLatestKnown sriovNetwork.yaml # optional (though expected for all) # count: 0-N apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: USDname # eg sriov-network-abcd 
namespace: openshift-sriov-network-operator spec: capabilities: "USDcapabilities" # eg '{"mac": true, "ips": true}' ipam: "USDipam" # eg '{ "type": "host-local", "subnet": "10.3.38.0/24" }' networkNamespace: USDnns # eg cni-test resourceName: USDresource # eg resourceTest sriovNetworkNodePolicy.yaml # optional (though expected in all deployments) # count: 0-N apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: USDname namespace: openshift-sriov-network-operator spec: {} # USDspec # eg #deviceType: netdevice #nicSelector: # deviceID: "1593" # pfNames: # - ens8f0np0#0-9 # rootDevices: # - 0000:d8:00.0 # vendor: "8086" #nodeSelector: # kubernetes.io/hostname: host.sample.lab #numVfs: 20 #priority: 99 #excludeTopology: true #resourceName: resourceNameABCD SriovOperatorConfig.yaml # required # count: 1 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: node-role.kubernetes.io/worker: "" enableInjector: true enableOperatorWebhook: true disableDrain: false logLevel: 2 SriovSubscription.yaml # required: yes # count: 1 apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator spec: channel: "stable" name: sriov-network-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic status: state: AtLatestKnown SriovSubscriptionNS.yaml # required: yes # count: 1 apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator annotations: workload.openshift.io/allowed: management SriovSubscriptionOperGroup.yaml # required: yes # count: 1 apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator 3.3.4.7.4. 
Scheduling reference YAML nrop.yaml # Optional # count: 1 apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: nodeGroups: - config: # Periodic is the default setting infoRefreshMode: Periodic machineConfigPoolSelector: matchLabels: # This label must match the pool(s) you want to run NUMA-aligned workloads pools.operator.machineconfiguration.openshift.io/worker: "" NROPSubscription.yaml # required # count: 1 apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: numaresources-operator namespace: openshift-numaresources spec: channel: "4.14" name: numaresources-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace NROPSubscriptionNS.yaml # required: yes # count: 1 apiVersion: v1 kind: Namespace metadata: name: openshift-numaresources annotations: workload.openshift.io/allowed: management NROPSubscriptionOperGroup.yaml # required: yes # count: 1 apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: numaresources-operator namespace: openshift-numaresources spec: targetNamespaces: - openshift-numaresources sched.yaml # Optional # count: 1 apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: #cacheResyncPeriod: "0" # Image spec should be the latest for the release imageSpec: "registry.redhat.io/openshift4/noderesourcetopology-scheduler-rhel9:v4.14.0" #logLevel: "Trace" schedulerName: topo-aware-scheduler Scheduler.yaml apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: # non-schedulable control plane is the default. This ensures # compliance. mastersSchedulable: false policy: name: "" 3.3.4.7.5. Other reference YAML control-plane-load-kernel-modules.yaml # optional # count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 40-load-kernel-modules-control-plane spec: config: # Release info found in https://github.com/coreos/butane/releases ignition: version: 3.2.0 storage: files: - contents: source: data:, mode: 420 overwrite: true path: /etc/modprobe.d/kernel-blacklist.conf - contents: source: data:text/plain;charset=utf-8;base64,aXBfZ3JlCmlwNl90YWJsZXMKaXA2dF9SRUpFQ1QKaXA2dGFibGVfZmlsdGVyCmlwNnRhYmxlX21hbmdsZQppcHRhYmxlX2ZpbHRlcgppcHRhYmxlX21hbmdsZQppcHRhYmxlX25hdAp4dF9tdWx0aXBvcnQKeHRfb3duZXIKeHRfUkVESVJFQ1QKeHRfc3RhdGlzdGljCnh0X1RDUE1TUwo= mode: 420 overwrite: true path: /etc/modules-load.d/kernel-load.conf sctp_module_mc.yaml # optional # count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: load-sctp-module spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8;base64,c2N0cA== filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf worker-load-kernel-modules.yaml # optional # count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 40-load-kernel-modules-worker spec: config: # Release info found in https://github.com/coreos/butane/releases ignition: version: 3.2.0 storage: files: - contents: source: data:, mode: 420 overwrite: true path: /etc/modprobe.d/kernel-blacklist.conf - contents: source: 
data:text/plain;charset=utf-8;base64,aXBfZ3JlCmlwNl90YWJsZXMKaXA2dF9SRUpFQ1QKaXA2dGFibGVfZmlsdGVyCmlwNnRhYmxlX21hbmdsZQppcHRhYmxlX2ZpbHRlcgppcHRhYmxlX21hbmdsZQppcHRhYmxlX25hdAp4dF9tdWx0aXBvcnQKeHRfb3duZXIKeHRfUkVESVJFQ1QKeHRfc3RhdGlzdGljCnh0X1RDUE1TUwo= mode: 420 overwrite: true path: /etc/modules-load.d/kernel-load.conf ClusterLogForwarder.yaml # required # count: 1 apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - type: "kafka" name: kafka-open url: tcp://10.11.12.13:9092/test pipelines: - inputRefs: - infrastructure #- application - audit labels: label1: test1 label2: test2 label3: test3 label4: test4 label5: test5 name: all-to-default outputRefs: - kafka-open ClusterLogging.yaml # required # count: 1 apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: type: vector managementState: Managed ClusterLogNS.yaml --- apiVersion: v1 kind: Namespace metadata: name: openshift-logging annotations: workload.openshift.io/allowed: management ClusterLogOperGroup.yaml --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging spec: targetNamespaces: - openshift-logging ClusterLogSubscription.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging spec: channel: "stable" name: cluster-logging source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic status: state: AtLatestKnown catalog-source.yaml # required # count: 1..N apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: redhat-operators-disconnected namespace: openshift-marketplace spec: displayName: Red Hat Disconnected Operators Catalog image: USDimageUrl publisher: Red Hat sourceType: grpc # updateStrategy: # registryPoll: # interval: 1h status: connectionState: lastObservedState: READY icsp.yaml # required # count: 1 apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: disconnected-internal-icsp spec: repositoryDigestMirrors: [] # - USDmirrors operator-hub.yaml # required # count: 1 apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster spec: disableAllDefaultSources: true monitoring-config-cm.yaml # optional # count: 1 --- apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: 15d volumeClaimTemplate: spec: storageClassName: ocs-external-storagecluster-ceph-rbd resources: requests: storage: 100Gi alertmanagerMain: volumeClaimTemplate: spec: storageClassName: ocs-external-storagecluster-ceph-rbd resources: requests: storage: 20Gi PerformanceProfile.yaml # required # count: 1 apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: USDname annotations: # Some pods want the kernel stack to ignore IPv6 router Advertisement. kubeletconfig.experimental: | {"allowedUnsafeSysctls":["net.ipv6.conf.all.accept_ra"]} spec: cpu: # node0 CPUs: 0-17,36-53 # node1 CPUs: 18-34,54-71 # siblings: (0,36), (1,37)... # we want to reserve the first Core of each NUMA socket # # no CPU left behind! all-cpus == isolated + reserved isolated: USDisolated # eg 1-17,19-35,37-53,55-71 reserved: USDreserved # eg 0,18,36,54 # Guaranteed QoS pods will disable IRQ balancing for cores allocated to the pod. 
# default value of globallyDisableIrqLoadBalancing is false globallyDisableIrqLoadBalancing: false hugepages: defaultHugepagesSize: 1G pages: # 32GB per numa node - count: USDcount # eg 64 size: 1G machineConfigPoolSelector: # For SNO: machineconfiguration.openshift.io/role: 'master' pools.operator.machineconfiguration.openshift.io/worker: '' nodeSelector: # For SNO: node-role.kubernetes.io/master: "" node-role.kubernetes.io/worker: "" workloadHints: realTime: false highPowerConsumption: false perPodPowerManagement: true realTimeKernel: enabled: false numa: # All guaranteed QoS containers get resources from a single NUMA node topologyPolicy: "single-numa-node" net: userLevelNetworking: false
3.3.5. Telco core reference configuration software specifications
The following information describes the telco core reference design specification (RDS) validated software versions.
3.3.5.1. Software stack
The following software versions were used for validating the telco core reference design specification:
Table 3.14. Software versions for validation
Component Software version
Cluster Logging Operator 5.9.1
OpenShift Data Foundation 4.16
SR-IOV Operator 4.16
MetalLB 4.16
NMState Operator 4.16
NUMA-aware scheduler 4.16
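To cross-check a running cluster against these validated versions, you can list the installed Operator ClusterServiceVersions and compare them with Table 3.14. The following is a minimal sketch, not part of the reference configuration; it assumes a logged-in oc client with cluster-admin access and the default namespaces used by the reference CRs above. Adjust the namespace list if your deployment differs.
# Illustrative check only: print the installed Operator CSV versions per namespace
# and compare the VERSION column with Table 3.14.
for ns in openshift-logging openshift-storage openshift-sriov-network-operator \
          metallb-system openshift-nmstate openshift-numaresources; do
  echo "== ${ns} =="
  oc get csv -n "${ns}" -o custom-columns=NAME:.metadata.name,VERSION:.spec.version
done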
[ "query=avg_over_time(pod:container_cpu_usage:sum{namespace=\"openshift-kube-apiserver\"}[30m])", "nodes: - hostName: \"example-node1.example.com\" ironicInspect: \"enabled\"", "apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: storage-lvmcluster namespace: openshift-storage annotations: ran.openshift.io/ztp-deploy-wave: \"10\" spec: storage: deviceClasses: - name: vg1 thinPoolConfig: name: thin-pool-1 sizePercent: 90 overprovisionRatio: 10", "cpuPartitioningMode: AllNodes", "apiVersion: ran.openshift.io/v1alpha1 kind: PreCachingConfig metadata: name: example-config namespace: example-ns spec: additionalImages: - quay.io/foobar/application1@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e - quay.io/foobar/application2@sha256:3d5800123dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47adf - quay.io/foobar/applicationN@sha256:4fe1334adfafadsf987123adfffdaf1243340adfafdedga0991234afdadfs spaceRequired: 45 GiB 1 overrides: preCacheImage: quay.io/test_images/pre-cache:latest platformImage: quay.io/openshift-release-dev/ocp-release@sha256:3d5800990dee7cd4727d3fe238a97e2d2976d3808fc925ada29c559a47e2e operatorsIndexes: - registry.example.com:5000/custom-redhat-operators:1.0.0 operatorsPackagesAndChannels: - local-storage-operator: stable - ptp-operator: stable - sriov-network-operator: stable excludePrecachePatterns: 2 - aws - vsphere", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging annotations: {} spec: outputs: USDoutputs pipelines: USDpipelines #apiVersion: \"logging.openshift.io/v1\" #kind: ClusterLogForwarder #metadata: name: instance namespace: openshift-logging #spec: outputs: - type: \"kafka\" name: kafka-open url: tcp://10.46.55.190:9092/test pipelines: - inputRefs: - audit - infrastructure labels: label1: test1 label2: test2 label3: test3 label4: test4 name: all-to-default outputRefs: - kafka-open", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging annotations: {} spec: managementState: \"Managed\" collection: type: \"vector\"", "--- apiVersion: v1 kind: Namespace metadata: name: openshift-logging annotations: workload.openshift.io/allowed: management", "--- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging annotations: {} spec: targetNamespaces: - openshift-logging", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging annotations: {} spec: channel: \"stable\" name: cluster-logging source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "apiVersion: lca.openshift.io/v1 kind: ImageBasedUpgrade metadata: name: upgrade spec: stage: Idle # When setting `stage: Prep`, remember to add the seed image reference object below. 
# seedImageRef: # image: USDimage # version: USDversion", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lifecycle-agent namespace: openshift-lifecycle-agent annotations: {} spec: channel: \"stable\" name: lifecycle-agent source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "apiVersion: v1 kind: Namespace metadata: name: openshift-lifecycle-agent annotations: workload.openshift.io/allowed: management labels: kubernetes.io/metadata.name: openshift-lifecycle-agent", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: lifecycle-agent namespace: openshift-lifecycle-agent annotations: {} spec: targetNamespaces: - openshift-lifecycle-agent", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: {} name: example-storage-class provisioner: kubernetes.io/no-provisioner reclaimPolicy: Delete", "apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" annotations: {} spec: logLevel: Normal managementState: Managed storageClassDevices: # The list of storage classes and associated devicePaths need to be specified like this example: - storageClassName: \"example-storage-class\" volumeMode: Filesystem fsType: xfs # The below must be adjusted to the hardware. # For stability and reliability, it's recommended to use persistent # naming conventions for devicePaths, such as /dev/disk/by-path. devicePaths: - /dev/disk/by-path/pci-0000:05:00.0-nvme-1 #--- ## How to verify ## 1. Create a PVC apiVersion: v1 kind: PersistentVolumeClaim metadata: name: local-pvc-name spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi storageClassName: example-storage-class #--- ## 2. Create a pod that mounts it apiVersion: v1 kind: Pod metadata: labels: run: busybox name: busybox spec: containers: - image: quay.io/quay/busybox:latest name: busybox resources: {} command: [\"/bin/sh\", \"-c\", \"sleep infinity\"] volumeMounts: - name: local-pvc mountPath: /data volumes: - name: local-pvc persistentVolumeClaim: claimName: local-pvc-name dnsPolicy: ClusterFirst restartPolicy: Always ## 3. 
Run the pod on the cluster and verify the size and access of the `/data` mount", "apiVersion: v1 kind: Namespace metadata: name: openshift-local-storage annotations: workload.openshift.io/allowed: management", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-local-storage namespace: openshift-local-storage annotations: {} spec: targetNamespaces: - openshift-local-storage", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage annotations: {} spec: channel: \"stable\" name: local-storage-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "This CR verifies the installation/upgrade of the Sriov Network Operator apiVersion: operators.coreos.com/v1 kind: Operator metadata: name: lvms-operator.openshift-storage annotations: {} status: components: refs: - kind: Subscription namespace: openshift-storage conditions: - type: CatalogSourcesUnhealthy status: \"False\" - kind: InstallPlan namespace: openshift-storage conditions: - type: Installed status: \"True\" - kind: ClusterServiceVersion namespace: openshift-storage conditions: - type: Succeeded status: \"True\" reason: InstallSucceeded", "apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: lvmcluster namespace: openshift-storage annotations: {} spec: {} #example: creating a vg1 volume group leveraging all available disks on the node except the installation disk. storage: deviceClasses: - name: vg1 thinPoolConfig: name: thin-pool-1 sizePercent: 90 overprovisionRatio: 10", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lvms-operator namespace: openshift-storage annotations: {} spec: channel: \"stable\" name: lvms-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "apiVersion: v1 kind: Namespace metadata: name: openshift-storage labels: workload.openshift.io/allowed: \"management\" openshift.io/cluster-monitoring: \"true\" annotations: {}", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: lvms-operator-operatorgroup namespace: openshift-storage annotations: {} spec: targetNamespaces: - openshift-storage", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: # if you change this name make sure the 'include' line in TunedPerformancePatch.yaml # matches this name: include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # Also in file 'validatorCRs/informDuValidator.yaml': # name: 50-performance-USD{PerformanceProfile.metadata.name} name: openshift-node-performance-profile annotations: ran.openshift.io/reference-configuration: \"ran-du.redhat.com\" spec: additionalKernelArgs: - \"rcupdate.rcu_normal_after_boot=0\" - \"efi=runtime\" - \"vfio_pci.enable_sriov=1\" - \"vfio_pci.disable_idle_d3=1\" - \"module_blacklist=irdma\" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: \"\" nodeSelector: node-role.kubernetes.io/USDmcp: '' numa: topologyPolicy: \"restricted\" # To use the standard (non-realtime) kernel, set enabled to false realTimeKernel: enabled: true workloadHints: # WorkloadHints defines the set of upper level flags for different type of workloads. 
# See https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/performance_profile.md#workloadhints # for detailed descriptions of each item. # The configuration below is set for a low latency, performance mode. realTime: true highPowerConsumption: false perPodPowerManagement: false", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: performance-patch namespace: openshift-cluster-node-tuning-operator annotations: {} spec: profile: - name: performance-patch # Please note: # - The 'include' line must match the associated PerformanceProfile name, following below pattern # include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # - When using the standard (non-realtime) kernel, remove the kernel.timer_migration override from # the [sysctl] section and remove the entire section if it is empty. data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* group.ice-gnss=0:f:10:*:ice-gnss.* group.ice-dplls=0:f:10:*:ice-dplls.* [service] service.stalld=start,enable service.chronyd=stop,disable recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"USDmcp\" priority: 19 profile: performance-patch", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary namespace: openshift-ptp annotations: {} spec: profile: - name: \"boundary\" ptp4lOpts: \"-2 --summary_interval -4\" phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | # The interface name is hardware-specific [USDiface_slave] masterOnly 0 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 248 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 135 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # 
# Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: \"boundary\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-ha namespace: openshift-ptp annotations: {} spec: profile: - name: \"boundary-ha\" ptp4lOpts: \" \" phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" haProfiles: \"USDprofile1,USDprofile2\" recommend: - profile: \"boundary-ha\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "The grandmaster profile is provided for testing only It is not installed on production clusters apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: grandmaster namespace: openshift-ptp annotations: {} spec: profile: - name: \"grandmaster\" # The interface name is hardware-specific interface: USDinterface ptp4lOpts: \"-2 --summary_interval -4\" phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: \"grandmaster\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: du-ptp-slave namespace: openshift-ptp annotations: {} spec: profile: - name: \"slave\" # The interface name is hardware-specific interface: USDinterface ptp4lOpts: \"-2 -s --summary_interval -4\" phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | [global] # # Default Data 
Set # twoStepFlag 1 slaveOnly 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: \"slave\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "apiVersion: ptp.openshift.io/v1 kind: PtpOperatorConfig metadata: name: default namespace: openshift-ptp annotations: {} spec: daemonNodeSelector: node-role.kubernetes.io/USDmcp: \"\" ptpEventConfig: enableEventPublisher: true transportHost: \"http://ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043\"", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary namespace: openshift-ptp annotations: {} spec: profile: - name: \"boundary\" ptp4lOpts: \"-2\" phc2sysOpts: \"-a -r -n 24\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | # The interface name is hardware-specific [USDiface_slave] masterOnly 0 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 248 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 
3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 135 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: \"boundary\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "The grandmaster profile is provided for testing only It is not installed on production clusters In this example two cards USDiface_nic1 and USDiface_nic2 are connected via SMA1 ports by a cable and USDiface_nic2 receives 1PPS signals from USDiface_nic1 apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: grandmaster namespace: openshift-ptp annotations: {} spec: profile: - name: \"grandmaster\" ptp4lOpts: \"-2 --summary_interval -4\" phc2sysOpts: -r -u 0 -m -w -N 8 -R 16 -s USDiface_nic1 -n 24 ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" plugins: e810: enableDefaultConfig: false settings: LocalMaxHoldoverOffSet: 1500 LocalHoldoverTimeout: 14400 MaxInSpecOffset: 100 pins: USDe810_pins # \"USDiface_nic1\": # \"U.FL2\": \"0 2\" # \"U.FL1\": \"0 1\" # \"SMA2\": \"0 2\" # \"SMA1\": \"2 1\" # \"USDiface_nic2\": # \"U.FL2\": \"0 2\" # \"U.FL1\": \"0 1\" # \"SMA2\": \"0 2\" # \"SMA1\": \"1 1\" ublxCmds: - args: #ubxtool -P 29.20 -z CFG-HW-ANT_CFG_VOLTCTRL,1 - \"-P\" - \"29.20\" - \"-z\" - \"CFG-HW-ANT_CFG_VOLTCTRL,1\" reportOutput: false - args: #ubxtool -P 29.20 -e GPS - \"-P\" - \"29.20\" - \"-e\" - \"GPS\" reportOutput: false - args: #ubxtool -P 29.20 -d Galileo - \"-P\" - \"29.20\" - \"-d\" - \"Galileo\" reportOutput: false - args: #ubxtool -P 29.20 -d GLONASS - \"-P\" - \"29.20\" - \"-d\" - \"GLONASS\" reportOutput: false - args: #ubxtool -P 29.20 -d BeiDou - \"-P\" - \"29.20\" - \"-d\" - \"BeiDou\" reportOutput: false - args: #ubxtool -P 29.20 -d SBAS - \"-P\" - \"29.20\" - \"-d\" - \"SBAS\" reportOutput: false - args: #ubxtool -P 29.20 -t -w 5 -v 1 -e SURVEYIN,600,50000 - \"-P\" - \"29.20\" - \"-t\" - \"-w\" - \"5\" - \"-v\" - \"1\" - \"-e\" - \"SURVEYIN,600,50000\" reportOutput: true - args: #ubxtool -P 29.20 -p MON-HW - \"-P\" - \"29.20\" - \"-p\" - \"MON-HW\" reportOutput: true - args: #ubxtool -P 29.20 -p CFG-MSG,1,38,300 - \"-P\" - \"29.20\" - \"-p\" - \"CFG-MSG,1,38,300\" reportOutput: true ts2phcOpts: \" \" ts2phcConf: | [nmea] ts2phc.master 1 [global] use_syslog 0 verbose 1 logging_level 7 ts2phc.pulsewidth 100000000 #cat /dev/GNSS to find available serial port #example value of gnss_serialport is /dev/ttyGNSS_1700_0 ts2phc.nmea_serialport USDgnss_serialport leapfile /usr/share/zoneinfo/leap-seconds.list [USDiface_nic1] ts2phc.extts_polarity rising ts2phc.extts_correction 0 [USDiface_nic2] ts2phc.master 0 ts2phc.extts_polarity rising #this is a measured value in 
nanoseconds to compensate for SMA cable delay ts2phc.extts_correction -10 ptp4lConf: | [USDiface_nic1] masterOnly 1 [USDiface_nic1_1] masterOnly 1 [USDiface_nic1_2] masterOnly 1 [USDiface_nic1_3] masterOnly 1 [USDiface_nic2] masterOnly 1 [USDiface_nic2_1] masterOnly 1 [USDiface_nic2_2] masterOnly 1 [USDiface_nic2_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 6 clockAccuracy 0x27 offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval 0 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval -4 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 1 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0x20 recommend: - profile: \"grandmaster\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-ha namespace: openshift-ptp annotations: {} spec: profile: - name: \"boundary-ha\" ptp4lOpts: \"\" phc2sysOpts: \"-a -r -n 24\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" haProfiles: \"USDprofile1,USDprofile2\" recommend: - profile: \"boundary-ha\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "The grandmaster profile is provided for testing only It is not installed on production clusters apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: grandmaster namespace: openshift-ptp annotations: {} spec: profile: - name: \"grandmaster\" ptp4lOpts: \"-2 --summary_interval -4\" phc2sysOpts: -r -u 0 -m -w -N 8 -R 16 -s USDiface_master -n 24 ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" plugins: e810: enableDefaultConfig: false settings: LocalMaxHoldoverOffSet: 1500 LocalHoldoverTimeout: 14400 MaxInSpecOffset: 100 pins: USDe810_pins # \"USDiface_master\": # \"U.FL2\": \"0 2\" # \"U.FL1\": \"0 1\" # \"SMA2\": \"0 2\" # \"SMA1\": \"0 1\" ublxCmds: - args: #ubxtool -P 29.20 -z CFG-HW-ANT_CFG_VOLTCTRL,1 - \"-P\" - \"29.20\" - \"-z\" - \"CFG-HW-ANT_CFG_VOLTCTRL,1\" reportOutput: false - args: #ubxtool -P 29.20 -e GPS - \"-P\" 
- \"29.20\" - \"-e\" - \"GPS\" reportOutput: false - args: #ubxtool -P 29.20 -d Galileo - \"-P\" - \"29.20\" - \"-d\" - \"Galileo\" reportOutput: false - args: #ubxtool -P 29.20 -d GLONASS - \"-P\" - \"29.20\" - \"-d\" - \"GLONASS\" reportOutput: false - args: #ubxtool -P 29.20 -d BeiDou - \"-P\" - \"29.20\" - \"-d\" - \"BeiDou\" reportOutput: false - args: #ubxtool -P 29.20 -d SBAS - \"-P\" - \"29.20\" - \"-d\" - \"SBAS\" reportOutput: false - args: #ubxtool -P 29.20 -t -w 5 -v 1 -e SURVEYIN,600,50000 - \"-P\" - \"29.20\" - \"-t\" - \"-w\" - \"5\" - \"-v\" - \"1\" - \"-e\" - \"SURVEYIN,600,50000\" reportOutput: true - args: #ubxtool -P 29.20 -p MON-HW - \"-P\" - \"29.20\" - \"-p\" - \"MON-HW\" reportOutput: true - args: #ubxtool -P 29.20 -p CFG-MSG,1,38,300 - \"-P\" - \"29.20\" - \"-p\" - \"CFG-MSG,1,38,300\" reportOutput: true ts2phcOpts: \" \" ts2phcConf: | [nmea] ts2phc.master 1 [global] use_syslog 0 verbose 1 logging_level 7 ts2phc.pulsewidth 100000000 #cat /dev/GNSS to find available serial port #example value of gnss_serialport is /dev/ttyGNSS_1700_0 ts2phc.nmea_serialport USDgnss_serialport leapfile /usr/share/zoneinfo/leap-seconds.list [USDiface_master] ts2phc.extts_polarity rising ts2phc.extts_correction 0 ptp4lConf: | [USDiface_master] masterOnly 1 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 6 clockAccuracy 0x27 offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval 0 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval -4 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0x20 recommend: - profile: \"grandmaster\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: ordinary namespace: openshift-ptp annotations: {} spec: profile: - name: \"ordinary\" # The interface name is hardware-specific interface: USDinterface ptp4lOpts: \"-2 -s\" phc2sysOpts: \"-a -r -n 24\" 
ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: \"ordinary\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"", "--- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ptp-operator-subscription namespace: openshift-ptp annotations: {} spec: channel: \"stable\" name: ptp-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "--- apiVersion: v1 kind: Namespace metadata: name: openshift-ptp annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: \"true\"", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ptp-operators namespace: openshift-ptp annotations: {} spec: targetNamespaces: - openshift-ptp", "apiVersion: v1 kind: Namespace metadata: name: vran-acceleration-operators annotations: {}", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: vran-operators namespace: vran-acceleration-operators annotations: {} spec: targetNamespaces: - vran-acceleration-operators", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-fec-subscription namespace: vran-acceleration-operators annotations: {} spec: channel: stable name: sriov-fec source: certified-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "apiVersion: sriovfec.intel.com/v2 kind: SriovFecClusterConfig metadata: name: config namespace: vran-acceleration-operators annotations: {} spec: drainSkip: USDdrainSkip # true if SNO, false by default priority: 1 nodeSelector: 
node-role.kubernetes.io/master: \"\" acceleratorSelector: pciAddress: USDpciAddress physicalFunction: pfDriver: \"vfio-pci\" vfDriver: \"vfio-pci\" vfAmount: 16 bbDevConfig: USDbbDevConfig #Recommended configuration for Intel ACC100 (Mount Bryce) FPGA here: https://github.com/smart-edge-open/openshift-operator/blob/main/spec/openshift-sriov-fec-operator.md#sample-cr-for-wireless-fec-acc100 #Recommended configuration for Intel N3000 FPGA here: https://github.com/smart-edge-open/openshift-operator/blob/main/spec/openshift-sriov-fec-operator.md#sample-cr-for-wireless-fec-n3000", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: \"\" namespace: openshift-sriov-network-operator annotations: {} spec: # resourceName: \"\" networkNamespace: openshift-sriov-network-operator vlan: \"\" spoofChk: \"\" ipam: \"\" linkState: \"\" maxTxRate: \"\" minTxRate: \"\" vlanQoS: \"\" trust: \"\" capabilities: \"\"", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: USDname namespace: openshift-sriov-network-operator annotations: {} spec: # The attributes for Mellanox/Intel based NICs as below. # deviceType: netdevice/vfio-pci # isRdma: true/false deviceType: USDdeviceType isRdma: USDisRdma nicSelector: # The exact physical function name must match the hardware used pfNames: [USDpfNames] nodeSelector: node-role.kubernetes.io/USDmcp: \"\" numVfs: USDnumVfs priority: USDpriority resourceName: USDresourceName", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator annotations: {} spec: configDaemonNodeSelector: \"node-role.kubernetes.io/USDmcp\": \"\" # Injector and OperatorWebhook pods can be disabled (set to \"false\") below # to reduce the number of management pods. It is recommended to start with the # webhook and injector pods enabled, and only disable them after verifying the # correctness of user manifests. # If the injector is disabled, containers using sr-iov resources must explicitly assign # them in the \"requests\"/\"limits\" section of the container spec, for example: # containers: # - name: my-sriov-workload-container # resources: # limits: # openshift.io/<resource_name>: \"1\" # requests: # openshift.io/<resource_name>: \"1\" enableInjector: false enableOperatorWebhook: false logLevel: 0", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator annotations: {} spec: configDaemonNodeSelector: \"node-role.kubernetes.io/USDmcp\": \"\" # Injector and OperatorWebhook pods can be disabled (set to \"false\") below # to reduce the number of management pods. It is recommended to start with the # webhook and injector pods enabled, and only disable them after verifying the # correctness of user manifests. 
# If the injector is disabled, containers using sr-iov resources must explicitly assign # them in the \"requests\"/\"limits\" section of the container spec, for example: # containers: # - name: my-sriov-workload-container # resources: # limits: # openshift.io/<resource_name>: \"1\" # requests: # openshift.io/<resource_name>: \"1\" enableInjector: false enableOperatorWebhook: false # Disable drain is needed for Single Node Openshift disableDrain: true logLevel: 0", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator annotations: {} spec: channel: \"stable\" name: sriov-network-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown", "apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator annotations: workload.openshift.io/allowed: management", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator annotations: {} spec: targetNamespaces: - openshift-sriov-network-operator", "example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-sno --- apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"example-sno\" namespace: \"example-sno\" spec: baseDomain: \"example.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"openshift-4.16\" sshPublicKey: \"ssh-rsa AAAA...\" clusters: - clusterName: \"example-sno\" networkType: \"OVNKubernetes\" # installConfigOverrides is a generic way of passing install-config # parameters through the siteConfig. The 'capabilities' field configures # the composable openshift feature. In this 'capabilities' setting, we # remove all the optional set of components. # Notes: # - OperatorLifecycleManager is needed for 4.15 and later # - NodeTuning is needed for 4.13 and later, not for 4.12 and earlier # - Ingress is needed for 4.16 and later installConfigOverrides: | { \"capabilities\": { \"baselineCapabilitySet\": \"None\", \"additionalEnabledCapabilities\": [ \"NodeTuning\", \"OperatorLifecycleManager\", \"Ingress\" ] } } # It is strongly recommended to include crun manifests as part of the additional install-time manifests for 4.13+. # The crun manifests can be obtained from source-crs/optional-extra-manifest/ and added to the git repo ie.sno-extra-manifest. # extraManifestPath: sno-extra-manifest clusterLabels: # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples du-profile: \"latest\" # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples in ../policygentemplates: # ../policygentemplates/common-ranGen.yaml will apply to all clusters with 'common: true' common: true # ../policygentemplates/group-du-sno-ranGen.yaml will apply to all clusters with 'group-du-sno: \"\"' group-du-sno: \"\" # ../policygentemplates/example-sno-site.yaml will apply to all clusters with 'sites: \"example-sno\"' # Normally this should match or contain the cluster name so it only applies to a single cluster sites: \"example-sno\" clusterNetwork: - cidr: 1001:1::/48 hostPrefix: 64 machineNetwork: - cidr: 1111:2222:3333:4444::/64 serviceNetwork: - 1001:2::/112 additionalNTPSources: - 1111:2222:3333:4444::2 # Initiates the cluster for workload partitioning. 
Setting specific reserved/isolated CPUSets is done via PolicyTemplate # please see Workload Partitioning Feature for a complete guide. cpuPartitioningMode: AllNodes # Optionally; This can be used to override the KlusterletAddonConfig that is created for this cluster: #crTemplates: # KlusterletAddonConfig: \"KlusterletAddonConfigOverride.yaml\" nodes: - hostName: \"example-node1.example.com\" role: \"master\" # Optionally; This can be used to configure desired BIOS setting on a host: #biosConfigRef: # filePath: \"example-hw.profile\" bmcAddress: \"idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1\" bmcCredentialsName: name: \"example-node1-bmh-secret\" bootMACAddress: \"AA:BB:CC:DD:EE:11\" # Use UEFISecureBoot to enable secure boot bootMode: \"UEFI\" rootDeviceHints: deviceName: \"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\" # disk partition at `/var/lib/containers` with ignitionConfigOverride. Some values must be updated. See DiskPartitionContainer.md for more details ignitionConfigOverride: | { \"ignition\": { \"version\": \"3.2.0\" }, \"storage\": { \"disks\": [ { \"device\": \"/dev/disk/by-id/wwn-0x6b07b250ebb9d0002a33509f24af1f62\", \"partitions\": [ { \"label\": \"var-lib-containers\", \"sizeMiB\": 0, \"startMiB\": 250000 } ], \"wipeTable\": false } ], \"filesystems\": [ { \"device\": \"/dev/disk/by-partlabel/var-lib-containers\", \"format\": \"xfs\", \"mountOptions\": [ \"defaults\", \"prjquota\" ], \"path\": \"/var/lib/containers\", \"wipeFilesystem\": true } ] }, \"systemd\": { \"units\": [ { \"contents\": \"# Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\", \"enabled\": true, \"name\": \"var-lib-containers.mount\" } ] } } nodeNetwork: interfaces: - name: eno1 macAddress: \"AA:BB:CC:DD:EE:11\" config: interfaces: - name: eno1 type: ethernet state: up ipv4: enabled: false ipv6: enabled: true address: # For SNO sites with static IP addresses, the node-specific, # API and Ingress IPs should all be the same and configured on # the interface - ip: 1111:2222:3333:4444::aaaa:1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster annotations: {} spec: disableNetworkDiagnostics: true", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring annotations: {} data: config.yaml: | alertmanagerMain: enabled: false telemeterClient: enabled: false prometheusK8s: retention: 24h", "Taken from https://github.com/operator-framework/operator-marketplace/blob/53c124a3f0edfd151652e1f23c87dd39ed7646bb/manifests/01_namespace.yaml Update it as the source evolves. 
apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: \"\" workload.openshift.io/allowed: \"management\" labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: baseline pod-security.kubernetes.io/enforce-version: v1.25 pod-security.kubernetes.io/audit: baseline pod-security.kubernetes.io/audit-version: v1.25 pod-security.kubernetes.io/warn: baseline pod-security.kubernetes.io/warn-version: v1.25 name: \"openshift-marketplace\"", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: default-cat-source namespace: openshift-marketplace annotations: target.workload.openshift.io/management: '{\"effect\": \"PreferredDuringScheduling\"}' spec: displayName: default-cat-source image: USDimageUrl publisher: Red Hat sourceType: grpc updateStrategy: registryPoll: interval: 1h status: connectionState: lastObservedState: READY", "apiVersion: v1 kind: ConfigMap metadata: name: collect-profiles-config namespace: openshift-operator-lifecycle-manager annotations: {} data: pprof-config.yaml: | disabled: True", "apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: disconnected-internal-icsp annotations: {} spec: repositoryDigestMirrors: - USDmirrors", "apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster annotations: {} spec: disableAllDefaultSources: true", "apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-master spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/master: \"\" containerRuntimeConfig: defaultRuntime: crun", "apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-worker spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" containerRuntimeConfig: defaultRuntime: crun", "Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-crio-disable-wipe-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW9dCmNsZWFuX3NodXRkb3duX2ZpbGUgPSAiIgo= mode: 420 path: /etc/crio/crio.conf.d/99-crio-disable-wipe.toml", "Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-crio-disable-wipe-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW9dCmNsZWFuX3NodXRkb3duX2ZpbGUgPSAiIgo= mode: 420 path: /etc/crio/crio.conf.d/99-crio-disable-wipe.toml", "Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 06-kdump-enable-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M", "Automatically generated by extra-manifests-builder Do not make changes directly. 
apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 06-kdump-enable-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M", "Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: container-mount-namespace-and-kubelet-conf-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo= mode: 493 path: /usr/local/bin/extractExecStart - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo= mode: 493 path: /usr/local/bin/nsenterCmns systemd: units: - contents: | [Unit] Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts [Service] Type=oneshot RemainAfterExit=yes RuntimeDirectory=container-mount-namespace Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace Environment=BIND_POINT=%t/container-mount-namespace/mnt ExecStartPre=bash -c \"findmnt USD{RUNTIME_DIRECTORY} || mount --make-unbindable --bind USD{RUNTIME_DIRECTORY} USD{RUNTIME_DIRECTORY}\" ExecStartPre=touch USD{BIND_POINT} ExecStart=unshare --mount=USD{BIND_POINT} --propagation slave mount --make-rshared / ExecStop=umount -R USD{RUNTIME_DIRECTORY} name: container-mount-namespace.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART}\" name: 90-container-mount-namespace.conf name: crio.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART 
EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART} --housekeeping-interval=30s\" name: 90-container-mount-namespace.conf - contents: | [Service] Environment=\"OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s\" Environment=\"OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s\" name: 30-kubelet-interval-tuning.conf name: kubelet.service", "Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: container-mount-namespace-and-kubelet-conf-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo= mode: 493 path: /usr/local/bin/extractExecStart - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo= mode: 493 path: /usr/local/bin/nsenterCmns systemd: units: - contents: | [Unit] Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts [Service] Type=oneshot RemainAfterExit=yes RuntimeDirectory=container-mount-namespace Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace Environment=BIND_POINT=%t/container-mount-namespace/mnt ExecStartPre=bash -c \"findmnt USD{RUNTIME_DIRECTORY} || mount --make-unbindable --bind USD{RUNTIME_DIRECTORY} USD{RUNTIME_DIRECTORY}\" ExecStartPre=touch USD{BIND_POINT} ExecStart=unshare --mount=USD{BIND_POINT} --propagation slave mount --make-rshared / ExecStop=umount -R USD{RUNTIME_DIRECTORY} name: container-mount-namespace.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART}\" name: 90-container-mount-namespace.conf name: crio.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service 
After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART} --housekeeping-interval=30s\" name: 90-container-mount-namespace.conf - contents: | [Service] Environment=\"OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s\" Environment=\"OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s\" name: 30-kubelet-interval-tuning.conf name: kubelet.service", "Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-sync-time-once-master spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Sync time once After=network-online.target Wants=network-online.target [Service] Type=oneshot TimeoutStartSec=300 ExecCondition=/bin/bash -c 'systemctl is-enabled chronyd.service --quiet && exit 1 || exit 0' ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q RemainAfterExit=yes [Install] WantedBy=multi-user.target enabled: true name: sync-time-once.service", "Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-sync-time-once-worker spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Sync time once After=network-online.target Wants=network-online.target [Service] Type=oneshot TimeoutStartSec=300 ExecCondition=/bin/bash -c 'systemctl is-enabled chronyd.service --quiet && exit 1 || exit 0' ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q RemainAfterExit=yes [Install] WantedBy=multi-user.target enabled: true name: sync-time-once.service", "Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: load-sctp-module-master spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8,sctp filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf", "Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: load-sctp-module-worker spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8,sctp filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf", "Automatically generated by extra-manifests-builder Do not make changes directly. 
apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 08-set-rcu-normal-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKIwojIERpc2FibGUgcmN1X2V4cGVkaXRlZCBhZnRlciBub2RlIGhhcyBmaW5pc2hlZCBib290aW5nCiMKIyBUaGUgZGVmYXVsdHMgYmVsb3cgY2FuIGJlIG92ZXJyaWRkZW4gdmlhIGVudmlyb25tZW50IHZhcmlhYmxlcwojCgojIERlZmF1bHQgd2FpdCB0aW1lIGlzIDYwMHMgPSAxMG06Ck1BWElNVU1fV0FJVF9USU1FPSR7TUFYSU1VTV9XQUlUX1RJTUU6LTYwMH0KCiMgRGVmYXVsdCBzdGVhZHktc3RhdGUgdGhyZXNob2xkID0gMiUKIyBBbGxvd2VkIHZhbHVlczoKIyAgNCAgLSBhYnNvbHV0ZSBwb2QgY291bnQgKCsvLSkKIyAgNCUgLSBwZXJjZW50IGNoYW5nZSAoKy8tKQojICAtMSAtIGRpc2FibGUgdGhlIHN0ZWFkeS1zdGF0ZSBjaGVjawpTVEVBRFlfU1RBVEVfVEhSRVNIT0xEPSR7U1RFQURZX1NUQVRFX1RIUkVTSE9MRDotMiV9CgojIERlZmF1bHQgc3RlYWR5LXN0YXRlIHdpbmRvdyA9IDYwcwojIElmIHRoZSBydW5uaW5nIHBvZCBjb3VudCBzdGF5cyB3aXRoaW4gdGhlIGdpdmVuIHRocmVzaG9sZCBmb3IgdGhpcyB0aW1lCiMgcGVyaW9kLCByZXR1cm4gQ1BVIHV0aWxpemF0aW9uIHRvIG5vcm1hbCBiZWZvcmUgdGhlIG1heGltdW0gd2FpdCB0aW1lIGhhcwojIGV4cGlyZXMKU1RFQURZX1NUQVRFX1dJTkRPVz0ke1NURUFEWV9TVEFURV9XSU5ET1c6LTYwfQoKIyBEZWZhdWx0IHN0ZWFkeS1zdGF0ZSBhbGxvd3MgYW55IHBvZCBjb3VudCB0byBiZSAic3RlYWR5IHN0YXRlIgojIEluY3JlYXNpbmcgdGhpcyB3aWxsIHNraXAgYW55IHN0ZWFkeS1zdGF0ZSBjaGVja3MgdW50aWwgdGhlIGNvdW50IHJpc2VzIGFib3ZlCiMgdGhpcyBudW1iZXIgdG8gYXZvaWQgZmFsc2UgcG9zaXRpdmVzIGlmIHRoZXJlIGFyZSBzb21lIHBlcmlvZHMgd2hlcmUgdGhlCiMgY291bnQgZG9lc24ndCBpbmNyZWFzZSBidXQgd2Uga25vdyB3ZSBjYW4ndCBiZSBhdCBzdGVhZHktc3RhdGUgeWV0LgpTVEVBRFlfU1RBVEVfTUlOSU1VTT0ke1NURUFEWV9TVEFURV9NSU5JTVVNOi0wfQoKIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIwoKd2l0aGluKCkgewogIGxvY2FsIGxhc3Q9JDEgY3VycmVudD0kMiB0aHJlc2hvbGQ9JDMKICBsb2NhbCBkZWx0YT0wIHBjaGFuZ2UKICBkZWx0YT0kKCggY3VycmVudCAtIGxhc3QgKSkKICBpZiBbWyAkY3VycmVudCAtZXEgJGxhc3QgXV07IHRoZW4KICAgIHBjaGFuZ2U9MAogIGVsaWYgW1sgJGxhc3QgLWVxIDAgXV07IHRoZW4KICAgIHBjaGFuZ2U9MTAwMDAwMAogIGVsc2UKICAgIHBjaGFuZ2U9JCgoICggIiRkZWx0YSIgKiAxMDApIC8gbGFzdCApKQogIGZpCiAgZWNobyAtbiAibGFzdDokbGFzdCBjdXJyZW50OiRjdXJyZW50IGRlbHRhOiRkZWx0YSBwY2hhbmdlOiR7cGNoYW5nZX0lOiAiCiAgbG9jYWwgYWJzb2x1dGUgbGltaXQKICBjYXNlICR0aHJlc2hvbGQgaW4KICAgIColKQogICAgICBhYnNvbHV0ZT0ke3BjaGFuZ2UjIy19ICMgYWJzb2x1dGUgdmFsdWUKICAgICAgbGltaXQ9JHt0aHJlc2hvbGQlJSV9CiAgICAgIDs7CiAgICAqKQogICAgICBhYnNvbHV0ZT0ke2RlbHRhIyMtfSAjIGFic29sdXRlIHZhbHVlCiAgICAgIGxpbWl0PSR0aHJlc2hvbGQKICAgICAgOzsKICBlc2FjCiAgaWYgW1sgJGFic29sdXRlIC1sZSAkbGltaXQgXV07IHRoZW4KICAgIGVjaG8gIndpdGhpbiAoKy8tKSR0aHJlc2hvbGQiCiAgICByZXR1cm4gMAogIGVsc2UKICAgIGVjaG8gIm91dHNpZGUgKCsvLSkkdGhyZXNob2xkIgogICAgcmV0dXJuIDEKICBmaQp9CgpzdGVhZHlzdGF0ZSgpIHsKICBsb2NhbCBsYXN0PSQxIGN1cnJlbnQ9JDIKICBpZiBbWyAkbGFzdCAtbHQgJFNURUFEWV9TVEFURV9NSU5JTVVNIF1dOyB0aGVuCiAgICBlY2hvICJsYXN0OiRsYXN0IGN1cnJlbnQ6JGN1cnJlbnQgV2FpdGluZyB0byByZWFjaCAkU1RFQURZX1NUQVRFX01JTklNVU0gYmVmb3JlIGNoZWNraW5nIGZvciBzdGVhZHktc3RhdGUiCiAgICByZXR1cm4gMQogIGZpCiAgd2l0aGluICIkbGFzdCIgIiRjdXJyZW50IiAiJFNURUFEWV9TVEFURV9USFJFU0hPTEQiCn0KCndhaXRGb3JSZWFkeSgpIHsKICBsb2dnZXIgIlJlY292ZXJ5OiBXYWl0aW5nICR7TUFYSU1VTV9XQUlUX1RJTUV9cyBmb3IgdGhlIGluaXRpYWxpemF0aW9uIHRvIGNvbXBsZXRlIgogIGxvY2FsIHQ9MCBzPTEwCiAgbG9jYWwgbGFzdENjb3VudD0wIGNjb3VudD0wIHN0ZWFkeVN0YXRlVGltZT0wCiAgd2hpbGUgW1sgJHQgLWx0ICRNQVhJTVVNX1dBSVRfVElNRSBdXTsgZG8KICAgIHNsZWVwICRzCiAgICAoKHQgKz0gcykpCiAgICAjIERldGVjdCBzdGVhZHktc3RhdGUgcG9kIGNvdW50CiAgICBjY291bnQ9JChjcmljdGwgcHMgMj4vZGV2L251bGwgfCB3YyAtbCkKICAgIGlmIFtbICRjY291bnQgLWd0IDAgXV0gJiYgc3RlYWR5c3RhdGUgIiRsYXN0Q2NvdW50IiAiJGNjb3VudCI7IHRoZW4KICAgICAgKC
hzdGVhZHlTdGF0ZVRpbWUgKz0gcykpCiAgICAgIGVjaG8gIlN0ZWFkeS1zdGF0ZSBmb3IgJHtzdGVhZHlTdGF0ZVRpbWV9cy8ke1NURUFEWV9TVEFURV9XSU5ET1d9cyIKICAgICAgaWYgW1sgJHN0ZWFkeVN0YXRlVGltZSAtZ2UgJFNURUFEWV9TVEFURV9XSU5ET1cgXV07IHRoZW4KICAgICAgICBsb2dnZXIgIlJlY292ZXJ5OiBTdGVhZHktc3RhdGUgKCsvLSAkU1RFQURZX1NUQVRFX1RIUkVTSE9MRCkgZm9yICR7U1RFQURZX1NUQVRFX1dJTkRPV31zOiBEb25lIgogICAgICAgIHJldHVybiAwCiAgICAgIGZpCiAgICBlbHNlCiAgICAgIGlmIFtbICRzdGVhZHlTdGF0ZVRpbWUgLWd0IDAgXV07IHRoZW4KICAgICAgICBlY2hvICJSZXNldHRpbmcgc3RlYWR5LXN0YXRlIHRpbWVyIgogICAgICAgIHN0ZWFkeVN0YXRlVGltZT0wCiAgICAgIGZpCiAgICBmaQogICAgbGFzdENjb3VudD0kY2NvdW50CiAgZG9uZQogIGxvZ2dlciAiUmVjb3Zlcnk6IFJlY292ZXJ5IENvbXBsZXRlIFRpbWVvdXQiCn0KCnNldFJjdU5vcm1hbCgpIHsKICBlY2hvICJTZXR0aW5nIHJjdV9ub3JtYWwgdG8gMSIKICBlY2hvIDEgPiAvc3lzL2tlcm5lbC9yY3Vfbm9ybWFsCn0KCm1haW4oKSB7CiAgd2FpdEZvclJlYWR5CiAgZWNobyAiV2FpdGluZyBmb3Igc3RlYWR5IHN0YXRlIHRvb2s6ICQoYXdrICd7cHJpbnQgaW50KCQxLzM2MDApImgiLCBpbnQoKCQxJTM2MDApLzYwKSJtIiwgaW50KCQxJTYwKSJzIn0nIC9wcm9jL3VwdGltZSkiCiAgc2V0UmN1Tm9ybWFsCn0KCmlmIFtbICIke0JBU0hfU09VUkNFWzBdfSIgPSAiJHswfSIgXV07IHRoZW4KICBtYWluICIke0B9IgogIGV4aXQgJD8KZmkK mode: 493 path: /usr/local/bin/set-rcu-normal.sh systemd: units: - contents: | [Unit] Description=Disable rcu_expedited after node has finished booting by setting rcu_normal to 1 [Service] Type=simple ExecStart=/usr/local/bin/set-rcu-normal.sh # Maximum wait time is 600s = 10m: Environment=MAXIMUM_WAIT_TIME=600 # Steady-state threshold = 2% # Allowed values: # 4 - absolute pod count (+/-) # 4% - percent change (+/-) # -1 - disable the steady-state check # Note: '%' must be escaped as '%%' in systemd unit files Environment=STEADY_STATE_THRESHOLD=2%% # Steady-state window = 120s # If the running pod count stays within the given threshold for this time # period, return CPU utilization to normal before the maximum wait time has # expires Environment=STEADY_STATE_WINDOW=120 # Steady-state minimum = 40 # Increasing this will skip any steady-state checks until the count rises above # this number to avoid false positives if there are some periods where the # count doesn't increase but we know we can't be at steady-state yet. Environment=STEADY_STATE_MINIMUM=40 [Install] WantedBy=multi-user.target enabled: true name: set-rcu-normal.service", "Automatically generated by extra-manifests-builder Do not make changes directly. 
apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 08-set-rcu-normal-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKIwojIERpc2FibGUgcmN1X2V4cGVkaXRlZCBhZnRlciBub2RlIGhhcyBmaW5pc2hlZCBib290aW5nCiMKIyBUaGUgZGVmYXVsdHMgYmVsb3cgY2FuIGJlIG92ZXJyaWRkZW4gdmlhIGVudmlyb25tZW50IHZhcmlhYmxlcwojCgojIERlZmF1bHQgd2FpdCB0aW1lIGlzIDYwMHMgPSAxMG06Ck1BWElNVU1fV0FJVF9USU1FPSR7TUFYSU1VTV9XQUlUX1RJTUU6LTYwMH0KCiMgRGVmYXVsdCBzdGVhZHktc3RhdGUgdGhyZXNob2xkID0gMiUKIyBBbGxvd2VkIHZhbHVlczoKIyAgNCAgLSBhYnNvbHV0ZSBwb2QgY291bnQgKCsvLSkKIyAgNCUgLSBwZXJjZW50IGNoYW5nZSAoKy8tKQojICAtMSAtIGRpc2FibGUgdGhlIHN0ZWFkeS1zdGF0ZSBjaGVjawpTVEVBRFlfU1RBVEVfVEhSRVNIT0xEPSR7U1RFQURZX1NUQVRFX1RIUkVTSE9MRDotMiV9CgojIERlZmF1bHQgc3RlYWR5LXN0YXRlIHdpbmRvdyA9IDYwcwojIElmIHRoZSBydW5uaW5nIHBvZCBjb3VudCBzdGF5cyB3aXRoaW4gdGhlIGdpdmVuIHRocmVzaG9sZCBmb3IgdGhpcyB0aW1lCiMgcGVyaW9kLCByZXR1cm4gQ1BVIHV0aWxpemF0aW9uIHRvIG5vcm1hbCBiZWZvcmUgdGhlIG1heGltdW0gd2FpdCB0aW1lIGhhcwojIGV4cGlyZXMKU1RFQURZX1NUQVRFX1dJTkRPVz0ke1NURUFEWV9TVEFURV9XSU5ET1c6LTYwfQoKIyBEZWZhdWx0IHN0ZWFkeS1zdGF0ZSBhbGxvd3MgYW55IHBvZCBjb3VudCB0byBiZSAic3RlYWR5IHN0YXRlIgojIEluY3JlYXNpbmcgdGhpcyB3aWxsIHNraXAgYW55IHN0ZWFkeS1zdGF0ZSBjaGVja3MgdW50aWwgdGhlIGNvdW50IHJpc2VzIGFib3ZlCiMgdGhpcyBudW1iZXIgdG8gYXZvaWQgZmFsc2UgcG9zaXRpdmVzIGlmIHRoZXJlIGFyZSBzb21lIHBlcmlvZHMgd2hlcmUgdGhlCiMgY291bnQgZG9lc24ndCBpbmNyZWFzZSBidXQgd2Uga25vdyB3ZSBjYW4ndCBiZSBhdCBzdGVhZHktc3RhdGUgeWV0LgpTVEVBRFlfU1RBVEVfTUlOSU1VTT0ke1NURUFEWV9TVEFURV9NSU5JTVVNOi0wfQoKIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIwoKd2l0aGluKCkgewogIGxvY2FsIGxhc3Q9JDEgY3VycmVudD0kMiB0aHJlc2hvbGQ9JDMKICBsb2NhbCBkZWx0YT0wIHBjaGFuZ2UKICBkZWx0YT0kKCggY3VycmVudCAtIGxhc3QgKSkKICBpZiBbWyAkY3VycmVudCAtZXEgJGxhc3QgXV07IHRoZW4KICAgIHBjaGFuZ2U9MAogIGVsaWYgW1sgJGxhc3QgLWVxIDAgXV07IHRoZW4KICAgIHBjaGFuZ2U9MTAwMDAwMAogIGVsc2UKICAgIHBjaGFuZ2U9JCgoICggIiRkZWx0YSIgKiAxMDApIC8gbGFzdCApKQogIGZpCiAgZWNobyAtbiAibGFzdDokbGFzdCBjdXJyZW50OiRjdXJyZW50IGRlbHRhOiRkZWx0YSBwY2hhbmdlOiR7cGNoYW5nZX0lOiAiCiAgbG9jYWwgYWJzb2x1dGUgbGltaXQKICBjYXNlICR0aHJlc2hvbGQgaW4KICAgIColKQogICAgICBhYnNvbHV0ZT0ke3BjaGFuZ2UjIy19ICMgYWJzb2x1dGUgdmFsdWUKICAgICAgbGltaXQ9JHt0aHJlc2hvbGQlJSV9CiAgICAgIDs7CiAgICAqKQogICAgICBhYnNvbHV0ZT0ke2RlbHRhIyMtfSAjIGFic29sdXRlIHZhbHVlCiAgICAgIGxpbWl0PSR0aHJlc2hvbGQKICAgICAgOzsKICBlc2FjCiAgaWYgW1sgJGFic29sdXRlIC1sZSAkbGltaXQgXV07IHRoZW4KICAgIGVjaG8gIndpdGhpbiAoKy8tKSR0aHJlc2hvbGQiCiAgICByZXR1cm4gMAogIGVsc2UKICAgIGVjaG8gIm91dHNpZGUgKCsvLSkkdGhyZXNob2xkIgogICAgcmV0dXJuIDEKICBmaQp9CgpzdGVhZHlzdGF0ZSgpIHsKICBsb2NhbCBsYXN0PSQxIGN1cnJlbnQ9JDIKICBpZiBbWyAkbGFzdCAtbHQgJFNURUFEWV9TVEFURV9NSU5JTVVNIF1dOyB0aGVuCiAgICBlY2hvICJsYXN0OiRsYXN0IGN1cnJlbnQ6JGN1cnJlbnQgV2FpdGluZyB0byByZWFjaCAkU1RFQURZX1NUQVRFX01JTklNVU0gYmVmb3JlIGNoZWNraW5nIGZvciBzdGVhZHktc3RhdGUiCiAgICByZXR1cm4gMQogIGZpCiAgd2l0aGluICIkbGFzdCIgIiRjdXJyZW50IiAiJFNURUFEWV9TVEFURV9USFJFU0hPTEQiCn0KCndhaXRGb3JSZWFkeSgpIHsKICBsb2dnZXIgIlJlY292ZXJ5OiBXYWl0aW5nICR7TUFYSU1VTV9XQUlUX1RJTUV9cyBmb3IgdGhlIGluaXRpYWxpemF0aW9uIHRvIGNvbXBsZXRlIgogIGxvY2FsIHQ9MCBzPTEwCiAgbG9jYWwgbGFzdENjb3VudD0wIGNjb3VudD0wIHN0ZWFkeVN0YXRlVGltZT0wCiAgd2hpbGUgW1sgJHQgLWx0ICRNQVhJTVVNX1dBSVRfVElNRSBdXTsgZG8KICAgIHNsZWVwICRzCiAgICAoKHQgKz0gcykpCiAgICAjIERldGVjdCBzdGVhZHktc3RhdGUgcG9kIGNvdW50CiAgICBjY291bnQ9JChjcmljdGwgcHMgMj4vZGV2L251bGwgfCB3YyAtbCkKICAgIGlmIFtbICRjY291bnQgLWd0IDAgXV0gJiYgc3RlYWR5c3RhdGUgIiRsYXN0Q2NvdW50IiAiJGNjb3VudCI7IHRoZW4KICAgICAgKC
hzdGVhZHlTdGF0ZVRpbWUgKz0gcykpCiAgICAgIGVjaG8gIlN0ZWFkeS1zdGF0ZSBmb3IgJHtzdGVhZHlTdGF0ZVRpbWV9cy8ke1NURUFEWV9TVEFURV9XSU5ET1d9cyIKICAgICAgaWYgW1sgJHN0ZWFkeVN0YXRlVGltZSAtZ2UgJFNURUFEWV9TVEFURV9XSU5ET1cgXV07IHRoZW4KICAgICAgICBsb2dnZXIgIlJlY292ZXJ5OiBTdGVhZHktc3RhdGUgKCsvLSAkU1RFQURZX1NUQVRFX1RIUkVTSE9MRCkgZm9yICR7U1RFQURZX1NUQVRFX1dJTkRPV31zOiBEb25lIgogICAgICAgIHJldHVybiAwCiAgICAgIGZpCiAgICBlbHNlCiAgICAgIGlmIFtbICRzdGVhZHlTdGF0ZVRpbWUgLWd0IDAgXV07IHRoZW4KICAgICAgICBlY2hvICJSZXNldHRpbmcgc3RlYWR5LXN0YXRlIHRpbWVyIgogICAgICAgIHN0ZWFkeVN0YXRlVGltZT0wCiAgICAgIGZpCiAgICBmaQogICAgbGFzdENjb3VudD0kY2NvdW50CiAgZG9uZQogIGxvZ2dlciAiUmVjb3Zlcnk6IFJlY292ZXJ5IENvbXBsZXRlIFRpbWVvdXQiCn0KCnNldFJjdU5vcm1hbCgpIHsKICBlY2hvICJTZXR0aW5nIHJjdV9ub3JtYWwgdG8gMSIKICBlY2hvIDEgPiAvc3lzL2tlcm5lbC9yY3Vfbm9ybWFsCn0KCm1haW4oKSB7CiAgd2FpdEZvclJlYWR5CiAgZWNobyAiV2FpdGluZyBmb3Igc3RlYWR5IHN0YXRlIHRvb2s6ICQoYXdrICd7cHJpbnQgaW50KCQxLzM2MDApImgiLCBpbnQoKCQxJTM2MDApLzYwKSJtIiwgaW50KCQxJTYwKSJzIn0nIC9wcm9jL3VwdGltZSkiCiAgc2V0UmN1Tm9ybWFsCn0KCmlmIFtbICIke0JBU0hfU09VUkNFWzBdfSIgPSAiJHswfSIgXV07IHRoZW4KICBtYWluICIke0B9IgogIGV4aXQgJD8KZmkK mode: 493 path: /usr/local/bin/set-rcu-normal.sh systemd: units: - contents: | [Unit] Description=Disable rcu_expedited after node has finished booting by setting rcu_normal to 1 [Service] Type=simple ExecStart=/usr/local/bin/set-rcu-normal.sh # Maximum wait time is 600s = 10m: Environment=MAXIMUM_WAIT_TIME=600 # Steady-state threshold = 2% # Allowed values: # 4 - absolute pod count (+/-) # 4% - percent change (+/-) # -1 - disable the steady-state check # Note: '%' must be escaped as '%%' in systemd unit files Environment=STEADY_STATE_THRESHOLD=2%% # Steady-state window = 120s # If the running pod count stays within the given threshold for this time # period, return CPU utilization to normal before the maximum wait time has # expires Environment=STEADY_STATE_WINDOW=120 # Steady-state minimum = 40 # Increasing this will skip any steady-state checks until the count rises above # this number to avoid false positives if there are some periods where the # count doesn't increase but we know we can't be at steady-state yet. Environment=STEADY_STATE_MINIMUM=40 [Install] WantedBy=multi-user.target enabled: true name: set-rcu-normal.service", "Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 07-sriov-related-kernel-args-master spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on - iommu=pt", "Automatically generated by extra-manifests-builder Do not make changes directly. 
apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 07-sriov-related-kernel-args-worker spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on - iommu=pt", "cpu-load-balancing.crio.io: \"disable\" cpu-quota.crio.io: \"disable\" irq-load-balancing.crio.io: \"disable\"", "cpu-c-states.crio.io: \"disable\" cpu-freq-governor.crio.io: \"performance\"", "mkdir -p ./out", "podman run -it registry.redhat.io/openshift4/openshift-telco-core-rds-rhel9:v4.16 | base64 -d | tar xv -C out", "out/ └── telco-core-rds ├── configuration │ └── reference-crs │ ├── optional │ │ ├── logging │ │ ├── networking │ │ │ └── multus │ │ │ └── tap_cni │ │ ├── other │ │ └── tuning │ └── required │ ├── networking │ │ ├── metallb │ │ ├── multinetworkpolicy │ │ └── sriov │ ├── other │ ├── performance │ ├── scheduling │ └── storage │ └── odf-external └── install", "optional count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: autosizing-master spec: autoSizingReserved: true machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/master: \"\"", "required count: 1 --- apiVersion: v1 kind: Secret metadata: name: rook-ceph-external-cluster-details namespace: openshift-storage type: Opaque data: # encoded content has been made generic external_cluster_details: eyJuYW1lIjoicm9vay1jZXBoLW1vbi1lbmRwb2ludHMiLCJraW5kIjoiQ29uZmlnTWFwIiwiZGF0YSI6eyJkYXRhIjoiY2VwaHVzYTE9MS4yLjMuNDo2Nzg5IiwibWF4TW9uSWQiOiIwIiwibWFwcGluZyI6Int9In19LHsibmFtZSI6InJvb2stY2VwaC1tb24iLCJraW5kIjoiU2VjcmV0IiwiZGF0YSI6eyJhZG1pbi1zZWNyZXQiOiJhZG1pbi1zZWNyZXQiLCJmc2lkIjoiMTExMTExMTEtMTExMS0xMTExLTExMTEtMTExMTExMTExMTExIiwibW9uLXNlY3JldCI6Im1vbi1zZWNyZXQifX0seyJuYW1lIjoicm9vay1jZXBoLW9wZXJhdG9yLWNyZWRzIiwia2luZCI6IlNlY3JldCIsImRhdGEiOnsidXNlcklEIjoiY2xpZW50LmhlYWx0aGNoZWNrZXIiLCJ1c2VyS2V5IjoiYzJWamNtVjAifX0seyJuYW1lIjoibW9uaXRvcmluZy1lbmRwb2ludCIsImtpbmQiOiJDZXBoQ2x1c3RlciIsImRhdGEiOnsiTW9uaXRvcmluZ0VuZHBvaW50IjoiMS4yLjMuNCwxLjIuMy4zLDEuMi4zLjIiLCJNb25pdG9yaW5nUG9ydCI6IjkyODMifX0seyJuYW1lIjoiY2VwaC1yYmQiLCJraW5kIjoiU3RvcmFnZUNsYXNzIiwiZGF0YSI6eyJwb29sIjoib2RmX3Bvb2wifX0seyJuYW1lIjoicm9vay1jc2ktcmJkLW5vZGUiLCJraW5kIjoiU2VjcmV0IiwiZGF0YSI6eyJ1c2VySUQiOiJjc2ktcmJkLW5vZGUiLCJ1c2VyS2V5IjoiIn19LHsibmFtZSI6InJvb2stY3NpLXJiZC1wcm92aXNpb25lciIsImtpbmQiOiJTZWNyZXQiLCJkYXRhIjp7InVzZXJJRCI6ImNzaS1yYmQtcHJvdmlzaW9uZXIiLCJ1c2VyS2V5IjoiYzJWamNtVjAifX0seyJuYW1lIjoicm9vay1jc2ktY2VwaGZzLXByb3Zpc2lvbmVyIiwia2luZCI6IlNlY3JldCIsImRhdGEiOnsiYWRtaW5JRCI6ImNzaS1jZXBoZnMtcHJvdmlzaW9uZXIiLCJhZG1pbktleSI6IiJ9fSx7Im5hbWUiOiJyb29rLWNzaS1jZXBoZnMtbm9kZSIsImtpbmQiOiJTZWNyZXQiLCJkYXRhIjp7ImFkbWluSUQiOiJjc2ktY2VwaGZzLW5vZGUiLCJhZG1pbktleSI6ImMyVmpjbVYwIn19LHsibmFtZSI6ImNlcGhmcyIsImtpbmQiOiJTdG9yYWdlQ2xhc3MiLCJkYXRhIjp7ImZzTmFtZSI6ImNlcGhmcyIsInBvb2wiOiJtYW5pbGFfZGF0YSJ9fQ==", "required count: 1 --- apiVersion: ocs.openshift.io/v1 kind: StorageCluster metadata: name: ocs-external-storagecluster namespace: openshift-storage spec: externalStorage: enable: true labelSelector: {} status: phase: Ready", "required: yes count: 1 --- apiVersion: v1 kind: Namespace metadata: name: openshift-storage annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: \"true\"", "required: yes count: 1 --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - openshift-storage", "required: yes count: 1 
--- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: odf-operator namespace: openshift-storage spec: channel: \"stable-4.14\" name: odf-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic status: state: AtLatestKnown", "required count: 1 apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: gatewayConfig: routingViaHost: true # additional networks are optional and may alternatively be specified using NetworkAttachmentDefinition CRs additionalNetworks: [USDadditionalNetworks] # eg #- name: add-net-1 # namespace: app-ns-1 # rawCNIConfig: '{ \"cniVersion\": \"0.3.1\", \"name\": \"add-net-1\", \"plugins\": [{\"type\": \"macvlan\", \"master\": \"bond1\", \"ipam\": {}}] }' # type: Raw #- name: add-net-2 # namespace: app-ns-1 # rawCNIConfig: '{ \"cniVersion\": \"0.4.0\", \"name\": \"add-net-2\", \"plugins\": [ {\"type\": \"macvlan\", \"master\": \"bond1\", \"mode\": \"private\" },{ \"type\": \"tuning\", \"name\": \"tuning-arp\" }] }' # type: Raw # Enable to use MultiNetworkPolicy CRs useMultiNetworkPolicy: true", "optional copies: 0-N apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: USDname namespace: USDns spec: nodeSelector: kubernetes.io/hostname: USDnodeName config: USDconfig #eg #config: '{ # \"cniVersion\": \"0.3.1\", # \"name\": \"external-169\", # \"type\": \"vlan\", # \"master\": \"ens8f0\", # \"mode\": \"bridge\", # \"vlanid\": 169, # \"ipam\": { # \"type\": \"static\", # } #}'", "required count: 1-N apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: USDname # eg addresspool3 namespace: metallb-system annotations: metallb.universe.tf/address-pool: USDname # eg addresspool3 spec: ############## # Expected variation in this configuration addresses: [USDpools] #- 3.3.3.0/24 autoAssign: true ##############", "required count: 1-N apiVersion: metallb.io/v1beta1 kind: BFDProfile metadata: name: bfdprofile namespace: metallb-system spec: ################ # These values may vary. 
Recommended values are included as default receiveInterval: 150 # default 300ms transmitInterval: 150 # default 300ms #echoInterval: 300 # default 50ms detectMultiplier: 10 # default 3 echoMode: true passiveMode: true minimumTtl: 5 # default 254 # ################", "required count: 1-N apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: USDname # eg bgpadvertisement-1 namespace: metallb-system spec: ipAddressPools: [USDpool] # eg: # - addresspool3 peers: [USDpeers] # eg: # - peer-one # communities: [USDcommunities] # Note correlation with address pool, or Community # eg: # - bgpcommunity # - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100", "required count: 1-N apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: USDname namespace: metallb-system spec: peerAddress: USDip # eg 192.168.1.2 peerASN: USDpeerasn # eg 64501 myASN: USDmyasn # eg 64500 routerID: USDid # eg 10.10.10.10 bfdProfile: bfdprofile passwordSecret: {}", "--- apiVersion: metallb.io/v1beta1 kind: Community metadata: name: bgpcommunity namespace: metallb-system spec: communities: [USDcomm]", "required count: 1 apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: nodeSelector: node-role.kubernetes.io/worker: \"\"", "required: yes count: 1 --- apiVersion: v1 kind: Namespace metadata: name: metallb-system annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: \"true\"", "required: yes count: 1 --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: metallb-operator namespace: metallb-system", "required: yes count: 1 --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: metallb-operator-sub namespace: metallb-system spec: channel: stable name: metallb-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic status: state: AtLatestKnown", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-setsebool spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Set SELinux boolean for tap cni plugin Before=kubelet.service [Service] Type=oneshot ExecStart=/sbin/setsebool container_use_devices=on RemainAfterExit=true [Install] WantedBy=multi-user.target graphical.target enabled: true name: setsebool.service", "apiVersion: nmstate.io/v1 kind: NMState metadata: name: nmstate spec: {}", "apiVersion: v1 kind: Namespace metadata: name: openshift-nmstate annotations: workload.openshift.io/allowed: management", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-nmstate namespace: openshift-nmstate spec: targetNamespaces: - openshift-nmstate", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kubernetes-nmstate-operator namespace: openshift-nmstate spec: channel: \"stable\" name: kubernetes-nmstate-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic status: state: AtLatestKnown", "optional (though expected for all) count: 0-N apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: USDname # eg sriov-network-abcd namespace: openshift-sriov-network-operator spec: capabilities: \"USDcapabilities\" # eg '{\"mac\": true, \"ips\": true}' ipam: \"USDipam\" # eg '{ \"type\": \"host-local\", \"subnet\": \"10.3.38.0/24\" }' networkNamespace: 
USDnns # eg cni-test resourceName: USDresource # eg resourceTest", "optional (though expected in all deployments) count: 0-N apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: USDname namespace: openshift-sriov-network-operator spec: {} # USDspec eg #deviceType: netdevice #nicSelector: deviceID: \"1593\" pfNames: - ens8f0np0#0-9 rootDevices: - 0000:d8:00.0 vendor: \"8086\" #nodeSelector: kubernetes.io/hostname: host.sample.lab #numVfs: 20 #priority: 99 #excludeTopology: true #resourceName: resourceNameABCD", "required count: 1 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: node-role.kubernetes.io/worker: \"\" enableInjector: true enableOperatorWebhook: true disableDrain: false logLevel: 2", "required: yes count: 1 apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator spec: channel: \"stable\" name: sriov-network-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic status: state: AtLatestKnown", "required: yes count: 1 apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator annotations: workload.openshift.io/allowed: management", "required: yes count: 1 apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator", "Optional count: 1 apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: nodeGroups: - config: # Periodic is the default setting infoRefreshMode: Periodic machineConfigPoolSelector: matchLabels: # This label must match the pool(s) you want to run NUMA-aligned workloads pools.operator.machineconfiguration.openshift.io/worker: \"\"", "required count: 1 apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: numaresources-operator namespace: openshift-numaresources spec: channel: \"4.14\" name: numaresources-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace", "required: yes count: 1 apiVersion: v1 kind: Namespace metadata: name: openshift-numaresources annotations: workload.openshift.io/allowed: management", "required: yes count: 1 apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: numaresources-operator namespace: openshift-numaresources spec: targetNamespaces: - openshift-numaresources", "Optional count: 1 apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: #cacheResyncPeriod: \"0\" # Image spec should be the latest for the release imageSpec: \"registry.redhat.io/openshift4/noderesourcetopology-scheduler-rhel9:v4.14.0\" #logLevel: \"Trace\" schedulerName: topo-aware-scheduler", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: # non-schedulable control plane is the default. This ensures # compliance. 
mastersSchedulable: false policy: name: \"\"", "optional count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 40-load-kernel-modules-control-plane spec: config: # Release info found in https://github.com/coreos/butane/releases ignition: version: 3.2.0 storage: files: - contents: source: data:, mode: 420 overwrite: true path: /etc/modprobe.d/kernel-blacklist.conf - contents: source: data:text/plain;charset=utf-8;base64,aXBfZ3JlCmlwNl90YWJsZXMKaXA2dF9SRUpFQ1QKaXA2dGFibGVfZmlsdGVyCmlwNnRhYmxlX21hbmdsZQppcHRhYmxlX2ZpbHRlcgppcHRhYmxlX21hbmdsZQppcHRhYmxlX25hdAp4dF9tdWx0aXBvcnQKeHRfb3duZXIKeHRfUkVESVJFQ1QKeHRfc3RhdGlzdGljCnh0X1RDUE1TUwo= mode: 420 overwrite: true path: /etc/modules-load.d/kernel-load.conf", "optional count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: load-sctp-module spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8;base64,c2N0cA== filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf", "optional count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 40-load-kernel-modules-worker spec: config: # Release info found in https://github.com/coreos/butane/releases ignition: version: 3.2.0 storage: files: - contents: source: data:, mode: 420 overwrite: true path: /etc/modprobe.d/kernel-blacklist.conf - contents: source: data:text/plain;charset=utf-8;base64,aXBfZ3JlCmlwNl90YWJsZXMKaXA2dF9SRUpFQ1QKaXA2dGFibGVfZmlsdGVyCmlwNnRhYmxlX21hbmdsZQppcHRhYmxlX2ZpbHRlcgppcHRhYmxlX21hbmdsZQppcHRhYmxlX25hdAp4dF9tdWx0aXBvcnQKeHRfb3duZXIKeHRfUkVESVJFQ1QKeHRfc3RhdGlzdGljCnh0X1RDUE1TUwo= mode: 420 overwrite: true path: /etc/modules-load.d/kernel-load.conf", "required count: 1 apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - type: \"kafka\" name: kafka-open url: tcp://10.11.12.13:9092/test pipelines: - inputRefs: - infrastructure #- application - audit labels: label1: test1 label2: test2 label3: test3 label4: test4 label5: test5 name: all-to-default outputRefs: - kafka-open", "required count: 1 apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: type: vector managementState: Managed", "--- apiVersion: v1 kind: Namespace metadata: name: openshift-logging annotations: workload.openshift.io/allowed: management", "--- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging spec: targetNamespaces: - openshift-logging", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging spec: channel: \"stable\" name: cluster-logging source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic status: state: AtLatestKnown", "required count: 1..N apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: redhat-operators-disconnected namespace: openshift-marketplace spec: displayName: Red Hat Disconnected Operators Catalog image: USDimageUrl publisher: Red Hat sourceType: grpc updateStrategy: registryPoll: interval: 
1h status: connectionState: lastObservedState: READY", "required count: 1 apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: disconnected-internal-icsp spec: repositoryDigestMirrors: [] - USDmirrors", "required count: 1 apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster spec: disableAllDefaultSources: true", "optional count: 1 --- apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: 15d volumeClaimTemplate: spec: storageClassName: ocs-external-storagecluster-ceph-rbd resources: requests: storage: 100Gi alertmanagerMain: volumeClaimTemplate: spec: storageClassName: ocs-external-storagecluster-ceph-rbd resources: requests: storage: 20Gi", "required count: 1 apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: USDname annotations: # Some pods want the kernel stack to ignore IPv6 router Advertisement. kubeletconfig.experimental: | {\"allowedUnsafeSysctls\":[\"net.ipv6.conf.all.accept_ra\"]} spec: cpu: # node0 CPUs: 0-17,36-53 # node1 CPUs: 18-34,54-71 # siblings: (0,36), (1,37) # we want to reserve the first Core of each NUMA socket # # no CPU left behind! all-cpus == isolated + reserved isolated: USDisolated # eg 1-17,19-35,37-53,55-71 reserved: USDreserved # eg 0,18,36,54 # Guaranteed QoS pods will disable IRQ balancing for cores allocated to the pod. # default value of globallyDisableIrqLoadBalancing is false globallyDisableIrqLoadBalancing: false hugepages: defaultHugepagesSize: 1G pages: # 32GB per numa node - count: USDcount # eg 64 size: 1G machineConfigPoolSelector: # For SNO: machineconfiguration.openshift.io/role: 'master' pools.operator.machineconfiguration.openshift.io/worker: '' nodeSelector: # For SNO: node-role.kubernetes.io/master: \"\" node-role.kubernetes.io/worker: \"\" workloadHints: realTime: false highPowerConsumption: false perPodPowerManagement: true realTimeKernel: enabled: false numa: # All guaranteed QoS containers get resources from a single NUMA node topologyPolicy: \"single-numa-node\" net: userLevelNetworking: false" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/scalability_and_performance/reference-design-specifications
11.3.3.3. Server Options
11.3.3.3. Server Options Server options must be placed on their own line in .fetchmailrc after a poll or skip action. auth <auth-type> - Replace <auth-type> with the type of authentication to be used. By default, password authentication is used, but some protocols support other types of authentication, including kerberos_v5 , kerberos_v4 , and ssh . If any is used as the authentication type, Fetchmail first tries methods that do not require a password, then methods that mask the password, and finally attempts to send the password unencrypted to authenticate to the server. interval <number> - Polls the specified server only once in every <number> checks for email on all configured servers. This option is generally used for email servers where the user rarely receives messages. port <port-number> - Replace <port-number> with the port number. This value overrides the default port number for the specified protocol. proto <protocol> - Replace <protocol> with the protocol, such as pop3 or imap , to use when checking for messages on the server. timeout <seconds> - Replace <seconds> with the number of seconds of server inactivity after which Fetchmail gives up on a connection attempt. If this value is not set, a default of 300 seconds is assumed.
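For illustration only (this example is not part of the original section; the server name, account, and values are placeholders), server options such as these might appear after a poll action in a .fetchmailrc file:

poll mail.example.com
    proto pop3
    port 110
    auth password
    timeout 60
    interval 3
    user "jdoe" password "secret"

With these settings the server is checked only on every third poll cycle, and Fetchmail gives up after 60 seconds of server inactivity.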
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s3-email-mda-fetchmail-configuration-server
Chapter 4. Tuna
Chapter 4. Tuna You can use the Tuna tool to adjust scheduler tunables, tune thread priority, IRQ handlers, and isolate CPU cores and sockets. Tuna aims to reduce the complexity of performing tuning tasks. After installing the tuna package, use the tuna command without any arguments to start the Tuna graphical user interface (GUI). Use the tuna -h command to display available command-line interface (CLI) options. Note that the tuna (8) manual page distinguishes between action and modifier options. The Tuna GUI and CLI provide equivalent functionality. The GUI displays the CPU topology on one screen to help you identify problems. The Tuna GUI also allows you to make changes to the running threads, and see the results of those changes immediately. In the CLI, Tuna accepts multiple command-line parameters and processes them sequentially. You can use such commands in application initialization scripts as configuration commands. Figure: The Monitoring tab of the Tuna GUI. Important Use the tuna --save= filename command with a descriptive file name to save the current configuration. Note that this command does not save every option that Tuna can change, but saves the kernel thread changes only. Any processes that are not currently running when they are changed are not saved. 4.1. Reviewing the System with Tuna Before you make any changes, you can use Tuna to show you what is currently happening on the system. To view the current policies and priorities, use the tuna --show_threads command: To show only a specific thread corresponding to a PID or matching a command name, add the --threads option before --show_threads : The pid_or_cmd_list argument is a list of comma-separated PIDs or command-name patterns. To view the current interrupt requests (IRQs) and their affinity, use the tuna --show_irqs command: To show only a specific interrupt request corresponding to an IRQ number or matching an IRQ user name, add the --irqs option before --show_irqs : The number_or_user_list argument is a list of comma-separated IRQ numbers or user-name patterns.
[ "tuna --show_threads thread pid SCHED_ rtpri affinity cmd 1 OTHER 0 0,1 init 2 FIFO 99 0 migration/0 3 OTHER 0 0 ksoftirqd/0 4 FIFO 99 0 watchdog/0", "tuna --threads= pid_or_cmd_list --show_threads", "tuna --show_irqs users affinity 0 timer 0 1 i8042 0 7 parport0 0", "tuna --irqs= number_or_user_list --show_irqs" ]
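As an illustrative combination of the options above (the thread name, IRQ user, and file name are placeholders, not values from the original chapter), you might narrow the output to a single daemon, inspect one IRQ user, and then save the kernel thread configuration:

tuna --threads=sshd --show_threads
tuna --irqs=eth0 --show_irqs
tuna --save=tuna_threads.conf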
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/chap-Tuna
Chapter 2. Configuring the OpenShift Container Platform TLS component for builds
Chapter 2. Configuring the OpenShift Container Platform TLS component for builds The tls component of the QuayRegistry custom resource definition (CRD) allows you to control whether SSL/TLS is managed by the Red Hat Quay Operator or self-managed. In its current state, Red Hat Quay does not support the builds feature or the builder workers when the tls component is managed by the Red Hat Quay Operator. When setting the tls component to unmanaged , you must supply your own ssl.cert and ssl.key files. Additionally, if you want your cluster to support builders , that is, the worker nodes that are responsible for building images, you must add both the Quay route and the builder route name to the SAN list in the certificate. Alternatively, you can use a wildcard certificate. The following procedure shows you how to add the builder route. Prerequisites You have set the tls component to unmanaged and uploaded custom SSL/TLS certificates to the Red Hat Quay Operator. For more information, see SSL and TLS for Red Hat Quay . Procedure In the configuration file that defines your SSL/TLS certificate parameters, for example, openssl.cnf , add the following information to the certificate's Subject Alternative Name (SAN) field. For example: # ... [alt_names] <quayregistry-name>-quay-builder-<namespace>.<domain-name>:443 # ... For example: # ... [alt_names] example-registry-quay-builder-quay-enterprise.apps.cluster-new.gcp.quaydev.org:443 # ...
[ "[alt_names] <quayregistry-name>-quay-builder-<namespace>.<domain-name>:443", "[alt_names] example-registry-quay-builder-quay-enterprise.apps.cluster-new.gcp.quaydev.org:443" ]
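If you choose the wildcard alternative mentioned above instead of listing each route, a hypothetical openssl.cnf entry could look like the following; the apps domain is a placeholder, and the wildcard must cover both the Quay route and the builder route:

# ...
[alt_names]
*.apps.cluster-new.gcp.quaydev.org
# ...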
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/builders_and_image_automation/configuring-openshift-tls-component-builds
Chapter 71. KafkaClientAuthenticationTls schema reference
Chapter 71. KafkaClientAuthenticationTls schema reference Used in: KafkaBridgeSpec , KafkaConnectSpec , KafkaMirrorMaker2ClusterSpec , KafkaMirrorMakerConsumerSpec , KafkaMirrorMakerProducerSpec Full list of KafkaClientAuthenticationTls schema properties To configure mTLS authentication, set the type property to the value tls . mTLS uses a TLS certificate to authenticate. 71.1. certificateAndKey The certificate is specified in the certificateAndKey property and is always loaded from an OpenShift secret. In the secret, the certificate must be stored in X509 format under two different keys: public and private. You can use the secrets created by the User Operator, or you can create your own TLS certificate file, with the keys used for authentication, then create a Secret from the file: oc create secret generic MY-SECRET \ --from-file= MY-PUBLIC-TLS-CERTIFICATE-FILE.crt \ --from-file= MY-PRIVATE.key Note mTLS authentication can only be used with TLS connections. Example mTLS configuration authentication: type: tls certificateAndKey: secretName: my-secret certificate: my-public-tls-certificate-file.crt key: private.key 71.2. KafkaClientAuthenticationTls schema properties The type property is a discriminator that distinguishes use of the KafkaClientAuthenticationTls type from KafkaClientAuthenticationScramSha256 , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth . It must have the value tls for the type KafkaClientAuthenticationTls . Property Description certificateAndKey Reference to the Secret which holds the certificate and private key pair. CertAndKeySecretSource type Must be tls . string
[ "create secret generic MY-SECRET --from-file= MY-PUBLIC-TLS-CERTIFICATE-FILE.crt --from-file= MY-PRIVATE.key", "authentication: type: tls certificateAndKey: secretName: my-secret certificate: my-public-tls-certificate-file.crt key: private.key" ]
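For context, the following is a hedged sketch of where this authentication block sits inside a KafkaConnect resource; the cluster name, bootstrap address, and CA secret name are placeholders rather than values from this reference:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  bootstrapServers: my-cluster-kafka-bootstrap:9093
  tls:
    trustedCertificates:
      - secretName: my-cluster-cluster-ca-cert
        certificate: ca.crt
  authentication:
    type: tls
    certificateAndKey:
      secretName: my-secret
      certificate: my-public-tls-certificate-file.crt
      key: private.key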
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-kafkaclientauthenticationtls-reference
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.2/html/rules_development_guide/making-open-source-more-inclusive
Chapter 2. Deploying and configuring a Postfix SMTP server
Chapter 2. Deploying and configuring a Postfix SMTP server As a system administrator, you can configure your email infrastructure by using a mail transport agent (MTA), such as Postfix, to transport email messages between hosts using the SMTP protocol. Postfix is a server-side application for routing and delivering mail. You can use Postfix to set up a local mail server, create a null-client mail relay, use a Postfix server as a destination for multiple domains, or choose an LDAP directory instead of files for lookups. The key features of Postfix: Security features to protect against common email-related threats Customization options, including support for virtual domains and aliases 2.1. Overview of the main Postfix configuration files The postfix package provides multiple configuration files in the /etc/postfix/ directory. To configure your email infrastructure, use the following configuration files: main.cf - contains the global configuration of Postfix. master.cf - specifies Postfix interaction with various processes to accomplish mail delivery. access - specifies access rules, for example, hosts that are allowed to connect to Postfix. transport - maps email addresses to relay hosts. aliases - contains a configurable list required by the mail protocol that describes user ID aliases. Note that you can find this file in the /etc/ directory. 2.2. Installing and configuring a Postfix SMTP server You can configure your Postfix SMTP server to receive, store, and deliver email messages. If the mail server package is not selected during the system installation, Postfix will not be available by default. Perform the following steps to install Postfix: Prerequisites You have root access. Register your system Procedure Disable and remove the Sendmail utility: Install Postfix: To configure Postfix, edit the /etc/postfix/main.cf file and make the following changes: By default, Postfix receives emails only on the loopback interface. To configure Postfix to listen on specific interfaces, update the inet_interfaces parameter to the IP addresses of these interfaces: To configure Postfix to listen on all interfaces, set: If you want Postfix to use a different hostname than the fully-qualified domain name (FQDN) that is returned by the gethostname() function, add the myhostname parameter: For example, Postfix adds this hostname to the header of emails it processes. If the domain name differs from the one in the myhostname parameter, add the mydomain parameter: Add the myorigin parameter and set it to the value of mydomain : With this setting, Postfix uses the domain name as the origin for locally posted mails instead of the hostname. Add the mynetworks parameter, and define the IP ranges of trusted networks that are allowed to send mails: If clients from untrusted networks, such as the Internet, should be able to send mails through this server, you must configure relay restrictions in a later step.
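As an illustration only (the host names and network ranges below are placeholders, not values from this guide), the parameters described above might end up in /etc/postfix/main.cf like this:

# Listen on all interfaces, or list specific IP addresses instead
inet_interfaces = all
# Hostname to use in mail headers if it differs from the gethostname() FQDN
myhostname = mail.example.com
# Domain name, if it differs from the one in myhostname
mydomain = example.com
# Use the domain, not the hostname, as the origin of locally posted mail
myorigin = $mydomain
# Trusted networks that are allowed to send mail through this server
mynetworks = 127.0.0.0/8, 192.0.2.0/24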
Verify that the Postfix configuration in the main.cf file is correct: Enable the postfix service to start at boot and start it: Allow the smtp traffic through the firewall and reload the firewall rules: Verification Verify that the postfix service is running: Optional: Restart the postfix service if the output is stopped, waiting, or the service is not running: Optional: Reload the postfix service after changing any options in the configuration files in the /etc/postfix/ directory to apply those changes: Verify the email communication between local users on your system: To verify that your mail server does not relay emails from external IP ranges to foreign domains, follow this procedure: Log in to a client that is not within the subnets that you defined in mynetworks . Configure the client to use your mail server. Try to send an email to an email address that is not under the domain you specified in mydomain on your mail server. For example, try to send an email to [email protected] . Check the /var/log/maillog file: Troubleshooting In case of errors, check the /var/log/maillog file. Additional resources The /etc/postfix/main.cf configuration file The /usr/share/doc/postfix/README_FILES directory Using and configuring firewalld 2.3. Customizing TLS settings of a Postfix server To make your email traffic encrypted and therefore more secure, you can configure Postfix to use a certificate from a trusted certificate authority (CA) instead of the self-signed certificate and customize the Transport Layer Security (TLS) security settings. In RHEL 9, the TLS encryption protocol is enabled in the Postfix server by default. The basic Postfix TLS configuration contains self-signed certificates for inbound SMTP and the opportunistic TLS for outbound SMTP. Prerequisites You have root access. You have the postfix package installed on your server. You have a certificate signed by a trusted certificate authority (CA) and a private key. You have copied the following files to the Postfix server: The server certificate: /etc/pki/tls/certs/postfix.pem The private key: /etc/pki/tls/private/postfix.key If the server runs RHEL 9.2 or later and the FIPS mode is enabled, clients must either support the Extended Master Secret (EMS) extension or use TLS 1.3. TLS 1.2 connections without EMS fail. For more information, see the Red Hat Knowledgebase solution TLS extension "Extended Master Secret" enforced . Procedure Set the path to the certificate and private key files on the server where Postfix is running by adding the following lines to the /etc/postfix/main.cf file: Restrict the incoming SMTP connections to authenticated users only by editing the /etc/postfix/main.cf file: Reload the postfix service to apply the changes: Verification Configure your client to use TLS encryption and send an email. Note To get additional information about Postfix client TLS activity, increase the log level from 0 to 1 by changing the following line in the /etc/postfix/main.cf : 2.4. Configuring Postfix to forward all emails to a mail relay If you want to forward all email to a mail relay, you can configure the Postfix server as a null client. In this configuration, Postfix only forwards mail to a different mail server and is not capable of receiving mail. Prerequisites You have root access. You have the postfix package installed on your server. You have the IP address or hostname of the relay host to which you want to forward emails.
Procedure To prevent Postfix from accepting any local email delivery and making it a null client, edit the /etc/postfix/main.cf file and make the following changes: Configure Postfix to forward all email by setting the mydestination parameter equal to an empty value: In this configuration the Postfix server is not a destination for any email and acts as a null client. Specify the mail relay server that receives the email from your null client: The relay host is responsible for the mail delivery. Enclose <ip_address_or_hostname> in square brackets. Configure the Postfix mail server to listen only on the loopback interface for emails to deliver: If you want Postfix to rewrite the sender domain of all outgoing emails to the company domain of your relay mail server, set: To disable the local mail delivery, add the following directive at the end of the configuration file: Add the mynetworks parameter so that Postfix forwards email from the local system originating from the 127.0.0.0/8 IPv4 network and the [::1]/128 IPv6 network to the mail relay server: Verify if the Postfix configuration in the main.cf file is correct: Restart the postfix service to apply the changes: Verification Verify that the email communication is forwarded to the mail relay: Troubleshooting In case of errors, check the /var/log/maillog file. Additional resources The /etc/postfix/main.cf configuration file 2.5. Configuring Postfix as a destination for multiple domains You can configure Postfix as a mail server that can receive emails for multiple domains. In this configuration, Postfix acts as the final destination for emails sent to addresses within the specified domains. You can configure the following: Set up multiple email addresses that point to the same email destination Route incoming email for multiple domains to the same Postfix server Prerequisites You have the root access. You have configured a Postfix server. Procedure In the /etc/postfix/virtual virtual alias file, specify the email addresses for each domain. Add each email address on a new line: In this example, Postfix redirects all emails sent to [email protected] to [email protected] and email sent to [email protected] to [email protected]. Create a hash file for the virtual alias map: This command creates the /etc/postfix/virtual.db file. Note that you must always re-run this command after you update the /etc/postfix/virtual file. In the Postfix /etc/postfix/main.cf configuration file, add the virtual_alias_maps parameter and point it to the hash file: Reload the postfix service to apply the changes: Verification Test the configuration by sending an email to one of the virtual email addresses. Troubleshooting In case of errors, check the /var/log/maillog file. 2.6. Using an LDAP directory as a lookup table If you use a Lightweight Directory Access Protocol (LDAP) server to store accounts, domains or aliases, you can configure Postfix to use the LDAP server as a lookup table. Using LDAP instead of files for lookups enables you to have a central database. Prerequisites You have the root access. You have the postfix package installed on your server. You have an LDAP server with the required schema and user credentials. You have the postfix-ldap plugin installed on the server running Postfix. 
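Optionally, before pointing Postfix at the directory, you can confirm that the LDAP server answers search requests from this host. The server name, base DN, and filter below are assumptions that mirror the example values used in this section, and the check requires the ldapsearch utility from the openldap-clients package:

ldapsearch -x -H ldap://ldap.example.com -b "dc=example,dc=com" "(mailacceptinggeneralid=*)"

If the search completes with result: 0 Success, connectivity and the base DN are correct and you can continue with the following procedure.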
Procedure Configure the LDAP lookup parameters by creating a /etc/postfix/ldap-aliases.cf file with the following content: Specify the hostname of the LDAP server: Specify the base domain name for the LDAP search: Optional: Customize the LDAP search filter and attributes based on your requirements. The filter for searching the directory defaults to query_filter = mailacceptinggeneralid=%s . Enable the LDAP source as a lookup table in the /etc/postfix/main.cf configuration file by adding the following content: Verify the LDAP configuration by running the postmap command, which checks for any syntax errors or connectivity issues: Reload the postfix service to apply the changes: Verification Send a test email to verify that the LDAP lookup works correctly. Check the mail logs in /var/log/maillog for any errors. Additional resources /usr/share/doc/postfix/README_FILES/LDAP_README file /usr/share/doc/postfix/README_FILES/DATABASE_README file 2.7. Configuring Postfix as an outgoing mail server to relay for authenticated users You can configure Postfix to relay mail for authenticated users. In this scenario, you allow users to authenticate themselves and use their email address to send mail through your SMTP server by configuring Postfix as an outgoing mail server with SMTP authentication, TLS encryption, and sender address restrictions. Prerequisites You have the root access. You have configured a Postfix server. Procedure To configure Postfix as an outgoing mail server, edit the /etc/postfix/main.cf file and add the following: Enable SMTP authentication: Disable access without TLS: Allow mail relaying only for authenticated users: Optional: Restrict users to use their own email address only as a sender: Reload the postfix service to apply the changes: Verification Authenticate in your SMTP client that supports TLS and SASL. Send an test email to verify that the SMTP authentication works correctly. 2.8. Delivering email from Postfix to Dovecot running on the same host You can configure Postfix to deliver incoming mail to Dovecot on the same host using LMTP over a UNIX socket. This socket enables direct communication between Postfix and Dovecot on the local machine. Prerequisites You have the root access. You have configured a Postfix server. You have configured a Dovecot server, see Configuring and maintaining a Dovecot IMAP and POP3 server . You have configured the LMTP socket on your Dovecot server, see Configuring an LMTP socket and LMTPS listener . Procedure Configure Postfix to use the LMTP protocol and the UNIX domain socket for delivering mail to Dovecot in the /etc/postfix/main.cf file: If you want to use virtual mailboxes, add the following content: If you want to use non-virtual mailboxes, add the following content: Reload postfix to apply the changes: Verification Send an test email to verify that the LMTP socket works correctly. Check the mail logs in /var/log/maillog for any errors. 2.9. Delivering email from Postfix to Dovecot running on a different host You can establish a secure connection between Postfix mail server and the Dovecot delivery agent over the network. To do so, configure the LMTP service to use network socket for delivering mail between mail servers. By default, the LMTP protocol is not encrypted. However, if you configured TLS encryption, Dovecot uses the same settings automatically for the LMTP service. SMTP servers can then connect to it using the STARTTLS command over LMTP. Prerequisites You have the root access. You have configured a Postfix server. 
You have configured a Dovecot server, see Configuring and maintaining a Dovecot IMAP and POP3 server . You have configured the LMTP service on your Dovecot server, see Configuring an LMTP socket and LMTPS listener . Procedure Configure Postfix to use the LMTP protocol and the INET domain socket for delivering mail to Dovecot in the /etc/postfix/main.cf file by adding the following content: Replace <dovecot_host> with the IP address or hostname of the Dovecot server and <port> with the port number of the LMTP service. Reload the postfix service to apply the changes: Verification Send an test email to an address hosted by the remote Dovecot server and check the Dovecot logs to ensure that the mail was successfully delivered. 2.10. Securing the Postfix service Postfix is a mail transfer agent (MTA) that uses the Simple Mail Transfer Protocol (SMTP) to deliver electronic messages between other MTAs and to email clients or delivery agents. Although MTAs can encrypt traffic between one another, they might not do so by default. You can also mitigate risks to various attacks by changing setting to more secure values. 2.10.1. Reducing Postfix network-related security risks To reduce the risk of attackers invading your system through the network, perform as many of the following tasks as possible. Do not share the /var/spool/postfix/ mail spool directory on a Network File System (NFS) shared volume. NFSv2 and NFSv3 do not maintain control over user and group IDs. Therefore, if two or more users have the same UID, they can receive and read each other's mail, which is a security risk. Note This rule does not apply to NFSv4 using Kerberos, because the SECRPC_GSS kernel module does not use UID-based authentication. However, to reduce the security risks, you should not put the mail spool directory on NFS shared volumes. To reduce the probability of Postfix server exploits, mail users must access the Postfix server using an email program. Do not allow shell accounts on the mail server, and set all user shells in the /etc/passwd file to /sbin/nologin (with the possible exception of the root user). To protect Postfix from a network attack, it is set up to only listen to the local loopback address by default. You can verify this by viewing the inet_interfaces = localhost line in the /etc/postfix/main.cf file. This ensures that Postfix only accepts mail messages (such as cron job reports) from the local system and not from the network. This is the default setting and protects Postfix from a network attack. To remove the localhost restriction and allow Postfix to listen on all interfaces, set the inet_interfaces parameter to all in /etc/postfix/main.cf . 2.10.2. Postfix configuration options for limiting DoS attacks An attacker can flood the server with traffic, or send information that triggers a crash, causing a denial of service (DoS) attack. You can configure your system to reduce the risk of such attacks by setting limits in the /etc/postfix/main.cf file. You can change the value of the existing directives or you can add new directives with custom values in the <directive> = <value> format. Use the following list of directives for limiting a DoS attack: smtpd_client_connection_rate_limit Limits the maximum number of connection attempts any client can make to this service per time unit. The default value is 0 , which means a client can make as many connections per time unit as Postfix can accept. By default, the directive excludes clients in trusted networks. 
anvil_rate_time_unit Defines a time unit to calculate the rate limit. The default value is 60 seconds. smtpd_client_event_limit_exceptions Excludes clients from the connection and rate limit commands. By default, the directive excludes clients in trusted networks. smtpd_client_message_rate_limit Defines the maximum number of message deliveries from client to request per time unit (regardless of whether or not Postfix actually accepts those messages). default_process_limit Defines the default maximum number of Postfix child processes that provide a given service. You can ignore this rule for specific services in the master.cf file. By default, the value is 100 . queue_minfree Defines the minimum amount of free space required to receive mail in the queue file system. The directive is currently used by the Postfix SMTP server to decide if it accepts any mail at all. By default, the Postfix SMTP server rejects MAIL FROM commands when the amount of free space is less than 1.5 times the message_size_limit . To specify a higher minimum free space limit, specify a queue_minfree value that is at least 1.5 times the message_size_limit . By default, the queue_minfree value is 0 . header_size_limit Defines the maximum amount of memory in bytes for storing a message header. If a header is large, it discards the excess header. By default, the value is 102400 bytes. message_size_limit Defines the maximum size of a message including the envelope information in bytes. By default, the value is 10240000 bytes. 2.10.3. Configuring Postfix to use SASL Postfix supports Simple Authentication and Security Layer (SASL) based SMTP Authentication (AUTH). SMTP AUTH is an extension of the Simple Mail Transfer Protocol. Currently, the Postfix SMTP server supports the SASL implementations in the following ways: Dovecot SASL The Postfix SMTP server can communicate with the Dovecot SASL implementation using either a UNIX-domain socket or a TCP socket. Use this method if Postfix and Dovecot applications are running on separate machines. Cyrus SASL When enabled, SMTP clients must authenticate with the SMTP server using an authentication method supported and accepted by both the server and the client. Prerequisites The dovecot package is installed on the system Procedure Set up Dovecot: Include the following lines in the /etc/dovecot/conf.d/10-master.conf file: The example uses UNIX-domain sockets for communication between Postfix and Dovecot. The example also assumes default Postfix SMTP server settings, which include the mail queue located in the /var/spool/postfix/ directory, and the application running under the postfix user and group. Optional: Set up Dovecot to listen for Postfix authentication requests through TCP: Specify the method that the email client uses to authenticate with Dovecot by editing the auth_mechanisms parameter in /etc/dovecot/conf.d/10-auth.conf file: The auth_mechanisms parameter supports different plaintext and non-plaintext authentication methods. Set up Postfix by modifying the /etc/postfix/main.cf file: Enable SMTP Authentication on the Postfix SMTP server: Enable the use of Dovecot SASL implementation for SMTP Authentication: Provide the authentication path relative to the Postfix queue directory. Note that the use of a relative path ensures that the configuration works regardless of whether the Postfix server runs in chroot or not: This step uses UNIX-domain sockets for communication between Postfix and Dovecot. 
To configure Postfix to look for Dovecot on a different machine in case you use TCP sockets for communication, use configuration values similar to the following: In the example, replace the ip-address with the IP address of the Dovecot machine and port-number with the port number specified in Dovecot's /etc/dovecot/conf.d/10-master.conf file. Specify SASL mechanisms that the Postfix SMTP server makes available to clients. Note that you can specify different mechanisms for encrypted and unencrypted sessions. The directives specify that during unencrypted sessions, no anonymous authentication is allowed and no mechanisms that transmit unencrypted user names or passwords are allowed. For encrypted sessions that use TLS, only non-anonymous authentication mechanisms are allowed. Additional resources Postfix SMTP server policy - SASL mechanism properties Postfix and Dovecot SASL Configuring SASL authentication in the Postfix SMTP server
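As a recap of the Dovecot SASL configuration described in this section, the following sketch shows the two sides of the UNIX-domain socket setup together. It assumes the default Postfix queue directory /var/spool/postfix/ and is not a complete configuration:

/etc/dovecot/conf.d/10-master.conf
service auth {
  unix_listener /var/spool/postfix/private/auth {
    mode = 0660
    user = postfix
    group = postfix
  }
}

/etc/postfix/main.cf
smtpd_sasl_auth_enable = yes
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_security_options = noanonymous, noplaintext
smtpd_sasl_tls_security_options = noanonymous

After restarting the dovecot service and reloading the postfix service, you can confirm the active values with postconf -n | grep sasl.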
[ "dnf remove sendmail", "dnf install postfix", "inet_interfaces = 127.0.0.1/32, [::1]/128, 192.0.2.1, [2001:db8:1::1]", "inet_interfaces = all", "myhostname = <smtp.example.com>", "mydomain = <example.com>", "myorigin = USDmydomain", "mynetworks = 127.0.0.1/32, [::1]/128, 192.0.2.1/24, [2001:db8:1::1]/64", "postfix check", "systemctl enable --now postfix", "firewall-cmd --permanent --add-service smtp firewall-cmd --reload", "systemctl status postfix", "systemctl restart postfix", "systemctl reload postfix", "echo \"This is a test message\" | mail -s <SUBJECT> <[email protected]>", "554 Relay access denied - the server is not going to relay. 250 OK or similar - the server is going to relay.", "smtpd_tls_cert_file = /etc/pki/tls/certs/postfix.pem smtpd_tls_key_file = /etc/pki/tls/private/postfix.key", "smtpd_tls_auth_only = yes", "systemctl reload postfix", "smtp_tls_loglevel = 1", "mydestination =", "relayhost = <[ip_address_or_hostname]>", "inet_interfaces = loopback-only", "myorigin = <relay.example.com>", "local_transport = error: local delivery disabled", "mynetworks = 127.0.0.0/8, [::1]/128", "postfix check", "systemctl restart postfix", "echo \"This is a test message\" | mail -s <SUBJECT> <[email protected]>", "<[email protected]> <[email protected]> <[email protected]> <[email protected]>", "postmap /etc/postfix/virtual", "virtual_alias_maps = hash:/etc/postfix/virtual", "systemctl reload postfix", "server_host = <ldap.example.com>", "search_base = dc= <example> ,dc= <com>", "virtual_alias_maps = ldap:/etc/postfix/ldap-aliases.cf", "postmap -q @ <example.com> ldap:/etc/postfix/ldap-aliases.cf", "systemctl reload postfix", "smtpd_sasl_auth_enable = yes broken_sasl_auth_clients = yes", "smtpd_tls_auth_only = yes", "smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination", "smtpd_sender_restrictions = reject_sender_login_mismatch", "systemctl reload postfix", "virtual_transport = lmtp:unix:/var/run/dovecot/lmtp", "mailbox_transport = lmtp:unix:/var/run/dovecot/lmtp", "systemctl reload postfix", "mailbox_transport = lmtp:inet: <dovecot_host> : <port>", "systemctl reload postfix", "service auth { unix_listener /var/spool/postfix/private/auth { mode = 0660 user = postfix group = postfix } }", "service auth { inet_listener { port = port-number } }", "auth_mechanisms = plain login", "smtpd_sasl_auth_enable = yes", "smtpd_sasl_type = dovecot", "smtpd_sasl_path = private/auth", "smtpd_sasl_path = inet: ip-address : port-number", "smtpd_sasl_security_options = noanonymous, noplaintext smtpd_sasl_tls_security_options = noanonymous" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/deploying_mail_servers/assembly_mail-transport-agent_deploying-mail-servers
Appendix B. Troubleshooting DNF modules
Appendix B. Troubleshooting DNF modules If a DNF module fails to enable, it can mean that an incorrect module is enabled. In that case, you have to resolve the dependencies manually as follows. List the enabled modules: B.1. Ruby If the Ruby module fails to enable, it can mean that an incorrect module is enabled. In that case, you have to resolve the dependencies manually as follows: List the enabled modules: If the Ruby 2.5 module has already been enabled, perform a module reset: B.2. PostgreSQL If the PostgreSQL module fails to enable, it can mean that an incorrect module is enabled. In that case, you have to resolve the dependencies manually as follows: List the enabled modules: If the PostgreSQL 10 module has already been enabled, perform a module reset: If you created a PostgreSQL 10 database, perform an upgrade: Enable the DNF modules: Install the PostgreSQL upgrade package: Perform the upgrade:
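A minimal recovery sketch, drawn from the commands in this appendix, first lists the enabled streams, resets the conflicting modules, and then enables the module that pulls in the correct dependencies. Run only the reset commands that apply to your failing module:

dnf module list --enabled
dnf module reset ruby
dnf module reset postgresql
dnf module enable satellite-capsule:el8

The postgresql-upgrade package and the postgresql-setup --upgrade step are needed only if you previously created a PostgreSQL 10 database.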
[ "dnf module list --enabled", "dnf module list --enabled", "dnf module reset ruby", "dnf module list --enabled", "dnf module reset postgresql", "dnf module enable satellite-capsule:el8", "dnf install postgresql-upgrade", "postgresql-setup --upgrade" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/installing_capsule_server/troubleshooting-dnf-modules_capsule
Administration guide
Administration guide Red Hat OpenShift Dev Spaces 3.16 Administering Red Hat OpenShift Dev Spaces 3.16 Jana Vrbkova Red Hat Developer Group Documentation Team
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.16/html/administration_guide/index
Chapter 19. Using Ansible to automate group membership in IdM
Chapter 19. Using Ansible to automate group membership in IdM Using automatic group membership, you can assign users and hosts user groups and host groups automatically, based on their attributes. For example, you can: Divide employees' user entries into groups based on the employees' manager, location, position or any other attribute. You can list all attributes by entering ipa user-add --help on the command-line. Divide hosts into groups based on their class, location, or any other attribute. You can list all attributes by entering ipa host-add --help on the command-line. Add all users or all hosts to a single global group. You can use Red Hat Ansible Engine to automate the management of automatic group membership in Identity Management (IdM). This section covers the following topics: Preparing your Ansible control node for managing IdM Using Ansible to ensure that an automember rule for an IdM user group is present Using Ansible to ensure that a condition is present in an IdM user group automember rule Using Ansible to ensure that a condition is absent in an IdM user group automember rule Using Ansible to ensure that an automember rule for an IdM group is absent Using Ansible to ensure that a condition is present in an IdM host group automember rule 19.1. Preparing your Ansible control node for managing IdM As a system administrator managing Identity Management (IdM), when working with Red Hat Ansible Engine, it is good practice to do the following: Create a subdirectory dedicated to Ansible playbooks in your home directory, for example ~/MyPlaybooks . Copy and adapt sample Ansible playbooks from the /usr/share/doc/ansible-freeipa/* and /usr/share/doc/rhel-system-roles/* directories and subdirectories into your ~/MyPlaybooks directory. Include your inventory file in your ~/MyPlaybooks directory. By following this practice, you can find all your playbooks in one place and you can run your playbooks without invoking root privileges. Note You only need root privileges on the managed nodes to execute the ipaserver , ipareplica , ipaclient , ipabackup , ipasmartcard_server and ipasmartcard_client ansible-freeipa roles. These roles require privileged access to directories and the dnf software package manager. Follow this procedure to create the ~/MyPlaybooks directory and configure it so that you can use it to store and run Ansible playbooks. Prerequisites You have installed an IdM server on your managed nodes, server.idm.example.com and replica.idm.example.com . You have configured DNS and networking so you can log in to the managed nodes, server.idm.example.com and replica.idm.example.com , directly from the control node. You know the IdM admin password. Procedure Create a directory for your Ansible configuration and playbooks in your home directory: Change into the ~/MyPlaybooks/ directory: Create the ~/MyPlaybooks/ansible.cfg file with the following content: Create the ~/MyPlaybooks/inventory file with the following content: This configuration defines two host groups, eu and us , for hosts in these locations. Additionally, this configuration defines the ipaserver host group, which contains all hosts from the eu and us groups. Optional: Create an SSH public and private key. To simplify access in your test environment, do not set a password on the private key: Copy the SSH public key to the IdM admin account on each managed node: You must enter the IdM admin password when you enter these commands. 
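To confirm that the control node can reach the managed nodes with the inventory you created, you can run an optional ad-hoc ping from the ~/MyPlaybooks/ directory. The command assumes the host groups defined in the example inventory file:

ansible -i inventory ipacluster -m ping

Every host in the ipaserver and ipareplicas groups should reply with "pong". If a host fails, review the SSH key distribution and the DNS configuration before running any playbooks.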
Additional resources Installing an Identity Management server using an Ansible playbook How to build your inventory 19.2. Using Ansible to ensure that an automember rule for an IdM user group is present The following procedure describes how to use an Ansible playbook to ensure an automember rule for an Identity Management (IdM) group exists. In the example, the presence of an automember rule is ensured for the testing_group user group. Prerequisites You know the IdM admin password. The testing_group user group exists in IdM. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to your ~/ MyPlaybooks / directory: Copy the automember-group-present.yml Ansible playbook file located in the /usr/share/doc/ansible-freeipa/playbooks/automember/ directory: Open the automember-group-present-copy.yml file for editing. Adapt the file by setting the following variables in the ipaautomember task section: Set the ipaadmin_password variable to the password of the IdM admin . Set the name variable to testing_group . Set the automember_type variable to group . Ensure that the state variable is set to present . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources See Benefits of automatic group membership and Automember rules . See Using Ansible to ensure that a condition is present in an IdM user group automember rule . See the README-automember.md file in the /usr/share/doc/ansible-freeipa/ directory. See the /usr/share/doc/ansible-freeipa/playbooks/automember directory. 19.3. Using Ansible to ensure that a specified condition is present in an IdM user group automember rule The following procedure describes how to use an Ansible playbook to ensure that a specified condition exists in an automember rule for an Identity Management (IdM) group. In the example, the presence of a UID-related condition in the automember rule is ensured for the testing_group group. By specifying the .* condition, you ensure that all future IdM users automatically become members of the testing_group . Prerequisites You know the IdM admin password. The testing_group user group and automember user group rule exist in IdM. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. 
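Optionally, you can confirm the prerequisite that the automember rule from the previous section exists by querying IdM from any enrolled host before editing the playbook:

kinit admin
ipa automember-show --type=group testing_group

The output should contain the line Automember Rule: testing_group. The same command is also useful after the playbook run to review the conditions attached to the rule.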
Procedure Navigate to your ~/ MyPlaybooks / directory: Copy the automember-hostgroup-rule-present.yml Ansible playbook file located in the /usr/share/doc/ansible-freeipa/playbooks/automember/ directory and name it, for example, automember-usergroup-rule-present.yml : Open the automember-usergroup-rule-present.yml file for editing. Adapt the file by modifying the following parameters: Rename the playbook to correspond to your use case, for example: Automember user group rule member present . Rename the task to correspond to your use case, for example: Ensure an automember condition for a user group is present . Set the following variables in the ipaautomember task section: Set the ipaadmin_password variable to the password of the IdM admin . Set the name variable to testing_group . Set the automember_type variable to group . Ensure that the state variable is set to present . Ensure that the action variable is set to member . Set the inclusive key variable to UID . Set the inclusive expression variable to . * This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Verification Log in as an IdM administrator. Add a user, for example: Additional resources See Applying automember rules to existing entries using the IdM CLI . See Benefits of automatic group membership and Automember rules . See the README-automember.md file in the /usr/share/doc/ansible-freeipa/ directory. See the /usr/share/doc/ansible-freeipa/playbooks/automember directory. 19.4. Using Ansible to ensure that a condition is absent from an IdM user group automember rule The following procedure describes how to use an Ansible playbook to ensure a condition is absent from an automember rule for an Identity Management (IdM) group. In the example, the absence of a condition in the automember rule is ensured that specifies that users whose initials are dp should be included. The automember rule is applied to the testing_group group. By applying the condition, you ensure that no future IdM user whose initials are dp becomes a member of the testing_group . Prerequisites You know the IdM admin password. The testing_group user group and automember user group rule exist in IdM. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to your ~/ MyPlaybooks / directory: Copy the automember-hostgroup-rule-absent.yml Ansible playbook file located in the /usr/share/doc/ansible-freeipa/playbooks/automember/ directory and name it, for example, automember-usergroup-rule-absent.yml : Open the automember-usergroup-rule-absent.yml file for editing. Adapt the file by modifying the following parameters: Rename the playbook to correspond to your use case, for example: Automember user group rule member absent . Rename the task to correspond to your use case, for example: Ensure an automember condition for a user group is absent . 
Set the following variables in the ipaautomember task section: Set the ipaadmin_password variable to the password of the IdM admin . Set the name variable to testing_group . Set the automember_type variable to group . Ensure that the state variable is set to absent . Ensure that the action variable is set to member . Set the inclusive key variable to initials . Set the inclusive expression variable to dp . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Verification Log in as an IdM administrator. View the automember group: The absence of an Inclusive Regex: initials=dp entry in the output confirms that the testing_group automember rule does not contain the condition specified. Additional resources See Applying automember rules to existing entries using the IdM CLI . See Benefits of automatic group membership and Automember rules . See the README-automember.md file in the /usr/share/doc/ansible-freeipa/ directory. See the /usr/share/doc/ansible-freeipa/playbooks/automember directory. 19.5. Using Ansible to ensure that an automember rule for an IdM user group is absent The following procedure describes how to use an Ansible playbook to ensure an automember rule is absent for an Identity Management (IdM) group. In the example, the absence of an automember rule is ensured for the testing_group group. Note Deleting an automember rule also deletes all conditions associated with the rule. To remove only specific conditions from a rule, see Using Ansible to ensure that a condition is absent in an IdM user group automember rule . Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to your ~/ MyPlaybooks / directory: Copy the automember-group-absent.yml Ansible playbook file located in the /usr/share/doc/ansible-freeipa/playbooks/automember/ directory: Open the automember-group-absent-copy.yml file for editing. Adapt the file by setting the following variables in the ipaautomember task section: Set the ipaadmin_password variable to the password of the IdM admin . Set the name variable to testing_group . Set the automember_type variable to group . Ensure that the state variable is set to absent . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources See Benefits of automatic group membership and Automember rules . See the README-automember.md file in the /usr/share/doc/ansible-freeipa/ directory. See the /usr/share/doc/ansible-freeipa/playbooks/automember directory. 19.6. Using Ansible to ensure that a condition is present in an IdM host group automember rule Follow this procedure to use Ansible to ensure that a condition is present in an IdM host group automember rule. 
The example describes how to ensure that hosts with the FQDN of .*.idm.example.com are members of the primary_dns_domain_hosts host group and hosts whose FQDN is .*.example.org are not members of the primary_dns_domain_hosts host group. Prerequisites You know the IdM admin password. The primary_dns_domain_hosts host group and automember host group rule exist in IdM. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to your ~/ MyPlaybooks / directory: Copy the automember-hostgroup-rule-present.yml Ansible playbook file located in the /usr/share/doc/ansible-freeipa/playbooks/automember/ directory: Open the automember-hostgroup-rule-present-copy.yml file for editing. Adapt the file by setting the following variables in the ipaautomember task section: Set the ipaadmin_password variable to the password of the IdM admin . Set the name variable to primary_dns_domain_hosts . Set the automember_type variable to hostgroup . Ensure that the state variable is set to present . Ensure that the action variable is set to member . Ensure that the inclusive key variable is set to fqdn . Set the corresponding inclusive expression variable to .*.idm.example.com . Set the exclusive key variable to fqdn . Set the corresponding exclusive expression variable to .*.example.org . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources See Applying automember rules to existing entries using the IdM CLI . See Benefits of automatic group membership and Automember rules . See the README-automember.md file in the /usr/share/doc/ansible-freeipa/ directory. See the /usr/share/doc/ansible-freeipa/playbooks/automember directory. 19.7. Additional resources Managing user accounts using Ansible playbooks Managing hosts using Ansible playbooks Managing user groups using Ansible playbooks Managing host groups using the IdM CLI
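Automember rules affect only entries created after the rule and its conditions exist. If you also want existing hosts to be evaluated against the rule configured above, you can trigger a rebuild from the IdM CLI, as described in the Applying automember rules to existing entries link; a minimal sketch:

kinit admin
ipa automember-rebuild --type=hostgroup
ipa hostgroup-show primary_dns_domain_hosts

The last command lists the member hosts, so you can verify that hosts whose FQDN matches .*.idm.example.com were added and that hosts matching .*.example.org were not.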
[ "mkdir ~/MyPlaybooks/", "cd ~/MyPlaybooks", "[defaults] inventory = /home/ your_username /MyPlaybooks/inventory [privilege_escalation] become=True", "[ipaserver] server.idm.example.com [ipareplicas] replica1.idm.example.com replica2.idm.example.com [ipacluster:children] ipaserver ipareplicas [ipacluster:vars] ipaadmin_password=SomeADMINpassword [ipaclients] ipaclient1.example.com ipaclient2.example.com [ipaclients:vars] ipaadmin_password=SomeADMINpassword", "ssh-keygen", "ssh-copy-id [email protected] ssh-copy-id [email protected]", "cd ~/ MyPlaybooks /", "cp /usr/share/doc/ansible-freeipa/playbooks/automember/automember-group-present.yml automember-group-present-copy.yml", "--- - name: Automember group present example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure group automember rule admins is present ipaautomember: ipaadmin_password: \"{{ ipaadmin_password }}\" name: testing_group automember_type: group state: present", "ansible-playbook --vault-password-file=password_file -v -i inventory automember-group-present-copy.yml", "cd ~/ MyPlaybooks /", "cp /usr/share/doc/ansible-freeipa/playbooks/automember/automember-hostgroup-rule-present.yml automember-usergroup-rule-present.yml", "--- - name: Automember user group rule member present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure an automember condition for a user group is present ipaautomember: ipaadmin_password: \"{{ ipaadmin_password }}\" name: testing_group automember_type: group state: present action: member inclusive: - key: UID expression: . *", "ansible-playbook --vault-password-file=password_file -v -i inventory automember-usergroup-rule-present.yml", "kinit admin", "ipa user-add user101 --first user --last 101 ----------------------- Added user \"user101\" ----------------------- User login: user101 First name: user Last name: 101 Member of groups: ipausers, testing_group", "cd ~/ MyPlaybooks /", "cp /usr/share/doc/ansible-freeipa/playbooks/automember/automember-hostgroup-rule-absent.yml automember-usergroup-rule-absent.yml", "--- - name: Automember user group rule member absent hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure an automember condition for a user group is absent ipaautomember: ipaadmin_password: \"{{ ipaadmin_password }}\" name: testing_group automember_type: group state: absent action: member inclusive: - key: initials expression: dp", "ansible-playbook --vault-password-file=password_file -v -i inventory automember-usergroup-rule-absent.yml", "kinit admin", "ipa automember-show --type=group testing_group Automember Rule: testing_group", "cd ~/ MyPlaybooks /", "cp /usr/share/doc/ansible-freeipa/playbooks/automember/automember-group-absent.yml automember-group-absent-copy.yml", "--- - name: Automember group absent example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure group automember rule admins is absent ipaautomember: ipaadmin_password: \"{{ ipaadmin_password }}\" name: testing_group automember_type: group state: absent", "ansible-playbook --vault-password-file=password_file -v -i inventory automember-group-absent.yml", "cd ~/ MyPlaybooks /", "cp /usr/share/doc/ansible-freeipa/playbooks/automember/automember-hostgroup-rule-present.yml automember-hostgroup-rule-present-copy.yml", "--- - name: Automember user group rule member present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure an 
automember condition for a user group is present ipaautomember: ipaadmin_password: \"{{ ipaadmin_password }}\" name: primary_dns_domain_hosts automember_type: hostgroup state: present action: member inclusive: - key: fqdn expression: .*.idm.example.com exclusive: - key: fqdn expression: .*.example.org", "ansible-playbook --vault-password-file=password_file -v -i inventory automember-hostgroup-rule-present-copy.yml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/using-ansible-to-automate-group-membership-in-idm_configuring-and-managing-idm
8.2. Moving Resources Due to Failure
8.2. Moving Resources Due to Failure When you create a resource, you can configure the resource so that it will move to a new node after a defined number of failures by setting the migration-threshold option for that resource. Once the threshold has been reached, this node will no longer be allowed to run the failed resource until: The administrator manually resets the resource's failcount using the pcs resource failcount command. The resource's failure-timeout value is reached. The value of migration-threshold is set to INFINITY by default. INFINITY is defined internally as a very large but finite number. A value of 0 disables the migration-threshold feature. Note Setting a migration-threshold for a resource is not the same as configuring a resource for migration, in which the resource moves to another location without loss of state. The following example adds a migration threshold of 10 to the resource named dummy_resource , which indicates that the resource will move to a new node after 10 failures. You can add a migration threshold to the defaults for the whole cluster with the following command. To determine the resource's current failure status and limits, use the pcs resource failcount command. There are two exceptions to the migration threshold concept; they occur when a resource either fails to start or fails to stop. If the cluster property start-failure-is-fatal is set to true (which is the default), start failures cause the failcount to be set to INFINITY and thus always cause the resource to move immediately. For information on the start-failure-is-fatal option, see Table 12.1, "Cluster Properties" . Stop failures are slightly different and crucial. If a resource fails to stop and STONITH is enabled, then the cluster will fence the node in order to be able to start the resource elsewhere. If STONITH is not enabled, then the cluster has no way to continue and will not try to start the resource elsewhere, but will try to stop it again after the failure timeout.
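For example, after setting the threshold you can inspect and clear the failure count for the dummy_resource resource used in this section with the pcs resource failcount command:

pcs resource failcount show dummy_resource
pcs resource failcount reset dummy_resource

The show subcommand reports how many failures have accumulated on each node, and reset clears the count so that the node is again allowed to run the resource without waiting for the failure-timeout to expire.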
[ "pcs resource meta dummy_resource migration-threshold=10", "pcs resource defaults migration-threshold=10" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-failure_migration-haar
Chapter 64. region
Chapter 64. region This chapter describes the commands under the region command. 64.1. region create Create new region Usage: Table 64.1. Positional arguments Value Summary <region-id> New region id Table 64.2. Command arguments Value Summary -h, --help Show this help message and exit --parent-region <region-id> Parent region id --description <description> New region description Table 64.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 64.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 64.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 64.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 64.2. region delete Delete region(s) Usage: Table 64.7. Positional arguments Value Summary <region-id> Region id(s) to delete Table 64.8. Command arguments Value Summary -h, --help Show this help message and exit 64.3. region list List regions Usage: Table 64.9. Command arguments Value Summary -h, --help Show this help message and exit --parent-region <region-id> Filter by parent region id Table 64.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 64.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 64.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 64.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 64.4. region set Set region properties Usage: Table 64.14. Positional arguments Value Summary <region-id> Region to modify Table 64.15. Command arguments Value Summary -h, --help Show this help message and exit --parent-region <region-id> New parent region id --description <description> New region description 64.5. region show Display region details Usage: Table 64.16. Positional arguments Value Summary <region-id> Region to display Table 64.17. Command arguments Value Summary -h, --help Show this help message and exit Table 64.18. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 64.19. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 64.20. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 64.21. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
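As an illustrative sketch, the following sequence creates a region, lists all regions, updates the parent, and displays the result. The region IDs and description are example values, and the sequence assumes that a region named RegionOne already exists:

openstack region create --description "West coast datacenter" RegionTwo
openstack region list
openstack region set --parent-region RegionOne RegionTwo
openstack region show RegionTwo

The create, list, and show commands also accept the formatter options listed above, for example -f json for machine-readable output.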
[ "openstack region create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--parent-region <region-id>] [--description <description>] <region-id>", "openstack region delete [-h] <region-id> [<region-id> ...]", "openstack region list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--parent-region <region-id>]", "openstack region set [-h] [--parent-region <region-id>] [--description <description>] <region-id>", "openstack region show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <region-id>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/region
Chapter 8. Apicurio Registry configuration reference
Chapter 8. Apicurio Registry configuration reference This chapter provides reference information on the configuration options that are available for Apicurio Registry. Section 8.1, "Apicurio Registry configuration options" Additional resources For details on setting configuration options by using the Core Registry API, see the /admin/config/properties endpoint in the Apicurio Registry REST API documentation . For details on client configuration options for Kafka serializers and deserializers, see the Red Hat build of Apicurio Registry User Guide . 8.1. Apicurio Registry configuration options The following Apicurio Registry configuration options are available for each component category: 8.1.1. api Table 8.1. api configuration options Name Type Default Available from Description registry.api.errors.include-stack-in-response boolean false 2.1.4.Final Include stack trace in errors responses registry.disable.apis optional<list<string>> 2.0.0.Final Disable APIs 8.1.2. auth Table 8.2. auth configuration options Name Type Default Available from Description registry.auth.admin-override.claim string org-admin 2.1.0.Final Auth admin override claim registry.auth.admin-override.claim-value string true 2.1.0.Final Auth admin override claim value registry.auth.admin-override.enabled boolean false 2.1.0.Final Auth admin override enabled registry.auth.admin-override.from string token 2.1.0.Final Auth admin override from registry.auth.admin-override.role string sr-admin 2.1.0.Final Auth admin override role registry.auth.admin-override.type string role 2.1.0.Final Auth admin override type registry.auth.anonymous-read-access.enabled boolean [dynamic] false 2.1.0.Final Anonymous read access registry.auth.audit.log.prefix string audit 2.2.6 Prefix used for application audit logging. registry.auth.authenticated-read-access.enabled boolean [dynamic] false 2.1.4.Final Authenticated read access registry.auth.basic-auth-client-credentials.cache-expiration integer 10 2.2.6.Final Default client credentials token expiration time. registry.auth.basic-auth-client-credentials.cache-expiration-offset integer 10 2.5.9.Final Client credentials token expiration offset from JWT expiration. registry.auth.basic-auth-client-credentials.enabled boolean [dynamic] false 2.1.0.Final Enable basic auth client credentials registry.auth.basic-auth.scope optional<string> 2.5.0.Final Client credentials scope. registry.auth.client-id string 2.0.0.Final Client identifier used by the server for authentication. registry.auth.client-secret optional<string> 2.1.0.Final Client secret used by the server for authentication. 
registry.auth.enabled boolean false 2.0.0.Final Enable auth registry.auth.owner-only-authorization boolean [dynamic] false 2.0.0.Final Artifact owner-only authorization registry.auth.owner-only-authorization.limit-group-access boolean [dynamic] false 2.1.0.Final Artifact group owner-only authorization registry.auth.role-based-authorization boolean false 2.1.0.Final Enable role based authorization registry.auth.role-source string token 2.1.0.Final Auth roles source registry.auth.role-source.header.name string 2.4.3.Final Header authorization name registry.auth.roles.admin string sr-admin 2.0.0.Final Auth roles admin registry.auth.roles.developer string sr-developer 2.1.0.Final Auth roles developer registry.auth.roles.readonly string sr-readonly 2.1.0.Final Auth roles readonly registry.auth.tenant-owner-is-admin.enabled boolean true 2.1.0.Final Auth tenant owner admin enabled registry.auth.token.endpoint string 2.1.0.Final Authentication server url. 8.1.3. cache Table 8.3. cache configuration options Name Type Default Available from Description registry.config.cache.enabled boolean true 2.2.2.Final Registry cache enabled 8.1.4. ccompat Table 8.4. ccompat configuration options Name Type Default Available from Description registry.ccompat.group-concat.enabled boolean false 2.6.2.Final Enable group support via concatenation in subject (compatibility API) registry.ccompat.group-concat.separator string : 2.6.2.Final Separator to use when group concatenation is enabled (compatibility API) registry.ccompat.legacy-id-mode.enabled boolean [dynamic] false 2.0.2.Final Legacy ID mode (compatibility API) registry.ccompat.max-subjects integer [dynamic] 1000 2.4.2.Final Maximum number of Subjects returned (compatibility API) registry.ccompat.use-canonical-hash boolean [dynamic] false 2.3.0.Final Canonical hash mode (compatibility API) 8.1.5. download Table 8.5. download configuration options Name Type Default Available from Description registry.download.href.ttl long [dynamic] 30 2.1.2.Final Download link expiry 8.1.6. events Table 8.6. events configuration options Name Type Default Available from Description registry.events.ksink optional<string> 2.0.0.Final Events Kafka sink enabled 8.1.7. health Table 8.7. 
health configuration options Name Type Default Available from Description registry.liveness.errors.ignored optional<list<string>> 1.2.3.Final Ignored liveness errors registry.metrics.PersistenceExceptionLivenessCheck.counterResetWindowDurationSec integer 60 1.0.2.Final Counter reset window duration of persistence liveness check registry.metrics.PersistenceExceptionLivenessCheck.disableLogging boolean false 2.0.0.Final Disable logging of persistence liveness check registry.metrics.PersistenceExceptionLivenessCheck.errorThreshold integer 1 1.0.2.Final Error threshold of persistence liveness check registry.metrics.PersistenceExceptionLivenessCheck.statusResetWindowDurationSec integer 300 1.0.2.Final Status reset window duration of persistence liveness check registry.metrics.PersistenceTimeoutReadinessCheck.counterResetWindowDurationSec integer 60 1.0.2.Final Counter reset window duration of persistence readiness check registry.metrics.PersistenceTimeoutReadinessCheck.errorThreshold integer 5 1.0.2.Final Error threshold of persistence readiness check registry.metrics.PersistenceTimeoutReadinessCheck.statusResetWindowDurationSec integer 300 1.0.2.Final Status reset window duration of persistence readiness check registry.metrics.PersistenceTimeoutReadinessCheck.timeoutSec integer 15 1.0.2.Final Timeout of persistence readiness check registry.metrics.ResponseErrorLivenessCheck.counterResetWindowDurationSec integer 60 1.0.2.Final Counter reset window duration of response liveness check registry.metrics.ResponseErrorLivenessCheck.disableLogging boolean false 2.0.0.Final Disable logging of response liveness check registry.metrics.ResponseErrorLivenessCheck.errorThreshold integer 1 1.0.2.Final Error threshold of response liveness check registry.metrics.ResponseErrorLivenessCheck.statusResetWindowDurationSec integer 300 1.0.2.Final Status reset window duration of response liveness check registry.metrics.ResponseTimeoutReadinessCheck.counterResetWindowDurationSec instance<integer> 60 1.0.2.Final Counter reset window duration of response readiness check registry.metrics.ResponseTimeoutReadinessCheck.errorThreshold instance<integer> 1 1.0.2.Final Error threshold of response readiness check registry.metrics.ResponseTimeoutReadinessCheck.statusResetWindowDurationSec instance<integer> 300 1.0.2.Final Status reset window duration of response readiness check registry.metrics.ResponseTimeoutReadinessCheck.timeoutSec instance<integer> 10 1.0.2.Final Timeout of response readiness check registry.storage.metrics.cache.check-period long 30000 2.1.0.Final Storage metrics cache check period 8.1.8. import Table 8.8. import configuration options Name Type Default Available from Description registry.import.url optional<url> 2.1.0.Final The import URL 8.1.9. kafka Table 8.9. kafka configuration options Name Type Default Available from Description registry.events.kafka.topic optional<string> 2.0.0.Final Events Kafka topic registry.events.kafka.topic-partition optional<integer> 2.0.0.Final Events Kafka topic partition 8.1.10. limits Table 8.10. 
limits configuration options Name Type Default Available from Description registry.limits.config.max-artifact-labels long -1 2.2.3.Final Max artifact labels registry.limits.config.max-artifact-properties long -1 2.1.0.Final Max artifact properties registry.limits.config.max-artifacts long -1 2.1.0.Final Max artifacts registry.limits.config.max-description-length long -1 2.1.0.Final Max artifact description length registry.limits.config.max-label-size long -1 2.1.0.Final Max artifact label size registry.limits.config.max-name-length long -1 2.1.0.Final Max artifact name length registry.limits.config.max-property-key-size long -1 2.1.0.Final Max artifact property key size registry.limits.config.max-property-value-size long -1 2.1.0.Final Max artifact property value size registry.limits.config.max-requests-per-second long -1 2.2.3.Final Max artifact requests per second registry.limits.config.max-schema-size-bytes long -1 2.2.3.Final Max schema size (bytes) registry.limits.config.max-total-schemas long -1 2.1.0.Final Max total schemas registry.limits.config.max-versions-per-artifact long -1 2.1.0.Final Max versions per artifacts registry.storage.metrics.cache.max-size long 1000 2.4.1.Final Storage metrics cache max size. 8.1.11. log Table 8.11. log configuration options Name Type Default Available from Description quarkus.log.level string 2.0.0.Final Log level 8.1.12. mt Table 8.12. mt configuration options Name Type Default Available from Description registry.enable.multitenancy boolean false 2.0.0.Final Enable multitenancy registry.enable.multitenancy.standalone boolean false 2.5.0.Final Enable Standalone Multitenancy mode. In this mode, Registry provides basic multi-tenancy features, without dependencies on additional components to manage tenants and their metadata. A new tenant is simply created as soon as a tenant ID is extracted from the request for the first time. The tenant IDs must be managed externally, and tenants can be effectively deleted by deleting their data. 
registry.multitenancy.authorization.enabled boolean true 2.1.0.Final Enable multitenancy authorization registry.multitenancy.reaper.every optional<string> 2.1.0.Final Multitenancy reaper every registry.multitenancy.reaper.max-tenants-reaped int 100 2.1.0.Final Multitenancy reaper max tenants reaped registry.multitenancy.reaper.period-seconds long 10800 2.1.0.Final Multitenancy reaper period seconds registry.multitenancy.tenant.token-claim.names list<string> 2.1.0.Final Token claims used to resolve the tenant id registry.multitenancy.types.context-path.base-path string t 2.1.0.Final Multitenancy context path type base path registry.multitenancy.types.context-path.enabled boolean true 2.1.0.Final Enable multitenancy context path type registry.multitenancy.types.request-header.enabled boolean true 2.1.0.Final Enable multitenancy request header type registry.multitenancy.types.request-header.name string X-Tenant-Id 2.1.0.Final Multitenancy request header type name registry.multitenancy.types.subdomain.enabled boolean false 2.1.0.Final Enable multitenancy subdomain type registry.multitenancy.types.subdomain.header-name string Host 2.1.0.Final Multitenancy subdomain type header name registry.multitenancy.types.subdomain.location string header 2.1.0.Final Multitenancy subdomain type location registry.multitenancy.types.subdomain.pattern string (\w[\w\d\-]*)\.localhost\.local 2.1.0.Final Multitenancy subdomain type pattern registry.multitenancy.types.token-claims.enabled boolean false 2.1.0.Final Enable multitenancy request header type registry.organization-id.claim-name list<string> 2.1.0.Final Organization ID claim name registry.tenant.manager.auth.client-id optional<string> 2.1.0.Final Tenant manager auth client ID registry.tenant.manager.auth.client-secret optional<string> 2.1.0.Final Tenant manager auth client secret registry.tenant.manager.auth.enabled optional<boolean> 2.1.0.Final Tenant manager auth enabled registry.tenant.manager.auth.token.expiration.reduction.ms optional<long> 2.2.0.Final Tenant manager auth token expiration reduction ms registry.tenant.manager.auth.url.configured optional<string> 2.1.0.Final Tenant manager auth url configured registry.tenant.manager.ssl.ca.path optional<string> 2.2.0.Final Tenant manager SSL Ca path registry.tenant.manager.url optional<string> 2.0.0.Final Tenant manager URL registry.tenants.context.cache.check-period long 60000 2.1.0.Final Tenants context cache check period registry.tenants.context.cache.max-size long 1000 2.4.1.Final Tenants context cache max size 8.1.13. redirects Table 8.13. redirects configuration options Name Type Default Available from Description registry.enable-redirects boolean 2.1.2.Final Enable redirects registry.redirects map<string, string> 2.1.2.Final Registry redirects registry.url.override.host optional<string> 2.5.0.Final Override the hostname used for generating externally-accessible URLs. The host and port overrides are useful when deploying Registry with HTTPS passthrough Ingress or Route. In cases like these, the request URL (and port) that is then re-used for redirection does not belong to actual external URL used by the client, because the request is proxied. The redirection then fails because the target URL is not reachable. registry.url.override.port optional<integer> 2.5.0.Final Override the port used for generating externally-accessible URLs. 8.1.14. rest Table 8.14. 
rest configuration options Name Type Default Available from Description registry.rest.artifact.deletion.enabled boolean [dynamic] false 2.4.2-SNAPSHOT Enables artifact version deletion registry.rest.artifact.download.maxSize int 1000000 2.2.6-SNAPSHOT Max size of the artifact allowed to be downloaded from URL registry.rest.artifact.download.skipSSLValidation boolean false 2.2.6-SNAPSHOT Skip SSL validation when downloading artifacts from URL 8.1.15. store Table 8.15. store configuration options Name Type Default Available from Description artifacts.skip.disabled.latest boolean true 2.4.2-SNAPSHOT Skip artifact versions with DISABLED state when retrieving latest artifact version quarkus.datasource.db-kind string postgresql 2.0.0.Final Datasource Db kind quarkus.datasource.jdbc.url string 2.1.0.Final Datasource jdbc URL registry.sql.init boolean true 2.0.0.Final SQL init 8.1.16. ui Table 8.16. ui configuration options Name Type Default Available from Description quarkus.oidc.tenant-enabled boolean false 2.0.0.Final UI OIDC tenant enabled registry.ui.config.apiUrl string 1.3.0.Final UI APIs URL registry.ui.config.auth.oidc.client-id string none 2.2.6.Final UI auth OIDC client ID registry.ui.config.auth.oidc.redirect-url string none 2.2.6.Final UI auth OIDC redirect URL registry.ui.config.auth.oidc.url string none 2.2.6.Final UI auth OIDC URL registry.ui.config.auth.type string none 2.2.6.Final UI auth type registry.ui.config.uiCodegenEnabled boolean true 2.4.2.Final UI codegen enabled registry.ui.config.uiContextPath string /ui/ 2.1.0.Final UI context path registry.ui.features.readOnly boolean [dynamic] false 1.2.0.Final UI read-only mode registry.ui.features.settings boolean false 2.2.2.Final UI features settings registry.ui.root string 2.3.0.Final Overrides the UI root context (useful when relocating the UI context using an inbound proxy)
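Most of the options listed above follow standard Quarkus (MicroProfile Config) conventions, so they can usually be supplied either as JVM system properties or as environment variables with the dots replaced by underscores and the name upper-cased. The following sketch is illustrative only; the option values and the runner JAR name are assumptions, not recommendations:

# Illustrative only: equivalent of registry.limits.config.max-total-schemas=1000
export REGISTRY_LIMITS_CONFIG_MAX_TOTAL_SCHEMAS=1000
# Illustrative only: equivalent of registry.ui.features.readOnly=true
export REGISTRY_UI_FEATURES_READONLY=true

# The same options can be passed as system properties when running the registry
# runner JAR directly (JAR name below is a placeholder):
java -Dregistry.limits.config.max-total-schemas=1000 \
     -Dregistry.ui.features.readOnly=true \
     -jar <registry-runner>.jar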
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apicurio_registry/2.6/html/installing_and_deploying_apicurio_registry_on_openshift/registry-config-reference_registry
7.125. mdadm
7.125. mdadm 7.125.1. RHBA-2015:1255 - mdadm bug fix and enhancement update Updated mdadm packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The mdadm packages contain a utility for creating, managing, and monitoring Linux multiple disk (MD) devices. Bug Fixes BZ# 1146536 Previously, installing the mdadm packages also installed a redundant udev rule file. With this update, the spec file of the mdadm packages has been adjusted to prevent the redundant rule file from being installed. BZ# 1159399 Prior to this update, when the "AUTO" keyword was configured in the mdadm.conf file, the mdadm utility did not honor it. The parsing of "AUTO" has been corrected, and mdadm now respects this keyword as expected. BZ# 1146994 Prior to this update, when running an Intel Matrix Storage Manager (IMSM) volume as a non-root user, a race condition could in some cases prevent the volume from being assembled. With this update, the mdadm packages have been fixed and this race condition no longer occurs, allowing the array to be assembled as expected. BZ# 1211564 Previously, mdadm was unintentionally capable of creating more Intel Matrix Storage Manager (IMSM) RAID volumes than was allowed by the "Max volumes" option in the mdadm configuration. This update corrects the bug, and attempting to create more IMSM RAID volumes than "Max volumes" permits now generates an error and the volumes are not created. Enhancement BZ# 1211500 Intel Matrix Storage Manager (IMSM) now supports SATA and Non-Volatile Memory Express (NVMe) spanning. Users of mdadm are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
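As a rough illustration of the "AUTO" keyword mentioned for BZ# 1159399, an /etc/mdadm.conf entry along the following lines restricts automatic assembly to particular metadata formats; the exact policy shown is an example, not a recommendation:

# Example /etc/mdadm.conf fragment (illustrative): auto-assemble IMSM containers
# and v1.x-metadata arrays, ignore everything else.
AUTO +imsm +1.x -all

# Verify which arrays mdadm would assemble under this policy:
mdadm --assemble --scan --verbose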
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-mdadm
Chapter 2. What is the Dashboard Builder?
Chapter 2. What is the Dashboard Builder? The JBoss Dashboard Builder is an open source dashboard and reporting tool that allows you to do the following: Configure dashboards visually. Create graphical representations of KPIs (key performance indicators). Create definitions of interactive report tables. Filter and search data from both in-memory and database sources. Create process execution metrics dashboards. Extract data from external systems. Configure access to systems.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/using_the_dashboard_builder/what_is_the_dashboard_builder
Chapter 2. Differences from upstream OpenJDK 11
Chapter 2. Differences from upstream OpenJDK 11 Red Hat build of OpenJDK in Red Hat Enterprise Linux (RHEL) contains a number of structural changes from the upstream distribution of OpenJDK. The Microsoft Windows version of Red Hat build of OpenJDK attempts to follow RHEL updates as closely as possible. The following list details the most notable Red Hat build of OpenJDK 11 changes: FIPS support. Red Hat build of OpenJDK 11 automatically detects whether RHEL is in FIPS mode and, if so, configures itself to operate in that mode. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Cryptographic policy support. Red Hat build of OpenJDK 11 obtains the list of enabled cryptographic algorithms and key size constraints from RHEL. These configuration components are used by the Transport Layer Security (TLS) encryption protocol, certificate path validation, and any signed JARs. You can set different security profiles to balance safety and compatibility. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Red Hat build of OpenJDK on RHEL dynamically links against native libraries such as zlib for archive format support and libjpeg-turbo, libpng, and giflib for image support. It also dynamically links against HarfBuzz and FreeType for font rendering and management. The src.zip file includes the source for all the JAR libraries shipped with Red Hat build of OpenJDK. Red Hat build of OpenJDK on RHEL uses system-wide timezone data files as a source for timezone information. Red Hat build of OpenJDK on RHEL uses system-wide CA certificates. Red Hat build of OpenJDK on Microsoft Windows includes the latest available timezone data from RHEL. Red Hat build of OpenJDK on Microsoft Windows uses the latest available CA certificates from RHEL. Additional resources For more information about detecting if a system is in FIPS mode, see the Improve system FIPS detection example on the Red Hat RHEL Planning Jira. For more information about cryptographic policies, see Using system-wide cryptographic policies.
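As a quick way to inspect the host-side state that these FIPS and cryptographic policy features pick up, the following commands can be used on RHEL 8 and later; they are shown as an illustration and output varies by system:

# Check whether the host is in FIPS mode (Red Hat build of OpenJDK configures
# itself accordingly on RHEL)
fips-mode-setup --check

# Show the active system-wide cryptographic policy that the JDK inherits
update-crypto-policies --show

# If needed, opt a single Java process out of the system security properties
java -Djava.security.disableSystemPropertiesFile=true -version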
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.13/rn-openjdk-diff-from-upstream
D.6. Outline Thumbnail View
D.6. Outline Thumbnail View The Outline View also offers you a way to view a thumbnail sketch of your diagram regardless of its size. To view this diagram thumbnail from the Outline panel, click the Diagram Overview button at the top of the view. The diagram overview displays in the Outline View . Figure D.7. Outline View The view contains a thumbnail of your entire diagram. The shaded portion represents the portion visible in the Diagram Editor view. To move to a specific portion of your diagram, click the shaded area and drag to the position you want displayed in the Diagram Editor view. Figure D.8.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/outline_thumbnail_view
Disconnected installation mirroring
Disconnected installation mirroring OpenShift Container Platform 4.12 Mirroring the installation container images Red Hat OpenShift Documentation Team
[ "./mirror-registry install --quayHostname <host_example_com> --quayRoot <example_directory_name>", "podman login -u init -p <password> <host_example_com>:8443> --tls-verify=false 1", "sudo ./mirror-registry upgrade -v", "sudo ./mirror-registry upgrade --quayHostname <host_example_com> --quayRoot <example_directory_name> --quayStorage <example_directory_name>/quay-storage -v", "sudo ./mirror-registry upgrade --sqliteStorage <example_directory_name>/sqlite-storage -v", "./mirror-registry install -v --targetHostname <host_example_com> --targetUsername <example_user> -k ~/.ssh/my_ssh_key --quayHostname <host_example_com> --quayRoot <example_directory_name>", "podman login -u init -p <password> <host_example_com>:8443> --tls-verify=false 1", "./mirror-registry upgrade -v --targetHostname <remote_host_url> --targetUsername <user_name> -k ~/.ssh/my_ssh_key", "./mirror-registry upgrade -v --targetHostname <remote_host_url> --targetUsername <user_name> -k ~/.ssh/my_ssh_key --sqliteStorage <example_directory_name>/quay-storage", "./mirror-registry install --quayHostname <host_example_com> --quayRoot <example_directory_name>", "export QUAY=/USDHOME/quay-install", "cp ~/ssl.crt USDQUAY/quay-config", "cp ~/ssl.key USDQUAY/quay-config", "systemctl --user restart quay-app", "./mirror-registry uninstall -v --quayRoot <example_directory_name>", "sudo systemctl status <service>", "systemctl --user status <service>", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1", "{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs=", "\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },", "{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "OCP_RELEASE=<release_version>", "LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'", "LOCAL_REPOSITORY='<local_repository_name>'", "PRODUCT_REPO='openshift-release-dev'", "LOCAL_SECRET_JSON='<path_to_pull_secret>'", "RELEASE_NAME=\"ocp-release\"", "ARCHITECTURE=<server_architecture>", "REMOVABLE_MEDIA_PATH=<path> 1", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror 
\"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc adm release extract -a USD{LOCAL_SECRET_JSON} --icsp-file=<file> \\ --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}\"", "oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"", "openshift-install", "podman login registry.redhat.io", "REG_CREDS=USD{XDG_RUNTIME_DIR}/containers/auth.json", "podman login <mirror_registry>", "oc adm catalog mirror <index_image> \\ 1 <mirror_registry>:<port>[/<repository>] \\ 2 [-a USD{REG_CREDS}] \\ 3 [--insecure] \\ 4 [--index-filter-by-os='<platform>/<arch>'] \\ 5 [--manifests-only] 6", "src image has index label for database path: /database/index.db using database path mapping: /database/index.db:/tmp/153048078 wrote database to /tmp/153048078 1 wrote mirroring manifests to manifests-redhat-operator-index-1614211642 2", "oc adm catalog mirror <index_image> \\ 1 file:///local/index \\ 2 -a USD{REG_CREDS} \\ 3 --insecure \\ 4 --index-filter-by-os='<platform>/<arch>' 5", "info: Mirroring completed in 5.93s (5.915MB/s) wrote mirroring manifests to manifests-my-index-1614985528 1 To upload local images to a registry, run: oc adm catalog mirror file://local/index/myrepo/my-index:v1 REGISTRY/REPOSITORY 2", "podman login <mirror_registry>", "oc adm catalog mirror file://local/index/<repository>/<index_image>:<tag> \\ 1 <mirror_registry>:<port>[/<repository>] \\ 2 -a USD{REG_CREDS} \\ 3 --insecure \\ 4 --index-filter-by-os='<platform>/<arch>' 5", "oc adm catalog mirror <mirror_registry>:<port>/<index_image> <mirror_registry>:<port>[/<repository>] --manifests-only \\ 1 [-a USD{REG_CREDS}] [--insecure]", "manifests-<index_image_name>-<random_number>", "manifests-index/<repository>/<index_image_name>-<random_number>", "tar xvzf oc-mirror.tar.gz", "chmod +x oc-mirror", "sudo mv oc-mirror /usr/local/bin/.", "oc mirror help", "cat ./pull-secret | jq . 
> <path>/<pull_secret_file_in_json> 1", "{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "mkdir -p <directory_name>", "cp <path>/<pull_secret_file_in_json> <directory_name>/<auth_file>", "echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs=", "\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },", "{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "oc mirror init --registry example.com/mirror/oc-mirror-metadata > imageset-config.yaml 1", "kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 4 1 storageConfig: 2 registry: imageURL: example.com/mirror/oc-mirror-metadata 3 skipTLS: false mirror: platform: channels: - name: stable-4.12 4 type: ocp graph: true 5 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12 6 packages: - name: serverless-operator 7 channels: - name: stable 8 additionalImages: - name: registry.redhat.io/ubi8/ubi:latest 9 helm: {}", "oc mirror --config=./imageset-config.yaml \\ 1 docker://registry.example:5000 2", "oc mirror --config=./imageset-config.yaml \\ 1 file://<path_to_output_directory> 2", "cd <path_to_output_directory>", "ls", "mirror_seq1_000000.tar", "oc mirror --from=./mirror_seq1_000000.tar \\ 1 docker://registry.example:5000 2", "oc apply -f ./oc-mirror-workspace/results-1639608409/", "oc apply -f ./oc-mirror-workspace/results-1639608409/release-signatures/", "oc get imagecontentsourcepolicy", "oc get catalogsource -n openshift-marketplace", "oc mirror --config=./imageset-config.yaml \\ 1 docker://registry.example:5000 \\ 2 --dry-run 3", "Checking push permissions for registry.example:5000 Creating directory: oc-mirror-workspace/src/publish Creating directory: oc-mirror-workspace/src/v2 Creating directory: oc-mirror-workspace/src/charts Creating directory: oc-mirror-workspace/src/release-signatures No metadata detected, creating new workspace wrote mirroring manifests to oc-mirror-workspace/operators.1658342351/manifests-redhat-operator-index info: Planning completed in 31.48s info: Dry run complete Writing image mapping to oc-mirror-workspace/mapping.txt", "cd oc-mirror-workspace/", "kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12 packages: - name: aws-load-balancer-operator", "oc mirror --config=./imageset-config.yaml \\ 1 --use-oci-feature \\ 2 --oci-feature-action=copy \\ 3 oci://my-oci-catalog 4", "[[registry]] location = \"registry.redhat.io:5000\" insecure = false blocked = false mirror-by-digest-only = true prefix = \"\" [[registry.mirror]] location = \"preprod-registry.example.com\" insecure = false", "ls -l", "my-oci-catalog 1 oc-mirror-workspace 2 olm_artifacts 3", "kind: 
ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 mirror: operators: - catalog: oci:///home/user/oc-mirror/my-oci-catalog/redhat-operator-index 1 packages: - name: aws-load-balancer-operator", "oc mirror --config=./imageset-config.yaml \\ 1 --use-oci-feature \\ 2 --oci-feature-action=mirror \\ 3 docker://registry.example:5000 4", "additionalImages: - name: registry.redhat.io/ubi8/ubi:latest", "local: - name: podinfo path: /test/podinfo-5.0.0.tar.gz", "repositories: - name: podinfo url: https://example.github.io/podinfo charts: - name: podinfo version: 5.0.0", "operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12 packages: - name: elasticsearch-operator minVersion: '2.4.0'", "operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12 packages: - name: elasticsearch-operator minVersion: '5.2.3-31'", "architectures: - amd64 - arm64", "channels: - name: stable-4.10 - name: stable-4.12", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: platform: channels: - name: stable-4.12 minVersion: 4.11.37 maxVersion: 4.12.15 shortestPath: true", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: platform: channels: - name: stable-4.10 minVersion: 4.10.10", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12 packages: - name: rhacs-operator channels: - name: stable minVersion: 4.0.1", "kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 storageConfig: registry: imageURL: mylocalregistry/ocp-mirror/openshift4 skipTLS: false mirror: platform: channels: - name: stable-4.12 type: ocp graph: true operators: - catalog: registry.redhat.io/redhat/certified-operator-index:v4.12 packages: - name: nutanixcsioperator channels: - name: stable additionalImages: - name: registry.redhat.io/ubi9/ubi:latest", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12 packages: - name: elasticsearch-operator channels: - name: stable-5.7 - name: stable", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12 full: true", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration archiveSize: 4 storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: platform: architectures: - \"s390x\" channels: - name: stable-4.12 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12 helm: repositories: - name: redhat-helm-charts url: https://raw.githubusercontent.com/redhat-developer/redhat-helm-charts/master charts: - name: ibm-mongodb-enterprise-helm version: 0.2.0 additionalImages: - name: registry.redhat.io/ubi9/ubi:latest" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/disconnected_installation_mirroring/index
9.3. Managing Public SSH Keys for Users
9.3. Managing Public SSH Keys for Users OpenSSH uses public-private key pairs to authenticate users. A user attempts to access some network resource and presents its key pair. The first time the user authenticates, the administrator on the target machine has to approve the request manually. The machine then stores the user's public key in an authorized_keys file. Any time that the user attempts to access the resource again, the machine simply checks its authorized_keys file and then grants access automatically to approved users. There are a couple of problems with this system: SSH keys have to be distributed manually and separately to all machines in an environment. Administrators have to approve user keys to add them to the configuration, but it is difficult to verify either the user or key issuer properly, which can create security problems. On Red Hat Enterprise Linux, the System Security Services Daemon (SSSD) can be configured to cache and retrieve user SSH keys so that applications and services only have to look in one location for user keys. Because SSSD can use Identity Management as one of its identity information providers, Identity Management provides a universal and centralized repository of keys. Administrators do not need to worry about distributing, updating, or verifying user SSH keys. 9.3.1. About the SSH Key Format When keys are uploaded to the IdM entry, the key format can be either an OpenSSH-style key or a raw RFC 4253-style blob . Any RFC 4253-style key is automatically converted into an OpenSSH-style key before it is imported and saved into the IdM LDAP server. The IdM server can identify the type of key, such as an RSA or DSA key, from the uploaded key blob. However, in a key file such as id_rsa.pub , a key entry is identified by its type, then the key itself, and then an additional comment or identifier. For example, for an RSA key associated with a specific hostname: All three parts from the key file can be uploaded to and viewed for the user entry, or only the key itself can be uploaded. 9.3.2. Uploading User SSH Keys Through the Web UI Generate a user key. For example, using the OpenSSH tools: Copy the public key from the key file. The full key entry has the form type key== comment . Only the key== is required, but the entire entry can be stored. Open the Identity tab, and select the Users subtab. Click the name of the user to edit. In the Account Settings area of the Settings tab, click the SSH public keys: Add link. Click the Add link by the SSH public keys field. Paste in the public key for the user, and click the Set button. The SSH public keys field now shows New: key set . Clicking the Show/Set key link opens the submitted key. To upload multiple keys, click the Add link below the list of public keys, and upload the other keys. When all the keys have been submitted, click the Update link at the top of the user's page to save the changes. When the public key is saved, the entry is displayed as the key fingerprint, the comment (if one was included), and the key type [2] . Figure 9.1. Saved Public Key After uploading the user keys, configure SSSD to use Identity Management as one of its identity domains and set up OpenSSH to use SSSD for managing user keys. This is covered in the Deployment Guide . 9.3.3. Uploading User SSH Keys Through the Command Line The --sshpubkey option uploads the 64 bit-encoded public key to the user entry. For example: With a real key, the key is longer and usually ends with an equals sign (=). 
To upload multiple keys, pass a comma-separated list of keys with a single --sshpubkey option: After uploading the user keys, configure SSSD to use Identity Management as one of its identity domains and set up OpenSSH to use SSSD for managing user keys. This is covered in the Red Hat Enterprise Linux Deployment Guide . 9.3.4. Deleting User Keys Open the Identity tab, and select the Users subtab. Click the name of the user to edit. Open the Account Settings area of the Settings tab. Click the Delete link by the fingerprint of the key to remove. Click the Update link at the top of the user's page to save the changes. The command-line tools can be used to remove all keys. This is done by running ipa user-mod with the --sshpubkey= set to a blank value; this removes all public keys for the user. For example: [2] The key type is determined automatically from the key itself, if it is not included in the uploaded key.
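The SSSD and OpenSSH wiring referred to above is, in outline, a matter of enabling the ssh responder in sssd.conf and pointing sshd at the sss_ssh_authorizedkeys helper. The following fragments are a minimal sketch; option names differ slightly between releases (for example, older sshd builds use AuthorizedKeysCommandRunAs instead of AuthorizedKeysCommandUser):

# /etc/sssd/sssd.conf (fragment) - enable the ssh responder
[sssd]
services = nss, pam, ssh

# /etc/ssh/sshd_config (fragment) - fetch user keys from SSSD/IdM
AuthorizedKeysCommand /usr/bin/sss_ssh_authorizedkeys
AuthorizedKeysCommandUser nobody

# Quick check that SSSD can return the uploaded key for a user
sss_ssh_authorizedkeys jsmith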
[ "\"ssh-rsa ABCD1234...== ipaclient.example.com\"", "[jsmith@server ~]USD ssh-keygen -t rsa -C [email protected] Generating public/private rsa key pair. Enter file in which to save the key (/home/jsmith/.ssh/id_rsa): Created directory '/home/jsmith/.ssh'. Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/jsmith/.ssh/id_rsa. Your public key has been saved in /home/jsmith/.ssh/id_rsa.pub. The key fingerprint is: a5:fd:ac:d3:9b:39:29:d0:ab:0e:9a:44:d1:78:9c:f2 [email protected] The key's randomart image is: +--[ RSA 2048]----+ | | | + . | | + = . | | = + | | . E S.. | | . . .o | | . . . oo. | | . o . +.+o | | o .o..o+o | +-----------------+", "[jsmith@server ~]USD cat /home/jsmith/.ssh/id_rsa.pub ssh-rsa AAAAB3NzaC1yc2E...tJG1PK2Mq++wQ== [email protected]", "[jsmith@server ~]USD ipa user-mod jsmith --sshpubkey=\"ssh-rsa 12345abcde= ipaclient.example.com\"", "--sshpubkey=\"12345abcde==,key2==,key3==\"", "[jsmith@server ~]USD kinit admin [jsmith@server ~]USD ipa user-mod --sshpubkey= jsmith" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/user-keys
3.4. Logging
3.4. Logging All message output passes through a logging module with independent choices of logging levels for: standard output/error, syslog, a log file, and an external log function. The logging levels are set in the /etc/lvm/lvm.conf file, which is described in Appendix B, The LVM Configuration Files.
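For illustration, these settings live in the log section of /etc/lvm/lvm.conf; a minimal sketch (the values shown are examples only) looks like this:

# /etc/lvm/lvm.conf (fragment) - example logging settings, adjust as needed
log {
    verbose = 0                      # extra output on stdout/stderr (0-3)
    syslog = 1                       # also send messages to syslog
    file = "/var/log/lvm2.log"       # write messages to a log file
    level = 5                        # log file verbosity (0-7, 7 is most verbose)
    overwrite = 0                    # append to the log file instead of truncating it
}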
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/logical_volume_manager_administration/logging
Chapter 1. Backup and restore
Chapter 1. Backup and restore 1.1. Control plane backup and restore operations As a cluster administrator, you might need to stop an OpenShift Container Platform cluster for a period and restart it later. Some reasons for restarting a cluster are that you need to perform maintenance on a cluster or want to reduce resource costs. In OpenShift Container Platform, you can perform a graceful shutdown of a cluster so that you can easily restart the cluster later. You must back up etcd data before shutting down a cluster; etcd is the key-value store for OpenShift Container Platform, which persists the state of all resource objects. An etcd backup plays a crucial role in disaster recovery. In OpenShift Container Platform, you can also replace an unhealthy etcd member . When you want to get your cluster running again, restart the cluster gracefully . Note A cluster's certificates expire one year after the installation date. You can shut down a cluster and expect it to restart gracefully while the certificates are still valid. Although the cluster automatically retrieves the expired control plane certificates, you must still approve the certificate signing requests (CSRs) . You might run into several situations where OpenShift Container Platform does not work as expected, such as: You have a cluster that is not functional after the restart because of unexpected conditions, such as node failure or network connectivity issues. You have deleted something critical in the cluster by mistake. You have lost the majority of your control plane hosts, leading to etcd quorum loss. You can always recover from a disaster situation by restoring your cluster to its state using the saved etcd snapshots. Additional resources Quorum protection with machine lifecycle hooks 1.2. Application backup and restore operations As a cluster administrator, you can back up and restore applications running on OpenShift Container Platform by using the OpenShift API for Data Protection (OADP). OADP backs up and restores Kubernetes resources and internal images, at the granularity of a namespace, by using the version of Velero that is appropriate for the version of OADP you install, according to the table in Downloading the Velero CLI tool . OADP backs up and restores persistent volumes (PVs) by using snapshots or Restic. For details, see OADP features . 1.2.1. OADP requirements OADP has the following requirements: You must be logged in as a user with a cluster-admin role. You must have object storage for storing backups, such as one of the following storage types: OpenShift Data Foundation Amazon Web Services Microsoft Azure Google Cloud Platform S3-compatible object storage IBM Cloud(R) Object Storage S3 Note If you want to use CSI backup on OCP 4.11 and later, install OADP 1.1. x . OADP 1.0. x does not support CSI backup on OCP 4.11 and later. OADP 1.0. x includes Velero 1.7. x and expects the API group snapshot.storage.k8s.io/v1beta1 , which is not present on OCP 4.11 and later. Important The CloudStorage API for S3 storage is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope. To back up PVs with snapshots, you must have cloud storage that has a native snapshot API or supports Container Storage Interface (CSI) snapshots, such as the following providers: Amazon Web Services Microsoft Azure Google Cloud Platform CSI snapshot-enabled cloud storage, such as Ceph RBD or Ceph FS Note If you do not want to back up PVs by using snapshots, you can use Restic, which is installed by the OADP Operator by default. 1.2.2. Backing up and restoring applications You back up applications by creating a Backup custom resource (CR). See Creating a Backup CR. You can configure the following backup options: Creating backup hooks to run commands before or after the backup operation Scheduling backups Backing up applications with File System Backup: Kopia or Restic You restore application backups by creating a Restore custom resource (CR). See Creating a Restore CR. You can configure restore hooks to run commands in init containers or in the application container during the restore operation.
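To make the Backup CR concrete, a minimal example of the kind described above could look like the following; the namespace names and the storage location name are assumptions for illustration only:

# Minimal illustrative Backup CR; adjust namespaces and storage location
oc apply -f - <<'EOF'
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: example-app-backup
  namespace: openshift-adp          # namespace where the OADP Operator runs
spec:
  includedNamespaces:
    - example-app                   # application namespace to back up
  storageLocation: dpa-sample-1     # BackupStorageLocation created with the DPA
  ttl: 720h0m0s                     # keep the backup for 30 days
EOF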
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/backup_and_restore/backup-restore-overview
Chapter 2. Preparing a RHEL 6 system for the upgrade
Chapter 2. Preparing a RHEL 6 system for the upgrade This procedure describes the steps that are necessary before performing an in-place upgrade to RHEL 7. Prerequisites You have verified that your system setup can be upgraded from RHEL 6 to RHEL 7. See Planning an upgrade for more information. Procedure Ensure that your system is registered to Red Hat Subscription Management (RHSM). If your RHEL 6 system is registered to Red Hat Network (RHN), you must migrate to RHSM. See Migrating from RHN to RHSM in Red Hat Enterprise Linux for details. Ensure that you have access to the latest RHEL 6 content. If you use the yum-plugin-versionlock plug-in to lock packages to a specific version, clear the lock: See How to restrict yum to install or upgrade a package to a fixed specific package version? for more information. Enable the Extras repository, which contains packages necessary for the pre-upgrade assessment and the in-place upgrade. For the Server variant on the 64-bit Intel architecture: For IBM POWER, big-endian systems: For the IBM Z architecture: For the HPC Compute Node variant on the 64-bit Intel architecture: Install the Preupgrade Assistant and Red Hat Upgrade Tool: Note If your system does not have internet access, you can download the Preupgrade Assistant and Red Hat Upgrade Tool from the Red Hat Customer Portal. For more information, see How to install preupgrade assessment packages on an offline system for RHEL 6.10 to RHEL 7.9 upgrade. Remove all unsupported package groups: Replace group_name with each unsupported group name. To locate a list of installed group names, run yum grouplist. Check Known Issues and apply workarounds where applicable. In particular, on systems with multiple network interfaces: If the system has static routes configured, replace the static route file. See redhat-upgrade-tool fails to reconfigure the static routes on the network interfaces, preventing the upgrade to happen for more information. If the system runs NetworkManager, stop NetworkManager prior to running the upgrade tool. See redhat-upgrade-tool fails to reconfigure the network interfaces, preventing the upgrade to happen for more information. Update all packages to their latest RHEL 6 version: Reboot the system: Back up all your data before performing the upgrade to prevent potential data loss. Verification Verify that you are registered with Red Hat Subscription Manager: the Loaded plug-ins: entry must contain subscription-manager. Verify that only supported package groups are installed:
[ "yum versionlock clear", "subscription-manager repos --enable rhel-6-server-extras-rpms --enable rhel-6-server-optional-rpms", "subscription-manager repos --enable rhel-6-for-power-extras-rpms --enable rhel-6-for-power-optional-rpms", "subscription-manager repos --enable rhel-6-for-system-z-extras-rpms --enable rhel-6-for-system-z-optional-rpms", "subscription-manager repos --enable rhel-6-for-hpc-node-extras-rpms --enable rhel-6-for-hpc-node-optional-rpms", "yum install preupgrade-assistant preupgrade-assistant-el6toel7 redhat-upgrade-tool", "yum groupremove group_name", "yum update", "reboot", "yum update", "yum grouplist" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/upgrading_from_rhel_6_to_rhel_7/preparing-a-rhel-6-system-for-the-upgrade_upgrading-from-rhel-6-to-rhel-7
Configuration Guide
Configuration Guide Red Hat JBoss Enterprise Application Platform 7.4 Instructions for setting up and maintaining Red Hat JBoss Enterprise Application Platform, including running applications and services. Red Hat Customer Content Services
[ "EAP_HOME /bin/standalone.sh", "EAP_HOME /bin/domain.sh", "EAP_HOME /bin/jboss-cli.sh --connect", "shutdown", "EAP_HOME /bin/standalone.sh --start-mode=admin-only", ":read-attribute(name=running-mode) { \"outcome\" => \"success\", \"result\" => \"ADMIN_ONLY\" }", "/core-service=server-environment:read-attribute(name=initial-running-mode)", "reload --start-mode=admin-only", "reload --start-mode=normal", "EAP_HOME /bin/domain.sh --admin-only", "/host= HOST_NAME :read-attribute(name=running-mode) { \"outcome\" => \"success\", \"result\" => \"ADMIN_ONLY\" }", "reload --host= HOST_NAME --admin-only=true", "reload --host= HOST_NAME --admin-only=false", "/subsystem=ejb3:write-attribute(name=enable-graceful-txn-shutdown,value=true)", ":read-attribute(name=suspend-state)", "/host=master/server=server-one:read-attribute(name=suspend-state)", ":suspend(suspend-timeout=60)", ":suspend-servers(suspend-timeout=60)", "/host=master/server-config=server-one:suspend(suspend-timeout=60)", "/server-group=main-server-group:suspend-servers(suspend-timeout=60)", "/host=master:suspend-servers(suspend-timeout=60)", ":resume", "EAP_HOME /bin/standalone.sh --start-mode=suspend", "/host= HOST_NAME /server-config= SERVER_NAME :start(start-mode=suspend)", "/server-group= SERVER_GROUP_NAME :start-servers(start-mode=suspend)", "shutdown --suspend-timeout=60", ":stop-servers(suspend-timeout=60)", "/host=master/server-config=server-one:stop(suspend-timeout=60)", "/server-group=main-server-group:stop-servers(suspend-timeout=60)", "/host=master:shutdown(suspend-timeout=60)", "service eap7-standalone start", "systemctl start eap7-standalone.service", "service eap7-domain start", "systemctl start eap7-domain.service", "WILDFLY_SERVER_CONFIG=standalone-full.xml", "WILDFLY_BIND=192.168.0.1", "JAVA_OPTS=\"USDJAVA_OPTS -Xms2048m -Xmx2048m\" JAVA_OPTS=\"USDJAVA_OPTS -Djboss.bind.address.management=192.168.0.1\"", "service eap7-standalone stop", "systemctl stop eap7-standalone.service", "service eap7-domain stop", "systemctl stop eap7-domain.service", ".\\standalone.ps1 \"-c=standalone-full.xml\"", "EAP_HOME /bin/add-user.sh", "EAP_HOME /bin/add-user.sh -u 'mgmtuser1' -p 'password1!' -g 'guest,mgmtgroup'", "EAP_HOME /bin/add-user.sh -u 'mgmtuser2' -p 'password1!' -sc ' /path/to /standaloneconfig/' -dc ' /path/to /domainconfig/' -up 'newname.properties'", "EAP_HOME /bin/add-user.sh What type of user do you wish to add? a) Management User (mgmt-users.properties) b) Application User (application-users.properties) (a): a Enter the details of the new user to add. Using realm 'ManagementRealm' as discovered from the existing property files. 
Username : test-user User 'test-user' already exists and is enabled, would you like to a) Update the existing user password and roles b) Disable the existing user c) Type a new username (a):", "EAP_HOME /bin/jboss-cli.sh", "connect", "help", "deploy --help", "quit", "/subsystem=datasources/data-source=ExampleDS:read-attribute(name=enabled) { \"outcome\" => \"success\", \"result\" => true }", "/profile=default/subsystem=datasources/data-source=ExampleDS:read-attribute(name=enabled)", "/subsystem=datasources/data-source=ExampleDS:write-attribute(name=enabled,value=false)", "/host= HOST_NAME /server-config=server-one:start", "/core-service=management/management-interface=http-interface:write-attribute(name=console-enabled,value=true)", "/core-service=management/management-interface=http-interface:write-attribute(name=console-enabled,value=false)", "http:// HOST_NAME :9990/management/ PATH_TO_RESOURCE ?operation= OPERATION & PARAMETER = VALUE", "http:// HOST_NAME :9990/management/subsystem/undertow/server/default-server/http-listener/default", "http:// HOST_NAME :9990/management/subsystem/datasources/data-source/ExampleDS?operation=attribute&name=enabled", "curl --digest http:// HOST_NAME :9990/management --header \"Content-Type: application/json\" -u USERNAME : PASSWORD -d '{\"operation\":\"write-attribute\", \"address\":[\"subsystem\",\"datasources\",\"data-source\",\"ExampleDS\"], \"name\":\"enabled\", \"value\":\"false\", \"json.pretty\":\"1\"}'", "curl --digest http://localhost:9990/management --header \"Content-Type: application/json\" -u USERNAME : PASSWORD -d '{\"operation\":\"reload\"}'", "{ \"outcome\" => \"failed\", \"failure-description\" => \"WFLYCTL0458:Disallowed HTTP Header name 'Date'\", \"rolled-back\" => true }", "/core-service=management/management-interface=http-interface:write-attribute(name=constant-headers,value=[{path=\" PATH_PREFIX \",headers=[{name=\" HEADER_NAME \",value=\" HEADER_VALUE \"}]}])", "reload", "/core-service=management/management-interface=http-interface:write-attribute(name=constant-headers,value=[{path=\"/\",headers=[{name=\"X-Help\",value=\"http://mywebsite.com/help\"}]}])", "curl -s -D - -o /dev/null --digest http://localhost:9990/management/ -u USERNAME : PASSWORD", "admin:redhat HTTP/1.1 200 OK Connection: keep-alive X-Frame-Options: SAMEORIGIN Content-Type: application/json; charset=utf-8 Content-Length: 3312 X-Help: http://mywebsite.com Date: Tue, 27 Oct 2020 08:13:17 GMT", "/core-service=management/management-interface=http-interface:write-attribute(name=constant-headers,value=[{path= /PREFIX ,headers=[{name= X-HEADER ,value= HEADERVALUE }]}])", "<management-interfaces> <http-interface security-realm=\"ManagementRealm\"> <http-upgrade enabled=\"true\"/> <socket-binding http=\"management-http\"/> <constant-headers> <header-mapping path=\" /PREFIX \"> <header name=\" X-HEADER \" value=\" HEADERVALUE \"/> </header-mapping> </constant-headers> </http-interface> </management-interfaces>", "/core-service=management/management-interface=http-interface:write-attribute(name=constant-headers,value=[{path= /PREFIX1 ,headers=[{name= X-HEADER ,value= HEADERVALUE-FOR-X }]},{path= /PREFIX2 ,headers=[{name= Y-HEADER ,value= HEADERVALUE-FOR-Y }]}])", "/host=master/core-service=management/management-interface=http-interface:write-attribute(name=constant-headers,value=[{path= /PREFIX ,headers=[{name= X-HEADER ,value= HEADER-VALUE }]}])", "<management-interfaces> <http-interface security-realm=\"ManagementRealm\"> <http-upgrade enabled=\"true\"/> <socket 
interface=\"management\" port=\"USD{jboss.management.http.port:9990}\"/> <constant-headers> <header-mapping path=\" /PREFIX \"> <header name=\" X-HEADER \" value=\" HEADER-VALUE \"/> </header-mapping> </constant-headers> </http-interface> </management-interfaces>", "/host=master/core-service=management/management-interface=http-interface:write-attribute(name=constant-headers,value=[ {path= /PREFIX-1 ,headers=[{name= X-HEADER ,value= HEADER-VALUE-FOR-X }]},{path= /PREFIX-2 ,headers=[{name= Y-HEADER ,value= HEADER-VALUE-FOR-Y }]}])", "// Create the management client ModelControllerClient client = ModelControllerClient.Factory.create(\"localhost\", 9990); // Create the operation request ModelNode op = new ModelNode(); // Set the operation op.get(\"operation\").set(\"read-resource\"); // Set the address ModelNode address = op.get(\"address\"); address.add(\"subsystem\", \"undertow\"); address.add(\"server\", \"default-server\"); address.add(\"http-listener\", \"default\"); // Execute the operation and manipulate the result ModelNode returnVal = client.execute(op); System.out.println(\"Outcome: \" + returnVal.get(\"outcome\").toString()); System.out.println(\"Result: \" + returnVal.get(\"result\").toString()); // Close the client client.close();", "EAP_HOME /bin/standalone.sh --server-config=standalone-full.xml", "wildfly-configuration: subsystem: datasources: jdbc-driver: postgresql: driver-name: postgresql driver-xa-datasource-class-name: org.postgresql.xa.PGXADataSource driver-module-name: org.postgresql.jdbc data-source: PostgreSQLDS: enabled: true exception-sorter-class-name: org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter jndi-name: java:jboss/datasources/PostgreSQLDS jta: true max-pool-size: 20 min-pool-size: 0 connection-url: \"jdbc:postgresql://localhost:5432}/demo\" driver-name: postgresql user-name: postgres password: postgres validate-on-match: true background-validation: false background-validation-millis: 10000 flush-strategy: FailingConnectionOnly statistics-enable: false stale-connection-checker-class-name: org.jboss.jca.adapters.jdbc.extensions.novendor.NullStaleConnectionChecker valid-connection-checker-class-name: org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker transaction-isolation: TRANSACTION_READ_COMMITTED", "wildfly-configuration: subsystem: logging: console-handler: CONSOLE: level: !undefine", "wildfly-configuration: socket-binding-group: standard-sockets: remote-destination-outbound-socket-binding: remote-artemis: host: localhost port: 61616 subsystem: messaging-activemq: server: default: !remove remote-connector: artemis: socket-binding: remote-artemis pooled-connection-factory: RemoteConnectionFactory: connectors: - artemis entries: - \"java:jboss/RemoteConnectionFactory\" - \"java:jboss/exported/jms/RemoteConnectionFactory\" enable-amq1-prefix: false user: admin password: admin ejb3: default-resource-adapter-name: RemoteConnectionFactory ee: service: default-bindings: jms-connection-factory: \"java:jboss/RemoteConnectionFactory\"", "wildfly-configuration: subsystem: elytron: permission-set: default-permissions: permissions: !list-add - class-name: org.wildfly.transaction.client.RemoteTransactionPermission module: org.wildfly.transaction.client target-name: \"*\" index: 0", "./standalone.sh -y=/home/ehsavoie/dev/wildfly/config2.yml:config.yml -c standalone-full.xml", "EAP_HOME /bin/domain.sh --host-config=host-master.xml", ":take-snapshot { \"outcome\" => \"success\", \"result\" => \" EAP_HOME 
/standalone/configuration/standalone_xml_history/snapshot/20151022-133109702standalone.xml\" }", ":list-snapshots { \"outcome\" => \"success\", \"result\" => { \"directory\" => \" EAP_HOME /standalone/configuration/standalone_xml_history/snapshot\", \"names\" => [ \"20151022-133109702standalone.xml\", \"20151022-132715958standalone.xml\" ] } }", ":delete-snapshot(name=20151022-133109702standalone.xml)", "EAP_HOME /bin/standalone.sh --server-config=standalone_xml_history/snapshot/20151022-133109702standalone.xml", "/subsystem=core-management/service=configuration-changes:add(max-history=20)", "/host= HOST_NAME /subsystem=core-management/service=configuration-changes:add(max-history=20)", "/subsystem=core-management/service=configuration-changes:list-changes", "/host= HOST_NAME /subsystem=core-management/service=configuration-changes:list-changes", "/host= HOST_NAME /server= SERVER_NAME /subsystem=core-management/service=configuration-changes:list-changes", "{ \"outcome\" => \"success\", \"result\" => [ { \"operation-date\" => \"2016-02-12T18:37:00.354Z\", \"access-mechanism\" => \"NATIVE\", \"remote-address\" => \"127.0.0.1/127.0.0.1\", \"outcome\" => \"success\", \"operations\" => [{ \"address\" => [], \"operation\" => \"reload\", \"operation-headers\" => { \"caller-type\" => \"user\", \"access-mechanism\" => \"NATIVE\" } }] }, { \"operation-date\" => \"2016-02-12T18:34:16.859Z\", \"access-mechanism\" => \"NATIVE\", \"remote-address\" => \"127.0.0.1/127.0.0.1\", \"outcome\" => \"success\", \"operations\" => [{ \"address\" => [ (\"subsystem\" => \"datasources\"), (\"data-source\" => \"ExampleDS\") ], \"operation\" => \"write-attribute\", \"name\" => \"enabled\", \"value\" => false, \"operation-headers\" => { \"caller-type\" => \"user\", \"access-mechanism\" => \"NATIVE\" } }] }, { \"operation-date\" => \"2016-02-12T18:24:11.670Z\", \"access-mechanism\" => \"HTTP\", \"remote-address\" => \"127.0.0.1/127.0.0.1\", \"outcome\" => \"success\", \"operations\" => [{ \"operation\" => \"remove\", \"address\" => [ (\"subsystem\" => \"messaging-activemq\"), (\"server\" => \"default\"), (\"jms-queue\" => \"ExpiryQueue\") ], \"operation-headers\" => {\"access-mechanism\" => \"HTTP\"} }] } ] }", "<interface name=\"public\"> <inet-address value=\"USD{jboss.bind.address:127.0.0.1}\"/> </interface>", "EAP_HOME /bin/standalone.sh -Djboss.bind.address= IP_ADDRESS", "USD{ SYSTEM_VALUE_1 USD{ SYSTEM_VALUE_2 }}", "<password>USD{VAULT::ds_ExampleDS::password::1}</password>", "<password>USD{VAULT::USD{datasource_name}::password::1}</password>", "/subsystem=ee:write-attribute(name=\"jboss-descriptor-property-replacement\",value= VALUE )", "/subsystem=ee:write-attribute(name=\"spec-descriptor-property-replacement\",value= VALUE )", "EAP_HOME /bin/standalone.sh --git-repo=local --git-branch=1.0.x", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <configuration> <authentication-client xmlns=\"urn:elytron:client:1.2\"> <authentication-rules> <rule use-configuration=\"test-login\"> </rule> </authentication-rules> <authentication-configurations> <configuration name=\"test-login\"> <sasl-mechanism-selector selector=\"BASIC\" /> <set-user-name name=\"eap-user\" /> <credentials> <clear-password password=\"my_api_key\" /> </credentials> <set-mechanism-realm name=\"testRealm\" /> </configuration> </authentication-configurations> </authentication-client> </configuration>", "EAP_HOME /bin/standalone.sh --git-repo=https://github.com/ MY_GIT_ID /eap-configuration.git --git-branch=1.0.x --git-auth=file:///home/ USER_NAME 
/github-wildfly-config.xml --server-config=standalone-full.xml", "<eap_home_path> /bin/standalone.sh --git-repo= <git_repository_url> --git-auth= <elytron_configuration_file_url>", "[~/.ssh]USD ssh-keygen -t ecdsa -b 256 Generating public/private ecdsa key pair. Enter file in which to save the key (/home/user/.ssh/id_ecdsa): Enter passphrase (empty for no passphrase): secret Enter same passphrase again: secret Your identification has been saved in /home/user/.ssh/id_ecdsa. Your public key has been saved in /home/user/.ssh/id_ecdsa.pub.", "<authentication-configurations> <configuration name=\"example\"> <credentials> <key-pair> <openssh-private-key pem=\"-----BEGIN OPENSSH PRIVATE KEY----- b3BlbnNzaC1rZXktdjEAAAAACmFlczI1Ni1jdHIAAAAGYmNyeXB0AAAAGAAAABDaZzGpGV 922xmrL+bMHioPAAAAEAAAAAEAAABoAAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlz dHAyNTYAAABBBIMTU1m6pmpnSTZ2k/cbKnxXkRpXUmWwqN1SSNLpRswGsUhmLG2H21br1Z lEHRiRn6zQmA4YCtCw2hLuz8M8WVoAAADAQk+bMNWFfaI4Ej1AQdlLl6v4RDa2HGjDS3V4 39h0pOx4Ix7YZKydTN4SPkYRt78CNK0AhhtKsWo2lVNwyfh8/6SeqowhgCG9MJYW8yRR1R 3DX/eQTx6MV/gSSRLDTpcVWUY0jrBGpMaEvylKoNcabiEo44flkIYlG6E/YtFXsmXsoBsj nFcjvmfE7Lzyin5Fowwpbqj9f0XOARu9wsUzeyJVAwT7+YCU3mWJ3dnO1bOxK4TuLsxD6j RB7bJemsfr -----END OPENSSH PRIVATE KEY-----\"> <clear-password password=\"secret\"/> </openssh-private-key> </key-pair> </credentials> </configuration> </authentication-configurations>", "<authentication-configurations> <configuration name=\"example\"> <credentials> <ssh-credential ssh-directory=\"/user/home/example/.ssh\" private-key-file=\"id_test_ecdsa\" known-hosts-file=\"known_hosts_test\"> 1 2 3 <clear-password password=\"secret\"/> </ssh-credential> </credentials> </configuration> </authentication-configurations>", ":publish-configuration(location=\"=https://github.com/ MY_GIT_ID /eap-configuration.git\") {\"outcome\" => \"success\"}", ":take-snapshot(name=\"snapshot-01\", comment=\"1st snapshot\") { \"outcome\" => \"success\", \"result\" => \"1st snapshot\" }", ":list-snapshots { \"outcome\" => \"success\", \"result\" => { \"directory\" => \"\", \"names\" => [ \"snapshot : 1st snapshot\", \"refs/tags/snapshot-01\", \"snapshot2 : 2nd snapshot\", \"refs/tags/snapshot-02\" ] } }", ":delete-snapshot(name=\"snapshot-01\") {\"outcome\" => \"success\"}", "<file relative-to=\"jboss.server.log.dir\" path=\"server.log\"/>", "ls /path", "ls /host= HOST_NAME /server= SERVER_NAME /path", "/path= PATH_NAME :read-resource", "/host= HOST_NAME /server= SERVER_NAME /path= PATH_NAME :read-resource", "EAP_HOME /bin/standalone.sh -Djboss.server.log.dir=/var/log", "JAVA_OPTS=\"USDJAVA_OPTS -Djboss.server.log.dir=/var/log\"", "EAP_HOME /bin/domain.sh -Djboss.domain.temp.dir=/opt/jboss_eap/domain_data/all_temp -Djboss.domain.log.dir=/opt/jboss_eap/domain_data/all_logs -Djboss.domain.data.dir=/opt/jboss_eap/domain_data/all_data -Djboss.domain.servers.dir=/opt/jboss_eap/domain_data/all_servers", "/opt/jboss_eap/domain_data/ ├── all_data ├── all_logs ├── all_servers │ ├── server-one │ │ ├── data │ │ ├── log │ │ └── tmp │ └── server-two │ ├── data │ ├── log │ └── tmp └── all_temp", "/path=my.custom.path:add(path=/my/custom/path)", "<subsystem xmlns=\"urn:jboss:domain:logging:6.0\"> <periodic-rotating-file-handler name=\"FILE\" autoflush=\"true\"> <formatter> <named-formatter name=\"PATTERN\"/> </formatter> <file relative-to=\"my.custom.path\" path=\"server.log\"/> <suffix value=\".yyyy-MM-dd\"/> <append value=\"true\"/> </periodic-rotating-file-handler> </subsystem>", "EAP_HOME /domain └─ servers ├── server-one │ ├── data │ ├── tmp │ 
└── log └── server-two ├── data ├── tmp └── log", "/host= HOST_NAME :write-attribute(name=directory-grouping,value=by-server)", "<servers directory-grouping=\"by-server\"> <server name=\"server-one\" group=\"main-server-group\"/> <server name=\"server-two\" group=\"main-server-group\" auto-start=\"true\"> <socket-bindings port-offset=\"150\"/> </server> </servers>", "EAP_HOME /domain ├── data │ └── servers │ ├── server-one │ └── server-two ├── log │ └── servers │ ├── server-one │ └── server-two └── tmp └── servers ├── server-one └── server-two", "/host= HOST_NAME :write-attribute(name=directory-grouping,value=by-type)", "<servers directory-grouping=\"by-type\"> <server name=\"server-one\" group=\"main-server-group\"/> <server name=\"server-two\" group=\"main-server-group\" auto-start=\"true\"> <socket-bindings port-offset=\"150\"/> </server> </servers>", "<inet-address value=\"USD{jboss.bind.address:127.0.0.1}\"/>", "EAP_HOME /bin/standalone.sh -Djboss.bind.address=192.168.1.2", "/system-property= PROPERTY_NAME :add(value= PROPERTY_VALUE )", "/system-property=jboss.bind.address:add(value=192.168.1.2)", "Set the bind address JAVA_OPTS=\"USDJAVA_OPTS -Djboss.bind.address=192.168.1.2\"", "Set the bind address JAVA_OPTS=\"USDJAVA_OPTS -Djboss.bind.address=192.168.1.2\" The ProcessController process uses its own set of java options if [ \"xUSDPROCESS_CONTROLLER_JAVA_OPTS\" = \"x\" ]; then", "<audit-log> <formatters> <json-formatter name=\"json-formatter\"/> </formatters> <handlers> <file-handler name=\"file\" formatter=\"json-formatter\" path=\"audit-log.log\" relative-to=\"jboss.server.data.dir\"/> </handlers> <logger log-boot=\"true\" log-read-only=\"false\" enabled=\"false\"> <handlers> <handler name=\"file\"/> </handlers> </logger> </audit-log>", "/core-service=management/access=audit:read-resource(recursive=true)", "<audit-log> <formatters> <json-formatter name=\"json-formatter\"/> </formatters> <handlers> <file-handler name=\"host-file\" formatter=\"json-formatter\" relative-to=\"jboss.domain.data.dir\" path=\"audit-log.log\"/> <file-handler name=\"server-file\" formatter=\"json-formatter\" relative-to=\"jboss.server.data.dir\" path=\"audit-log.log\"/> </handlers> <logger log-boot=\"true\" log-read-only=\"false\" enabled=\"false\"> <handlers> <handler name=\"host-file\"/> </handlers> </logger> <server-logger log-boot=\"true\" log-read-only=\"false\" enabled=\"false\"> <handlers> <handler name=\"server-file\"/> </handlers> </server-logger> </audit-log>", "/host= HOST_NAME /core-service=management/access=audit:read-resource(recursive=true)", "/core-service=management/access=audit/logger=audit-log:write-attribute(name=enabled,value=true)", "/host= HOST_NAME /core-service=management/access=audit/logger=audit-log:write-attribute(name=enabled,value=true)", "/host= HOST_NAME /core-service=management/access=audit/server-logger=audit-log:write-attribute(name=enabled,value=true)", "/subsystem=jmx/configuration=audit-log:add() /subsystem=jmx/configuration=audit-log/handler=file:add()", "/host= HOST_NAME /subsystem=jmx/configuration=audit-log:add()", "/host= HOST_NAME /subsystem=jmx/configuration=audit-log/handler=host-file:add()", "/profile= PROFILE_NAME /subsystem=jmx/configuration=audit-log:add()", "/profile= PROFILE_NAME /subsystem=jmx/configuration=audit-log/handler=server-file:add()", "batch /core-service=management/access=audit/syslog-handler= SYSLOG_HANDLER_NAME :add(formatter=json-formatter) /core-service=management/access=audit/syslog-handler= SYSLOG_HANDLER_NAME /protocol=udp:add(host= 
HOST_NAME ,port= PORT ) run-batch", "/core-service=management/access=audit/syslog-handler= SYSLOG_HANDLER_NAME /protocol=tls/authentication=truststore:add(keystore-path= PATH_TO_TRUSTSTORE ,keystore-password= TRUSTSTORE_PASSWORD )", "/core-service=management/access=audit/logger=audit-log/handler= SYSLOG_HANDLER_NAME :add", "package org.simple.lifecycle.events.listener; import java.io.File; import java.io.FileWriter; import java.io.IOException; import org.wildfly.extension.core.management.client.ProcessStateListener; import org.wildfly.extension.core.management.client.ProcessStateListenerInitParameters; import org.wildfly.extension.core.management.client.RunningStateChangeEvent; import org.wildfly.extension.core.management.client.RuntimeConfigurationStateChangeEvent; public class SimpleListener implements ProcessStateListener { private File file; private FileWriter fileWriter; private ProcessStateListenerInitParameters parameters; public void init(ProcessStateListenerInitParameters parameters) { this.parameters = parameters; this.file = new File(parameters.getInitProperties().get(\"file\")); try { fileWriter = new FileWriter(file, true); } catch (IOException e) { e.printStackTrace(); } } public void cleanup() { try { fileWriter.close(); } catch (IOException e) { e.printStackTrace(); } finally { fileWriter = null; } } public void runtimeConfigurationStateChanged(RuntimeConfigurationStateChangeEvent evt) { try { fileWriter.write(String.format(\"Runtime configuration state change for %s: %s to %s\\n\", parameters.getProcessType(), evt.getOldState(), evt.getNewState())); fileWriter.flush(); } catch (IOException e) { e.printStackTrace(); } } public void runningStateChanged(RunningStateChangeEvent evt) { try { fileWriter.write(String.format(\"Running state change for %s: %s to %s\\n\", parameters.getProcessType(), evt.getOldState(), evt.getNewState())); fileWriter.flush(); } catch (IOException e) { e.printStackTrace(); } } }", "module add --name=org.simple.lifecycle.events.listener --dependencies=org.wildfly.extension.core-management-client --resources=/path/to/simple-listener-0.0.1-SNAPSHOT.jar", "/subsystem=core-management/process-state-listener=my-simple-listener:add(class=org.simple.lifecycle.events.listener.SimpleListener, module=org.simple.lifecycle.events.listener,properties={file=/path/to/my-listener-output.txt})", "Running state change for STANDALONE_SERVER: normal to suspending Running state change for STANDALONE_SERVER: suspending to suspended", "import java.io.BufferedWriter; import java.io.IOException; import java.nio.charset.StandardCharsets; import java.nio.file.Files; import java.nio.file.Path; import java.nio.file.Paths; import java.nio.file.StandardOpenOption; import javax.management.AttributeChangeNotification; import javax.management.Notification; import javax.management.NotificationListener; import org.jboss.logging.Logger; public class StateNotificationListener implements NotificationListener { public static final String RUNTIME_CONFIGURATION_FILENAME = \"runtime-configuration-notifications.txt\"; public static final String RUNNING_FILENAME = \"running-notifications.txt\"; private final Path targetFile; public StateNotificationListener() { this.targetFile = Paths.get(\"notifications/data\").toAbsolutePath(); init(targetFile); } protected Path getRuntimeConfigurationTargetFile() { return this.targetFile.resolve(RUNTIME_CONFIGURATION_FILENAME); } protected Path getRunningConfigurationTargetFile() { return this.targetFile.resolve(RUNNING_FILENAME); } protected final void 
init(Path targetFile) { try { Files.createDirectories(targetFile); if (!Files.exists(targetFile.resolve(RUNTIME_CONFIGURATION_FILENAME))) { Files.createFile(targetFile.resolve(RUNTIME_CONFIGURATION_FILENAME)); } if (!Files.exists(targetFile.resolve(RUNNING_FILENAME))) { Files.createFile(targetFile.resolve(RUNNING_FILENAME)); } } catch (IOException ex) { Logger.getLogger(StateNotificationListener.class).error(\"Problem handling JMX Notification\", ex); } } @Override public void handleNotification(Notification notification, Object handback) { AttributeChangeNotification attributeChangeNotification = (AttributeChangeNotification) notification; if (\"RuntimeConfigurationState\".equals(attributeChangeNotification.getAttributeName())) { writeNotification(attributeChangeNotification, getRuntimeConfigurationTargetFile()); } else { writeNotification(attributeChangeNotification, getRunningConfigurationTargetFile()); } } private void writeNotification(AttributeChangeNotification notification, Path path) { try (BufferedWriter in = Files.newBufferedWriter(path, StandardCharsets.UTF_8, StandardOpenOption.APPEND)) { in.write(String.format(\"%s %s %s %s\", notification.getType(), notification.getSequenceNumber(), notification.getSource().toString(), notification.getMessage())); in.newLine(); in.flush(); } catch (IOException ex) { Logger.getLogger(StateNotificationListener.class).error(\"Problem handling JMX Notification\", ex); } } }", "MBeanServer server = ManagementFactory.getPlatformMBeanServer(); server.addNotificationListener(ObjectName.getInstance(\"jboss.root:type=state\"), new StateNotificationListener(), null, null);", "jmx.attribute.change 5 jboss.root:type=state The attribute 'RunningState' has changed from 'normal' to 'suspending' jmx.attribute.change 6 jboss.root:type=state The attribute 'RunningState' has changed from 'suspending' to 'suspended'", "<interfaces> <interface name=\"management\"> <inet-address value=\"USD{jboss.bind.address.management:127.0.0.1}\"/> </interface> <interface name=\"public\"> <inet-address value=\"USD{jboss.bind.address:127.0.0.1}\"/> </interface> <interface name=\"private\"> <inet-address value=\"USD{jboss.bind.address.private:127.0.0.1}\"/> </interface> <interface name=\"unsecure\"> <inet-address value=\"USD{jboss.bind.address.unsecure:127.0.0.1}\"/> </interface> </interfaces>", "EAP_HOME /bin/standalone.sh -Djboss.bind.address= IP_ADDRESS", "/interface=external:add(nic=eth0)", "<interface name=\"external\"> <nic name=\"eth0\"/> </interface>", "/interface=default:add(subnet-match=192.168.0.0/16,up=true,multicast=true,not={point-to-point=true})", "<interface name=\"default\"> <subnet-match value=\"192.168.0.0/16\"/> <up/> <multicast/> <not> <point-to-point/> </not> </interface>", "/interface=public:write-attribute(name=inet-address,value=\"USD{jboss.bind.address:192.168.0.0}\")", "<interface name=\"public\"> <inet-address value=\"USD{jboss.bind.address:192.168.0.0}\"/> </interface>", "/host= HOST_NAME /server-config= SERVER_NAME /interface= INTERFACE_NAME :add(inet-address=127.0.0.1)", "<servers> <server name=\" SERVER_NAME \" group=\"main-server-group\"> <interfaces> <interface name=\" INTERFACE_NAME \"> <inet-address value=\"127.0.0.1\"/> </interface> </interfaces> </server> </servers>", "<socket-binding-group name=\"standard-sockets\" default-interface=\"public\" port-offset=\"USD{jboss.socket.binding.port-offset:0}\"> <socket-binding name=\"management-http\" interface=\"management\" port=\"USD{jboss.management.http.port:9990}\"/> <socket-binding 
name=\"management-https\" interface=\"management\" port=\"USD{jboss.management.https.port:9993}\"/> <socket-binding name=\"ajp\" port=\"USD{jboss.ajp.port:8009}\"/> <socket-binding name=\"http\" port=\"USD{jboss.http.port:8080}\"/> <socket-binding name=\"https\" port=\"USD{jboss.https.port:8443}\"/> <socket-binding name=\"txn-recovery-environment\" port=\"4712\"/> <socket-binding name=\"txn-status-manager\" port=\"4713\"/> <outbound-socket-binding name=\"mail-smtp\"> <remote-destination host=\"localhost\" port=\"25\"/> </outbound-socket-binding> </socket-binding-group>", "<socket-binding-groups> <socket-binding-group name=\"standard-sockets\" default-interface=\"public\"> <!-- Needed for server groups using the 'default' profile --> <socket-binding name=\"ajp\" port=\"USD{jboss.ajp.port:8009}\"/> <socket-binding name=\"http\" port=\"USD{jboss.http.port:8080}\"/> <socket-binding name=\"https\" port=\"USD{jboss.https.port:8443}\"/> <socket-binding name=\"txn-recovery-environment\" port=\"4712\"/> <socket-binding name=\"txn-status-manager\" port=\"4713\"/> <outbound-socket-binding name=\"mail-smtp\"> <remote-destination host=\"localhost\" port=\"25\"/> </outbound-socket-binding> </socket-binding-group> <socket-binding-group name=\"ha-sockets\" default-interface=\"public\"> <!-- Needed for server groups using the 'ha' profile --> </socket-binding-group> <socket-binding-group name=\"full-sockets\" default-interface=\"public\"> <!-- Needed for server groups using the 'full' profile --> </socket-binding-group> <socket-binding-group name=\"full-ha-sockets\" default-interface=\"public\"> <!-- Needed for server groups using the 'full-ha' profile --> <socket-binding name=\"ajp\" port=\"USD{jboss.ajp.port:8009}\"/> <socket-binding name=\"http\" port=\"USD{jboss.http.port:8080}\"/> <socket-binding name=\"https\" port=\"USD{jboss.https.port:8443}\"/> <socket-binding name=\"iiop\" interface=\"unsecure\" port=\"3528\"/> <socket-binding name=\"iiop-ssl\" interface=\"unsecure\" port=\"3529\"/> <socket-binding name=\"jgroups-mping\" interface=\"private\" port=\"0\" multicast-address=\"USD{jboss.default.multicast.address:230.0.0.4}\" multicast-port=\"45700\"/> <socket-binding name=\"jgroups-tcp\" interface=\"private\" port=\"7600\"/> <socket-binding name=\"jgroups-udp\" interface=\"private\" port=\"55200\" multicast-address=\"USD{jboss.default.multicast.address:230.0.0.4}\" multicast-port=\"45688\"/> <socket-binding name=\"modcluster\" port=\"0\" multicast-address=\"224.0.1.105\" multicast-port=\"23364\"/> <socket-binding name=\"txn-recovery-environment\" port=\"4712\"/> <socket-binding name=\"txn-status-manager\" port=\"4713\"/> <outbound-socket-binding name=\"mail-smtp\"> <remote-destination host=\"localhost\" port=\"25\"/> </outbound-socket-binding> </socket-binding-group> <socket-binding-group name=\"load-balancer-sockets\" default-interface=\"public\"> <!-- Needed for server groups using the 'load-balancer' profile --> </socket-binding-group> </socket-binding-groups>", "/socket-binding-group=new-sockets:add(default-interface=public)", "/socket-binding-group=new-sockets/socket-binding=new-socket-binding:add(port=1234)", "/socket-binding-group=new-sockets/socket-binding=new-socket-binding:write-attribute(name=interface,value=unsecure)", "<socket-binding-groups> <socket-binding-group name=\"new-sockets\" default-interface=\"public\"> <socket-binding name=\"new-socket-binding\" interface=\"unsecure\" port=\"1234\"/> </socket-binding-group> </socket-binding-groups>", 
"/host=master/server-config=server-two/:write-attribute(name=socket-binding-port-offset,value=250)", "EAP_HOME /bin/standalone.sh -Djboss.socket.binding.port-offset=100", "-Djava.net.preferIPv4Stack=false", "-Djava.net.preferIPv6Addresses=true", "Specify options to pass to the Java VM. # if [ \"xUSDJAVA_OPTS\" = \"x\" ]; then JAVA_OPTS=\"-Xms1303m -Xmx1303m -Djava.net.preferIPv4Stack=false\" JAVA_OPTS=\"USDJAVA_OPTS -Djboss.modules.system.pkgs=USDJBOSS_MODULES_SYSTEM_PKGS -Djava.awt.headless=true\" JAVA_OPTS=\"USDJAVA_OPTS -Djava.net.preferIPv6Addresses=true\" else", "/interface=management:write-attribute(name=inet-address,value=\"USD{jboss.bind.address.management:[::1]}\")", "<interfaces> <interface name=\"management\"> <inet-address value=\"USD{jboss.bind.address.management:[::1]}\"/> </interface> . </interfaces>", "<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"com.mysql\"> <resources> <resource-root path=\"mysql-connector-java-8.0.12.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>", "<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"oracle.jdbc\"> <resources> <resource-root path=\"/home/redhat/test.jar\"/> </resources> </module>", "cd EAP_HOME /modules/ mkdir -p com/mysql/main", "cp /path/to /mysql-connector-java-8.0.12.jar EAP_HOME /modules/com/mysql/main/", "<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"com.mysql\"> <resources> <resource-root path=\"mysql-connector-java-8.0.12.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>", "EAP_HOME /bin/jboss-cli.sh", "module add --name= MODULE_NAME --resources= PATH_TO_RESOURCE --dependencies= DEPENDENCIES", "module add --name=com.mysql --resources= /path/to /mysql-connector-java-8.0.12.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api", "module add --name=myprops --resources= /path/to /properties.jar", "/subsystem=ee:list-add(name=global-modules,value={name=myprops})", "Thread.currentThread().getContextClassLoader().getResource(\"my.properties\");", "EAP_HOME /bin/jboss-cli.sh", "module remove --name= MODULE_NAME", "module remove --name=com.mysql", "/subsystem=ee:write-attribute(name=global-modules,value=[{name= MODULE_NAME_1 },{name= MODULE_NAME_2 }]", "/subsystem=ee:list-add(name=global-modules,value={name= MODULE_NAME })", "/subsystem=ee:write-attribute(name=global-modules,value=[{name=module1,services=true}]", "/subsystem=ee:write-attribute(name=global-modules,value=[{name=module1,services=true},{name=module2,services=false}]", "/subsystem=ee:list-add(name=global-modules,value={name=module1,services=true})", "/my-common-libs/log4j2.xml /my-common-libs/libs/log4j-api-2.14.1.jar /my-common-libs/libs/log4j-core-2.14.1.jar", "[standalone@localhost:9990 /] /subsystem=ee/global-directory=my-common-libs:add(path=my-common-libs, relative-to=jboss.home.dir)", "[domain@localhost:9990 /] /profile=default/subsystem=ee/global-directory=my-common-libs:add(path=my-common-libs, relative-to=jboss.server.data.dir)", "[standalone@localhost:9990 /] /subsystem=ee/global-directory=my-common-libs:read-resource { \"outcome\" => \"success\", \"result\" => { \"path\" => \"my-common-libs\", \"relative-to\" => \"jboss.home.dir\" } }", 
"[domain@localhost:9990 /] /subsystem=ee/global-directory=my-common-libs:read-resource { \"outcome\" => \"success\", \"result\" => { \"path\" => \"my-common-libs\", \"relative-to\" => \"jboss.server.data.dir\" } }", "[standalone@localhost:9990 /] /subsystem=ee/global-directory=my-common-libs:remove()", "[domain@localhost:9990 /] /profile=default/subsystem=ee/global-directory=my-common-libs:remove()", "/subsystem=ee:write-attribute(name=ear-subdeployments-isolated,value=true)", "JBOSS_MODULEPATH=\" /path/to /modules/directory/\"", "set \"JBOSS_MODULEPATH /path/to /modules/directory/\"", "deployment. DEPLOYMENT_NAME", "deployment. EAR_NAME . SUBDEPLOYMENT_NAME", "EAP_HOME /bin/standalone.sh -Dorg.jboss.metadata.parser.validate=true", "/system-property=org.jboss.metadata.parser.validate:add(value=true)", "deployment deploy-file /path/to /test-application.war", "WFLYSRV0027: Starting deployment of \"test-application.war\" (runtime-name: \"test-application.war\") WFLYUT0021: Registered web context: /test-application WFLYSRV0010: Deployed \"test-application.war\" (runtime-name : \"test-application.war\")", "deployment undeploy test-application.war", "WFLYUT0022: Unregistered web context: /test-application WFLYSRV0028: Stopped deployment test-application.war (runtime-name: test-application.war) in 62ms WFLYSRV0009: Undeployed \"test-application.war\" (runtime-name: \"test-application.war\")", "deployment undeploy *", "deployment disable test-application.war", "deployment disable-all", "deployment enable test-application.war", "deployment enable-all", "deployment info", "NAME RUNTIME-NAME PERSISTENT ENABLED STATUS helloworld.war helloworld.war true true OK test-application.war test-application.war true true OK", "deployment info helloworld.war", "deployment deploy-file /path/to /test-application.war --all-server-groups", "deployment deploy-file /path/to /test-application.war --server-groups=main-server-group,other-server-group", "[Server:server-one] WFLYSRV0027: Starting deployment of \"test-application.war\" (runtime-name: \"test-application.war\") [Server:server-one] WFLYUT0021: Registered web context: /test-application [Server:server-one] WFLYSRV0010: Deployed \"test-application.war\" (runtime-name : \"test-application.war\")", "deployment undeploy test-application.war --all-relevant-server-groups", "[Server:server-one] WFLYUT0022: Unregistered web context: /test-application [Server:server-one] WFLYSRV0028: Stopped deployment test-application.war (runtime-name: test-application.war) in 74ms [Server:server-one] WFLYSRV0009: Undeployed \"test-application.war\" (runtime-name: \"test-application.war\")", "deployment undeploy * --all-relevant-server-groups", "deployment disable test-application.war --server-groups=other-server-group", "deployment disable-all --server-groups=other-server-group", "deployment enable test-application.war", "deployment enable-all --server-groups=other-server-group", "deployment info helloworld.war", "NAME RUNTIME-NAME helloworld.war helloworld.war SERVER-GROUP STATE main-server-group enabled other-server-group added", "deployment info --server-group=other-server-group", "NAME RUNTIME-NAME STATE helloworld.war helloworld.war added test-application.war test-application.war enabled", "cp /path/to /test-application.war EAP_HOME /standalone/deployments/", "touch EAP_HOME /standalone/deployments/test-application.war.dodeploy", "rm EAP_HOME /standalone/deployments/test-application.war.deployed", "touch EAP_HOME /standalone/deployments/test-application.war.dodeploy", 
"/subsystem=deployment-scanner/scanner=default:write-attribute(name=scan-enabled,value=false)", "/subsystem=deployment-scanner/scanner=default:write-attribute(name=scan-interval,value=10000)", "/subsystem=deployment-scanner/scanner=default:write-attribute(name=path,value= /path/to /deployments)", "/subsystem=deployment-scanner/scanner=default:write-attribute(name=auto-deploy-exploded,value=true)", "/subsystem=deployment-scanner/scanner=default:write-attribute(name=auto-deploy-zipped,value=false)", "/subsystem=deployment-scanner/scanner=default:write-attribute(name=auto-deploy-xml,value=false)", "/subsystem=deployment-scanner/scanner=new-scanner:add(path=new_deployment_dir,relative-to=jboss.server.base.dir,scan-interval=5000)", "<plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-maven-plugin</artifactId> <version>USD{version.wildfly.maven.plugin}</version> </plugin>", "mvn clean install wildfly:deploy", "[INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 2.981 s [INFO] Finished at: 2015-12-23T15:06:13-05:00 [INFO] Final Memory: 21M/231M [INFO] ------------------------------------------------------------------------", "WFLYSRV0027: Starting deployment of \"helloworld.war\" (runtime-name: \"helloworld.war\") WFLYUT0021: Registered web context: /helloworld WFLYSRV0010: Deployed \"helloworld.war\" (runtime-name : \"helloworld.war\")", "mvn wildfly:undeploy", "[INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 1.237 s [INFO] Finished at: 2015-12-23T15:09:10-05:00 [INFO] Final Memory: 10M/183M [INFO] ------------------------------------------------------------------------", "WFLYUT0022: Unregistered web context: /helloworld WFLYSRV0028: Stopped deployment helloworld.war (runtime-name: helloworld.war) in 27ms WFLYSRV0009: Undeployed \"helloworld.war\" (runtime-name: \"helloworld.war\")", "<plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-maven-plugin</artifactId> <version>USD{version.wildfly.maven.plugin}</version> <configuration> <domain> <server-groups> <server-group>main-server-group</server-group> </server-groups> </domain> </configuration> </plugin>", "mvn clean install wildfly:deploy", "[INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 4.005 s [INFO] Finished at: 2016-09-02T14:36:17-04:00 [INFO] Final Memory: 21M/226M [INFO] ------------------------------------------------------------------------", "WFLYSRV0027: Starting deployment of \"helloworld.war\" (runtime-name: \"helloworld.war\") WFLYUT0021: Registered web context: /helloworld WFLYSRV0010: Deployed \"helloworld.war\" (runtime-name : \"helloworld.war\")", "mvn wildfly:undeploy", "[INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 1.750 s [INFO] Finished at: 2016-09-02T14:45:10-04:00 [INFO] Final Memory: 10M/184M [INFO] ------------------------------------------------------------------------", "WFLYUT0022: Unregistered web context: /helloworld WFLYSRV0028: Stopped deployment helloworld.war (runtime-name: 
helloworld.war) in 106ms WFLYSRV0009: Undeployed \"helloworld.war\" (runtime-name: \"helloworld.war\")", "curl --digest -L -D - http:// HOST : PORT /management --header \"Content-Type: application/json\" -u USER : PASSWORD -d '{\"operation\" : \"composite\", \"address\" : [], \"steps\" : [{\"operation\" : \"add\", \"address\" : {\"deployment\" : \"test-application.war\"}, \"content\" : [{\"url\" : \"file:/path/to/test-application.war\"}]},{\"operation\" : \"deploy\", \"address\" : {\"deployment\" : \"test-application.war\"}}],\"json.pretty\":1}'", "curl --digest -L -D - http:// HOST : PORT /management --header \"Content-Type: application/json\" -u USER : PASSWORD -d '{\"operation\" : \"composite\", \"address\" : [], \"steps\" : [{\"operation\" : \"undeploy\", \"address\" : {\"deployment\" : \"test-application.war\"}},{\"operation\" : \"remove\", \"address\" : {\"deployment\" : \"test-application.war\"}}],\"json.pretty\":1}'", "curl --digest -L -D - http:// HOST : PORT /management --header \"Content-Type: application/json\" -u USER : PASSWORD -d '{\"operation\" : \"add\", \"address\" : {\"deployment\" : \"test-application.war\"}, \"content\" : [{\"url\" : \"file: /path/to /test-application.war\"}],\"json.pretty\":1}'", "curl --digest -L -D - http:// HOST : PORT /management --header \"Content-Type: application/json\" -u USER : PASSWORD -d '{\"operation\" : \"add\", \"address\" : {\"server-group\" : \"main-server-group\",\"deployment\":\"test-application.war\"},\"json.pretty\":1}'", "curl --digest -L -D - http:// HOST : PORT /management --header \"Content-Type: application/json\" -u USER : PASSWORD -d '{\"operation\" : \"deploy\", \"address\" : {\"server-group\" : \"main-server-group\",\"deployment\":\"test-application.war\"},\"json.pretty\":1}'", "curl --digest -L -D - http:// HOST : PORT /management --header \"Content-Type: application/json\" -u USER : PASSWORD -d '{\"operation\" : \"remove\", \"address\" : {\"server-group\" : \"main-server-group\",\"deployment\":\"test-application.war\"},\"json.pretty\":1}'", "curl --digest -L -D - http:// HOST : PORT /management --header \"Content-Type: application/json\" -u USER : PASSWORD -d '{\"operation\" : \"remove\", \"address\" : {\"deployment\" : \"test-application.war\"}, \"json.pretty\":1}'", "EAP_HOME /bin/standalone.sh -Djboss.server.deploy.dir= /path/to /new_deployed_content", "EAP_HOME /bin/domain.sh -Djboss.domain.deployment.dir= /path/to /new_deployed_content", "<jboss xmlns=\"urn:jboss:1.0\"> <jboss-deployment-dependencies xmlns=\"urn:jboss:deployment-dependencies:1.0\"> <dependency name=\"framework.ear\" /> </jboss-deployment-dependencies> </jboss>", "deployment-overlay add --name=new-deployment-overlay --content=WEB-INF/web.xml= /path/to /other/web.xml --deployments=test-application.war --redeploy-affected", "<jboss-web xmlns=\"http://www.jboss.com/xml/ns/javaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://www.jboss.com/xml/ns/javaee http://www.jboss.org/j2ee/schema/jboss-web_10_0.xsd\" version=\"10.0\"> <overlay>{example.path.to.overlay}</overlay> </jboss-web>", "/subsystem=ee:write-attribute(name=jboss-descriptor-property-replacement,value=true)", "<subsystem xmlns=\"urn:jboss:domain:ee:4.0\"> <jboss-descriptor-property-replacement>true</jboss-descriptor-property-replacement> </subsystem>", "{\"my-rollout-plan\" => {\"rollout-plan\" => { \"in-series\" => [ {\"concurrent-groups\" => { \"group-A\" => { \"max-failure-percentage\" => \"20\", \"rolling-to-servers\" => \"true\" }, \"group-B\" => 
undefined }}, {\"server-group\" => {\"group-C\" => { \"rolling-to-servers\" => \"false\", \"max-failed-servers\" => \"1\" }}}, {\"concurrent-groups\" => { \"group-D\" => { \"max-failure-percentage\" => \"20\", \"rolling-to-servers\" => \"true\" }, \"group-E\" => undefined }} ], \"rollback-across-groups\" => \"true\" }}}", "rollout (id= PLAN_NAME | SERVER_GROUP_LIST ) [rollback-across-groups]", "deploy /path/to /test-application.war --server-groups=main-server-group --headers={rollout main-server-group(rolling-to-servers=true)}", "rollout-plan add --name=my-rollout-plan --content={rollout main-server-group(rolling-to-servers=false,max-failed-servers=1),other-server-group(rolling-to-servers=true,max-failure-percentage=20) rollback-across-groups=true}", "\"rollout-plan\" => { \"in-series\" => [ {\"server-group\" => {\"main-server-group\" => { \"rolling-to-servers\" => false, \"max-failed-servers\" => 1 }}}, {\"server-group\" => {\"other-server-group\" => { \"rolling-to-servers\" => true, \"max-failure-percentage\" => 20 }}} ], \"rollback-across-groups\" => true }", "deploy /path/to /test-application.war --all-server-groups --headers={rollout id=my-rollout-plan}", "rollout-plan remove --name=my-rollout-plan", "/deployment= DEPLOYMENT_NAME .war:add(content=[{empty=true}])", "/deployment= ARCHIVE_DEPLOYMENT_NAME .ear:explode", "/deployment= DEPLOYMENT_NAME .war:add-content(content=[{target-path= /path/to/FILE_IN_DEPLOYMENT , input-stream-index= /path/to/LOCAL_FILE_TO_UPLOAD }]", "/deployment= DEPLOYMENT_NAME .war:remove-content(paths=[ /path/to/FILE_1 , /path/to/FILE_2 ])", "/deployment=helloworld.war:browse-content(path=META-INF/)", "{ \"outcome\" => \"success\", \"result\" => [ { \"path\" => \"MANIFEST.MF\", \"directory\" => false, \"file-size\" => 827L }, { \"path\" => \"maven/org.jboss.eap.quickstarts/helloworld/pom.properties\", \"directory\" => false, \"file-size\" => 106L }, { \"path\" => \"maven/org.jboss.eap.quickstarts/helloworld/pom.xml\", \"directory\" => false, \"file-size\" => 2713L }, { \"path\" => \"maven/org.jboss.eap.quickstarts/helloworld/\", \"directory\" => true }, { \"path\" => \"maven/org.jboss.eap.quickstarts/\", \"directory\" => true }, { \"path\" => \"maven/\", \"directory\" => true }, { \"path\" => \"INDEX.LIST\", \"directory\" => false, \"file-size\" => 251L } ] }", "/deployment=helloworld.war:read-content(path=META-INF/MANIFEST.MF)", "{ \"outcome\" => \"success\", \"result\" => {\"uuid\" => \"24ba8e06-21bd-4505-b4d4-bdfb16451b95\"}, \"response-headers\" => {\"attached-streams\" => [{ \"uuid\" => \"24ba8e06-21bd-4505-b4d4-bdfb16451b95\", \"mime-type\" => \"text/plain\" }]} }", "attachment display --operation=/deployment=helloworld.war:read-content(path=META-INF/MANIFEST.MF)", "ATTACHMENT 8af87836-2abd-423a-8e44-e731cc57bd80: Manifest-Version: 1.0 Implementation-Title: Quickstart: helloworld Implementation-Version: 7.4.0.GA Java-Version: 1.8.0_131 Built-By: username Scm-Connection: scm:git:[email protected]:jboss/jboss-parent-pom.git/quic kstart-parent/helloworld Specification-Vendor: JBoss by Red Hat", "attachment save --operation=/deployment=helloworld.war:read-content(path=META-INF/MANIFEST.MF) --file= /path/to /MANIFEST.MF", "/profile= PROFILE_NAME /subsystem= SUBSYSTEM_NAME :read-resource(recursive=true)", "/subsystem=datasources/data-source=ExampleDS:read-resource", "/profile=default/subsystem=datasources/data-source=ExampleDS:read-resource", "/host= HOST_NAME /server= SERVER_NAME :read-resource", "/host= HOST_NAME /server-config= SERVER_NAME 
:write-attribute(name= ATTRIBUTE_NAME ,value= VALUE )", "/server-group= SERVER_GROUP_NAME :read-resource", "EAP_HOME /bin/domain.sh --host-config=host-master.xml", "EAP_HOME /bin/domain.sh --host-config=host-slave.xml", "<domain-controller> <local/> </domain-controller>", "<management-interfaces> <http-interface security-realm=\"ManagementRealm\" http-upgrade-enabled=\"true\"> <socket interface=\"management\" port=\"USD{jboss.management.http.port:9990}\"/> </http-interface> </management-interfaces>", "<domain-controller> <remote security-realm=\"ManagementRealm\"> <discovery-options> <static-discovery name=\"primary\" protocol=\"USD{jboss.domain.master.protocol:remote}\" host=\"USD{jboss.domain.master.address}\" port=\"USD{jboss.domain.master.port:9990}\"/> </discovery-options> </remote> </domain-controller>", "EAP_HOME /bin/domain.sh --host-config=host-slave.xml -Djboss.domain.master.address= IP_ADDRESS", "<host xmlns=\"urn:jboss:domain:8.0\" name=\"host1\">", "EAP_HOME /bin/domain.sh --host-config=host-slave.xml", "EAP_HOME /bin/jboss-cli.sh --connect --controller= DOMAIN_CONTROLLER_IP_ADDRESS", "/host= EXISTING_HOST_NAME :write-attribute(name=name,value= NEW_HOST_NAME )", "<host name=\" NEW_HOST_NAME \" xmlns=\"urn:jboss:domain:8.0\">", "reload --host= EXISTING_HOST_NAME", "EAP_HOME /bin/domain.sh --host-config=host-slave.xml -Djboss.host.name= HOST_NAME", "<server-group name=\"main-server-group\" profile=\"full\"> <jvm name=\"default\"> <heap size=\"64m\" max-size=\"512m\"/> </jvm> <socket-binding-group ref=\"full-sockets\"/> <deployments> <deployment name=\"test-application.war\" runtime-name=\"test-application.war\"/> <deployment name=\"helloworld.war\" runtime-name=\"helloworld.war\" enabled=\"false\"/> </deployments> </server-group>", "/server-group= SERVER_GROUP_NAME :add(profile= PROFILE_NAME ,socket-binding-group= SOCKET_BINDING_GROUP_NAME )", "/server-group= SERVER_GROUP_NAME :write-attribute(name= ATTRIBUTE_NAME ,value= VALUE )", "/server-group= SERVER_GROUP_NAME :remove", "<servers> <server name=\"server-one\" group=\"main-server-group\"> </server> <server name=\"server-two\" group=\"main-server-group\" auto-start=\"true\"> <socket-bindings port-offset=\"150\"/> </server> <server name=\"server-three\" group=\"other-server-group\" auto-start=\"false\"> <socket-bindings port-offset=\"250\"/> </server> </servers>", "/host= HOST_NAME /server-config= SERVER_NAME :add(group= SERVER_GROUP_NAME )", "/host= HOST_NAME /server-config= SERVER_NAME :write-attribute(name= ATTRIBUTE_NAME ,value= VALUE )", "/host= HOST_NAME /server-config= SERVER_NAME :remove", "/host= HOST_NAME /server-config= SERVER_NAME :start", "/server-group= SERVER_GROUP_NAME :start-servers", "/host= HOST_NAME /server-config= SERVER_NAME :stop", "/server-group= SERVER_GROUP_NAME :stop-servers", "/host= HOST_NAME /server-config= SERVER_NAME :reload", "/server-group= SERVER_GROUP_NAME :reload-servers", "/server-group= SERVER_GROUP_NAME :kill-servers", "<domain-controller> <remote security-realm=\"ManagementRealm\"> <discovery-options> <static-discovery name=\"primary\" protocol=\"USD{jboss.domain.master.protocol:remote}\" host=\"172.16.81.100\" port=\"USD{jboss.domain.master.port:9990}\"/> <static-discovery name=\"backup\" protocol=\"USD{jboss.domain.master.protocol:remote}\" host=\"172.16.81.101\" port=\"USD{jboss.domain.master.port:9990}\"/> </discovery-options> </remote> </domain-controller>", "EAP_HOME /bin/domain.sh --host-config=host-slave.xml --cached-dc", "EAP_HOME /bin/domain.sh --host-config=host-slave.xml 
--backup", "<domain-controller> <remote username=\"USDlocal\" security-realm=\"ManagementRealm\" ignore-unused-configuration=\"false\"> <discovery-options> </discovery-options> </remote> </domain-controller>", "/host=backup:write-attribute(name=domain-controller.local, value={})", "reload --host= HOST_NAME", "cp -r EAP_HOME /domain /path/to /domain1", "cp -r EAP_HOME /domain /path/to /host1", "EAP_HOME /bin/domain.sh --host-config=host-master.xml -Djboss.domain.base.dir= /path/to /domain1", "EAP_HOME /bin/domain.sh --host-config=host-slave.xml -Djboss.domain.base.dir= /path/to /host1 -Djboss.domain.master.address= IP_ADDRESS -Djboss.management.http.port= PORT", "<secret value=\" SECRET_VALUE \" />", "EAP_HOME /bin/domain.sh --host-config=host-master.xml -Djboss.bind.address.management= IP1", "<host xmlns=\"urn:jboss:domain:8.0\" name=\" HOST_NAME \"> <management> <security-realms> <security-realm name=\"ManagementRealm\"> <server-identities> <secret value=\" SECRET_VALUE \" /> </server-identities>", "EAP_HOME /bin/domain.sh --host-config=host-slave.xml -Djboss.domain.master.address= IP1 -Djboss.bind.address= IP2", "EAP_HOME /bin/jboss-cli.sh --connect --controller= IP1", "<profiles> <profile name=\"eap6-default\"> </profile> </profiles>", "<extensions> <extension module=\"org.jboss.as.configadmin\"/> <extension module=\"org.jboss.as.threads\"/> <extension module=\"org.jboss.as.web\"/> <extensions>", "<socket-binding-groups> <socket-binding-group name=\"eap6-standard-sockets\" default-interface=\"public\"> </socket-binding-group> </socket-binding-groups>", "<server-groups> <server-group name=\"eap6-main-server-group\" profile=\"eap6-default\"> <socket-binding-group ref=\"eap6-standard-sockets\"/> </server-group> </server-groups>", "/profile=eap6-default/subsystem=bean-validation:remove", "/profile=eap6-default/subsystem=weld:write-attribute(name=require-bean-descriptor,value=true) /profile=eap6-default/subsystem=weld:write-attribute(name=non-portable-mode,value=true)", "/profile=eap6-default/subsystem=datasources/data-source=ExampleDS:write-attribute(name=statistics-enabled,value=true)", "<servers> <server name=\"server-one\" group=\"eap6-main-server-group\"/> <server name=\"server-two\" group=\"eap6-main-server-group\"> <socket-bindings port-offset=\"150\"/> </server> </servers>", "/host-exclude=EAP64z:write-attribute(name=active-server-groups,value=[eap6-main-server-group])", "<profiles> <profile name=\"eap73-default\"> </profile> </profiles>", "<socket-binding-groups> <socket-binding-group name=\"eap73-standard-sockets\" default-interface=\"public\"> </socket-binding-group> </socket-binding-groups>", "<server-groups> <server-group name=\"eap73-main-server-group\" profile=\"eap73-default\"> <socket-binding-group ref=\"eap73-standard-sockets\"/> </server-group> </server-groups>", "<servers> <server name=\"server-one\" group=\"eap73-main-server-group\"/> <server name=\"server-two\" group=\"eap73-main-server-group\"> <socket-bindings port-offset=\"150\"/> </server> </servers>", "<domain-controller> <remote security-realm=\"ManagementRealm\" ignore-unused-configuration=\"true\"> <discovery-options> <static-discovery name=\"primary\" protocol=\"USD{jboss.domain.master.protocol:remote}\" host=\"USD{jboss.domain.master.address}\" port=\"USD{jboss.domain.master.port:9990}\"/> </discovery-options> </remote> </domain-controller>", "/profile=full-ha:clone(to-profile=cloned-profile)", "/profile=new-profile:list-add(name=includes, value= PROFILE_NAME )", "export JAVA_OPTS=\"-Xmx1024M\"", "set 
JAVA_OPTS=\"Xmx1024M\"", "EAP_HOME/bin/standalone.sh -Dmyproperty=value", "/host= HOST_NAME /jvm=production_jvm:add(heap-size=2048m, max-heap-size=2048m, max-permgen-size=512m, stack-size=1024k, jvm-options=[\"-XX:-UseParallelGC\"])", "/server-group=groupA:add(profile=default, socket-binding-group=standard-sockets) /server-group=groupA/jvm=production_jvm:add", "/server-group=groupA/jvm=production_jvm:write-attribute(name=heap-size,value=\"1024m\")", "/host= HOST_NAME /server-config=server-one/jvm=default:add", "/host= HOST_NAME /server-config=server-one/jvm=default:write-attribute(name=heap-size,value=\"1024m\")", "/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=my-smtp:add(host=localhost, port=25)", "/subsystem=mail/mail-session=mySession:add(jndi-name=java:jboss/mail/MySession)", "/subsystem=mail/mail-session=mySession/server=smtp:add(outbound-socket-binding-ref=my-smtp, username=user, password=pass, tls=true)", "@Resource(lookup=\"java:jboss/mail/MySession\") private Session session;", "/subsystem=mail/mail-session=mySession:add(jndi-name=java:jboss/mail/MySession)", "/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=my-smtp-binding:add(host=localhost, port=25)", "/subsystem=mail/mail-session=mySession/server=smtp:add(outbound-socket-binding-ref=my-smtp-binding, username=user, password=pass, tls=true)", "/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=my-pop3-binding:add(host=localhost, port=110) /subsystem=mail/mail-session=mySession/server=pop3:add(outbound-socket-binding-ref=my-pop3-binding, username=user, password=pass)", "/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=my-imap-binding:add(host=localhost, port=143) /subsystem=mail/mail-session=mySession/server=imap:add(outbound-socket-binding-ref=my-imap-binding, username=user, password=pass)", "/subsystem=mail/mail-session=mySession/custom=myCustomServer:add(username=user,password=pass, properties={\"host\" => \"myhost\", \"my-property\" =>\"value\"})", "<subsystem xmlns=\"urn:jboss:domain:mail:3.0\"> <mail-session name=\"default\" jndi-name=\"java:jboss/mail/Default\"> <smtp-server outbound-socket-binding-ref=\"mail-smtp\"/> </mail-session> <mail-session name=\"myMail\" from=\"[email protected]\" jndi-name=\"java:/Mail\"> <smtp-server password=\"password\" username=\"user\" tls=\"true\" outbound-socket-binding-ref=\"mail-smtp\"/> <pop3-server outbound-socket-binding-ref=\"mail-pop3\"/> <imap-server password=\"password\" username=\"nobody\" outbound-socket-binding-ref=\"mail-imap\"/> </mail-session> <mail-session name=\"custom\" jndi-name=\"java:jboss/mail/Custom\" debug=\"true\"> <custom-server name=\"smtp\" password=\"password\" username=\"username\"> <property name=\"host\" value=\"mail.example.com\"/> </custom-server> </mail-session> <mail-session name=\"custom2\" jndi-name=\"java:jboss/mail/Custom2\" debug=\"true\"> <custom-server name=\"pop3\" outbound-socket-binding-ref=\"mail-pop3\"> <property name=\"custom-prop\" value=\"some-custom-prop-value\"/> </custom-server> </mail-session> </subsystem>", "/subsystem=mail/mail-session=mySession/server=smtp:add(outbound-socket-binding-ref=my-smtp-binding, username=user, credential-reference={store=exampleCS, alias=mail-session-pw}, tls=true)", "credential-reference={clear-text=\"MASK-Ewcyuqd/nP9;A1B2C3D4;351\"}", "2016-03-16 14:32:01,627 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-7) MSC000001: Failed to start service 
jboss.undertow.listener.default: org.jboss.msc.service.StartException in service jboss.undertow.listener.default: Could not start http listener at org.wildfly.extension.undertow.ListenerService.start(ListenerService.java:142) at org.jboss.msc.service.ServiceControllerImplUSDStartTask.startService(ServiceControllerImpl.java:1948) at org.jboss.msc.service.ServiceControllerImplUSDStartTask.run(ServiceControllerImpl.java:1881) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutorUSDWorker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: java.net.BindException: Address already in use", "/core-service=management:read-boot-errors", "{ \"outcome\" => \"success\", \"result\" => [ { \"failed-operation\" => { \"operation\" => \"add\", \"address\" => [ (\"subsystem\" => \"undertow\"), (\"server\" => \"default-server\"), (\"http-listener\" => \"default\") ] }, \"failure-description\" => \"{\\\"WFLYCTL0080: Failed services\\\" => {\\\"jboss.undertow.listener.default\\\" => \\\"org.jboss.msc.service.StartException in service jboss.undertow.listener.default: Could not start http listener Caused by: java.net.BindException: Address already in use\\\"}}\", \"failed-services\" => {\"jboss.undertow.listener.default\" => \"org.jboss.msc.service.StartException in service jboss.undertow.listener.default: Could not start http listener Caused by: java.net.BindException: Address already in use\"} } ] }", "export GC_LOG=false EAP_HOME /bin/standalone.sh", "JAVA_OPTS=\"USDJAVA_OPTS -Duser.language=fr\"", "JAVA_OPTS=\"USDJAVA_OPTS -Duser.language=pt -Duser.country=BR\"", "JAVA_OPTS=\"USDJAVA_OPTS -Dorg.jboss.logging.locale=pt-BR\"", "/subsystem=logging/log-file= LOG_FILE_NAME :read-log-file", "/subsystem=logging/log-file=server.log:read-log-file(lines=5,tail=false)", "{ \"outcome\" => \"success\", \"result\" => [ \"2016-03-24 08:49:26,612 INFO [org.jboss.modules] (main) JBoss Modules version 1.5.1.Final-redhat-1\", \"2016-03-24 08:49:26,788 INFO [org.jboss.msc] (main) JBoss MSC version 1.2.6.Final-redhat-1\", \"2016-03-24 08:49:26,863 INFO [org.jboss.as] (MSC service thread 1-7) WFLYSRV0049: JBoss EAP 7.0.0.GA (WildFly Core 2.0.13.Final-redhat-1) starting\", \"2016-03-24 08:49:27,973 INFO [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0039: Creating http management service using socket-binding (management-http)\", \"2016-03-24 08:49:27,994 INFO [org.xnio] (MSC service thread 1-1) XNIO version 3.3.4.Final-redhat-1\" ] }", "/subsystem=logging/console-handler=CONSOLE:write-attribute(name=filter-spec, value=\"substituteAll(\\\"WFLY\\\"\\,\\\"YLFW\\\")\")", "/subsystem=logging:write-attribute(name=add-logging-api-dependencies, value=false)", "/subsystem=logging/logger= LOG_CATEGORY :add", "/subsystem=logging/logger= LOG_CATEGORY :write-attribute(name=level,value= LEVEL )", "/subsystem=logging/logger= LOG_CATEGORY :write-attribute(name=use-parent-handlers,value= USE_PARENT_HANDLERS )", "/subsystem=logging/logger= LOG_CATEGORY :write-attribute(name=filter-spec, value= FILTER_EXPRESSION )", "/subsystem=logging/logger= LOG_CATEGORY :add-handler(name= LOG_HANDLER_NAME )", "/subsystem=logging/logger= LOG_CATEGORY :remove", "/subsystem=logging/console-handler= CONSOLE_HANDLER_NAME :add", "/subsystem=logging/console-handler= CONSOLE_HANDLER_NAME :write-attribute(name=level,value= LEVEL )", "/subsystem=logging/console-handler= CONSOLE_HANDLER_NAME :write-attribute(name=target,value= TARGET )", 
"/subsystem=logging/console-handler= CONSOLE_HANDLER_NAME :write-attribute(name=encoding,value= ENCODING )", "/subsystem=logging/console-handler= CONSOLE_HANDLER_NAME :write-attribute(name=formatter,value= FORMAT )", "/subsystem=logging/console-handler= CONSOLE_HANDLER_NAME :write-attribute(name=autoflush,value= AUTO_FLUSH )", "/subsystem=logging/console-handler= CONSOLE_HANDLER_NAME :write-attribute(name=filter-spec, value= FILTER_EXPRESSION )", "/subsystem=logging/root-logger=ROOT:add-handler(name= CONSOLE_HANDLER_NAME )", "/subsystem=logging/logger= CATEGORY :add-handler(name= CONSOLE_HANDLER_NAME )", "/subsystem=logging/console-handler= CONSOLE_HANDLER_NAME :remove", "/subsystem=logging/file-handler= FILE_HANDLER_NAME :add(file={path= FILE_PATH ,relative-to= RELATIVE_TO_PATH })", "/subsystem=logging/file-handler= FILE_HANDLER_NAME :write-attribute(name=level,value= LEVEL )", "/subsystem=logging/file-handler= FILE_HANDLER_NAME :write-attribute(name=append,value= APPEND )", "/subsystem=logging/file-handler= FILE_HANDLER_NAME :write-attribute(name=encoding,value= ENCODING )", "/subsystem=logging/file-handler= FILE_HANDLER_NAME :write-attribute(name=formatter,value= FORMAT )", "/subsystem=logging/file-handler= FILE_HANDLER_NAME :write-attribute(name=autoflush,value= AUTO_FLUSH )", "/subsystem=logging/file-handler= FILE_HANDLER_NAME :write-attribute(name=filter-spec, value= FILTER_EXPRESSION )", "/subsystem=logging/root-logger=ROOT:add-handler(name= FILE_HANDLER_NAME )", "/subsystem=logging/logger= CATEGORY :add-handler(name= FILE_HANDLER_NAME )", "/subsystem=logging/file-handler= FILE_HANDLER_NAME :remove", "/subsystem=logging/periodic-rotating-file-handler= PERIODIC_HANDLER_NAME :add(file={path= FILE_PATH ,relative-to= RELATIVE_TO_PATH },suffix= SUFFIX )", "/subsystem=logging/periodic-rotating-file-handler= PERIODIC_HANDLER_NAME :write-attribute(name=level,value= LEVEL )", "/subsystem=logging/periodic-rotating-file-handler= PERIODIC_HANDLER_NAME :write-attribute(name=append,value= APPEND )", "/subsystem=logging/periodic-rotating-file-handler= PERIODIC_HANDLER_NAME :write-attribute(name=encoding,value= ENCODING )", "/subsystem=logging/periodic-rotating-file-handler= PERIODIC_HANDLER_NAME :write-attribute(name=formatter,value= FORMAT )", "/subsystem=logging/periodic-rotating-file-handler= PERIODIC_HANDLER_NAME :write-attribute(name=autoflush,value= AUTO_FLUSH )", "/subsystem=logging/periodic-rotating-file-handler= PERIODIC_HANDLER_NAME :write-attribute(name=filter-spec, value= FILTER_EXPRESSION )", "/subsystem=logging/root-logger=ROOT:add-handler(name= PERIODIC_HANDLER_NAME )", "/subsystem=logging/logger= CATEGORY :add-handler(name= PERIODIC_HANDLER_NAME )", "/subsystem=logging/periodic-rotating-file-handler= PERIODIC_HANDLER_NAME :remove", "/subsystem=logging/size-rotating-file-handler= SIZE_HANDLER_NAME :add(file={path= FILE_PATH ,relative-to= RELATIVE_TO_PATH })", "/subsystem=logging/size-rotating-file-handler= SIZE_HANDLER_NAME :write-attribute(name=level,value= LEVEL )", "/subsystem=logging/size-rotating-file-handler= SIZE_HANDLER_NAME :write-attribute(name=suffix, value= SUFFIX )", "/subsystem=logging/size-rotating-file-handler= SIZE_HANDLER_NAME :write-attribute(name=rotate-size, value= ROTATE_SIZE )", "/subsystem=logging/size-rotating-file-handler= SIZE_HANDLER_NAME :write-attribute(name=max-backup-index, value= MAX_BACKUPS )", "/subsystem=logging/size-rotating-file-handler= SIZE_HANDLER_NAME :write-attribute(name=rotate-on-boot, value= ROTATE_ON_BOOT )", 
"/subsystem=logging/size-rotating-file-handler= SIZE_HANDLER_NAME :write-attribute(name=append,value= APPEND )", "/subsystem=logging/size-rotating-file-handler= SIZE_HANDLER_NAME :write-attribute(name=encoding,value= ENCODING )", "/subsystem=logging/size-rotating-file-handler= SIZE_HANDLER_NAME :write-attribute(name=formatter,value= FORMAT )", "/subsystem=logging/size-rotating-file-handler= SIZE_HANDLER_NAME :write-attribute(name=autoflush,value= AUTO_FLUSH )", "/subsystem=logging/size-rotating-file-handler= SIZE_HANDLER_NAME :write-attribute(name=filter-spec, value= FILTER_EXPRESSION )", "/subsystem=logging/root-logger=ROOT:add-handler(name= SIZE_HANDLER_NAME )", "/subsystem=logging/logger= CATEGORY :add-handler(name= SIZE_HANDLER_NAME )", "/subsystem=logging/size-rotating-file-handler= SIZE_HANDLER_NAME :remove", "/subsystem=logging/periodic-size-rotating-file-handler= PERIODIC_SIZE_HANDLER_NAME :add(file={path= FILE_PATH ,relative-to= RELATIVE_TO_PATH },suffix= SUFFIX )", "/subsystem=logging/periodic-size-rotating-file-handler= PERIODIC_SIZE_HANDLER_NAME :write-attribute(name=level,value= LEVEL )", "/subsystem=logging/periodic-size-rotating-file-handler= PERIODIC_SIZE_HANDLER_NAME :write-attribute(name=rotate-size, value= ROTATE_SIZE )", "/subsystem=logging/periodic-size-rotating-file-handler= PERIODIC_SIZE_HANDLER_NAME :write-attribute(name=max-backup-index, value= MAX_BACKUPS )", "/subsystem=logging/periodic-size-rotating-file-handler= PERIODIC_SIZE_HANDLER_NAME :write-attribute(name=rotate-on-boot, value= ROTATE_ON_BOOT )", "/subsystem=logging/periodic-size-rotating-file-handler= PERIODIC_SIZE_HANDLER_NAME :write-attribute(name=append,value= APPEND )", "/subsystem=logging/periodic-size-rotating-file-handler= PERIODIC_SIZE_HANDLER_NAME :write-attribute(name=encoding,value= ENCODING )", "/subsystem=logging/periodic-size-rotating-file-handler= PERIODIC_SIZE_HANDLER_NAME :write-attribute(name=formatter,value= FORMAT )", "/subsystem=logging/periodic-size-rotating-file-handler= PERIODIC_SIZE_HANDLER_NAME :write-attribute(name=autoflush,value= AUTO_FLUSH )", "/subsystem=logging/periodic-size-rotating-file-handler= PERIODIC_SIZE_HANDLER_NAME :write-attribute(name=filter-spec, value= FILTER_EXPRESSION )", "/subsystem=logging/root-logger=ROOT:add-handler(name= PERIODIC_SIZE_HANDLER_NAME )", "/subsystem=logging/logger= CATEGORY :add-handler(name= PERIODIC_SIZE_HANDLER_NAME )", "/subsystem=logging/periodic-size-rotating-file-handler= PERIODIC_SIZE_HANDLER_NAME :remove", "/subsystem=logging/syslog-handler= SYSLOG_HANDLER_NAME :add", "/subsystem=logging/syslog-handler= SYSLOG_HANDLER_NAME :write-attribute(name=level,value= LEVEL )", "/subsystem=logging/syslog-handler= SYSLOG_HANDLER_NAME :write-attribute(name=app-name,value= APP_NAME )", "/subsystem=logging/syslog-handler= SYSLOG_HANDLER_NAME :write-attribute(name=server-address,value= SERVER_ADDRESS )", "/subsystem=logging/syslog-handler= SYSLOG_HANDLER_NAME :write-attribute(name=port,value= PORT )", "/subsystem=logging/syslog-handler= SYSLOG_HANDLER_NAME :write-attribute(name=syslog-format,value= SYSLOG_FORMAT )", "/subsystem=logging/syslog-handler= SYSLOG_HANDLER_NAME :write-attribute(name=named-formatter, value=FORMATTER_NAME)", "/subsystem=logging/root-logger=ROOT:add-handler(name= SYSLOG_HANDLER_NAME )", "/subsystem=logging/logger= CATEGORY :add-handler(name= SYSLOG_HANDLER_NAME )", "/subsystem=logging/syslog-handler= SYSLOG_HANDLER_NAME :remove", "/socket-binding-group= SOCKET_BINDING_GROUP /remote-destination-outbound-socket-binding= 
SOCKET_BINDING_NAME :add(host= HOST , port= PORT )", "/subsystem=logging/json-formatter= FORMATTER :add", "/subsystem=logging/socket-handler= SOCKET_HANDLER_NAME :add(outbound-socket-binding-ref= SOCKET_BINDING_NAME ,named-formatter= FORMATTER )", "/subsystem=logging/socket-handler= SOCKET_HANDLER_NAME :write-attribute(name=protocol,value= PROTOCOL )", "/subsystem=logging/socket-handler= SOCKET_HANDLER_NAME :write-attribute(name=level,value= LEVEL )", "/subsystem=logging/socket-handler= SOCKET_HANDLER_NAME :write-attribute(name=encoding,value= ENCODING )", "/subsystem=logging/socket-handler= SOCKET_HANDLER_NAME :write-attribute(name=autoflush,value= AUTO_FLUSH )", "/subsystem=logging/socket-handler= SOCKET_HANDLER_NAME :write-attribute(name=filter-spec, value= FILTER_EXPRESSION )", "/subsystem=logging/root-logger=ROOT:add-handler(name= SOCKET_HANDLER_NAME )", "/subsystem=logging/logger= CATEGORY :add-handler(name= SOCKET_HANDLER_NAME )", "/subsystem=logging/socket-handler= SOCKET_HANDLER_NAME :remove", "/subsystem=elytron/key-store=log-server-ks:add(path=/path/to/keystore.jks, type=JKS, credential-reference={clear-text=mypassword})", "/subsystem=elytron/trust-manager=log-server-tm:add(key-store=log-server-ks)", "/subsystem=elytron/client-ssl-context=log-server-context:add(trust-manager=log-server-tm, protocols=[\"TLSv1.2\"])", "/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=log-server:add(host=localhost, port=4560)", "/subsystem=logging/json-formatter=json:add", "/subsystem=logging/socket-handler=log-server-handler:add(named-formatter=json, level=INFO, outbound-socket-binding-ref=log-server, protocol=SSL_TCP, ssl-context=log-server-context)", "/subsystem=logging/root-logger=ROOT:add-handler(name=log-server-handler)", "/subsystem=logging/custom-handler= CUSTOM_HANDLER_NAME :add(class= CLASS_NAME ,module= MODULE_NAME )", "/subsystem=logging/custom-handler= CUSTOM_HANDLER_NAME :write-attribute(name=level,value= LEVEL )", "/subsystem=logging/custom-handler= CUSTOM_HANDLER_NAME :write-attribute(name=properties. 
PROPERTY_NAME ,value= PROPERTY_VALUE )", "/subsystem=logging/custom-handler= CUSTOM_HANDLER_NAME :write-attribute(name=encoding,value= ENCODING )", "/subsystem=logging/custom-handler= CUSTOM_HANDLER_NAME :write-attribute(name=formatter,value= FORMAT )", "/subsystem=logging/custom-handler= CUSTOM_HANDLER_NAME :write-attribute(name=filter-spec, value= FILTER_EXPRESSION )", "/subsystem=logging/root-logger=ROOT:add-handler(name= CUSTOM_HANDLER_NAME )", "/subsystem=logging/logger= CATEGORY :add-handler(name= CUSTOM_HANDLER_NAME )", "/subsystem=logging/custom-handler= CUSTOM_HANDLER_NAME :remove", "/subsystem=logging/async-handler= ASYNC_HANDLER_NAME :add(queue-length= QUEUE_LENGTH )", "/subsystem=logging/async-handler= ASYNC_HANDLER_NAME :add-handler(name= HANDLER_NAME )", "/subsystem=logging/async-handler= ASYNC_HANDLER_NAME :write-attribute(name=level,value= LEVEL )", "/subsystem=logging/async-handler= ASYNC_HANDLER_NAME :write-attribute(name=overflow-action,value= OVERFLOW_ACTION )", "/subsystem=logging/async-handler= ASYNC_HANDLER_NAME :write-attribute(name=filter-spec, value= FILTER_EXPRESSION )", "/subsystem=logging/root-logger=ROOT:add-handler(name= ASYNC_HANDLER_NAME )", "/subsystem=logging/logger= CATEGORY :add-handler(name= ASYNC_HANDLER_NAME )", "/subsystem=logging/async-handler= ASYNC_HANDLER_NAME :remove", "/subsystem=logging/root-logger=ROOT:add-handler(name= LOG_HANDLER_NAME )", "/subsystem=logging/root-logger=ROOT:remove-handler(name= LOG_HANDLER_NAME )", "/subsystem=logging/root-logger=ROOT:write-attribute(name=level,value= LEVEL )", "/subsystem=logging/pattern-formatter= PATTERN_FORMATTER_NAME :add(pattern= PATTERN )", "2016-03-18 15:49:32,075 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0051: Admin console listening on http://127.0.0.1:9990", "/subsystem=logging/pattern-formatter= PATTERN_FORMATTER_NAME :write-attribute(name=color-map,value=\" LEVEL : COLOR , LEVEL : COLOR \")", "/subsystem=logging/json-formatter= JSON_FORMATTER_NAME :add(pretty-print=true, exception-output-type=formatted)", "{ \"timestamp\": \"2018-10-18T13:53:43.031-04:00\", \"sequence\": 62, \"loggerClassName\": \"org.jboss.as.server.logging.ServerLogger_USDlogger\", \"loggerName\": \"org.jboss.as\", \"level\": \"INFO\", \"message\": \"WFLYSRV0025: JBoss EAP 7.4.0.GA (WildFly Core 15.0.2.Final-redhat-00001) started in 5227ms - Started 317 of 556 services (343 services are lazy, passive or on-demand), \"threadName\": \"Controller Boot Thread\", \"threadId\": 22, \"mdc\": { }, \"ndc\": \"\", \"hostName\": \"localhost.localdomain\", \"processName\": \"jboss-modules.jar\", \"processId\": 7461 }", "/subsystem=logging/json-formatter=logstash:add(exception-output-type=formatted, key-overrides=[timestamp=\"@timestamp\"], meta-data=[@version=1])", "/subsystem=logging/xml-formatter= XML_FORMATTER_NAME :add(pretty-print=true, exception-output-type=detailed-and-formatted)", "<record> <timestamp>2018-10-18T13:55:53.419-04:00</timestamp> <sequence>62</sequence> <loggerClassName>org.jboss.as.server.logging.ServerLogger_USDlogger</loggerClassName> <loggerName>org.jboss.as</loggerName> <level>INFO</level> <message>WFLYSRV0025: {ProductCurrentVersionExamples} (WildFly Core 10.0.0.Final-redhat-20190924) started in 6271ms - Started 495 of 679 services (331 services are lazy, passive or on-demand)</message> <threadName>Controller Boot Thread</threadName> <threadId>22</threadId> <mdc> </mdc> <ndc></ndc> <hostName>localhost.localdomain</hostName> <processName>jboss-modules.jar</processName> <processId>7790</processId> 
</record>", "/subsystem=logging/xml-formatter= XML_FORMATTER_NAME :add(pretty-print=true, print-namespace=true, namespace-uri=\"urn:custom:1.0\", key-overrides={message=msg, record=logRecord, timestamp=date}, print-details=true)", "/subsystem=logging/custom-formatter= CUSTOM_FORMATTER_NAME :add(class= CLASS_NAME , module= MODULE_NAME )", "/subsystem=logging/custom-formatter= CUSTOM_FORMATTER_NAME :write-attribute(name=properties. PROPERTY_NAME ,value= PROPERTY_VALUE )", "/subsystem=logging/periodic-rotating-file-handler= FILE_HANDLER_NAME :write-attribute(name=named-formatter, value= CUSTOM_FORMATTER_NAME )", "/subsystem=logging/custom-formatter=custom-xml-formatter:add(class=java.util.logging.XMLFormatter, module=org.jboss.logmanager) /subsystem=logging/console-handler=CONSOLE:write-attribute(name=named-formatter, value=custom-xml-formatter)", "<record> <date>2016-03-23T12:58:13</date> <millis>1458752293091</millis> <sequence>93963</sequence> <logger>org.jboss.as</logger> <level>INFO</level> <class>org.jboss.as.server.BootstrapListener</class> <method>logAdminConsole</method> <thread>22</thread> <message>WFLYSRV0051: Admin console listening on http://%s:%d</message> <param>127.0.0.1</param> <param>9990</param> </record>", "/subsystem=logging:write-attribute(name=use-deployment-logging-config,value=false)", "/subsystem=logging/logging-profile= PROFILE_NAME :add", "/subsystem=logging/logging-profile= PROFILE_NAME /file-handler= FILE_HANDLER_NAME :add(file={path=>\" LOG_NAME .log\", \"relative-to\"=>\"jboss.server.log.dir\"})", "/subsystem=logging/logging-profile= PROFILE_NAME /file-handler= FILE_HANDLER_NAME :write-attribute(name=\"level\", value=\"DEBUG\")", "/subsystem=logging/logging-profile= PROFILE_NAME /logger= CATEGORY_NAME :add(level=TRACE)", "/subsystem=logging/logging-profile= PROFILE_NAME /logger= CATEGORY_NAME :add-handler(name=\" FILE_HANDLER_NAME \")", "/subsystem=logging/logging-profile=accounts-app-profile:add /subsystem=logging/logging-profile=accounts-app-profile/file-handler=ejb-trace-file:add(file={path=>\"ejb-trace.log\", \"relative-to\"=>\"jboss.server.log.dir\"}) /subsystem=logging/logging-profile=accounts-app-profile/file-handler=ejb-trace-file:write-attribute(name=\"level\", value=\"DEBUG\") /subsystem=logging/logging-profile=accounts-app-profile/logger=com.company.accounts.ejbs:add(level=TRACE) /subsystem=logging/logging-profile=accounts-app-profile/logger=com.company.accounts.ejbs:add-handler(name=\"ejb-trace-file\")", "<logging-profiles> <logging-profile name=\"accounts-app-profile\"> <file-handler name=\"ejb-trace-file\"> <level name=\"DEBUG\"/> <file relative-to=\"jboss.server.log.dir\" path=\"ejb-trace.log\"/> </file-handler> <logger category=\"com.company.accounts.ejbs\"> <level name=\"TRACE\"/> <handlers> <handler name=\"ejb-trace-file\"/> </handlers> </logger> </logging-profile> </logging-profiles>", "Manifest-Version: 1.0 Logging-Profile: accounts-app-profile", "/deployment= DEPLOYMENT_NAME /subsystem=logging/configuration= CONFIG :read-resource", "/deployment=mydeployment.war/subsystem=logging/configuration=profile-MYPROFILE:read-resource(recursive=true,include-runtime=true)", "{ \"outcome\" => \"success\", \"result\" => { \"error-manager\" => undefined, \"filter\" => undefined, \"formatter\" => { \"MYFORMATTER\" => { \"class-name\" => \"org.jboss.logmanager.formatters.PatternFormatter\", \"module\" => undefined, \"properties\" => {\"pattern\" => \"%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n\"} } }, \"handler\" => { \"MYPERIODIC\" => { \"class-name\" => 
\"org.jboss.logmanager.handlers.PeriodicRotatingFileHandler\", \"encoding\" => undefined, \"error-manager\" => undefined, \"filter\" => undefined, \"formatter\" => \"MYFORMATTER\", \"handlers\" => [], \"level\" => \"ALL\", \"module\" => undefined, \"properties\" => { \"append\" => \"true\", \"autoFlush\" => \"true\", \"enabled\" => \"true\", \"suffix\" => \".yyyy-MM-dd\", \"fileName\" => \" EAP_HOME /standalone/log/deployment.log\" } } }, \"logger\" => {\"MYCATEGORY\" => { \"filter\" => undefined, \"handlers\" => [], \"level\" => \"DEBUG\", \"use-parent-handlers\" => true }}, \"pojo\" => undefined } }", "/deployment= DEPLOYMENT_NAME /subsystem=logging:read-resource(include-runtime=true, recursive=true)", "EAP_HOME /bin/jboss-cli.sh", "[disconnected /] module add --name= MODULE_NAME --resources= PATH_TO_JDBC_JAR --dependencies= DEPENDENCIES", "[disconnected /] module add --name=com.mysql --resources= /path/to /mysql-connector-java-8.0.12.jar --dependencies=javax.transaction.api,sun.jdk,ibm.jdk,javaee.api,javax.api", "EAP_HOME /bin/jboss-cli.sh --command=\"module add --name= MODULE_NAME --resources= PATH_TO_JDBC_JAR --dependencies= DEPENDENCIES \"", "/subsystem=datasources/jdbc-driver= DRIVER_NAME :add(driver-name= DRIVER_NAME ,driver-module-name= MODULE_NAME ,driver-xa-datasource-class-name= XA_DATASOURCE_CLASS_NAME , driver-class-name= DRIVER_CLASS_NAME )", "/subsystem=datasources/jdbc-driver=mysql:add(driver-name=mysql,driver-module-name=com.mysql,driver-xa-datasource-class-name=com.mysql.cj.jdbc.MysqlXADataSource, driver-class-name=com.mysql.cj.jdbc.Driver)", "deploy PATH_TO_JDBC_JAR", "deploy /path/to /mysql-connector-java-8.0.12.jar", "WFLYJCA0018: Started Driver service with driver-name = mysql-connector-java-8.0.12.jar", "com.mysql.cj.jdbc.Driver", "jar \\-uf jdbc-driver.jar META-INF/services/java.sql.Driver", "Dependencies: com.mysql", "<jboss-deployment-structure> <deployment> <dependencies> <module name=\"com.mysql\"/> </dependencies> </deployment> </jboss-deployment-structure>", "import java.sql.Connection; Connection c = ds.getConnection(); if (c.isWrapperFor(com.mysql.jdbc.Connection.class)) { com.mysql.jdbc.Connection mc = c.unwrap(com.mysql.jdbc.Connection.class); }", "data-source add --name= DATASOURCE_NAME --jndi-name= JNDI_NAME --driver-name= DRIVER_NAME --connection-url= CONNECTION_URL --user-name= USER_NAME --password= PASSWORD", "WFLYJCA0018: Started Driver service with driver-name = mysql-connector-java-5.1.36-bin.jar_com.mysql.cj.jdbc.Driver_5_1", "xa-data-source add --name= XA_DATASOURCE_NAME --jndi-name= JNDI_NAME --driver-name= DRIVER_NAME --xa-datasource-class= XA_DATASOURCE_CLASS --xa-datasource-properties={\"ServerName\"=>\" HOST_NAME \",\"DatabaseName\"=>\" DATABASE_NAME \"}", "/subsystem=datasources/xa-data-source= XA_DATASOURCE_NAME /xa-datasource-properties=ServerName:add(value= HOST_NAME )", "/subsystem=datasources/xa-data-source= XA_DATASOURCE_NAME /xa-datasource-properties=DatabaseName:add(value= DATABASE_NAME )", "WFLYJCA0018: Started Driver service with driver-name = mysql-connector-java-5.1.36-bin.jar_com.mysql.cj.jdbc.Driver_5_1", "data-source --name= DATASOURCE_NAME -- ATTRIBUTE_NAME = ATTRIBUTE_VALUE", "xa-data-source --name= XA_DATASOURCE_NAME -- ATTRIBUTE_NAME = ATTRIBUTE_VALUE", "/subsystem=datasources/xa-data-source= XA_DATASOURCE_NAME /xa-datasource-properties= PROPERTY :add(value= VALUE )", "data-source remove --name= DATASOURCE_NAME", "xa-data-source remove --name= XA_DATASOURCE_NAME", "/subsystem=datasources/data-source= DATASOURCE_NAME 
:test-connection-in-pool", "/subsystem=datasources/data-source= DATASOURCE_NAME :flush-all-connection-in-pool", "/subsystem=datasources/data-source= DATASOURCE_NAME :flush-gracefully-connection-in-pool", "/subsystem=datasources/data-source= DATASOURCE_NAME :flush-idle-connection-in-pool", "/subsystem=datasources/data-source= DATASOURCE_NAME :flush-invalid-connection-in-pool", "/subsystem=datasources/xa-data-source= XA_DATASOURCE_NAME :write-attribute(name=no-recovery,value=true)", "GRANT SELECT ON sys.dba_pending_transactions TO USER; GRANT SELECT ON sys.pending_transUSD TO USER; GRANT SELECT ON sys.dba_2pc_pending TO USER; GRANT EXECUTE ON sys.dbms_xa TO USER;", "WARN [com.arjuna.ats.jta.logging.loggerI18N] [com.arjuna.ats.internal.jta.recovery.xarecovery1] Local XARecoveryModule.xaRecovery got XA exception javax.transaction.xa.XAException, XAException.XAER_RMERR", "sp_configure 'enable xact coordination', 1", "select 1 from dual", "select 1", "/subsystem=security/security-domain=DsRealm:add(cache-type=default) /subsystem=security/security-domain=DsRealm/authentication=classic:add(login-modules=[{code=ConfiguredIdentity,flag=required,module-options={userName=sa, principal=sa, password=sa}}])", "<security-domain name=\"DsRealm\" cache-type=\"default\"> <authentication> <login-module code=\"ConfiguredIdentity\" flag=\"required\"> <module-option name=\"userName\" value=\"sa\"/> <module-option name=\"principal\" value=\"sa\"/> <module-option name=\"password\" value=\"sa\"/> </login-module> </authentication> </security-domain>", "data-source add --name=securityDs --jndi-name=java:jboss/datasources/securityDs --connection-url=jdbc:h2:mem:test;DB_CLOSE_DELAY=-1 --driver-name=h2 --new-connection-sql=\"select current_user()\"", "data-source --name=securityDs --security-domain=DsRealm", "reload", "<datasources> <datasource jndi-name=\"java:jboss/datasources/securityDs\" pool-name=\"securityDs\"> <connection-url>jdbc:h2:mem:test;DB_CLOSE_DELAY=-1</connection-url> <driver>h2</driver> <new-connection-sql>select current_user()</new-connection-sql> <security> <security-domain>DsRealm</security-domain> </security> </datasource> </datasources>", "data-source --name=ExampleDS --password=USD{VAULT::ds_ExampleDS::password::N2NhZDYzOTMtNWE0OS00ZGQ0LWE4MmEtMWNlMDMyNDdmNmI2TElORV9CUkVBS3ZhdWx0}", "reload", "<security> <user-name>admin</user-name> <password>USD{VAULT::ds_ExampleDS::password::N2NhZDYzOTMtNWE0OS00ZGQ0LWE4MmEtMWNlMDMyNDdmNmI2TElORV9CUkVBS3ZhdWx0}</password> </security>", "/subsystem=datasources/data-source=ExampleDS:write-attribute(name=credential-reference,value={store=exampleCS, alias=example-ds-pw})", "/subsystem=datasources/data-source=ExampleDS:undefine-attribute(name=password) /subsystem=datasources/data-source=ExampleDS:undefine-attribute(name=user-name)", "/subsystem=datasources/data-source=ExampleDS:write-attribute(name=elytron-enabled,value=true) reload", "/subsystem=elytron/authentication-configuration=exampleAuthConfig:add(authentication-name=sa,credential-reference={clear-text=sa})", "/subsystem=elytron/authentication-context=exampleAuthContext:add(match-rules=[{authentication-configuration=exampleAuthConfig}])", "/subsystem=datasources/data-source=ExampleDS:write-attribute(name=authentication-context,value=exampleAuthContext) reload", "/system-property=java.security.krb5.conf:add(value=\"/path/to/krb5.conf\") /system-property=sun.security.krb5.debug:add(value=\"false\") /system-property=sun.security.spnego.debug:add(value=\"false\")", "batch 
/subsystem=infinispan/cache-container=security:add(default-cache=auth-cache) /subsystem=infinispan/cache-container=security/local-cache=auth-cache:add() /subsystem=infinispan/cache-container=security/local-cache=auth-cache/expiration=EXPIRATION:add(lifespan=3540000,max-idle=3540000) /subsystem=infinispan/cache-container=security/local-cache=auth-cache/memory=object:add(size=1000) run-batch", "batch /subsystem=security/security-domain=KerberosDatabase:add(cache-type=infinispan) /subsystem=security/security-domain=KerberosDatabase/authentication=classic:add /subsystem=security/security-domain=KerberosDatabase/authentication=classic/login-module=\"KerberosDatabase-Module\":add(code=\"org.jboss.security.negotiation.KerberosLoginModule\",module=\"org.jboss.security.negotiation\",flag=required, module-options={ \"debug\" => \"false\", \"storeKey\" => \"false\", \"useKeyTab\" => \"true\", \"keyTab\" => \"/path/to/eap.keytab\", \"principal\" => \"[email protected]\", \"doNotPrompt\" => \"true\", \"refreshKrb5Config\" => \"true\", \"isInitiator\" => \"true\", \"addGSSCredential\" => \"true\", \"credentialLifetime\" => \"-1\"}) run-batch", "/subsystem=elytron/kerberos-security-factory=krbsf:add(debug=false, [email protected], path=/path/to/keytab, request-lifetime=-1, obtain-kerberos-ticket=true, server=false)", "/subsystem=elytron/authentication-configuration=kerberos-conf:add(kerberos-security-factory=krbsf)", "/subsystem=elytron/authentication-context=ds-context:add(match-rules=[{authentication-configuration=kerberos-conf}])", "/subsystem=datasources/data-source=KerberosDS:add(connection-url=\"URL\", min-pool-size=0, max-pool-size=10, jndi-name=\"java:jboss/datasource/KerberosDS\", driver-name=<jdbc-driver>.jar, security-domain=KerberosDatabase, allow-multiple-users=false, pool-prefill=false, pool-use-strict-min=false, idle-timeout-minutes=2)", "/subsystem=datasources/data-source=KerberosDS/connection-properties=<connection-property-name>:add(value=\"(<kerberos-value>)\")", "/subsystem=datasources/data-source=KerberosDS/connection-properties=oracle.net.authentication_services:add(value=\"(KERBEROS5)\")", "/subsystem=datasources/data-source=KerberosDS:add(connection-url=\"URL\", min-pool-size=0, max-pool-size=10, jndi-name=\"java:jboss/datasource/KerberosDS\", driver-name=<jdbc-driver>.jar, elytron-enabled=true, authentication-context=ds-context, allow-multiple-users=false, pool-prefill=false, pool-use-strict-min=false, idle-timeout-minutes=2)", "/subsystem=datasources/data-source=KerberosDS/connection-properties=<connection-property-name>:add(value=\"(<kerberos-value>)\")", "/subsystem=datasources/data-source=KerberosDS/connection-properties=oracle.net.authentication_services:add(value=\"(KERBEROS5)\")", "/subsystem=datasources/data-source=ExampleDS:write-attribute(name=statistics-enabled,value=true)", "/subsystem=datasources/data-source=ExampleDS/statistics=pool:read-resource(include-runtime=true) { \"outcome\" => \"success\", \"result\" => { \"ActiveCount\" => 1, \"AvailableCount\" => 20, \"AverageBlockingTime\" => 0L, \"AverageCreationTime\" => 122L, \"AverageGetTime\" => 128L, \"AveragePoolTime\" => 0L, \"AverageUsageTime\" => 0L, \"BlockingFailureCount\" => 0, \"CreatedCount\" => 1, \"DestroyedCount\" => 0, \"IdleCount\" => 1, }", "/subsystem=datasources/data-source=ExampleDS/statistics=jdbc:read-resource(include-runtime=true) { \"outcome\" => \"success\", \"result\" => { \"PreparedStatementCacheAccessCount\" => 0L, \"PreparedStatementCacheAddCount\" => 0L, 
\"PreparedStatementCacheCurrentSize\" => 0, \"PreparedStatementCacheDeleteCount\" => 0L, \"PreparedStatementCacheHitCount\" => 0L, \"PreparedStatementCacheMissCount\" => 0L, \"statistics-enabled\" => true } }", "/subsystem=datasources/data-source=ExampleDS:write-attribute(name=capacity-incrementer-class, value=\"org.jboss.jca.core.connectionmanager.pool.capacity.SizeIncrementer\") /subsystem=datasources/data-source=ExampleDS:write-attribute(name=capacity-decrementer-class, value=\"org.jboss.jca.core.connectionmanager.pool.capacity.SizeDecrementer\")", "/subsystem=datasources/data-source=ExampleDS:write-attribute(name=capacity-incrementer-properties.size, value=2) /subsystem=datasources/data-source=ExampleDS:write-attribute(name=capacity-decrementer-properties.size, value=2)", "data-source --name= DATASOURCE_NAME --enlistment-trace=true", "xa-data-source --name= XA_DATASOURCE_NAME --enlistment-trace=true", "<datasources> <datasource jndi-name=\"java:jboss/MySqlDS\" pool-name=\"MySqlDS\"> <connection-url>jdbc:mysql://localhost:3306/jbossdb</connection-url> <driver>mysql</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter\"/> </validation> </datasource> <drivers> <driver name=\"mysql\" module=\"com.mysql\"> <driver-class>com.mysql.cj.jdbc.Driver</driver-class> <xa-datasource-class>com.mysql.cj.jdbc.MysqlXADataSource</xa-datasource-class> </driver> </drivers> </datasources>", "<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"com.mysql\"> <resources> <resource-root path=\"mysql-connector-java-8.0.12.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>", "module add --name=com.mysql --resources= /path/to/mysql-connector-java-8.0.12.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api", "/subsystem=datasources/jdbc-driver=mysql:add(driver-name=mysql,driver-module-name=com.mysql,driver-xa-datasource-class-name=com.mysql.cj.jdbc.MysqlXADataSource, driver-class-name=com.mysql.cj.jdbc.Driver)", "data-source add --name=MySqlDS --jndi-name=java:jboss/MySqlDS --driver-name=mysql --connection-url=jdbc:mysql://localhost:3306/jbossdb --user-name=admin --password=admin --validate-on-match=true --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter", "<datasources> <xa-datasource jndi-name=\"java:jboss/MySqlXADS\" pool-name=\"MySqlXADS\"> <xa-datasource-property name=\"ServerName\"> localhost </xa-datasource-property> <xa-datasource-property name=\"DatabaseName\"> mysqldb </xa-datasource-property> <driver>mysql</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter 
class-name=\"org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter\"/> </validation> </xa-datasource> <drivers> <driver name=\"mysql\" module=\"com.mysql\"> <driver-class>com.mysql.cj.jdbc.Driver</driver-class> <xa-datasource-class>com.mysql.cj.jdbc.MysqlXADataSource</xa-datasource-class> </driver> </drivers> </datasources>", "<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"com.mysql\"> <resources> <resource-root path=\"mysql-connector-java-8.0.12.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>", "module add --name=com.mysql --resources= /path/to/mysql-connector-java-8.0.12.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api", "/subsystem=datasources/jdbc-driver=mysql:add(driver-name=mysql,driver-module-name=com.mysql,driver-xa-datasource-class-name=com.mysql.cj.jdbc.MysqlXADataSource, driver-class-name=com.mysql.cj.jdbc.Driver)", "xa-data-source add --name=MySqlXADS --jndi-name=java:jboss/MySqlXADS --driver-name=mysql --user-name=admin --password=admin --validate-on-match=true --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter --xa-datasource-properties={\"ServerName\"=>\"localhost\",\"DatabaseName\"=>\"mysqldb\"}", "<datasources> <datasource jndi-name=\"java:jboss/PostgresDS\" pool-name=\"PostgresDS\"> <connection-url>jdbc:postgresql://localhost:5432/postgresdb</connection-url> <driver>postgresql</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter\"/> </validation> </datasource> <drivers> <driver name=\"postgresql\" module=\"com.postgresql\"> <xa-datasource-class>org.postgresql.xa.PGXADataSource</xa-datasource-class> </driver> </drivers> </datasources>", "<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"com.postgresql\"> <resources> <resource-root path=\"postgresql-42.x.y.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>", "module add --name=com.postgresql --resources= /path/to/postgresql-42.x.y.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api", "/subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql,driver-module-name=com.postgresql,driver-xa-datasource-class-name=org.postgresql.xa.PGXADataSource)", "data-source add --name=PostgresDS --jndi-name=java:jboss/PostgresDS --driver-name=postgresql --connection-url=jdbc:postgresql://localhost:5432/postgresdb --user-name=admin --password=admin --validate-on-match=true --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter", "<datasources> 
<xa-datasource jndi-name=\"java:jboss/PostgresXADS\" pool-name=\"PostgresXADS\"> <xa-datasource-property name=\"ServerName\"> localhost </xa-datasource-property> <xa-datasource-property name=\"PortNumber\"> 5432 </xa-datasource-property> <xa-datasource-property name=\"DatabaseName\"> postgresdb </xa-datasource-property> <driver>postgresql</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter\"/> </validation> </xa-datasource> <drivers> <driver name=\"postgresql\" module=\"com.postgresql\"> <xa-datasource-class>org.postgresql.xa.PGXADataSource</xa-datasource-class> </driver> </drivers> </datasources>", "<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"com.postgresql\"> <resources> <resource-root path=\"postgresql-42.x.y.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>", "module add --name=com.postgresql --resources= /path/to/postgresql-42.x.y.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api", "/subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql,driver-module-name=com.postgresql,driver-xa-datasource-class-name=org.postgresql.xa.PGXADataSource)", "xa-data-source add --name=PostgresXADS --jndi-name=java:jboss/PostgresXADS --driver-name=postgresql --user-name=admin --password=admin --validate-on-match=true --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter --xa-datasource-properties={\"ServerName\"=>\"localhost\",\"PortNumber\"=>\"5432\",\"DatabaseName\"=>\"postgresdb\"}", "<datasources> <datasource jndi-name=\"java:jboss/OracleDS\" pool-name=\"OracleDS\"> <connection-url>jdbc:oracle:thin:@localhost:1521:XE</connection-url> <driver>oracle</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.oracle.OracleValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.oracle.OracleExceptionSorter\"/> </validation> </datasource> <drivers> <driver name=\"oracle\" module=\"com.oracle\"> <xa-datasource-class>oracle.jdbc.xa.client.OracleXADataSource</xa-datasource-class> </driver> </drivers> </datasources>", "<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"com.oracle\"> <resources> <resource-root path=\"ojdbc7.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>", "module add --name=com.oracle --resources= /path/to/ojdbc7.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api", 
"/subsystem=datasources/jdbc-driver=oracle:add(driver-name=oracle,driver-module-name=com.oracle,driver-xa-datasource-class-name=oracle.jdbc.xa.client.OracleXADataSource)", "data-source add --name=OracleDS --jndi-name=java:jboss/OracleDS --driver-name=oracle --connection-url=jdbc:oracle:thin:@localhost:1521:XE --user-name=admin --password=admin --validate-on-match=true --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.oracle.OracleValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.oracle.OracleExceptionSorter", "<datasources> <xa-datasource jndi-name=\"java:jboss/OracleXADS\" pool-name=\"OracleXADS\"> <xa-datasource-property name=\"URL\"> jdbc:oracle:thin:@oracleHostName:1521:orcl </xa-datasource-property> <driver>oracle</driver> <xa-pool> <is-same-rm-override>false</is-same-rm-override> </xa-pool> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.oracle.OracleValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.oracle.OracleExceptionSorter\"/> </validation> </xa-datasource> <drivers> <driver name=\"oracle\" module=\"com.oracle\"> <xa-datasource-class>oracle.jdbc.xa.client.OracleXADataSource</xa-datasource-class> </driver> </drivers> </datasources>", "<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"com.oracle\"> <resources> <resource-root path=\"ojdbc7.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>", "module add --name=com.oracle --resources= /path/to/ojdbc7.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api", "/subsystem=datasources/jdbc-driver=oracle:add(driver-name=oracle,driver-module-name=com.oracle,driver-xa-datasource-class-name=oracle.jdbc.xa.client.OracleXADataSource)", "xa-data-source add --name=OracleXADS --jndi-name=java:jboss/OracleXADS --driver-name=oracle --user-name=admin --password=admin --validate-on-match=true --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.oracle.OracleValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.oracle.OracleExceptionSorter --same-rm-override=false --xa-datasource-properties={\"URL\"=>\"jdbc:oracle:thin:@oracleHostName:1521:orcl\"}", "<datasources> <datasource jndi-name=\"java:jboss/OracleDS\" pool-name=\"OracleDS\"> <connection-url>jdbc:oracle:thin:@(DESCRIPTION=(LOAD_BALANCE=on)(ADDRESS=(PROTOCOL=TCP)(HOST=host1)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=host2)(PORT=1521))</connection-url> <driver>oracle</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.oracle.OracleValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.oracle.OracleExceptionSorter\"/> </validation> </datasource> <drivers> <driver name=\"oracle\" module=\"com.oracle\"> <xa-datasource-class>oracle.jdbc.xa.client.OracleXADataSource</xa-datasource-class> 
</driver> </drivers> </datasources>", "<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"com.oracle\"> <resources> <resource-root path=\"ojdbc7.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>", "module add --name=com.oracle --resources= /path/to/ojdbc7.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api", "/subsystem=datasources/jdbc-driver=oracle:add(driver-name=oracle,driver-module-name=com.oracle,driver-xa-datasource-class-name=oracle.jdbc.xa.client.OracleXADataSource)", "data-source add --name=OracleDS --jndi-name=java:jboss/OracleDS --driver-name=oracle --connection-url=\"jdbc:oracle:thin:@(DESCRIPTION=(LOAD_BALANCE=on)(ADDRESS=(PROTOCOL=TCP)(HOST=host1)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=host2)(PORT=1521))\" --user-name=admin --password=admin --validate-on-match=true --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.oracle.OracleValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.oracle.OracleExceptionSorter", "<datasources> <xa-datasource jndi-name=\"java:jboss/OracleXADS\" pool-name=\"OracleXADS\"> <xa-datasource-property name=\"URL\"> jdbc:oracle:thin:@(DESCRIPTION=(LOAD_BALANCE=on)(ADDRESS=(PROTOCOL=TCP)(HOST=host1)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=host2)(PORT=1521)) </xa-datasource-property> <driver>oracle</driver> <xa-pool> <is-same-rm-override>false</is-same-rm-override> </xa-pool> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.oracle.OracleValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.oracle.OracleExceptionSorter\"/> </validation> </xa-datasource> <drivers> <driver name=\"oracle\" module=\"com.oracle\"> <xa-datasource-class>oracle.jdbc.xa.client.OracleXADataSource</xa-datasource-class> </driver> </drivers> </datasources>", "<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"com.oracle\"> <resources> <resource-root path=\"ojdbc7.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>", "module add --name=com.oracle --resources= /path/to/ojdbc7.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api", "/subsystem=datasources/jdbc-driver=oracle:add(driver-name=oracle,driver-module-name=com.oracle,driver-xa-datasource-class-name=oracle.jdbc.xa.client.OracleXADataSource)", "xa-data-source add --name=OracleXADS --jndi-name=java:jboss/OracleXADS --driver-name=oracle --user-name=admin --password=admin --validate-on-match=true --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.oracle.OracleValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.oracle.OracleExceptionSorter --same-rm-override=false --xa-datasource-properties={\"URL\"=>\"jdbc:oracle:thin:@(DESCRIPTION=(LOAD_BALANCE=on)(ADDRESS=(PROTOCOL=TCP)(HOST=host1)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=host2)(PORT=1521))\"}", "<datasources> <datasource 
jndi-name=\"java:jboss/MSSQLDS\" pool-name=\"MSSQLDS\"> <connection-url>jdbc:sqlserver://localhost:1433;DatabaseName=MyDatabase</connection-url> <driver>sqlserver</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLExceptionSorter\"/> </validation> </datasource> <drivers> <driver name=\"sqlserver\" module=\"com.microsoft\"> <xa-datasource-class>com.microsoft.sqlserver.jdbc.SQLServerXADataSource</xa-datasource-class> </driver> </drivers> </datasources>", "<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"com.microsoft\"> <resources> <resource-root path=\"sqljdbc42.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> <module name=\"javax.xml.bind.api\"/> </dependencies> </module>", "module add --name=com.microsoft --resources= /path/to/sqljdbc42.jar --dependencies=javax.api,javax.transaction.api,javax.xml.bind.api,javaee.api,sun.jdk,ibm.jdk", "/subsystem=datasources/jdbc-driver=sqlserver:add(driver-name=sqlserver,driver-module-name=com.microsoft,driver-xa-datasource-class-name=com.microsoft.sqlserver.jdbc.SQLServerXADataSource)", "data-source add --name=MSSQLDS --jndi-name=java:jboss/MSSQLDS --driver-name=sqlserver --connection-url=jdbc:sqlserver://localhost:1433;DatabaseName=MyDatabase --user-name=admin --password=admin --validate-on-match=true --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLExceptionSorter", "<datasources> <xa-datasource jndi-name=\"java:jboss/MSSQLXADS\" pool-name=\"MSSQLXADS\"> <xa-datasource-property name=\"ServerName\"> localhost </xa-datasource-property> <xa-datasource-property name=\"DatabaseName\"> mssqldb </xa-datasource-property> <xa-datasource-property name=\"SelectMethod\"> cursor </xa-datasource-property> <driver>sqlserver</driver> <xa-pool> <is-same-rm-override>false</is-same-rm-override> </xa-pool> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLExceptionSorter\"/> </validation> </xa-datasource> <drivers> <driver name=\"sqlserver\" module=\"com.microsoft\"> <xa-datasource-class>com.microsoft.sqlserver.jdbc.SQLServerXADataSource</xa-datasource-class> </driver> </drivers> </datasources>", "<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"com.microsoft\"> <resources> <resource-root path=\"sqljdbc42.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> <module name=\"javax.xml.bind.api\"/> </dependencies> </module>", "module add --name=com.microsoft --resources= /path/to/sqljdbc42.jar 
--dependencies=javax.api,javax.transaction.api,javax.xml.bind.api,javaee.api,sun.jdk,ibm.jdk", "/subsystem=datasources/jdbc-driver=sqlserver:add(driver-name=sqlserver,driver-module-name=com.microsoft,driver-xa-datasource-class-name=com.microsoft.sqlserver.jdbc.SQLServerXADataSource)", "xa-data-source add --name=MSSQLXADS --jndi-name=java:jboss/MSSQLXADS --driver-name=sqlserver --user-name=admin --password=admin --validate-on-match=true --background-validation=false --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLExceptionSorter --same-rm-override=false --xa-datasource-properties={\"ServerName\"=>\"localhost\",\"DatabaseName\"=>\"mssqldb\",\"SelectMethod\"=>\"cursor\"}", "<datasources> <datasource jndi-name=\"java:jboss/DB2DS\" pool-name=\"DB2DS\"> <connection-url>jdbc:db2://localhost:50000/ibmdb2db</connection-url> <driver>ibmdb2</driver> <pool> <min-pool-size>0</min-pool-size> <max-pool-size>50</max-pool-size> </pool> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.db2.DB2ValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.db2.DB2ExceptionSorter\"/> </validation> </datasource> <drivers> <driver name=\"ibmdb2\" module=\"com.ibm\"> <xa-datasource-class>com.ibm.db2.jcc.DB2XADataSource</xa-datasource-class> </driver> </drivers> </datasources>", "<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"com.ibm\"> <resources> <resource-root path=\"db2jcc4.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>", "module add --name=com.ibm --resources= /path/to/db2jcc4.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api", "/subsystem=datasources/jdbc-driver=ibmdb2:add(driver-name=ibmdb2,driver-module-name=com.ibm,driver-xa-datasource-class-name=com.ibm.db2.jcc.DB2XADataSource)", "data-source add --name=DB2DS --jndi-name=java:jboss/DB2DS --driver-name=ibmdb2 --connection-url=jdbc:db2://localhost:50000/ibmdb2db --user-name=admin --password=admin --validate-on-match=true --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.db2.DB2ValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.db2.DB2ExceptionSorter --min-pool-size=0 --max-pool-size=50", "<datasources> <xa-datasource jndi-name=\"java:jboss/DB2XADS\" pool-name=\"DB2XADS\"> <xa-datasource-property name=\"ServerName\"> localhost </xa-datasource-property> <xa-datasource-property name=\"DatabaseName\"> ibmdb2db </xa-datasource-property> <xa-datasource-property name=\"PortNumber\"> 50000 </xa-datasource-property> <xa-datasource-property name=\"DriverType\"> 4 </xa-datasource-property> <driver>ibmdb2</driver> <xa-pool> <is-same-rm-override>false</is-same-rm-override> </xa-pool> <security> <user-name>admin</user-name> <password>admin</password> </security> <recovery> <recover-plugin class-name=\"org.jboss.jca.core.recovery.ConfigurableRecoveryPlugin\"> <config-property name=\"EnableIsValid\"> false </config-property> <config-property 
name=\"IsValidOverride\"> false </config-property> <config-property name=\"EnableClose\"> false </config-property> </recover-plugin> </recovery> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.db2.DB2ValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.db2.DB2ExceptionSorter\"/> </validation> </xa-datasource> <drivers> <driver name=\"ibmdb2\" module=\"com.ibm\"> <xa-datasource-class>com.ibm.db2.jcc.DB2XADataSource</xa-datasource-class> </driver> </drivers> </datasources>", "<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"com.ibm\"> <resources> <resource-root path=\"db2jcc4.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>", "module add --name=com.ibm --resources= /path/to/db2jcc4.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api", "/subsystem=datasources/jdbc-driver=ibmdb2:add(driver-name=ibmdb2,driver-module-name=com.ibm,driver-xa-datasource-class-name=com.ibm.db2.jcc.DB2XADataSource)", "xa-data-source add --name=DB2XADS --jndi-name=java:jboss/DB2XADS --driver-name=ibmdb2 --user-name=admin --password=admin --validate-on-match=true --background-validation=false --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.db2.DB2ValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.db2.DB2ExceptionSorter --same-rm-override=false --recovery-plugin-class-name=org.jboss.jca.core.recovery.ConfigurableRecoveryPlugin --recovery-plugin-properties={\"EnableIsValid\"=>\"false\",\"IsValidOverride\"=>\"false\",\"EnableClose\"=>\"false\"} --xa-datasource-properties={\"ServerName\"=>\"localhost\",\"DatabaseName\"=>\"ibmdb2db\",\"PortNumber\"=>\"50000\",\"DriverType\"=>\"4\"}", "<datasources> <datasource jndi-name=\"java:jboss/SybaseDB\" pool-name=\"SybaseDB\"> <connection-url>jdbc:sybase:Tds:localhost:5000/DATABASE?JCONNECT_VERSION=6</connection-url> <driver>sybase</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.sybase.SybaseValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.sybase.SybaseExceptionSorter\"/> </validation> </datasource> <drivers> <driver name=\"sybase\" module=\"com.sybase\"> <xa-datasource-class>com.sybase.jdbc4.jdbc.SybXADataSource</xa-datasource-class> </driver> </drivers> </datasources>", "<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"com.sybase\"> <resources> <resource-root path=\"jconn4.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>", "module add --name=com.sybase --resources= /path/to/jconn4.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api", "/subsystem=datasources/jdbc-driver=sybase:add(driver-name=sybase,driver-module-name=com.sybase,driver-xa-datasource-class-name=com.sybase.jdbc4.jdbc.SybXADataSource)", 
"data-source add --name=SybaseDB --jndi-name=java:jboss/SybaseDB --driver-name=sybase --connection-url=jdbc:sybase:Tds:localhost:5000/DATABASE?JCONNECT_VERSION=6 --user-name=admin --password=admin --validate-on-match=true --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.sybase.SybaseValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.sybase.SybaseExceptionSorter", "<datasources> <xa-datasource jndi-name=\"java:jboss/SybaseXADS\" pool-name=\"SybaseXADS\"> <xa-datasource-property name=\"ServerName\"> localhost </xa-datasource-property> <xa-datasource-property name=\"DatabaseName\"> mydatabase </xa-datasource-property> <xa-datasource-property name=\"PortNumber\"> 4100 </xa-datasource-property> <xa-datasource-property name=\"NetworkProtocol\"> Tds </xa-datasource-property> <driver>sybase</driver> <xa-pool> <is-same-rm-override>false</is-same-rm-override> </xa-pool> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.sybase.SybaseValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.sybase.SybaseExceptionSorter\"/> </validation> </xa-datasource> <drivers> <driver name=\"sybase\" module=\"com.sybase\"> <xa-datasource-class>com.sybase.jdbc4.jdbc.SybXADataSource</xa-datasource-class> </driver> </drivers> </datasources>", "<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"com.sybase\"> <resources> <resource-root path=\"jconn4.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>", "module add --name=com.sybase --resources= /path/to/jconn4.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api", "/subsystem=datasources/jdbc-driver=sybase:add(driver-name=sybase,driver-module-name=com.sybase,driver-xa-datasource-class-name=com.sybase.jdbc4.jdbc.SybXADataSource)", "xa-data-source add --name=SybaseXADS --jndi-name=java:jboss/SybaseXADS --driver-name=sybase --user-name=admin --password=admin --validate-on-match=true --background-validation=false --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.sybase.SybaseValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.sybase.SybaseExceptionSorter --same-rm-override=false --xa-datasource-properties={\"ServerName\"=>\"localhost\",\"DatabaseName\"=>\"mydatabase\",\"PortNumber\"=>\"4100\",\"NetworkProtocol\"=>\"Tds\"}", "<datasources> <datasource jndi-name=\"java:jboss/MariaDBDS\" pool-name=\"MariaDBDS\"> <connection-url>jdbc:mariadb://localhost:3306/jbossdb</connection-url> <driver>mariadb</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter\"/> </validation> </datasource> <drivers> <driver name=\"mariadb\" module=\"org.mariadb\"> 
<driver-class>org.mariadb.jdbc.Driver</driver-class> <xa-datasource-class>org.mariadb.jdbc.MySQLDataSource</xa-datasource-class> </driver> </drivers> </datasources>", "<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"org.mariadb\"> <resources> <resource-root path=\"mariadb-java-client-3.3.0.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"org.slf4j\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>", "module add --name=org.mariadb --resources= /path/to/mariadb-java-client-3.3.0.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api,org.slf4j", "/subsystem=datasources/jdbc-driver=mariadb:add(driver-name=mariadb,driver-module-name=org.mariadb,driver-xa-datasource-class-name=org.mariadb.jdbc.MySQLDataSource, driver-class-name=org.mariadb.jdbc.Driver)", "data-source add --name=MariaDBDS --jndi-name=java:jboss/MariaDBDS --driver-name=mariadb --connection-url=jdbc:mariadb://localhost:3306/jbossdb --user-name=admin --password=admin --validate-on-match=true --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter", "<datasources> <xa-datasource jndi-name=\"java:jboss/MariaDBXADS\" pool-name=\"MariaDBXADS\"> <xa-datasource-property name=\"ServerName\"> localhost </xa-datasource-property> <xa-datasource-property name=\"DatabaseName\"> mariadbdb </xa-datasource-property> <driver>mariadb</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter\"/> </validation> </xa-datasource> <drivers> <driver name=\"mariadb\" module=\"org.mariadb\"> <driver-class>org.mariadb.jdbc.Driver</driver-class> <xa-datasource-class>org.mariadb.jdbc.MySQLDataSource</xa-datasource-class> </driver> </drivers> </datasources>", "<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"org.mariadb\"> <resources> <resource-root path=\"mariadb-java-client-3.3.0.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"org.slf4j\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>", "module add --name=org.mariadb --resources= /path/to/mariadb-java-client-3.3.0.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api,org.slf4j", "/subsystem=datasources/jdbc-driver=mariadb:add(driver-name=mariadb,driver-module-name=org.mariadb,driver-xa-datasource-class-name=org.mariadb.jdbc.MySQLDataSource, driver-class-name=org.mariadb.jdbc.Driver)", "xa-data-source add --name=MariaDBXADS --jndi-name=java:jboss/MariaDBXADS --driver-name=mariadb --user-name=admin --password=admin --validate-on-match=true --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter 
--xa-datasource-properties={\"ServerName\"=>\"localhost\",\"DatabaseName\"=>\"mariadbdb\"}", "<datasources> <datasource jndi-name=\"java:jboss/MariaDBGaleraClusterDS\" pool-name=\"MariaDBGaleraClusterDS\"> <connection-url>jdbc:mariadb://192.168.1.1:3306,192.168.1.2:3306/jbossdb</connection-url> <driver>mariadb</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter\"/> </validation> </datasource> <drivers> <driver name=\"mariadb\" module=\"org.mariadb\"> <driver-class>org.mariadb.jdbc.Driver</driver-class> <xa-datasource-class>org.mariadb.jdbc.MySQLDataSource</xa-datasource-class> </driver> </drivers> </datasources>", "<?xml version='1.0' encoding='UTF-8'?> <module xmlns=\"urn:jboss:module:1.1\" name=\"org.mariadb\"> <resources> <resource-root path=\"mariadb-java-client-3.3.0.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"org.slf4j\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>", "module add --name=org.mariadb --resources= /path/to/mariadb-java-client-3.3.0.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api,org.slf4j", "/subsystem=datasources/jdbc-driver=mariadb:add(driver-name=mariadb,driver-module-name=org.mariadb,driver-xa-datasource-class-name=org.mariadb.jdbc.MySQLDataSource, driver-class-name=org.mariadb.jdbc.Driver)", "data-source add --name=MariaDBGaleraClusterDS --jndi-name=java:jboss/MariaDBGaleraClusterDS --driver-name=mariadb --connection-url=jdbc:mariadb://192.168.1.1:3306,192.168.1.2:3306/jbossdb --user-name=admin --password=admin --validate-on-match=true --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter", "/extension=org.wildfly.extension.datasources-agroal:add", "/subsystem=datasources-agroal:add", "reload", "<server xmlns=\"urn:jboss:domain:8.0\"> <extensions> <extension module=\"org.wildfly.extension.datasources-agroal\"/> </extensions> <subsystem xmlns=\"urn:jboss:domain:datasources-agroal:1.0\"/> </server>", "EAP_HOME /bin/jboss-cli.sh", "[disconnected /] module add --name= MODULE_NAME --resources= PATH_TO_JDBC_JAR --dependencies= DEPENDENCIES", "[disconnected /] module add --name=com.mysql --resources= /path/to /mysql-connector-java-8.0.12.jar --dependencies=javax.transaction.api,sun.jdk,ibm.jdk,javaee.api,javax.api", "EAP_HOME /bin/jboss-cli.sh --command=\"module add --name= MODULE_NAME --resources= PATH_TO_JDBC_JAR --dependencies= DEPENDENCIES \"", "/subsystem=datasources-agroal/driver= DRIVER_NAME :add(module= MODULE_NAME ,class= CLASS_NAME )", "/subsystem=datasources-agroal/datasource= DATASOURCE_NAME :add(jndi-name= JNDI_NAME ,connection-factory={driver= DRIVER_NAME ,url= URL },connection-pool={max-size= MAX_POOL_SIZE })", "/subsystem=datasources-agroal/datasource= DATASOURCE_NAME :remove", "/subsystem=datasources-agroal/xa-datasource= XA_DATASOURCE_NAME :add(jndi-name= JNDI_NAME ,connection-factory={driver= DRIVER_NAME ,connection-properties={ServerName= HOST_NAME 
,PortNumber= PORT ,DatabaseName= DATABASE_NAME }},connection-pool={max-size= MAX_POOL_SIZE })", "/subsystem=datasources-agroal/xa-datasource= DATASOURCE_NAME :remove", "<subsystem xmlns=\"urn:jboss:domain:datasources-agroal:1.0\"> <datasource name=\"ExampleAgroalDS\" jndi-name=\"java:jboss/datasources/ExampleAgroalDS\"> <connection-factory driver=\"mysql\" url=\"jdbc:mysql://localhost:3306/jbossdb\" username=\"admin\" password=\"admin\"/> <connection-pool max-size=\"30\"/> </datasource> <drivers> <driver name=\"mysql\" module=\"com.mysql\" class=\"com.mysql.cj.jdbc.Driver\"/> </drivers> </subsystem>", "<?xml version='1.0' encoding='UTF-8'?> <module xmlns=\"urn:jboss:module:1.1\" name=\"com.mysql\"> <resources> <resource-root path=\"mysql-connector-java-8.0.12.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>", "module add --name=com.mysql --resources= /path/to/mysql-connector-java-8.0.12.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api", "/subsystem=datasources-agroal/driver=mysql:add(module=com.mysql,class=com.mysql.cj.jdbc.Driver)", "/subsystem=datasources-agroal/datasource=ExampleAgroalDS:add(jndi-name=java:jboss/datasources/ExampleAgroalDS,connection-factory={driver=mysql,url=jdbc:mysql://localhost:3306/jbossdb,username=admin,password=admin},connection-pool={max-size=30})", "<subsystem xmlns=\"urn:jboss:domain:datasources-agroal:1.0\"> <xa-datasource name=\"ExampleAgroalXADS\" jndi-name=\"java:jboss/datasources/ExampleAgroalXADS\"> <connection-factory driver=\"mysqlXA\" username=\"admin\" password=\"admin\"> <connection-properties> <property name=\"ServerName\" value=\"localhost\"/> <property name=\"PortNumber\" value=\"3306\"/> <property name=\"DatabaseName\" value=\"jbossdb\"/> </connection-properties> </connection-factory> <connection-pool max-size=\"30\"/> </xa-datasource> <drivers> <driver name=\"mysqlXA\" module=\"com.mysql\" class=\"com.mysql.cj.jdbc.MysqlXADataSource\"/> </drivers> </subsystem>", "<?xml version='1.0' encoding='UTF-8'?> <module xmlns=\"urn:jboss:module:1.1\" name=\"com.mysql\"> <resources> <resource-root path=\"mysql-connector-java-8.0.12.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>", "module add --name=com.mysql --resources= /path/to/mysql-connector-java-8.0.12.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api", "/subsystem=datasources-agroal/driver=mysqlXA:add(module=com.mysql,class=com.mysql.cj.jdbc.MysqlXADataSource)", "/subsystem=datasources-agroal/xa-datasource=ExampleAgroalXADS:add(jndi-name=java:jboss/datasources/ExampleAgroalXADS,connection-factory={driver=mysqlXA,connection-properties={ServerName=localhost,PortNumber=3306,DatabaseName=jbossdb},username=admin,password=admin},connection-pool={max-size=30})", "/profile=full/subsystem=iiop-openjdk:write-attribute(name=security,value=identity)", "/profile=full/subsystem=iiop-openjdk:write-attribute(name=transactions, value=full)", "/profile=full/subsystem=transactions:write-attribute(name=jts,value=true)", "/subsystem=iiop-openjdk:read-attribute(name=security-domain) { \"outcome\" => \"success\", \"result\" => \"iiopSSLSecurityDomain\" }", "/subsystem=iiop-openjdk:undefine-attribute(name=security-realm)", "batch 
/subsystem=iiop-openjdk:write-attribute(name=client-ssl-context,value=iiopClientSSC) /subsystem=iiop-openjdk:write-attribute(name=server-ssl-context,value=iiopServerSSC) run-batch reload", "/subsystem=iiop-openjdk:write-attribute(name=support-ssl,value=true) /subsystem=iiop-openjdk:write-attribute(name=client-requires-ssl,value=true) /subsystem=iiop-openjdk:write-attribute(name=server-requires-ssl,value=true) /subsystem=iiop-openjdk:write-attribute(name=ssl-socket-binding,value=iiop-ssl) reload", "Severity: ERROR Section: 19.4.2 Description: A ResourceAdapter must implement a \"public int hashCode()\" method. Code: com.mycompany.myproject.ResourceAdapterImpl Severity: ERROR Section: 19.4.2 Description: A ResourceAdapter must implement a \"public boolean equals(Object)\" method. Code: com.mycompany.myproject.ResourceAdapterImpl", "batch /subsystem=jca/distributed-workmanager=myDistWorkMgr:add(name=myDistWorkMgr) /subsystem=jca/distributed-workmanager=myDistWorkMgr/short-running-threads=myDistWorkMgr:add(queue-length=10,max-threads=10) /subsystem=jca/bootstrap-context=myCustomContext:add(name=myCustomContext,workmanager=myDistWorkMgr) run-batch", "/subsystem=jca/distributed-workmanager=myDistWorkMgr:write-attribute(name=policy-options,value={watermark=3})", "deploy /path/to /resource-adapter.rar", "deploy /path/to /resource-adapter.rar --all-server-groups", "/subsystem=resource-adapters/resource-adapter=eis.rar:add(archive=eis.rar, transaction-support=XATransaction)", "/subsystem=resource-adapters/resource-adapter=eis.rar/config-properties=server:add(value=localhost)", "/subsystem=resource-adapters/resource-adapter=eis.rar/config-properties=port:add(value=9000)", "/subsystem=resource-adapters/resource-adapter=eis.rar/admin-objects=aoName:add(class-name=com.acme.eis.ra.EISAdminObjectImpl, jndi-name=java:/eis/AcmeAdminObject)", "/subsystem=resource-adapters/resource-adapter=eis.rar/admin-objects=aoName/config-properties=threshold:add(value=10)", "/subsystem=resource-adapters/resource-adapter=eis.rar/connection-definitions=cfName:add(class-name=com.acme.eis.ra.EISManagedConnectionFactory, jndi-name=java:/eis/AcmeConnectionFactory)", "/subsystem=resource-adapters/resource-adapter=eis.rar/connection-definitions=cfName/config-properties=name:add(value=Acme Inc)", "/subsystem=resource-adapters/resource-adapter=eis.rar/connection-definitions=cfName:write-attribute(name=enlistment-trace,value=true)", "/subsystem=resource-adapters/resource-adapter=eis.rar:activate", "/subsystem=resource-adapters/resource-adapter= RAR_NAME /connection-definitions= FACTORY_NAME :write-attribute(name=elytron-enabled,value=true)", "/subsystem=elytron/authentication-configuration=exampleAuthConfig:add(authentication-name=sa,credential-reference={clear-text=sa})", "/subsystem=elytron/authentication-context=exampleAuthContext:add(match-rules=[{authentication-configuration=exampleAuthConfig}])", "/subsystem=jca/workmanager=customWM:add(name=customWM, elytron-enabled=true)", "<subsystem xmlns=\"urn:jboss:domain:jca:5.0\"> <archive-validation enabled=\"true\" fail-on-error=\"true\" fail-on-warn=\"false\"/> <bean-validation enabled=\"true\"/> <default-workmanager> <short-running-threads> <core-threads count=\"50\"/> <queue-length count=\"50\"/> <max-threads count=\"50\"/> <keepalive-time time=\"10\" unit=\"seconds\"/> </short-running-threads> <long-running-threads> <core-threads count=\"50\"/> <queue-length count=\"50\"/> <max-threads count=\"50\"/> <keepalive-time time=\"10\" unit=\"seconds\"/> </long-running-threads> 
</default-workmanager> <workmanager name=\"customWM\"> <elytron-enabled>true</elytron-enabled> <short-running-threads> <core-threads count=\"20\"/> <queue-length count=\"20\"/> <max-threads count=\"20\"/> </short-running-threads> </workmanager> <bootstrap-contexts> <bootstrap-context name=\"customContext\" workmanager=\"customWM\"/> </bootstrap-contexts> <cached-connection-manager/> </subsystem>", "<subsystem xmlns=\"urn:jboss:domain:resource-adapters:5.0\"> <resource-adapters> <resource-adapter id=\"ra-with-elytron-security-domain\"> <archive> ra-with-elytron-security-domain.rar </archive> <bootstrap-context>customContext</bootstrap-context> <transaction-support>NoTransaction</transaction-support> <workmanager> <security> <elytron-security-domain>wm-realm</elytron-security-domain> <default-principal>wm-default-principal</default-principal> <default-groups> <group> wm-default-group </group> </default-groups> </security> </workmanager> </resource-adapter> </resource-adapters> </subsystem>", "/subsystem=elytron/properties-realm=wm-properties-realm:add(users-properties={path=/security-dir/users.properties, plain-text=true}, groups-properties={path=/security-dir/groups.properties}) /subsystem=elytron/simple-role-decoder=wm-role-decoder:add(attribute=groups) /subsystem=elytron/constant-permission-mapper=wm-permission-mapper:add(permissions=[{class-name=\"org.wildfly.security.auth.permission.LoginPermission\"}]) /subsystem=elytron/security-domain=wm-realm:add(default-realm=wm-properties-realm, permission-mapper=wm-permission-mapper, realms=[{role-decoder=wm-role-decoder, realm=wm-properties-realm}])", "public interface WorkContextProvider { /** * Gets an instance of <code>WorkContexts</code> that needs to be used * by the <code>WorkManager</code> to set up the execution context while * executing a <code>Work</code> instance. * * @return an <code>List</code> of <code>WorkContext</code> instances. 
*/ List<WorkContext> getWorkContexts(); }", "public class ExampleWork implements Work, WorkContextProvider { private final String username; private final String role; public ExampleWork(String username, String role) { this.username = username; this.role = role; } public List<WorkContext> getWorkContexts() { List<WorkContext> l = new ArrayList<>(1); l.add(new ExampleSecurityContext()); return l; } public void run() { } public void release() { } public class ExampleSecurityContext extends SecurityContext { public void setupSecurityContext(CallbackHandler handler, Subject executionSubject, Subject serviceSubject) { try { List<javax.security.auth.callback.Callback> cbs = new ArrayList<>(); cbs.add(new CallerPrincipalCallback(executionSubject, new SimplePrincipal(username))); cbs.add(new GroupPrincipalCallback(executionSubject, new String[]{role})); handler.handle(cbs.toArray(new javax.security.auth.callback.Callback[cbs.size()])); } catch (Throwable t) { throw new RuntimeException(t); } } } }", "/subsystem=datasources/data-source= DATA_SOURCE :write-attribute(name=mcp,value= MCP_CLASS )", "/subsystem=resource-adapters/resource-adapter= RESOURCE_ADAPTER /connection-definitions= CONNECTION_DEFINITION :write-attribute(name=mcp,value= MCP_CLASS )", "/subsystem=messaging-activemq/server= SERVER /pooled-connection-factory= CONNECTION_FACTORY :write-attribute(name=managed-connection-pool,value= MCP_CLASS )", "/deployment= NAME .rar/subsystem=resource-adapters/statistics=statistics/connection-definitions=java\\:\\/testMe:read-resource(include-runtime=true)", "/subsystem=resource-adapters/resource-adapter= RESOURCE_ADAPTER /connection-definitions= CONNECTION_DEFINITION :flush-all-connection-in-pool", "/subsystem=resource-adapters/resource-adapter= RESOURCE_ADAPTER /connection-definitions= CONNECTION_DEFINITION :flush-gracefully-connection-in-pool", "/subsystem=resource-adapters/resource-adapter= RESOURCE_ADAPTER /connection-definitions= CONNECTION_DEFINITION :flush-idle-connection-in-pool", "/subsystem=resource-adapters/resource-adapter= RESOURCE_ADAPTER /connection-definitions= CONNECTION_DEFINITION :flush-invalid-connection-in-pool", "<subsystem xmlns=\"urn:jboss:domain:undertow:10.0\" default-server=\"default-server\" default-virtual-host=\"default-host\" default-servlet-container=\"default\" default-security-domain=\"other\"> <buffer-cache name=\"default\"/> <server name=\"default-server\"> <http-listener name=\"default\" socket-binding=\"http\" redirect-socket=\"https\" enable-http2=\"true\"/> <https-listener name=\"https\" socket-binding=\"https\" security-realm=\"ApplicationRealm\" enable-http2=\"true\"/> <host name=\"default-host\" alias=\"localhost\"> <location name=\"/\" handler=\"welcome-content\"/> <http-invoker security-realm=\"ApplicationRealm\"/> </host> </server> <servlet-container name=\"default\"> <jsp-config/> <websockets/> </servlet-container> <handlers> <file name=\"welcome-content\" path=\"USD{jboss.home.dir}/welcome-content\"/> </handlers> </subsystem>", "/subsystem=undertow/application-security-domain=ApplicationDomain:add(security-domain=ApplicationDomain)", "<subsystem xmlns=\"urn:jboss:domain:undertow:10.0\" ...
default-security-domain=\"other\"> <application-security-domains> <application-security-domain name=\"ApplicationDomain\" security-domain=\"ApplicationDomain\"/> </application-security-domains> </subsystem>", "/subsystem=undertow/application-security-domain=MyAppSecurity:add(http-authentication-factory=application-http-authentication)", "<subsystem xmlns=\"urn:jboss:domain:undertow:10.0\" ... default-security-domain=\"other\"> <application-security-domains> <application-security-domain name=\"MyAppSecurity\" http-authentication-factory=\"application-http-authentication\"/> </application-security-domains> </subsystem>", "/subsystem=undertow/application-security-domain=MyAppSecurity:read-resource(include-runtime=true) { \"outcome\" => \"success\", \"result\" => { \"enable-jacc\" => false, \"http-authentication-factory\" => undefined, \"override-deployment-config\" => false, \"referencing-deployments\" => [\"simple-webapp.war\"], \"security-domain\" => \"ApplicationDomain\", \"setting\" => undefined } }", "<subsystem xmlns=\"urn:jboss:domain:undertow:10.0\" default-server=\"default-server\" default-virtual-host=\"default-host\" default-servlet-container=\"default\" default-security-domain=\"other\"> <buffer-cache name=\"default\"/> </subsystem>", "/subsystem=undertow/buffer-cache=default/:write-attribute(name=buffer-size,value=2048)", "reload", "/subsystem=undertow/buffer-cache=new-buffer:add", "/subsystem=undertow/buffer-cache=new-buffer:remove", "reload", "/subsystem=undertow/byte-buffer-pool=myByteBufferPool:write-attribute(name=buffer-size,value=1024)", "reload", "/subsystem=undertow/byte-buffer-pool=newByteBufferPool:add", "/subsystem=undertow/byte-buffer-pool=newByteBufferPool:remove", "reload", "<subsystem xmlns=\"urn:jboss:domain:undertow:10.0\" default-server=\"default-server\" default-virtual-host=\"default-host\" default-servlet-container=\"default\" default-security-domain=\"other\"> <buffer-cache name=\"default\"/> <server name=\"default-server\"> <http-listener name=\"default\" socket-binding=\"http\" redirect-socket=\"https\" enable-http2=\"true\"/> <https-listener name=\"https\" socket-binding=\"https\" security-realm=\"ApplicationRealm\" enable-http2=\"true\"/> <host name=\"default-host\" alias=\"localhost\"> <location name=\"/\" handler=\"welcome-content\"/> <http-invoker security-realm=\"ApplicationRealm\"/> </host> </server> </subsystem>", "/subsystem=undertow/server=default-server:write-attribute(name=default-host,value=default-host)", "reload", "/subsystem=undertow/server=new-server:add", "reload", "/subsystem=undertow/server=new-server:remove", "reload", "/subsystem=undertow/server=default-server/host=default-host/setting=access-log:add", "/subsystem=undertow/server=default-server/host=default-host/setting=access-log:write-attribute(name=pattern,value=\"combined\"", "/subsystem=undertow/server=default-server/host=default-host/setting=console-access-log:add", "/subsystem=undertow/server=default-server/host=default-host/setting=console-access-log:add(metadata={\"@version\"=\"1\", \"qualifiedHostName\"=USD{jboss.qualified.host.name:unknown}}, attributes={bytes-sent={}, date-time={key=\"@timestamp\", date-format=\"yyyy-MM-dd'T'HH:mm:ssSSS\"}, remote-host={}, request-line={}, response-header={key-prefix=\"responseHeader\", names=[\"Content-Type\"]}, response-code={}, remote-user={}})", "{ \"eventSource\":\"web-access\", \"hostName\":\"default-host\", \"@version\":\"1\", \"qualifiedHostName\":\"localhost.localdomain\", \"bytesSent\":1504, 
\"@timestamp\":\"2019-05-02T11:57:37123\", \"remoteHost\":\"127.0.0.1\", \"remoteUser\":null, \"requestLine\":\"GET / HTTP/2.0\", \"responseCode\":200, \"responseHeaderContent-Type\":\"text/html\" }", "/subsystem=undertow/server=default-server/host=default-host/setting=console-access-log:write-attribute(name=attributes,value={bytes-sent={}, date-time={key=\"@timestamp\", date-format=\"yyyy-MM-dd'T'HH:mm:ssSSS\"}, remote-host={}, request-line={}, response-header={key-prefix=\"responseHeader\", names=[\"Content-Type\"]}, response-code={}, remote-user={}})", "/subsystem=undertow/server=default-server/host=default-host/setting=console-access-log:write-attribute(name=metadata,value={\"@version\"=\"1\", \"qualifiedHostName\"=USD{jboss.qualified.host.name:unknown}})", "<subsystem xmlns=\"urn:jboss:domain:undertow:10.0\"> <buffer-cache name=\"default\"/> <server name=\"default-server\"> </server> <servlet-container name=\"default\"> <jsp-config/> <websockets/> </servlet-container> </subsystem>", "/subsystem=undertow/servlet-container=default:write-attribute(name=ignore-flush,value=true)", "reload", "/subsystem=undertow/servlet-container=new-servlet-container:add", "reload", "/subsystem=undertow/servlet-container=new-servlet-container:remove", "reload", "<subsystem xmlns=\"urn:jboss:domain:undertow:10.0\" default-server=\"default-server\" default-virtual-host=\"default-host\" default-servlet-container=\"default\" default-security-domain=\"other\"> <buffer-cache name=\"default\"/> <server name=\"default-server\"> </server> <servlet-container name=\"default\"> </servlet-container> <handlers> <file name=\"welcome-content\" path=\"USD{jboss.home.dir}/welcome-content\"/> </handlers> </subsystem>", "/subsystem=undertow/configuration=handler/file=welcome-content:write-attribute(name=case-sensitive,value=true)", "reload", "/subsystem=undertow/configuration=handler/file=new-file-handler:add(path=\"USD{jboss.home.dir}/welcome-content\")", "/subsystem=undertow/configuration=handler/file=new-file-handler:remove", "reload", "/subsystem=undertow/configuration=filter/response-header=myHeader:write-attribute(name=header-value,value=\"JBoss-EAP\")", "reload", "/subsystem=undertow/configuration=filter/response-header=new-response-header:add(header-name=new-response-header,header-value=\"My Value\")", "/subsystem=undertow/configuration=filter/response-header=new-response-header:remove", "reload", "/subsystem=undertow/configuration=filter/expression-filter=buf:add(expression=\"buffer-request(buffers=1)\") /subsystem=undertow/server=default-server/host=default-host/filter-ref=buf:add", "/subsystem=undertow/configuration=filter/expression-filter=addSameSiteLax:add(expression=\"path-prefix('/mypathprefix') -> samesite-cookie(Lax)\")", "/subsystem=undertow/server=default-server/host=default-host/filter-ref=addSameSiteLax:add", "samesite-cookie(mode=<mode>)", "<subsystem xmlns=\"urn:jboss:domain:undertow:10.0\" default-server=\"default-server\" default-virtual-host=\"default-host\" default-servlet-container=\"default\" default-security-domain=\"other\"> <server name=\"default-server\"> <http-listener name=\"default\" socket-binding=\"http\" redirect-socket=\"https\" enable-http2=\"true\"/> <https-listener name=\"https\" socket-binding=\"https\" security-realm=\"ApplicationRealm\" enable-http2=\"true\"/> <host name=\"default-host\" alias=\"localhost\"> <location name=\"/\" handler=\"welcome-content\"/> <http-invoker security-realm=\"ApplicationRealm\"/> </host> </server> <handlers> <file name=\"welcome-content\" 
path=\"USD{jboss.home.dir}/welcome-content\"/> </handlers> </subsystem>", "/subsystem=undertow/configuration=handler/file=welcome-content:write-attribute(name=path,value=\" /path/to/content \")", "/subsystem=undertow/configuration=handler/file= NEW_FILE_HANDLER :add(path=\" /path/to/content \") /subsystem=undertow/server=default-server/host=default-host/location=\\/:write-attribute(name=handler,value= NEW_FILE_HANDLER )", "reload", "/subsystem=undertow/server=default-server/host=default-host:write-attribute(name=default-web-module,value=hello.war)", "reload", "/subsystem=undertow/server=default-server/host=default-host/location=\\/:remove", "reload", "/subsystem=undertow/servlet-container=default:write-attribute(name=default-session-timeout, value=60)", "reload", "/subsystem=undertow/servlet-container=default/setting=session-cookie:add", "/subsystem=undertow/servlet-container=default/setting=session-cookie:write-attribute(name=http-only,value=true)", "reload", "/subsystem=undertow/server=default-server/host=default-host/setting=single-sign-on:add", "/subsystem=undertow/server=default-server/host=default-host/setting=single-sign-on:write-attribute(name=http-only,value=true)", "reload", "/subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=enable-http2,value=true)", "/subsystem=undertow/server=default-server/https-listener=https:write-attribute(name=enable-http2,value=true)", "/subsystem=undertow/configuration=filter/expression-filter=requestDumperExpression:add(expression=\"dump-request\")", "/subsystem=undertow/server=default-server/host=default-host/filter-ref=requestDumperExpression:add", "dump-request", "path(/test) -> dump-request", "<subsystem xmlns=\"urn:jboss:domain:remoting:4.0\"> <endpoint/> <http-connector name=\"http-remoting-connector\" connector-ref=\"default\" security-realm=\"ApplicationRealm\"/> </subsystem>", "<subsystem xmlns=\"urn:jboss:domain:remoting:4.0\"> <endpoint/> </subsystem>", "/subsystem=remoting/configuration=endpoint:write-attribute(name=authentication-retries,value=2)", "reload", "/subsystem=remoting/configuration=endpoint:add", "/subsystem=remoting/configuration=endpoint:remove", "reload", "/subsystem=remoting/connector=new-connector:write-attribute(name=socket-binding,value=my-socket-binding)", "reload", "/subsystem=remoting/connector=new-connector:add(socket-binding=my-socket-binding)", "/subsystem=remoting/connector=new-connector:remove", "reload", "<subsystem xmlns=\"urn:jboss:domain:remoting:4.0\"> <http-connector name=\"http-remoting-connector\" connector-ref=\"default\" security-realm=\"ApplicationRealm\"/> </subsystem>", "/subsystem=remoting/http-connector=new-connector:write-attribute(name=connector-ref,value=new-connector-ref)", "reload", "/subsystem=remoting/http-connector=new-connector:add(connector-ref=default)", "/subsystem=remoting/http-connector=new-connector:remove", "/subsystem=remoting/outbound-connection=new-outbound-connection:write-attribute(name=uri,value=http://example.com)", "/subsystem=remoting/outbound-connection=new-outbound-connection:add(uri=http://example.com)", "/subsystem=remoting/outbound-connection=new-outbound-connection:remove", "/subsystem=remoting/remote-outbound-connection=new-remote-outbound-connection:write-attribute(name=outbound-socket-binding-ref,value=outbound-socket-binding)", "/subsystem=remoting/remote-outbound-connection=new-remote-outbound-connection:add(outbound-socket-binding-ref=outbound-socket-binding)", 
"/subsystem=remoting/remote-outbound-connection=new-remote-outbound-connection:remove", "/subsystem=remoting/local-outbound-connection=new-local-outbound-connection:write-attribute(name=outbound-socket-binding-ref,value=outbound-socket-binding)", "/subsystem=remoting/local-outbound-connection=new-local-outbound-connection:add(outbound-socket-binding-ref=outbound-socket-binding)", "/subsystem=remoting/local-outbound-connection=new-local-outbound-connection:remove", "/subsystem=remoting/configuration=endpoint:write-attribute(name=worker, value= WORKER_NAME )", "<interfaces> <interface name=\"management\"> <inet-address value=\"USD{jboss.bind.address.management:127.0.0.1}\"/> </interface> <interface name=\"public\"> <inet-address value=\"USD{jboss.bind.address:127.0.0.1}\"/> </interface> <interface name=\"unsecure\"> <inet-address value=\"USD{jboss.bind.address.unsecure:127.0.0.1}\"/> </interface> </interfaces>", "<subsystem xmlns=\"urn:jboss:domain:io:3.0\"> <worker name=\"default\"/> <buffer-pool name=\"default\"/> </subsystem>", "/subsystem=io/worker=default:write-attribute(name=io-threads,value=10)", "reload", "/subsystem=io/worker=newWorker:add", "/subsystem=io/worker=newWorker:remove", "reload", "/subsystem=io/buffer-pool=default:write-attribute(name=direct-buffers,value=true)", "reload", "/subsystem=io/buffer-pool=newBuffer:add", "/subsystem=io/buffer-pool=newBuffer:remove", "reload", "cd EAP_HOME /modules/ mkdir -p com/sun/jsf-impl/ IMPL_NAME - VERSION", "cd EAP_HOME /modules/ mkdir -p javax/faces/api/ IMPL_NAME - VERSION", "cd EAP_HOME /modules/ mkdir -p org/jboss/as/jsf-injection/ IMPL_NAME - VERSION", "cd EAP_HOME /modules/ mkdir -p org/apache/commons/digester/main", "/subsystem=jsf:write-attribute(name=default-jsf-impl-slot,value= IMPL_NAME - VERSION )", "<context-param> <param-name>org.jboss.jbossfaces. 
JSF_CONFIG_NAME </param-name> <param-value>myfaces-2.2.12</param-value> </context-param>", "/subsystem=jsf:write-attribute(name=default-jsf-impl-slot,value= JSF_IMPLEMENTATION )", "reload", "/subsystem=jsf:list-active-jsf-impls { \"outcome\" => \"success\", \"result\" => [ \"myfaces-2.1.12\", \"mojarra-2.2.0-m05\", \"main\" ] }", "/subsystem=jsf:write-attribute(name=disallow-doctype-decl,value=true) reload", "<context-param> <param-name>com.sun.faces.disallowDoctypeDecl</param-name> <param-value>false</param-value> </context-param>", "<subsystem xmlns=\"urn:jboss:domain:batch-jberet:2.0\"> <default-job-repository name=\"in-memory\"/> <default-thread-pool name=\"batch\"/> <job-repository name=\"in-memory\"> <in-memory/> </job-repository> <thread-pool name=\"batch\"> <max-threads count=\"10\"/> <keepalive-time time=\"30\" unit=\"seconds\"/> </thread-pool> </subsystem>", "/subsystem=batch-jberet:write-attribute(name=restart-jobs-on-resume,value=false)", "/subsystem=batch-jberet/in-memory-job-repository= REPOSITORY_NAME :add", "/subsystem=batch-jberet/jdbc-job-repository= REPOSITORY_NAME :add(data-source= DATASOURCE )", "/subsystem=batch-jberet:write-attribute(name=default-job-repository,value= REPOSITORY_NAME )", "reload", "/subsystem=batch-jberet/thread-pool= THREAD_POOL_NAME :add(max-threads=10)", "/subsystem=batch-jberet/thread-pool= THREAD_POOL_NAME :write-attribute(name=keepalive-time,value={time=60,unit=SECONDS})", "/subsystem=batch-jberet/thread-factory= THREAD_FACTORY_NAME :add", "/subsystem=batch-jberet/thread-pool= THREAD_POOL_NAME :write-attribute(name=thread-factory,value= THREAD_FACTORY_NAME )", "reload", "/subsystem=batch-jberet:write-attribute(name=default-thread-pool,value= THREAD_POOL_NAME )", "reload", "/subsystem=batch-jberet/thread-pool= THREAD_POOL_NAME :read-resource(include-runtime=true) { \"outcome\" => \"success\", \"result\" => { \"active-count\" => 0, \"completed-task-count\" => 0L, \"current-thread-count\" => 0, \"keepalive-time\" => undefined, \"largest-thread-count\" => 0, \"max-threads\" => 15, \"name\" => \" THREAD_POOL_NAME \", \"queue-size\" => 0, \"rejected-count\" => 0, \"task-count\" => 0L, \"thread-factory\" => \" THREAD_FACTORY_NAME \" } }", "/deployment= DEPLOYMENT_NAME /subsystem=batch-jberet:restart-job(execution-id= EXECUTION_ID ,properties={ PROPERTY = VALUE })", "/deployment= DEPLOYMENT_NAME /subsystem=batch-jberet:start-job(job-xml-name= JOB_XML_NAME ,properties={ PROPERTY = VALUE })", "/deployment= DEPLOYMENT_NAME /subsystem=batch-jberet:stop-job(execution-id= EXECUTION_ID )", "/deployment= DEPLOYMENT_NAME /subsystem=batch-jberet:read-resource(recursive=true,include-runtime=true) { \"outcome\" => \"success\", \"result\" => {\"job\" => {\"import-file\" => { \"instance-count\" => 2, \"running-executions\" => 0, \"execution\" => { \"2\" => { \"batch-status\" => \"COMPLETED\", \"create-time\" => \"2016-04-11T22:03:12.708-0400\", \"end-time\" => \"2016-04-11T22:03:12.718-0400\", \"exit-status\" => \"COMPLETED\", \"instance-id\" => 58L, \"last-updated-time\" => \"2016-04-11T22:03:12.719-0400\", \"start-time\" => \"2016-04-11T22:03:12.708-0400\" }, \"1\" => { \"batch-status\" => \"FAILED\", \"create-time\" => \"2016-04-11T21:57:17.567-0400\", \"end-time\" => \"2016-04-11T21:57:17.596-0400\", \"exit-status\" => \"Error : org.hibernate.exception.ConstraintViolationException: could not execute statement\", \"instance-id\" => 15L, \"last-updated-time\" => \"2016-04-11T21:57:17.597-0400\", \"start-time\" => \"2016-04-11T21:57:17.567-0400\" } } }}} }", 
"/subsystem=batch-jberet:write-attribute(name=security-domain, value=ExampleDomain) reload", "<subsystem xmlns=\"urn:jboss:domain:naming:2.0\"> <bindings> <simple name=\"java:global/simple-integer-binding\" value=\"100\" type=\"int\" /> <simple name=\"java:global/jboss.org/docs/url\" value=\"https://docs.jboss.org\" type=\"java.net.URL\" /> <object-factory name=\"java:global/foo/bar/factory\" module=\"org.foo.bar\" class=\"org.foo.bar.ObjectFactory\" /> <external-context name=\"java:global/federation/ldap/example\" class=\"javax.naming.directory.InitialDirContext\" cache=\"true\"> <environment> <property name=\"java.naming.factory.initial\" value=\"com.sun.jndi.ldap.LdapCtxFactory\" /> <property name=\"java.naming.provider.url\" value=\"ldap://ldap.example.com:389\" /> <property name=\"java.naming.security.authentication\" value=\"simple\" /> <property name=\"java.naming.security.principal\" value=\"uid=admin,ou=system\" /> <property name=\"java.naming.security.credentials\" value=\"secret\" /> </environment> </external-context> <lookup name=\"java:global/new-alias-name\" lookup=\"java:global/original-name\" /> </bindings> <remote-naming/> </subsystem>", "/subsystem=naming/binding=java\\:global\\/simple-integer-binding:add(binding-type=simple, type=int, value=100)", "<subsystem xmlns=\"urn:jboss:domain:naming:2.0\"> <bindings> <simple name=\"java:global/simple-integer-binding\" value=\"100\" type=\"int\"/> </bindings> <remote-naming/> </subsystem>", "/subsystem=naming/binding=java\\:global\\/simple-integer-binding:remove", "/subsystem=naming/binding=java\\:global\\/foo\\/bar\\/factory:add(binding-type=object-factory, module=org.foo.bar, class=org.foo.bar.ObjectFactory, environment=[p1=v1, p2=v2])", "<subsystem xmlns=\"urn:jboss:domain:naming:2.0\"> <bindings> <object-factory name=\"java:global/foo/bar/factory\" module=\"org.foo.bar\" class=\"org.foo.bar.ObjectFactory\"> <environment> <property name=\"p1\" value=\"v1\" /> <property name=\"p2\" value=\"v2\" /> </environment> </object-factory> </bindings> </subsystem>", "/subsystem=naming/binding=java\\:global\\/foo\\/bar\\/factory:remove", "/subsystem=naming/binding=java\\:global\\/federation\\/ldap\\/example:add(binding-type=external-context, cache=true, class=javax.naming.directory.InitialDirContext, module=org.jboss.as.naming, environment=[java.naming.factory.initial=com.sun.jndi.ldap.LdapCtxFactory, java.naming.provider.url=\"ldap://ldap.example.com:389\", java.naming.security.authentication=simple, java.naming.security.principal=\"uid=admin,ou=system\", java.naming.security.credentials=secret])", "<subsystem xmlns=\"urn:jboss:domain:naming:2.0\"> <bindings> <external-context name=\"java:global/federation/ldap/example\" module=\"org.jboss.as.naming\" class=\"javax.naming.directory.InitialDirContext\" cache=\"true\"> <environment> <property name=\"java.naming.factory.initial\" value=\"com.sun.jndi.ldap.LdapCtxFactory\"/> <property name=\"java.naming.provider.url\" value=\"ldap://ldap.example.com:389\"/> <property name=\"java.naming.security.authentication\" value=\"simple\"/> <property name=\"java.naming.security.principal\" value=\"uid=admin,ou=system\"/> <property name=\"java.naming.security.credentials\" value=\"secret\"/> </environment> </external-context> </bindings> </subsystem>", "/subsystem=naming/binding=java\\:global\\/federation\\/ldap\\/example:remove", "<property name=\"org.jboss.as.naming.lookup.by.string\" value=\"true\"/>", "/subsystem=naming/binding=java\\:global\\/new-alias-name:add(binding-type=lookup, 
lookup=java\\:global\\/original-name)", "<lookup name=\"java:global/new-alias-name\" lookup=\"java:global/original-name\" />", "/subsystem=naming/binding=java\\:global\\/c:remove", "/subsystem=naming/binding=java\\:global\\/simple-integer-binding:rebind(binding-type=simple, type=int, value=200)", "/subsystem=naming/service=remote-naming:add", "/subsystem=naming/service=remote-naming:remove", "<channels default=\"ee\"> <channel name=\"ee\" stack=\"udp\"/> </channels> <stacks> <stack name=\"udp\"> <transport type=\"UDP\" socket-binding=\"jgroups-udp\"/> <protocol type=\"PING\"/> </stack> <stack name=\"tcp\"> <transport type=\"TCP\" socket-binding=\"jgroups-tcp\"/> <protocol type=\"MPING\" socket-binding=\"jgroups-mping\"/> </stack> </stacks>", "/subsystem=jgroups/channel=ee:write-attribute(name=stack,value=tcp)", "<channel name=\"ee\" stack=\"tcp\" cluster=\"ejb\"/>", "<server xmlns=\"urn:jboss:domain:8.0\" name=\"node_1\">", "<servers> <server name=\"server-one\" group=\"main-server-group\"/> <server name=\"server-two\" group=\"other-server-group\"> <socket-bindings port-offset=\"150\"/> </server> </servers>", "Define the socket bindings /socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=jgroups-host-a:add(host= HOST_A ,port=7600) /socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=jgroups-host-b:add(host= HOST_B ,port=7600) batch Add the tcpping stack /subsystem=jgroups/stack=tcpping:add /subsystem=jgroups/stack=tcpping/transport=TCP:add(socket-binding=jgroups-tcp) /subsystem=jgroups/stack=tcpping/protocol=TCPPING:add(socket-bindings=[jgroups-host-a,jgroups-host-b]) /subsystem=jgroups/stack=tcpping/protocol=MERGE3:add /subsystem=jgroups/stack=tcpping/protocol=FD_SOCK:add /subsystem=jgroups/stack=tcpping/protocol=FD_ALL:add /subsystem=jgroups/stack=tcpping/protocol=VERIFY_SUSPECT:add /subsystem=jgroups/stack=tcpping/protocol=pbcast.NAKACK2:add /subsystem=jgroups/stack=tcpping/protocol=UNICAST3:add /subsystem=jgroups/stack=tcpping/protocol=pbcast.STABLE:add /subsystem=jgroups/stack=tcpping/protocol=pbcast.GMS:add /subsystem=jgroups/stack=tcpping/protocol=MFC:add /subsystem=jgroups/stack=tcpping/protocol=FRAG3:add Set tcpping as the stack for the ee channel /subsystem=jgroups/channel=ee:write-attribute(name=stack,value=tcpping) run-batch reload", "/subsystem=jgroups/stack=tcpping/protocol=UNICAST3:add(add-index=6)", "EAP_HOME /bin/jboss-cli.sh --connect --file= /path/to/SCRIPT_NAME", "<channel name=\"ee\" stack=\"tcp\" cluster=\"ejb\"/>", "<stack name=\"tcp\"> <transport type=\"TCP\" socket-binding=\"jgroups-tcp\"/> <protocol type=\"TCPPING\"> <property name=\"initial_hosts\">192.168.1.5[7600],192.168.1.9[7600]</property> <property name=\"port_range\">0</property> </protocol> <protocol type=\"MERGE3\"/> <protocol type=\"FD_SOCK\" socket-binding=\"jgroups-tcp-fd\"/> <protocol type=\"FD_ALL\"/> <protocol type=\"VERIFY_SUSPECT\"/> <protocol type=\"pbcast.NAKACK2\"/> <protocol type=\"UNICAST3\"/> <protocol type=\"pbcast.STABLE\"/> <protocol type=\"pbcast.GMS\"/> <protocol type=\"MFC\"/> <protocol type=\"FRAG3\"/> </stack>", "<interface name=\"private\"> <inet-address value=\"USD{jboss.bind.address.private:192.168.1.5}\"/> </interface>", "INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-2,ee,node_1) ISPN000094: Received new cluster view for channel server: [node_1|1] (2) [node_1, node_2] INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-2,ee,node_1) ISPN000094: Received new cluster view 
for channel web: [node_1|1] (2) [node_1, node_2]", "<protocol type=\"TCPPING\"> <property name=\"initial_hosts\">USD{jboss.cluster.tcp.initial_hosts}</property> Set the system property at the `server-group` level: <server-groups> <server-group name=\"a-server-group\" profile=\"ha\"> <socket-binding-group ref=\"ha-sockets\"/> <system-properties> <property name=\"jboss.cluster.tcp.initial_hosts\" value=\"192.168.1.5[7600],192.168.1.9[7600]\" /> </system-properties>", "<interfaces> . <interface name=\"private\"> <inet-address value=\"USD{jboss.bind.address.private:192.168.1.5}\"/> </interface> </interfaces>", "INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-2,ee,node_1) ISPN000094: Received new cluster view for channel server: [node_1|1] (2) [node_1, node_2] INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-2,ee,node_1) ISPN000094: Received new cluster view for channel web: [node_1|1] (2) [node_1, node_2]", "Define the socket bindings /socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=jgroups-host-a:add(host= HOST_A ,port=13001) batch Add the tcpgossip stack /subsystem=jgroups/stack=tcpgossip:add /subsystem=jgroups/stack=tcpgossip/transport=TCP:add(socket-binding=jgroups-tcp) /subsystem=jgroups/stack=tcpgossip/protocol=TCPGOSSIP:add(socket-bindings=[jgroups-host-a]) /subsystem=jgroups/stack=tcpgossip/protocol=MERGE3:add /subsystem=jgroups/stack=tcpgossip/protocol=FD_SOCK:add /subsystem=jgroups/stack=tcpgossip/protocol=FD_ALL:add /subsystem=jgroups/stack=tcpgossip/protocol=VERIFY_SUSPECT:add /subsystem=jgroups/stack=tcpgossip/protocol=pbcast.NAKACK2:add /subsystem=jgroups/stack=tcpgossip/protocol=UNICAST3:add /subsystem=jgroups/stack=tcpgossip/protocol=pbcast.STABLE:add /subsystem=jgroups/stack=tcpgossip/protocol=pbcast.GMS:add /subsystem=jgroups/stack=tcpgossip/protocol=MFC:add /subsystem=jgroups/stack=tcpgossip/protocol=FRAG3:add Set tcpgossip as the stack for the ee channel /subsystem=jgroups/channel=ee:write-attribute(name=stack,value=tcpgossip) run-batch reload", "/subsystem=jgroups/stack=tcpgossip/protocol=UNICAST3:add(add-index=6)", "EAP_HOME /bin/jboss-cli.sh --connect --file= /path/to/SCRIPT_NAME", "batch Add the JDBC_PING stack /subsystem=jgroups/stack=JDBC_PING:add /subsystem=jgroups/stack=JDBC_PING/transport=TCP:add(socket-binding=jgroups-tcp) /subsystem=jgroups/stack=JDBC_PING/protocol=JDBC_PING:add(data-source= ExampleDS ) /subsystem=jgroups/stack=JDBC_PING/protocol=MERGE3:add /subsystem=jgroups/stack=JDBC_PING/protocol=FD_SOCK:add /subsystem=jgroups/stack=JDBC_PING/protocol=FD_ALL:add /subsystem=jgroups/stack=JDBC_PING/protocol=VERIFY_SUSPECT:add /subsystem=jgroups/stack=JDBC_PING/protocol=pbcast.NAKACK2:add /subsystem=jgroups/stack=JDBC_PING/protocol=UNICAST3:add /subsystem=jgroups/stack=JDBC_PING/protocol=pbcast.STABLE:add /subsystem=jgroups/stack=JDBC_PING/protocol=pbcast.GMS:add /subsystem=jgroups/stack=JDBC_PING/protocol=MFC:add /subsystem=jgroups/stack=JDBC_PING/protocol=FRAG3:add Set JDBC_PING as the stack for the ee channel /subsystem=jgroups/channel=ee:write-attribute(name=stack,value=JDBC_PING) run-batch reload", "/subsystem=jgroups/stack=JDBC_PING/protocol=UNICAST3:add(add-index=6)", "EAP_HOME /bin/jboss-cli.sh --connect --file= /path/to/SCRIPT_NAME", "<protocol type=\"pbcast.STABLE\"/> <auth-protocol type=\"AUTH\"> <plain-token> <shared-secret-reference clear-text=\" my_password \"/> </plain-token> </auth-protocol> <protocol type=\"pbcast.GMS\"/>", "<protocol type=\"pbcast.STABLE\"/> 
<auth-protocol type=\"AUTH\"> <digest-token algorithm=\"SHA-512\"> <shared-secret-reference clear-text=\" my_password \"/> </digest-token> </auth-protocol> <protocol type=\"pbcast.GMS\"/>", "keytool -genkeypair -alias jgroups_key -keypass my_password -storepass my_password -storetype jks -keystore jgroups.keystore -keyalg RSA", "/subsystem=elytron/key-store=jgroups-token-store:add(type=jks,path=/path/to/jgroups.keystore,credential-reference={clear-text=my_password}, required=true)", "<protocol type=\"pbcast.STABLE\"/> <auth-protocol type=\"AUTH\"> <cipher-token algorithm=\"RSA\" key-alias=\"jgroups_key\" key-store=\"jgroups-token-store\"> <shared-secret-reference clear-text=\"my_password\"/> <key-credential-reference clear-text=\"my_password\"/> </cipher-token> </auth-protocol> <protocol type=\"pbcast.GMS\"/>", "java -cp EAP_HOME /modules/system/layers/base/org/jgroups/main/jgroups- VERSION .jar org.jgroups.demos.KeyStoreGenerator --alg AES --size 128 --storeName defaultStore.keystore --storepass PASSWORD --alias mykey", "<subsystem xmlns=\"urn:jboss:domain:jgroups:6.0\"> <stacks> <stack name=\"udp\"> <transport type=\"UDP\" socket-binding=\"jgroups-udp\"/> <protocol type=\"PING\"/> <protocol type=\"MERGE3\"/> <protocol type=\"FD_SOCK\"/> <protocol type=\"FD_ALL\"/> <protocol type=\"VERIFY_SUSPECT\"/> <protocol type=\"SYM_ENCRYPT\"> <property name=\"provider\">SunJCE</property> <property name=\"sym_algorithm\">AES</property> <property name=\"encrypt_entire_message\">true</property> <property name=\"keystore_name\"> /path/to/defaultStore.keystore </property> <property name=\"store_password\"> PASSWORD </property> <property name=\"alias\">mykey</property> </protocol> <protocol type=\"pbcast.NAKACK2\"/> <protocol type=\"UNICAST3\"/> <protocol type=\"pbcast.STABLE\"/> <protocol type=\"pbcast.GMS\"/> <protocol type=\"UFC\"/> <protocol type=\"MFC\"/> <protocol type=\"FRAG3\"/> </stack> </stacks> </subsystem>", "/subsystem=elytron/key-store=jgroups-keystore:add(path=/path/to/defaultStore.keystore,credential-reference={clear-text= PASSWORD },type=JCEKS)", "<subsystem xmlns=\"urn:jboss:domain:jgroups:6.0\"> <stacks> <stack name=\"udp\"> <transport type=\"UDP\" socket-binding=\"jgroups-udp\"/> <protocol type=\"PING\"/> <protocol type=\"MERGE3\"/> <protocol type=\"FD_SOCK\"/> <protocol type=\"FD_ALL\"/> <protocol type=\"VERIFY_SUSPECT\"/> <encrypt-protocol type=\"SYM_ENCRYPT\" key-alias=\"mykey\" key-store=\"jgroups-keystore\"> <key-credential-reference clear-text=\" PASSWORD \"/> <property name=\"provider\">SunJCE</property> <property name=\"encrypt_entire_message\">true</property> </encrypt-protocol> <protocol type=\"pbcast.NAKACK2\"/> <protocol type=\"UNICAST3\"/> <protocol type=\"pbcast.STABLE\"/> <protocol type=\"pbcast.GMS\"/> <protocol type=\"UFC\"/> <protocol type=\"MFC\"/> <protocol type=\"FRAG3\"/> </stack> </stacks> </subsystem>", "<subsystem xmlns=\"urn:jboss:domain:jgroups:6.0\"> <stacks> <stack name=\"udp\"> <transport type=\"UDP\" socket-binding=\"jgroups-udp\"/> <protocol type=\"PING\"/> <protocol type=\"MERGE3\"/> <protocol type=\"FD_SOCK\"/> <protocol type=\"FD_ALL\"/> <protocol type=\"VERIFY_SUSPECT\"/> <protocol type=\"ASYM_ENCRYPT\"> <property name=\"encrypt_entire_message\">true</property> <property name=\"sym_keylength\">128</property> <property name=\"sym_algorithm\">AES/ECB/PKCS5Padding</property> <property name=\"asym_keylength\">512</property> <property name=\"asym_algorithm\">RSA</property> </protocol> <protocol type=\"pbcast.NAKACK2\"/> <protocol type=\"UNICAST3\"/> 
<protocol type=\"pbcast.STABLE\"/> <!-- Configure AUTH protocol here --> <protocol type=\"pbcast.GMS\"/> <protocol type=\"UFC\"/> <protocol type=\"MFC\"/> <protocol type=\"FRAG3\"/> </stack> </stacks> </subsystem>", "keytool -genkeypair -alias mykey -keyalg RSA -keysize 1024 -keystore defaultKeystore.keystore -dname \"CN=localhost\" -keypass secret -storepass secret", "/subsystem=elytron/key-store=jgroups-keystore:add(path=/path/to/defaultStore.keystore,credential-reference={clear-text= PASSWORD },type=JCEKS)", "<subsystem xmlns=\"urn:jboss:domain:jgroups:6.0\"> <stacks> <stack name=\"udp\"> <transport type=\"UDP\" socket-binding=\"jgroups-udp\"/> <protocol type=\"PING\"/> <protocol type=\"MERGE3\"/> <protocol type=\"FD_SOCK\"/> <protocol type=\"FD_ALL\"/> <protocol type=\"VERIFY_SUSPECT\"/> <encrypt-protocol type=\"ASYM_ENCRYPT\" key-alias=\"mykey\" key-store=\"jgroups-keystore\"> <key-credential-reference clear-text=\"secret\" /> <property name=\"encrypt_entire_message\">true</property> </encrypt-protocol> <protocol type=\"pbcast.NAKACK2\"/> <protocol type=\"UNICAST3\"/> <protocol type=\"pbcast.STABLE\"/> <!-- Configure AUTH protocol here --> <protocol type=\"pbcast.GMS\"/> <protocol type=\"UFC\"/> <protocol type=\"MFC\"/> <protocol type=\"FRAG3\"/> </stack> </stacks> </subsystem>", "/subsystem=jgroups/stack= STACK_TYPE /transport= TRANSPORT_TYPE /thread-pool= THREAD_POOL_NAME :write-attribute(name= ATTRIBUTE_NAME , value= ATTRIBUTE_VALUE )", "/subsystem=jgroups/stack=udp/transport=UDP/thread-pool=default:write-attribute(name=\"max-threads\", value=\"500\")", "WARNING [org.jgroups.protocols.UDP] (ServerService Thread Pool -- 68) JGRP000015: the send buffer of socket DatagramSocket was set to 640KB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux) WARNING [org.jgroups.protocols.UDP] (ServerService Thread Pool -- 68) JGRP000015: the receive buffer of socket DatagramSocket was set to 20MB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. 
net.core.rmem_max on Linux)", "Allow a 25MB UDP receive buffer for JGroups net.core.rmem_max = 26214400 Allow a 1MB UDP send buffer for JGroups net.core.wmem_max = 1048576", "/subsystem=jgroups/stack= STACK_NAME /transport= TRANSPORT /property= PROPERTY_NAME :add(value= BUFFER_SIZE )", "/subsystem=jgroups/stack=tcp/transport= TRANSPORT /property=recv_buf_size:add(value=20000000)", "java -cp EAP_HOME /bin/client/jboss-client.jar org.jgroups.tests.McastReceiverTest -mcast_addr 230.11.11.11 -port 5555", "java -cp EAP_HOME /bin/client/jboss-client.jar org.jgroups.tests.McastSenderTest -mcast_addr 230.11.11.11 -port 5555", "<subsystem xmlns=\"urn:jboss:domain:infinispan:7.0\"> <cache-container name=\"server\" aliases=\"singleton cluster\" default-cache=\"default\" module=\"org.wildfly.clustering.server\"> <transport lock-timeout=\"60000\"/> <replicated-cache name=\"default\"> <transaction mode=\"BATCH\"/> </replicated-cache> </cache-container> <cache-container name=\"web\" default-cache=\"dist\" module=\"org.wildfly.clustering.web.infinispan\"> <transport lock-timeout=\"60000\"/> <distributed-cache name=\"dist\"> <locking isolation=\"REPEATABLE_READ\"/> <transaction mode=\"BATCH\"/> <file-store/> </distributed-cache> </cache-container> <cache-container name=\"ejb\" aliases=\"sfsb\" default-cache=\"dist\" module=\"org.wildfly.clustering.ejb.infinispan\"> <transport lock-timeout=\"60000\"/> <distributed-cache name=\"dist\"> <locking isolation=\"REPEATABLE_READ\"/> <transaction mode=\"BATCH\"/> <file-store/> </distributed-cache> </cache-container> <cache-container name=\"hibernate\" default-cache=\"local-query\" module=\"org.hibernate.infinispan\"> <transport lock-timeout=\"60000\"/> <local-cache name=\"local-query\"> <object-memory size=\"1000\"/> <expiration max-idle=\"100000\"/> </local-cache> <invalidation-cache name=\"entity\"> <transaction mode=\"NON_XA\"/> <object-memory size=\"1000\"/> <expiration max-idle=\"100000\"/> </invalidation-cache> <replicated-cache name=\"timestamps\" mode=\"ASYNC\"/> </cache-container> </subsystem>", "/subsystem=infinispan/cache-container= CACHE_CONTAINER :add", "/subsystem=infinispan/cache-container= CACHE_CONTAINER /replicated-cache= CACHE :add(mode= MODE )", "/subsystem=infinispan/cache-container= CACHE_CONTAINER :write-attribute(name=default-cache,value= CACHE )", "/subsystem=infinispan/cache-container= CACHE_CONTAINER /replicated-cache= CACHE /component=transaction:write-attribute(name=mode,value=BATCH)", "batch /subsystem=infinispan/cache-container=web/distributed-cache=concurrent:add /subsystem=infinispan/cache-container=web:write-attribute(name=default-cache,value=concurrent) /subsystem=infinispan/cache-container=web/distributed-cache=concurrent/store=file:add run-batch", "<cache-container name=\"web\" default-cache=\"concurrent\" module=\"org.wildfly.clustering.web.infinispan\"> <distributed-cache name=\"concurrent\"> <file-store/> </distributed-cache> </cache-container>", "<subsystem xmlns=\"urn:jboss:domain:ejb3:5.0\"> <passivation-stores> <passivation-store name=\"infinispan\" cache-container=\"ejb-cltest\" max-size=\"10000\"/> </passivation-stores> <remote cluster=\"ejb-cltest\" connectors=\"http-remoting-connector\" thread-pool-name=\"default\"/> </subsystem> <subsystem xmlns=\"urn:jboss:domain:infinispan:7.0\"> <cache-container name=\"ejb-cltest\" aliases=\"sfsb\" default-cache=\"dist\" module=\"org.wildfly.clustering.ejb.infinispan\"> </subsystem>", "@Resource(lookup = \"java:jboss/infinispan/cache/foo/bar\") private org.infinispan.Cache<Integer, 
Object> cache;", "@Resource(lookup = \"java:jboss/infinispan/cache/foo/default\")", "@Resource(lookup = \"java:jboss/infinispan/container/foo\") private org.infinispan.manager.EmbeddedCacheManager manager;", "@Resource(lookup = \"java:jboss/infinispan/configuration/foo/bar\") private org.infinispan.config.Configuration config;", "@Resource(lookup = \"java:jboss/infinispan/configuration/foo/default\") private org.infinispan.config.Configuration config;", "<cache-container name=\"hibernate\" default-cache=\"local-query\" module=\"org.hibernate.infinispan\"> <transport lock-timeout=\"60000\"/> <local-cache name=\"local-query\"> <object-memory size=\"1000\"/> <expiration max-idle=\"100000\"/>", "batch /subsystem=infinispan/cache-container=web/replicated-cache=repl:add() /subsystem=infinispan/cache-container=web/replicated-cache=repl/component=transaction:add(mode=BATCH) /subsystem=infinispan/cache-container=web/replicated-cache=repl/component=locking:add(isolation=REPEATABLE_READ) /subsystem=infinispan/cache-container=web/replicated-cache=repl/store=file:add /subsystem=infinispan/cache-container=web:write-attribute(name=default-cache,value=repl) run-batch", "reload", "/subsystem=infinispan/cache-container=web:write-attribute(name=default-cache,value=dist)", "/subsystem=infinispan/cache-container=web/distributed-cache=dist:write-attribute(name=owners,value=5)", "reload", "/subsystem=infinispan/cache-container=web/scattered-cache=scattered:add(bias-lifespan=1800000)", "/subsystem=infinispan/cache-container=web:write-attribute(name=default-cache,value=scattered)", "<cache-container name=\"web\" default-cache=\"scattered\" module=\"org.wildfly.clustering.web.infinispan\"> <scattered-cache name=\"scattered\" bias-lifespan=\"1800000\"/> </cache-container>", "/subsystem=infinispan/cache-container=server/ CACHE_TYPE = CACHE /component=state-transfer:write-attribute(name=timeout,value=0)", "/subsystem=infinispan/cache-container= CACHE_CONTAINER_NAME /thread-pool= THREAD_POOL_NAME :write-attribute(name= ATTRIBUTE_NAME , value= ATTRIBUTE_VALUE )", "/subsystem=infinispan/cache-container=server/thread-pool=persistence:write-attribute(name=\"max-threads\", value=\"10\")", "/subsystem=infinispan/cache-container= CACHE_CONTAINER :write-attribute(name=statistics-enabled,value=true)", "/subsystem=infinispan/cache-container= CACHE_CONTAINER / CACHE_TYPE = CACHE :write-attribute(name=statistics-enabled,value=true)", "/subsystem=infinispan/cache-container= CACHE_CONTAINER / CACHE_TYPE = CACHE :undefine-attribute(name=statistics-enabled)", "/subsystem=infinispan/cache-container=web/distributed-cache=dist/component=partition-handling:write-attribute(name=enabled, value=true)", "/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding= SOCKET_BINDING :add(host= HOSTNAME ,port= PORT )", "batch /subsystem=infinispan/remote-cache-container= CACHE_CONTAINER :add(default-remote-cluster=data-grid-cluster) /subsystem=infinispan/remote-cache-container= CACHE_CONTAINER /remote-cluster=data-grid-cluster:add(socket-bindings=[ SOCKET_BINDING , SOCKET_BINDING_2 ,...]) run-batch", "/subsystem=infinispan/remote-cache-container=foo:write-attribute(name=statistics-enabled, value=true)", "/subsystem=infinispan/remote-cache-container=foo:read-attribute(name=connections) /subsystem=infinispan/remote-cache-container=foo:read-attribute(name=active-connections) /subsystem=infinispan/remote-cache-container=foo:read-attribute(name=idle-connections)", 
"/subsystem=infinispan/remote-cache-container=foo:read-resource-description", "/subsystem=infinispan/remote-cache-container=foo/remote-cache=bar.war:read-resource(include-runtime=true, recursive=true) { \"average-read-time\" : 1, \"average-remove-time\" : 0, \"average-write-time\" : 2, \"hits\" : 9, \"misses\" : 0, \"near-cache-hits\" : 7, \"near-cache-invalidations\" : 8, \"near-cache-misses\" : 9, \"near-cache-size\" : 1, \"removes\" : 0, \"time-since-reset\" : 82344, \"writes\" : 8 }", "/subsystem=infinispan/remote-cache-container=foo/remote-cache=bar.war:read-resource-description", "/subsystem=infinispan/remote-cache-container=foo/remote-cache=bar.war:reset-statistics()", "batch /subsystem=infinispan/cache-container=web/invalidation-cache= CACHE_NAME :add() /subsystem=infinispan/cache-container=web/invalidation-cache= CACHE_NAME /store=hotrod:add(remote-cache-container= CACHE_CONTAINER ,fetch-state=false,purge=false,passivation=false,shared=true) /subsystem=infinispan/cache-container=web/invalidation-cache= CACHE_NAME /component=transaction:add(mode=BATCH) /subsystem=infinispan/cache-container=web/invalidation-cache= CACHE_NAME /component=locking:add(isolation=REPEATABLE_READ) /subsystem=infinispan/cache-container=web:write-attribute(name=default-cache,value= CACHE_NAME ) run-batch", "@Resource(lookup = \"java:jboss/infinispan/remote-container/web-sessions\") private org.infinispan.client.hotrod.RemoteCacheContainer client;", "infinispan.client.hotrod.transport_factory = org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory infinispan.client.hotrod.server_list = 127.0.0.1:11222 infinispan.client.hotrod.marshaller = org.infinispan.commons.marshall.jboss.GenericJBossMarshaller infinispan.client.hotrod.async_executor_factory = org.infinispan.client.hotrod.impl.async.DefaultAsyncExecutorFactory infinispan.client.hotrod.default_executor_factory.pool_size = 1 infinispan.client.hotrod.default_executor_factory.queue_size = 10000 infinispan.client.hotrod.hash_function_impl.1 = org.infinispan.client.hotrod.impl.consistenthash.ConsistentHashV1 infinispan.client.hotrod.tcp_no_delay = true infinispan.client.hotrod.ping_on_startup = true infinispan.client.hotrod.request_balancing_strategy = org.infinispan.client.hotrod.impl.transport.tcp.RoundRobinBalancingStrategy infinispan.client.hotrod.key_size_estimate = 64 infinispan.client.hotrod.value_size_estimate = 512 infinispan.client.hotrod.force_return_values = false ## below is connection pooling config maxActive=-1 maxTotal = -1 maxIdle = -1 whenExhaustedAction = 1 timeBetweenEvictionRunsMillis=120000 minEvictableIdleTimeMillis=300000 testWhileIdle = true minIdle = 1", "/subsystem=elytron/client-ssl-context= CLIENT_SSL_CONTEXT :add(key-manager= KEY_MANAGER ,trust-manager= TRUST_MANAGER )", "/subsystem=infinispan/remote-cache-container= CACHE_CONTAINER /component=security:write-attribute(name=ssl-context,value= CLIENT_SSL_CONTEXT )", "/core-service=management/security-realm=ApplicationRealm/server-identity=ssl:add(keystore-path=\" KEYSTORE_NAME \",keystore-relative-to=\"jboss.server.config.dir\",keystore-password=\" KEYSTORE_PASSWORD \")", "/subsystem=datagrid-infinispan-endpoint/hotrod-connector=hotrod-connector/encryption=ENCRYPTION:add(require-ssl-client-auth=false,security-realm=\"ApplicationRealm\")", "reload", "/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=remote-rhdg-server1:add(host=RHDGHostName1, port=11222) 
/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=remote-rhdg-server2:add(host=RHDGHostName2, port=11222)", "<socket-binding-group name=\"standard-sockets\" ... > <outbound-socket-binding name=\"remote-rhdg-server1\"> <remote-destination host=\"RHDGHostName1\" port=\"11222\"/> </outbound-socket-binding> <outbound-socket-binding name=\"remote-rhdg-server2\"> <remote-destination host=\"RHDGHostName2\" port=\"11222\"/> </outbound-socket-binding> </socket-binding-group>", "/subsystem=infinispan/cache-container=web/invalidation-cache=rhdg:add(mode=SYNC) /subsystem=infinispan/cache-container=web/invalidation-cache=rhdg/component=locking:write-attribute(name=isolation,value=REPEATABLE_READ) /subsystem=infinispan/cache-container=web/invalidation-cache=rhdg/component=transaction:write-attribute(name=mode,value=BATCH) /subsystem=infinispan/cache-container=web/invalidation-cache=rhdg/store=remote:add(remote-servers=[\"remote-rhdg-server1\",\"remote-rhdg-server2\"], cache=default, socket-timeout=60000, passivation=false, purge=false, shared=true)", "<subsystem xmlns=\"urn:jboss:domain:infinispan:7.0\"> <cache-container name=\"web\" default-cache=\"dist\" module=\"org.wildfly.clustering.web.infinispan\" statistics-enabled=\"true\"> <transport lock-timeout=\"60000\"/> <invalidation-cache name=\"rhdg\" mode=\"SYNC\"> <locking isolation=\"REPEATABLE_READ\"/> <transaction mode=\"BATCH\"/> <remote-store cache=\"default\" socket-timeout=\"60000\" remote-servers=\"remote-rhdg-server1 remote-rhdg-server2\" passivation=\"false\" purge=\"false\" shared=\"true\"/> </invalidation-cache> </cache-container> </subsystem>", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <jboss-web xmlns=\"http://www.jboss.com/xml/ns/javaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://www.jboss.com/xml/ns/javaee http://www.jboss.org/j2ee/schema/jboss-web_10_0.xsd\" version=\"10.0\"> <replication-config> <replication-granularity>SESSION</replication-granularity> <cache-name>web.rhdg</cache-name> </replication-config> </jboss-web>", "/profile=ha/subsystem=modcluster/proxy=default:write-attribute(name=advertise-security-key, value=mypassword)", "/profile=load-balancer/subsystem=undertow/configuration=filter/mod-cluster=load-balancer:write-attribute(name=security-key,value=mypassword)", "/socket-binding-group=standard-sockets/socket-binding=ajp-other:add(port=8010) /subsystem=undertow/server=other-server:add /subsystem=undertow/server=other-server/ajp-listener=ajp-other:add(socket-binding=ajp-other) /subsystem=undertow/server=other-server/host=other-host:add(default-web-module=root-other.war) /subsystem=undertow/server=other-server/host=other-host/location=other:add(handler=welcome-content) /subsystem=undertow/server=other-server/host=other-host:write-attribute(name=alias,value=[localhost]) /socket-binding-group=standard-sockets/socket-binding=modcluster-other:add(multicast-address=224.0.1.106,multicast-port=23364) /subsystem=modcluster/proxy=other:add(advertise-socket=modcluster-other,balancer=other-balancer,connector=ajp-other) reload", "/subsystem=undertow/configuration=filter/mod-cluster=load-balancer/affinity=ranked:add", "/subsystem=undertow/configuration=filter/mod-cluster=load-balancer/affinity=ranked:write-attribute(name=delimiter,value=':')", "/subsystem=undertow/configuration=handler/reverse-proxy=my-handler:add", "/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=remote-host1/:add(host=server1.example.com, port=8009)
/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=remote-host2/:add(host=server2.example.com, port=8009)", "/subsystem=undertow/configuration=handler/reverse-proxy=my-handler/host=host1:add(outbound-socket-binding=remote-host1, scheme=ajp, instance-id=myroute1, path=/test) /subsystem=undertow/configuration=handler/reverse-proxy=my-handler/host=host2:add(outbound-socket-binding=remote-host2, scheme=ajp, instance-id=myroute2, path=/test)", "/subsystem=undertow/server=default-server/host=default-host/location=\\/test:add(handler=my-handler)", "/subsystem=undertow:write-attribute(name=instance-id,value=node1)", "/subsystem=undertow/server=default-server:read-resource", "/socket-binding-group=standard-sockets/socket-binding=ajp:add(port=8009)", "/subsystem=undertow/server=default-server/ajp-listener=ajp:add(socket-binding=ajp)", "mod_proxy_balancer should be disabled when mod_cluster is used LoadModule proxy_cluster_module modules/mod_proxy_cluster.so LoadModule cluster_slotmem_module modules/mod_cluster_slotmem.so LoadModule manager_module modules/mod_manager.so LoadModule advertise_module modules/mod_advertise.so MemManagerFile cache/mod_cluster <IfModule manager_module> Listen 6666 <VirtualHost *:6666> <Directory /> Require ip 127.0.0.1 </Directory> ServerAdvertise on EnableMCPMReceive <Location /mod_cluster_manager> SetHandler mod_cluster-manager Require ip 127.0.0.1 </Location> </VirtualHost> </IfModule>", "ServerAdvertise Off", "AdvertiseFrequency 5", "EnableMCPMReceive", "/profile=full-ha/subsystem=modcluster/proxy=default:write-attribute(name=advertise,value=false)", "/socket-binding-group=full-ha-sockets/remote-destination-outbound-socket-binding=proxy1:add(host=10.33.144.3,port=6666) /socket-binding-group=full-ha-sockets/remote-destination-outbound-socket-binding=proxy2:add(host=10.33.144.1,port=6666)", "/profile=full-ha/subsystem=modcluster/proxy=default:list-add(name=proxies,value=proxy1) /profile=full-ha/subsystem=modcluster/proxy=default:list-add(name=proxies,value=proxy2)", "/interface=management:write-attribute(name=inet-address,value=\"USD{jboss.bind.address.management: EXTERNAL_IP_ADDRESS }\") /interface=public:write-attribute(name=inet-address,value=\"USD{jboss.bind.address.public: EXTERNAL_IP_ADDRESS }\") /interface=unsecure:write-attribute(name=inet-address,value=\"USD{jboss.bind.address.unsecure: EXTERNAL_IP_ADDRESS }\")", "reload", "EAP_HOME /bin/domain.sh --host-config=host-slave.xml", "/host= EXISTING_HOST_NAME :write-attribute(name=name,value=slave1)", "EAP_HOME /bin/domain.sh --host-config=host-slave.xml", "/host= SLAVE_HOST_NAME :write-remote-domain-controller(host= DOMAIN_CONTROLLER_IP_ADDRESS ,port=USD{jboss.domain.master.port:9990},security-realm=\"ManagementRealm\")", "<domain-controller> <remote host=\" DOMAIN_CONTROLLER_IP_ADDRESS \" port=\"USD{jboss.domain.master.port:9990}\" security-realm=\"ManagementRealm\"/> </domain-controller>", "EAP_HOME /bin/add-user.sh What type of user do you wish to add? a) Management User (mgmt-users.properties) b) Application User (application-users.properties) (a): a Username : slave1 Password : changeme Re-enter Password : changeme What groups do you want this user to belong to? (Please enter a comma separated list, or leave blank for none)[ ]: About to add user 'slave1' for realm 'ManagementRealm' Is this correct yes/no? yes Is this new user going to be used for one AS process to connect to another AS process? e.g. 
for a slave host controller connecting to the master or for a Remoting connection for server to server {JEB} calls. yes/no? yes To represent the user add the following to the server-identities definition <secret value=\" SECRET_VALUE \" />", "/host= SLAVE_HOST_NAME /core-service=management/security-realm=ManagementRealm/server-identity=secret:add(value=\" SECRET_VALUE \")", "reload --host= HOST_NAME", "/host= SLAVE_HOST_NAME /core-service=management/security-realm=ManagementRealm/server-identity=secret:add(credential-reference={store= STORE_NAME ,alias= ALIAS }", "reload --host= HOST_NAME", "VAULT::secret::password::ODVmYmJjNGMtZDU2ZC00YmNlLWE4ODMtZjQ1NWNmNDU4ZDc1TElORV9CUkVBS3ZhdWx0.", "/host=master/core-service=management/security-realm=ManagementRealm/server-identity=secret:add(value=\"USD{VAULT::secret::password:: VAULT_SECRET_VALUE }\")", "reload --host= HOST_NAME", "/host= SLAVE_HOST_NAME /core-service=management/security-realm=ManagementRealm/server-identity=secret:add(value=\"USD{server.identity.password}\")", "reload --host=master", "EAP_HOME /bin/domain.sh --host-config=host-slave.xml -Dserver.identity.password=changeme", "server.identity.password=changeme", "EAP_HOME /bin/domain.sh --host-config=host-slave.xml --properties= PATH_TO_PROPERTIES_FILE", "ProxyPass / balancer://MyBalancer stickysession=JSESSIONID|jsessionid nofailover=on failonstatus=203,204 ProxyPassReverse / balancer://MyBalancer ProxyPreserveHost on", "/profile=full-ha/subsystem=modcluster/proxy=default:write-attribute(name=sticky-session,value=true)", "/profile=full-ha/subsystem=modcluster/proxy=default:write-attribute(name=load-balancing-group,value=ClusterOLD)", "mod_cluster/<version> LBGroup ClusterOLD: [Enable Nodes] [Disable Nodes] [Stop Nodes] Node node-1-jvmroute (ajp://node1.oldcluster.example:8009): [Enable Contexts] [Disable Contexts] [Stop Contexts] Balancer: qacluster, LBGroup: ClusterOLD, Flushpackets: Off, ..., Load: 100 Virtual Host 1: Contexts: /my-deployed-application-context, Status: ENABLED Request: 0 [Disable] [Stop] Node node-2-jvmroute (ajp://node2.oldcluster.example:8009): [Enable Contexts] [Disable Contexts] [Stop Contexts] Balancer: qacluster, LBGroup: ClusterOLD, Flushpackets: Off, ..., Load: 100 Virtual Host 1: Contexts: /my-deployed-application-context, Status: ENABLED Request: 0 [Disable] [Stop] LBGroup ClusterNEW: [Enable Nodes] [Disable Nodes] [Stop Nodes] Node node-3-jvmroute (ajp://node3.newcluster.example:8009): [Enable Contexts] [Disable Contexts] [Stop Contexts] Balancer: qacluster, LBGroup: ClusterNEW, Flushpackets: Off, ..., Load: 100 Virtual Host 1: Contexts: /my-deployed-application-context, Status: ENABLED Request: 0 [Disable] [Stop] Node node-4-jvmroute (ajp://node4.newcluster.example:8009): [Enable Contexts] [Disable Contexts] [Stop Contexts] Balancer: qacluster, LBGroup: ClusterNEW, Flushpackets: Off, ..., Load: 100 Virtual Host 1: Contexts: /my-deployed-application-context, Status: ENABLED Request: 0 [Disable] [Stop]", "/host=master/server=server-one/subsystem=modcluster:stop-context(context=/my-deployed-application-context, virtualhost=default-host, waittime=0)", "/host=master/server=server-one/subsystem=modcluster:disable-context(context=/my-deployed-application-context, virtualhost=default-host)", "Load mod_jk module Specify the filename of the mod_jk lib LoadModule jk_module modules/mod_jk.so Where to find workers.properties JkWorkersFile conf.d/workers.properties Where to put jk logs JkLogFile logs/mod_jk.log Set the jk log level [debug/error/info] JkLogLevel info 
Select the log format JkLogStampFormat \"[%a %b %d %H:%M:%S %Y]\" JkOptions indicates to send SSK KEY SIZE JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories JkRequestLogFormat JkRequestLogFormat \"%w %V %T\" Mount your applications JkMount /application/* loadbalancer Add shared memory. This directive is present with 1.2.10 and later versions of mod_jk, and is needed for for load balancing to work properly JkShmFile logs/jk.shm Add jkstatus for managing runtime data <Location /jkstatus/> JkMount status Require ip 127.0.0.1 </Location>", "Define list of workers that will be used for mapping requests worker.list=loadbalancer,status Define Node1 modify the host as your host IP or DNS name. worker.node1.port=8009 worker.node1.host=node1.mydomain.com worker.node1.type=ajp13 worker.node1.ping_mode=A worker.node1.lbfactor=1 Define Node2 modify the host as your host IP or DNS name. worker.node2.port=8009 worker.node2.host=node2.mydomain.com worker.node2.type=ajp13 worker.node2.ping_mode=A worker.node2.lbfactor=1 Load-balancing behavior worker.loadbalancer.type=lb worker.loadbalancer.balance_workers=node1,node2 worker.loadbalancer.sticky_session=1 Status worker for managing load balancer worker.status.type=status", "Simple worker configuration file /*=loadbalancer", "Use external file for mount points. It will be checked for updates each 60 seconds. The format of the file is: /url=worker /examples/*=loadbalancer JkMountFile conf.d/uriworkermap.properties", "<VirtualHost *:80> Your domain name ServerName YOUR_DOMAIN_NAME ProxyPreserveHost On The IP and port of JBoss These represent the default values, if your httpd is on the same host as your JBoss managed domain or server ProxyPass / http://localhost:8080/ ProxyPassReverse / http://localhost:8080/ The location of the HTML files, and access control information DocumentRoot /var/www <Directory /var/www> Options -Indexes Order allow,deny Allow from all </Directory> </VirtualHost>", "<Proxy balancer://mycluster> Order deny,allow Allow from all Add each JBoss Enterprise Application Server by IP address and port. If the route values are unique like this, one node will not fail over to the other. 
BalancerMember http://192.168.1.1:8080 route=node1 BalancerMember http://192.168.1.2:8180 route=node2 </Proxy> <VirtualHost *:80> # Your domain name ServerName YOUR_DOMAIN_NAME ProxyPreserveHost On ProxyPass / balancer://mycluster/ # The location of the HTML files, and access control information DocumentRoot /var/www <Directory /var/www> Options -Indexes Order allow,deny Allow from all </Directory> </VirtualHost>", "ProxyPass / balancer://mycluster stickysession=JSESSIONID", "Configuration file for the ISAPI Connector Extension uri definition extension_uri=/jboss/isapi_redirect.dll Full path to the log file for the ISAPI Connector log_file=c:\\connectors\\isapi_redirect.log Log level (debug, info, warn, error or trace) log_level=info Full path to the workers.properties file worker_file=c:\\connectors\\workers.properties Full path to the uriworkermap.properties file worker_mount_file=c:\\connectors\\uriworkermap.properties #Full path to the rewrite.properties file rewrite_rule_file=c:\\connectors\\rewrite.properties", "images and css files for path /status are provided by worker01 /status=worker01 /images/*=worker01 /css/*=worker01 Path /web-console is provided by worker02 IIS (customized) error page is used for http errors with number greater or equal to 400 css files are provided by worker01 /web-console/*=worker02;use_server_errors=400 /web-console/css/*=worker01 Example of exclusion from mapping, logo.gif won't be displayed !/web-console/images/logo.gif=* Requests to /app-01 or /app-01/something will be routed to worker01 /app-01|/*=worker01 Requests to /app-02 or /app-02/something will be routed to worker02 /app-02|/*=worker02", "An entry that lists all the workers defined worker.list=worker01, worker02 Entries that define the host and port associated with these workers First JBoss EAP server definition, port 8009 is standard port for AJP in EAP worker.worker01.host=127.0.0.1 worker.worker01.port=8009 worker.worker01.type=ajp13 Second JBoss EAP server definition worker.worker02.host=127.0.0.100 worker.worker02.port=8009 worker.worker02.type=ajp13", "#Simple example Images are accessible under abc path /app-01/abc/=/app-01/images/", "C:\\> net stop was /Y C:\\> net start w3svc", "Configuration file for the ISAPI Connector Extension uri definition extension_uri=/jboss/isapi_redirect.dll Full path to the log file for the ISAPI Connector log_file=c:\\connectors\\isapi_redirect.log Log level (debug, info, warn, error or trace) log_level=info Full path to the workers.properties file worker_file=c:\\connectors\\workers.properties Full path to the uriworkermap.properties file worker_mount_file=c:\\connectors\\uriworkermap.properties #OPTIONAL: Full path to the rewrite.properties file rewrite_rule_file=c:\\connectors\\rewrite.properties", "images, css files, path /status and /web-console will be provided by nodes defined in the load-balancer called \"router\" /css/*=router /images/*=router /status=router /web-console|/*=router Example of exclusion from mapping, logo.gif won't be displayed !/web-console/images/logo.gif=* Requests to /app-01 and /app-02 will be routed to nodes defined in the load-balancer called \"router\" /app-01|/*=router /app-02|/*=router mapping for management console, nodes in cluster can be enabled or disabled here /jkmanager|/*=status", "The advanced router LB worker worker.list=router,status First EAP server definition, port 8009 is standard port for AJP in EAP # lbfactor defines how much the worker will be used. 
The higher the number, the more requests are served lbfactor is useful when one machine is more powerful ping_mode=A - all possible probes will be used to determine that connections are still working worker.worker01.port=8009 worker.worker01.host=127.0.0.1 worker.worker01.type=ajp13 worker.worker01.ping_mode=A worker.worker01.socket_timeout=10 worker.worker01.lbfactor=3 Second EAP server definition worker.worker02.port=8009 worker.worker02.host=127.0.0.100 worker.worker02.type=ajp13 worker.worker02.ping_mode=A worker.worker02.socket_timeout=10 worker.worker02.lbfactor=1 Define the LB worker worker.router.type=lb worker.router.balance_workers=worker01,worker02 Define the status worker for jkmanager worker.status.type=status", "#Simple example Images are accessible under abc path /app-01/abc/=/app-01/images/ Restart the IIS server. Restart your IIS server by using the net stop and net start commands. C:\\> net stop was /Y C:\\> net start w3svc", "<!-- ============== Built In Servlet Mappings =============== --> <!-- The servlet mappings for the built in servlets defined above. --> <!-- The mapping for the default servlet --> <!--servlet-mapping> <servlet-name>default</servlet-name> <url-pattern>/</url-pattern> </servlet-mapping--> <!-- The mapping for the invoker servlet --> <!--servlet-mapping> <servlet-name>invoker</servlet-name> <url-pattern>/servlet/*</url-pattern> </servlet-mapping--> <!-- The mapping for the Jakarta Server Pages servlet --> <!--servlet-mapping> <servlet-name>jsp</servlet-name> <url-pattern>*.jsp</url-pattern> </servlet-mapping-->", "Init fn=\"load-modules\" funcs=\"jk_init,jk_service\" shlib=\"/lib/nsapi_redirector.so\" shlib_flags=\"(global|now)\" Init fn=\"jk_init\" worker_file=\" IPLANET_CONFIG /connectors/workers.properties\" log_level=\"info\" log_file=\" IPLANET_CONFIG /connectors/nsapi.log\" shm_file=\" IPLANET_CONFIG /connectors/tmp/jk_shm\"", "<Object name=\"default\"> [...] NameTrans fn=\"assign-name\" from=\"/status\" name=\"jknsapi\" NameTrans fn=\"assign-name\" from=\"/images(|/*)\" name=\"jknsapi\" NameTrans fn=\"assign-name\" from=\"/css(|/*)\" name=\"jknsapi\" NameTrans fn=\"assign-name\" from=\"/nc(|/*)\" name=\"jknsapi\" NameTrans fn=\"assign-name\" from=\"/jmx-console(|/*)\" name=\"jknsapi\" </Object>", "<Object name=\"jknsapi\"> ObjectType fn=force-type type=text/plain Service fn=\"jk_service\" worker=\"worker01\" path=\"/status\" Service fn=\"jk_service\" worker=\"worker02\" path=\"/nc(/*)\" Service fn=\"jk_service\" worker=\"worker01\" </Object>", "An entry that lists all the workers defined worker.list=worker01, worker02 Entries that define the host and port associated with these workers worker.worker01.host=127.0.0.1 worker.worker01.port=8009 worker.worker01.type=ajp13 worker.worker02.host=127.0.0.100 worker.worker02.port=8009 worker.worker02.type=ajp13", "IPLANET_CONFIG /../bin/stopserv IPLANET_CONFIG /../bin/startserv", "<Object name=\"default\"> [...] 
NameTrans fn=\"assign-name\" from=\"/status\" name=\"jknsapi\" NameTrans fn=\"assign-name\" from=\"/images(|/*)\" name=\"jknsapi\" NameTrans fn=\"assign-name\" from=\"/css(|/*)\" name=\"jknsapi\" NameTrans fn=\"assign-name\" from=\"/nc(|/*)\" name=\"jknsapi\" NameTrans fn=\"assign-name\" from=\"/jmx-console(|/*)\" name=\"jknsapi\" NameTrans fn=\"assign-name\" from=\"/jkmanager/*\" name=\"jknsapi\" </Object>", "<Object name=\"jknsapi\"> ObjectType fn=force-type type=text/plain Service fn=\"jk_service\" worker=\"status\" path=\"/jkmanager(/*)\" Service fn=\"jk_service\" worker=\"router\" </Object>", "The advanced router LB worker A list of each worker worker.list=router,status First JBoss EAP server (worker node) definition. Port 8009 is the standard port for AJP # worker.worker01.port=8009 worker.worker01.host=127.0.0.1 worker.worker01.type=ajp13 worker.worker01.ping_mode=A worker.worker01.socket_timeout=10 worker.worker01.lbfactor=3 Second JBoss EAP server worker.worker02.port=8009 worker.worker02.host=127.0.0.100 worker.worker02.type=ajp13 worker.worker02.ping_mode=A worker.worker02.socket_timeout=10 worker.worker02.lbfactor=1 Define the load-balancer called \"router\" worker.router.type=lb worker.router.balance_workers=worker01,worker02 Define the status worker worker.status.type=status", "IPLANET_CONFIG /../bin/stopserv IPLANET_CONFIG /../bin/startserv", "-Dorg.wildfly.openssl.path= PATH_TO_OPENSSL_LIBS", "module add --name=com.mysql --resources= /path/to /mysql-connector-java-8.0.12.jar --export-dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api", "module add --module-root-dir= /path/to /my-external-modules/ --name=com.mysql --resources= /path/to /mysql-connector-java-8.0.12.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api", "module add --name=com.mysql --slot=8.0 --resources= /path/to /mysql-connector-java-8.0.12.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api", "/subsystem=undertow/servlet-container=default/setting=crawler-session-management:add /subsystem=undertow/servlet-container=default/setting=crawler-session-management:read-resource", "/subsystem=undertow/servlet-container=default/setting=jsp:read-resource", "/subsystem=undertow/servlet-container=default/setting=persistent-sessions:add /subsystem=undertow/servlet-container=default/setting=persistent-sessions:read-resource", "/subsystem=undertow/servlet-container=default/setting=session-cookie:add /subsystem=undertow/servlet-container=default/setting=session-cookie:read-resource", "/subsystem=undertow/servlet-container=default/setting=websockets:read-resource", "/subsystem=undertow/server=default-server/host=default-host/setting=access-log:add /subsystem=undertow/server=default-server/host=default-host/setting=access-log:read-resource", "/subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=record-request-start-time,value=true)", "/subsystem=undertow/server=default-server/host=default-host/setting=single-sign-on:add /subsystem=undertow/server=default-server/host=default-host/setting=single-sign-on:read-resource", "<module xmlns=\"urn:jboss:module:1.8\" name=\"com.sun.jsf-impl: IMPL_NAME - VERSION \"> <properties> <property name=\"jboss.api\" value=\"private\"/> </properties> <dependencies> <module name=\"javax.faces.api: IMPL_NAME - VERSION \"/> <module name=\"javaee.api\"/> <module name=\"javax.servlet.jstl.api\"/> <module name=\"org.apache.xerces\" services=\"import\"/> <module name=\"org.apache.xalan\" 
services=\"import\"/> <module name=\"org.jboss.weld.core\"/> <module name=\"org.jboss.weld.spi\"/> <module name=\"javax.xml.rpc.api\"/> <module name=\"javax.rmi.api\"/> <module name=\"org.omg.api\"/> </dependencies> <resources> <resource-root path=\"impl- VERSION .jar\"/> </resources> </module>", "<module xmlns=\"urn:jboss:module:1.8\" name=\"com.sun.jsf-impl: IMPL_NAME - VERSION \"> <properties> <property name=\"jboss.api\" value=\"private\"/> </properties> <dependencies> <module name=\"javax.faces.api: IMPL_NAME - VERSION \"> <imports> <include path=\"META-INF/**\"/> </imports> </module> <module name=\"javaee.api\"/> <module name=\"javax.servlet.jstl.api\"/> <module name=\"org.apache.xerces\" services=\"import\"/> <module name=\"org.apache.xalan\" services=\"import\"/> <!-- extra dependencies for MyFaces --> <module name=\"org.apache.commons.collections\"/> <module name=\"org.apache.commons.codec\"/> <module name=\"org.apache.commons.beanutils\"/> <module name=\"org.apache.commons.digester\"/> <!-- extra dependencies for MyFaces 1.1 <module name=\"org.apache.commons.logging\"/> <module name=\"org.apache.commons.el\"/> <module name=\"org.apache.commons.lang\"/> --> <module name=\"javax.xml.rpc.api\"/> <module name=\"javax.rmi.api\"/> <module name=\"org.omg.api\"/> </dependencies> <resources> <resource-root path=\" IMPL_NAME -impl- VERSION .jar\"/> </resources> </module>", "<module xmlns=\"urn:jboss:module:1.8\" name=\"javax.faces.api: IMPL_NAME - VERSION \"> <dependencies> <module name=\"com.sun.jsf-impl: IMPL_NAME - VERSION \"/> <module name=\"javax.enterprise.api\" export=\"true\"/> <module name=\"javax.servlet.api\" export=\"true\"/> <module name=\"javax.servlet.jsp.api\" export=\"true\"/> <module name=\"javax.servlet.jstl.api\" export=\"true\"/> <module name=\"javax.validation.api\" export=\"true\"/> <module name=\"org.glassfish.javax.el\" export=\"true\"/> <module name=\"javax.api\"/> <module name=\"javax.websocket.api\"/> </dependencies> <resources> <resource-root path=\"jsf-api- VERSION .jar\"/> </resources> </module>", "<module xmlns=\"urn:jboss:module:1.8\" name=\"javax.faces.api: IMPL_NAME - VERSION \"> <dependencies> <module name=\"javax.enterprise.api\" export=\"true\"/> <module name=\"javax.servlet.api\" export=\"true\"/> <module name=\"javax.servlet.jsp.api\" export=\"true\"/> <module name=\"javax.servlet.jstl.api\" export=\"true\"/> <module name=\"javax.validation.api\" export=\"true\"/> <module name=\"org.glassfish.javax.el\" export=\"true\"/> <module name=\"javax.api\"/> <!-- extra dependencies for MyFaces 1.1 <module name=\"org.apache.commons.logging\"/> <module name=\"org.apache.commons.el\"/> <module name=\"org.apache.commons.lang\"/> --> </dependencies> <resources> <resource-root path=\"myfaces-api- VERSION .jar\"/> </resources> </module>", "<module xmlns=\"urn:jboss:module:1.8\" name=\"org.jboss.as.jsf-injection: IMPL_NAME - VERSION \"> <properties> <property name=\"jboss.api\" value=\"private\"/> </properties> <resources> <resource-root path=\"wildfly-jsf-injection- INJECTION_VERSION .jar\"/> <resource-root path=\"weld-core-jsf- WELD_VERSION .jar\"/> </resources> <dependencies> <module name=\"com.sun.jsf-impl: IMPL_NAME - VERSION \"/> <module name=\"java.naming\"/> <module name=\"java.desktop\"/> <module name=\"org.jboss.as.jsf\"/> <module name=\"org.jboss.as.web-common\"/> <module name=\"javax.servlet.api\"/> <module name=\"org.jboss.as.ee\"/> <module name=\"org.jboss.as.jsf\"/> <module name=\"javax.enterprise.api\"/> <module name=\"org.jboss.logging\"/> <module 
name=\"org.jboss.weld.core\"/> <module name=\"org.jboss.weld.api\"/> <module name=\"javax.faces.api: IMPL_NAME - VERSION \"/> </dependencies> </module>", "<module xmlns=\"urn:jboss:module:1.8\" name=\"org.jboss.as.jsf-injection: IMPL_NAME - VERSION \"> <properties> <property name=\"jboss.api\" value=\"private\"/> </properties> <resources> <resource-root path=\"wildfly-jsf-injection- INJECTION_VERSION .jar\"/> <resource-root path=\"weld-jsf- WELD_VERSION .jar\"/> </resources> <dependencies> <module name=\"com.sun.jsf-impl: IMPL_NAME - VERSION \"/> <module name=\"javax.api\"/> <module name=\"org.jboss.as.web-common\"/> <module name=\"javax.servlet.api\"/> <module name=\"org.jboss.as.jsf\"/> <module name=\"org.jboss.as.ee\"/> <module name=\"org.jboss.as.jsf\"/> <module name=\"javax.enterprise.api\"/> <module name=\"org.jboss.logging\"/> <module name=\"org.jboss.weld.core\"/> <module name=\"org.jboss.weld.api\"/> <module name=\"org.wildfly.security.elytron\"/> <module name=\"javax.faces.api: IMPL_NAME - VERSION \"/> </dependencies> </module>", "<module xmlns=\"urn:jboss:module:1.5\" name=\"org.apache.commons.digester\"> <properties> <property name=\"jboss.api\" value=\"private\"/> </properties> <resources> <resource-root path=\"commons-digester- VERSION .jar\"/> </resources> <dependencies> <module name=\"javax.api\"/> <module name=\"org.apache.commons.collections\"/> <module name=\"org.apache.commons.logging\"/> <module name=\"org.apache.commons.beanutils\"/> </dependencies> </module>", "<Location /mod_cluster-manager> SetHandler mod_cluster-manager Require ip 127.0.0.1 </Location>", "Define list of workers that will be used for mapping requests worker.list=loadbalancer,status Define Node1 modify the host as your host IP or DNS name. worker.node1.port=8009 worker.node1.host=node1.mydomain.com worker.node1.type=ajp13 worker.node1.ping_mode=A worker.node1.lbfactor=1 Define Node2 modify the host as your host IP or DNS name. worker.node2.port=8009 worker.node2.host= node2.mydomain.com worker.node2.type=ajp13 worker.node2.ping_mode=A worker.node2.lbfactor=1 Load-balancing behavior worker.loadbalancer.type=lb worker.loadbalancer.balance_workers=node1,node2 worker.loadbalancer.sticky_session=1 Status worker for managing load balancer worker.status.type=status", "JAVA_OPTS=\"USDJAVA_OPTS -Dorg.wildfly.openssl.path= JBCS_OPENSSL_PATH", "/system-property=org.wildfly.openssl.path:add(value= JBCS_OPENSSL_PATH )", "subscription-manager repos --enable REPO_NAME", "Repository REPO_NAME is enabled for this system.", "yum install jbcs-httpd24-openssl", "WILDFLY_OPTS=\"USDWILDFLY_OPTS -Dorg.wildfly.openssl.path= JBCS_OPENSSL_PATH \"", "/system-property=org.wildfly.openssl.path:add(value= JBCS_OPENSSL_PATH )", "/subsystem=elytron:write-attribute(name=initial-providers, value=combined-providers) /subsystem=elytron:undefine-attribute(name=final-providers) reload", "/subsystem=elytron/server-ssl-context=httpsSSC:add(key-manager=localhost-manager, trust-manager=ca-manager, provider-name=openssl) reload", "/core-service=management/security-realm=ApplicationRealm/server-identity=ssl:write-attribute(name=protocol,value=openssl.TLSv1.2) reload", "15:37:59,814 INFO [org.wildfly.openssl.SSL] (MSC service thread 1-7) WFOPENSSL0002 OpenSSL Version OpenSSL 1.0.2k-fips 23 Mar 2017" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html-single/configuration_guide/index
Chapter 9. Troubleshooting monitoring issues
Chapter 9. Troubleshooting monitoring issues 9.1. Investigating why user-defined metrics are unavailable ServiceMonitor resources enable you to determine how to use the metrics exposed by a service in user-defined projects. Follow the steps outlined in this procedure if you have created a ServiceMonitor resource but cannot see any corresponding metrics in the Metrics UI. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You have enabled and configured monitoring for user-defined workloads. You have created the user-workload-monitoring-config ConfigMap object. You have created a ServiceMonitor resource. Procedure Check that the corresponding labels match in the service and ServiceMonitor resource configurations. Obtain the label defined in the service. The following example queries the prometheus-example-app service in the ns1 project: USD oc -n ns1 get service prometheus-example-app -o yaml Example output labels: app: prometheus-example-app Check that the matchLabels app label in the ServiceMonitor resource configuration matches the label output in the preceding step: USD oc -n ns1 get servicemonitor prometheus-example-monitor -o yaml Example output Note You can check service and ServiceMonitor resource labels as a developer with view permissions for the project. Inspect the logs for the Prometheus Operator in the openshift-user-workload-monitoring project. List the pods in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring get pods Example output NAME READY STATUS RESTARTS AGE prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m Obtain the logs from the prometheus-operator container in the prometheus-operator pod. In the following example, the pod is called prometheus-operator-776fcbbd56-2nbfm : USD oc -n openshift-user-workload-monitoring logs prometheus-operator-776fcbbd56-2nbfm -c prometheus-operator If there is a issue with the service monitor, the logs might include an error similar to this example: level=warn ts=2020-08-10T11:48:20.906739623Z caller=operator.go:1829 component=prometheusoperator msg="skipping servicemonitor" error="it accesses file system via bearer token file which Prometheus specification prohibits" servicemonitor=eagle/eagle namespace=openshift-user-workload-monitoring prometheus=user-workload Review the target status for your project in the Prometheus UI directly. Establish port-forwarding to the Prometheus instance in the openshift-user-workload-monitoring project: USD oc port-forward -n openshift-user-workload-monitoring pod/prometheus-user-workload-0 9090 Open http://localhost:9090/targets in a web browser and review the status of the target for your project directly in the Prometheus UI. Check for error messages relating to the target. Configure debug level logging for the Prometheus Operator in the openshift-user-workload-monitoring project. 
Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add logLevel: debug for prometheusOperator under data/config.yaml to set the log level to debug : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheusOperator: logLevel: debug Save the file to apply the changes. Note The prometheus-operator in the openshift-user-workload-monitoring project restarts automatically when you apply the log-level change. Confirm that the debug log-level has been applied to the prometheus-operator deployment in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep "log-level" Example output - --log-level=debug Debug level logging will show all calls made by the Prometheus Operator. Check that the prometheus-operator pod is running: USD oc -n openshift-user-workload-monitoring get pods Note If an unrecognized Prometheus Operator loglevel value is included in the config map, the prometheus-operator pod might not restart successfully. Review the debug logs to see if the Prometheus Operator is using the ServiceMonitor resource. Review the logs for other related errors. Additional resources Creating a user-defined workload monitoring config map See Specifying how a service is monitored for details on how to create a ServiceMonitor or PodMonitor resource 9.2. Determining why Prometheus is consuming a lot of disk space Developers can create labels to define attributes for metrics in the form of key-value pairs. The number of potential key-value pairs corresponds to the number of possible values for an attribute. An attribute that has an unlimited number of potential values is called an unbound attribute. For example, a customer_id attribute is unbound because it has an infinite number of possible values. Every assigned key-value pair has a unique time series. The use of many unbound attributes in labels can result in an exponential increase in the number of time series created. This can impact Prometheus performance and can consume a lot of disk space. You can use the following measures when Prometheus consumes a lot of disk: Check the number of scrape samples that are being collected. Check the time series database (TSDB) status in the Prometheus UI for more information on which labels are creating the most time series. This requires cluster administrator privileges. Reduce the number of unique time series that are created by reducing the number of unbound attributes that are assigned to user-defined metrics. Note Using attributes that are bound to a limited set of possible values reduces the number of potential key-value pair combinations. Enforce limits on the number of samples that can be scraped across user-defined projects. This requires cluster administrator privileges. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure In the Administrator perspective, navigate to Monitoring Metrics . Run the following Prometheus Query Language (PromQL) query in the Expression field. 
This returns the ten metrics that have the highest number of scrape samples: topk(10,count by (job)({__name__=~".+"})) Investigate the number of unbound label values assigned to metrics with higher than expected scrape sample counts. If the metrics relate to a user-defined project , review the metrics key-value pairs assigned to your workload. These are implemented through Prometheus client libraries at the application level. Try to limit the number of unbound attributes referenced in your labels. If the metrics relate to a core OpenShift Container Platform project , create a Red Hat support case on the Red Hat Customer Portal . Check the TSDB status in the Prometheus UI. In the Administrator perspective, navigate to Networking Routes . Select the openshift-monitoring project in the Project list. Select the URL in the prometheus-k8s row to open the login page for the Prometheus UI. Choose Log in with OpenShift to log in using your OpenShift Container Platform credentials. In the Prometheus UI, navigate to Status TSDB Status . Additional resources See Setting a scrape sample limit for user-defined projects for details on how to set a scrape sample limit and create related alerting rules Submitting a support case
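The advice above about limiting unbound label attributes applies at the application level, where metrics are defined through a Prometheus client library. The following sketch assumes the Prometheus simpleclient library for Java; the metric and label names are illustrative and not taken from this chapter.

import io.prometheus.client.Counter;

public class RequestMetrics {

    // Bounded label: the HTTP status code can only take a small, fixed set
    // of values, so the number of resulting time series stays small.
    private static final Counter REQUESTS_BY_CODE = Counter.build()
            .name("myapp_http_requests_total")
            .help("HTTP requests served, partitioned by status code.")
            .labelNames("code")
            .register();

    public static void recordRequest(int statusCode, String customerId) {
        REQUESTS_BY_CODE.labels(String.valueOf(statusCode)).inc();

        // Anti-pattern: using an unbound attribute such as customerId as a
        // label value would create one time series per customer, which can
        // grow without limit and consume Prometheus disk space.
    }
}

Keeping every label bound to a limited set of values keeps the number of time series proportional to the number of distinct label values, which is what the TSDB status page reports.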
[ "oc -n ns1 get service prometheus-example-app -o yaml", "labels: app: prometheus-example-app", "oc -n ns1 get servicemonitor prometheus-example-monitor -o yaml", "spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-example-app", "oc -n openshift-user-workload-monitoring get pods", "NAME READY STATUS RESTARTS AGE prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m", "oc -n openshift-user-workload-monitoring logs prometheus-operator-776fcbbd56-2nbfm -c prometheus-operator", "level=warn ts=2020-08-10T11:48:20.906739623Z caller=operator.go:1829 component=prometheusoperator msg=\"skipping servicemonitor\" error=\"it accesses file system via bearer token file which Prometheus specification prohibits\" servicemonitor=eagle/eagle namespace=openshift-user-workload-monitoring prometheus=user-workload", "oc port-forward -n openshift-user-workload-monitoring pod/prometheus-user-workload-0 9090", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheusOperator: logLevel: debug", "oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"", "- --log-level=debug", "oc -n openshift-user-workload-monitoring get pods", "topk(10,count by (job)({__name__=~\".+\"}))" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/monitoring/troubleshooting-monitoring-issues
Chapter 4. Deploying Red Hat Quay
Chapter 4. Deploying Red Hat Quay After you have configured your Red Hat Quay deployment, you can deploy it using the following procedures. Prerequisites The Red Hat Quay database is running. The Redis server is running. 4.1. Creating the YAML configuration file Use the following procedure to deploy Red Hat Quay locally. Procedure Enter the following command to create a minimal config.yaml file that is used to deploy the Red Hat Quay container: USD touch config.yaml Copy and paste the following YAML configuration into the config.yaml file: BUILDLOGS_REDIS: host: quay-server.example.com password: strongpassword port: 6379 DATABASE_SECRET_KEY: a8c2744b-7004-4af2-bcee-e417e7bdd235 DB_URI: postgresql://quayuser:[email protected]:5432/quay DISTRIBUTED_STORAGE_CONFIG: default: - LocalStorage - storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - default SECRET_KEY: e9bd34f4-900c-436a-979e-7530e5d74ac8 SERVER_HOSTNAME: quay-server.example.com SETUP_COMPLETE: true USER_EVENTS_REDIS: host: quay-server.example.com password: strongpassword port: 6379 Create a directory to copy the Red Hat Quay configuration bundle to: USD mkdir USDQUAY/config Copy the Red Hat Quay configuration file to the directory: USD cp ~/Downloads/quay-config.tar.gz USDQUAY/config 4.2. Prepare local storage for image data Use the following procedure to set your local file system to store registry images. Procedure Create a local directory that will store registry images by entering the following command: USD mkdir USDQUAY/storage Set the directory to store registry images: USD setfacl -m u:1001:-wx USDQUAY/storage 4.3. Deploy the Red Hat Quay registry Use the following procedure to deploy the Quay registry container. Procedure Enter the following command to start the Quay registry container, specifying the appropriate volumes for configuration data and local storage for image data:
[ "touch config.yaml", "BUILDLOGS_REDIS: host: quay-server.example.com password: strongpassword port: 6379 DATABASE_SECRET_KEY: a8c2744b-7004-4af2-bcee-e417e7bdd235 DB_URI: postgresql://quayuser:[email protected]:5432/quay DISTRIBUTED_STORAGE_CONFIG: default: - LocalStorage - storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - default SECRET_KEY: e9bd34f4-900c-436a-979e-7530e5d74ac8 SERVER_HOSTNAME: quay-server.example.com SETUP_COMPLETE: true USER_EVENTS_REDIS: host: quay-server.example.com password: strongpassword port: 6379", "mkdir USDQUAY/config", "cp ~/Downloads/quay-config.tar.gz USDQUAY/config", "mkdir USDQUAY/storage", "setfacl -m u:1001:-wx USDQUAY/storage", "sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v USDQUAY/config:/conf/stack:Z -v USDQUAY/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.10.9" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/proof_of_concept_-_deploying_red_hat_quay/poc-deploying-quay
Chapter 9. Authentication and authorization for hosted control planes
Chapter 9. Authentication and authorization for hosted control planes The OpenShift Container Platform control plane includes a built-in OAuth server. You can obtain OAuth access tokens to authenticate to the OpenShift Container Platform API. After you create your hosted cluster, you can configure OAuth by specifying an identity provider. 9.1. Configuring the OAuth server for a hosted cluster by using the CLI You can configure the internal OAuth server for your hosted cluster by using an OpenID Connect identity provider ( oidc ). You can configure OAuth for the following supported identity providers: oidc htpasswd keystone ldap basic-authentication request-header github gitlab google Adding any identity provider in the OAuth configuration removes the default kubeadmin user provider. Note When you configure identity providers, you must configure at least one NodePool replica in your hosted cluster in advance. Traffic for DNS resolution is sent through the worker nodes. You do not need to configure the NodePool replicas in advance for the htpasswd and request-header identity providers. Prerequisites You created your hosted cluster. Procedure Edit the HostedCluster custom resource (CR) on the hosting cluster by running the following command: USD oc edit <hosted_cluster_name> -n <hosted_cluster_namespace> Add the OAuth configuration in the HostedCluster CR by using the following example: apiVersion: hypershift.openshift.io/v1alpha1 kind: HostedCluster metadata: name: <hosted_cluster_name> 1 namespace: <hosted_cluster_namespace> 2 spec: configuration: oauth: identityProviders: - openID: 3 claims: email: 4 - <email_address> name: 5 - <display_name> preferredUsername: 6 - <preferred_username> clientID: <client_id> 7 clientSecret: name: <client_id_secret_name> 8 issuer: https://example.com/identity 9 mappingMethod: lookup 10 name: IAM type: OpenID 1 Specifies your hosted cluster name. 2 Specifies your hosted cluster namespace. 3 This provider name is prefixed to the value of the identity claim to form an identity name. The provider name is also used to build the redirect URL. 4 Defines a list of attributes to use as the email address. 5 Defines a list of attributes to use as a display name. 6 Defines a list of attributes to use as a preferred user name. 7 Defines the ID of a client registered with the OpenID provider. You must allow the client to redirect to the https://oauth-openshift.apps.<cluster_name>.<cluster_domain>/oauth2callback/<idp_provider_name> URL. 8 Defines a secret of a client registered with the OpenID provider. 9 The Issuer Identifier described in the OpenID spec. You must use https without query or fragment component. 10 Defines a mapping method that controls how mappings are established between identities of this provider and User objects. Save the file to apply the changes. 9.2. Configuring the OAuth server for a hosted cluster by using the web console You can configure the internal OAuth server for your hosted cluster by using the OpenShift Container Platform web console. You can configure OAuth for the following supported identity providers: oidc htpasswd keystone ldap basic-authentication request-header github gitlab google Adding any identity provider in the OAuth configuration removes the default kubeadmin user provider. Note When you configure identity providers, you must configure at least one NodePool replica in your hosted cluster in advance. Traffic for DNS resolution is sent through the worker nodes. 
You do not need to configure the NodePool replicas in advance for the htpasswd and request-header identity providers. Prerequisites You logged in as a user with cluster-admin privileges. You created your hosted cluster. Procedure Navigate to Home API Explorer . Use the Filter by kind box to search for your HostedCluster resource. Click the HostedCluster resource that you want to edit. Click the Instances tab. Click the Options menu to your hosted cluster name entry and click Edit HostedCluster . Add the OAuth configuration in the YAML file: spec: configuration: oauth: identityProviders: - openID: 1 claims: email: 2 - <email_address> name: 3 - <display_name> preferredUsername: 4 - <preferred_username> clientID: <client_id> 5 clientSecret: name: <client_id_secret_name> 6 issuer: https://example.com/identity 7 mappingMethod: lookup 8 name: IAM type: OpenID 1 This provider name is prefixed to the value of the identity claim to form an identity name. The provider name is also used to build the redirect URL. 2 Defines a list of attributes to use as the email address. 3 Defines a list of attributes to use as a display name. 4 Defines a list of attributes to use as a preferred user name. 5 Defines the ID of a client registered with the OpenID provider. You must allow the client to redirect to the https://oauth-openshift.apps.<cluster_name>.<cluster_domain>/oauth2callback/<idp_provider_name> URL. 6 Defines a secret of a client registered with the OpenID provider. 7 The Issuer Identifier described in the OpenID spec. You must use https without query or fragment component. 8 Defines a mapping method that controls how mappings are established between identities of this provider and User objects. Click Save . Additional resources To know more about supported identity providers, see "Understanding identity provider configuration" in Authentication and authorization . 9.3. Assigning components IAM roles by using the CCO in a hosted cluster on AWS You can assign components IAM roles that provide short-term, limited-privilege security credentials by using the Cloud Credential Operator (CCO) in hosted clusters on Amazon Web Services (AWS). By default, the CCO runs in a hosted control plane. Note The CCO supports a manual mode only for hosted clusters on AWS. By default, hosted clusters are configured in a manual mode. The management cluster might use modes other than manual. 9.4. Verifying the CCO installation in a hosted cluster on AWS You can verify that the Cloud Credential Operator (CCO) is running correctly in your hosted control plane. Prerequisites You configured the hosted cluster on Amazon Web Services (AWS). Procedure Verify that the CCO is configured in a manual mode in your hosted cluster by running the following command: USD oc get cloudcredentials <hosted_cluster_name> \ -n <hosted_cluster_namespace> \ -o=jsonpath={.spec.credentialsMode} Expected output Manual Verify that the value for the serviceAccountIssuer resource is not empty by running the following command: USD oc get authentication cluster --kubeconfig <hosted_cluster_name>.kubeconfig \ -o jsonpath --template '{.spec.serviceAccountIssuer }' Example output https://aos-hypershift-ci-oidc-29999.s3.us-east-2.amazonaws.com/hypershift-ci-29999 9.5. 
Enabling Operators to support CCO-based workflows with AWS STS As an Operator author designing your project to run on Operator Lifecycle Manager (OLM), you can enable your Operator to authenticate against AWS on STS-enabled OpenShift Container Platform clusters by customizing your project to support the Cloud Credential Operator (CCO). With this method, the Operator is responsible for and requires RBAC permissions for creating the CredentialsRequest object and reading the resulting Secret object. Note By default, pods related to the Operator deployment mount a serviceAccountToken volume so that the service account token can be referenced in the resulting Secret object. Prerequisities OpenShift Container Platform 4.14 or later Cluster in STS mode OLM-based Operator project Procedure Update your Operator project's ClusterServiceVersion (CSV) object: Ensure your Operator has RBAC permission to create CredentialsRequests objects: Example 9.1. Example clusterPermissions list # ... install: spec: clusterPermissions: - rules: - apiGroups: - "cloudcredential.openshift.io" resources: - credentialsrequests verbs: - create - delete - get - list - patch - update - watch Add the following annotation to claim support for this method of CCO-based workflow with AWS STS: # ... metadata: annotations: features.operators.openshift.io/token-auth-aws: "true" Update your Operator project code: Get the role ARN from the environment variable set on the pod by the Subscription object. For example: // Get ENV var roleARN := os.Getenv("ROLEARN") setupLog.Info("getting role ARN", "role ARN = ", roleARN) webIdentityTokenPath := "/var/run/secrets/openshift/serviceaccount/token" Ensure you have a CredentialsRequest object ready to be patched and applied. For example: Example 9.2. Example CredentialsRequest object creation import ( minterv1 "github.com/openshift/cloud-credential-operator/pkg/apis/cloudcredential/v1" corev1 "k8s.io/api/core/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" ) var in = minterv1.AWSProviderSpec{ StatementEntries: []minterv1.StatementEntry{ { Action: []string{ "s3:*", }, Effect: "Allow", Resource: "arn:aws:s3:*:*:*", }, }, STSIAMRoleARN: "<role_arn>", } var codec = minterv1.Codec var ProviderSpec, _ = codec.EncodeProviderSpec(in.DeepCopyObject()) const ( name = "<credential_request_name>" namespace = "<namespace_name>" ) var CredentialsRequestTemplate = &minterv1.CredentialsRequest{ ObjectMeta: metav1.ObjectMeta{ Name: name, Namespace: "openshift-cloud-credential-operator", }, Spec: minterv1.CredentialsRequestSpec{ ProviderSpec: ProviderSpec, SecretRef: corev1.ObjectReference{ Name: "<secret_name>", Namespace: namespace, }, ServiceAccountNames: []string{ "<service_account_name>", }, CloudTokenPath: "", }, } Alternatively, if you are starting from a CredentialsRequest object in YAML form (for example, as part of your Operator project code), you can handle it differently: Example 9.3. 
Example CredentialsRequest object creation in YAML form // CredentialsRequest is a struct that represents a request for credentials type CredentialsRequest struct { APIVersion string `yaml:"apiVersion"` Kind string `yaml:"kind"` Metadata struct { Name string `yaml:"name"` Namespace string `yaml:"namespace"` } `yaml:"metadata"` Spec struct { SecretRef struct { Name string `yaml:"name"` Namespace string `yaml:"namespace"` } `yaml:"secretRef"` ProviderSpec struct { APIVersion string `yaml:"apiVersion"` Kind string `yaml:"kind"` StatementEntries []struct { Effect string `yaml:"effect"` Action []string `yaml:"action"` Resource string `yaml:"resource"` } `yaml:"statementEntries"` STSIAMRoleARN string `yaml:"stsIAMRoleARN"` } `yaml:"providerSpec"` // added new field CloudTokenPath string `yaml:"cloudTokenPath"` } `yaml:"spec"` } // ConsumeCredsRequestAddingTokenInfo is a function that takes a YAML filename and two strings as arguments // It unmarshals the YAML file to a CredentialsRequest object and adds the token information. func ConsumeCredsRequestAddingTokenInfo(fileName, tokenString, tokenPath string) (*CredentialsRequest, error) { // open a file containing YAML form of a CredentialsRequest file, err := os.Open(fileName) if err != nil { return nil, err } defer file.Close() // create a new CredentialsRequest object cr := &CredentialsRequest{} // decode the yaml file to the object decoder := yaml.NewDecoder(file) err = decoder.Decode(cr) if err != nil { return nil, err } // assign the string to the existing field in the object cr.Spec.CloudTokenPath = tokenPath // return the modified object return cr, nil } Note Adding a CredentialsRequest object to the Operator bundle is not currently supported. Add the role ARN and web identity token path to the credentials request and apply it during Operator initialization: Example 9.4. Example applying CredentialsRequest object during Operator initialization // apply CredentialsRequest on install credReq := credreq.CredentialsRequestTemplate credReq.Spec.CloudTokenPath = webIdentityTokenPath c := mgr.GetClient() if err := c.Create(context.TODO(), credReq); err != nil { if !errors.IsAlreadyExists(err) { setupLog.Error(err, "unable to create CredRequest") os.Exit(1) } } Ensure your Operator can wait for a Secret object to show up from the CCO, as shown in the following example, which is called along with the other items you are reconciling in your Operator: Example 9.5. 
Example wait for Secret object // WaitForSecret is a function that takes a Kubernetes client, a namespace, and a v1 "k8s.io/api/core/v1" name as arguments // It waits until the secret object with the given name exists in the given namespace // It returns the secret object or an error if the timeout is exceeded func WaitForSecret(client kubernetes.Interface, namespace, name string) (*v1.Secret, error) { // set a timeout of 10 minutes timeout := time.After(10 * time.Minute) 1 // set a polling interval of 10 seconds ticker := time.NewTicker(10 * time.Second) // loop until the timeout or the secret is found for { select { case <-timeout: // timeout is exceeded, return an error return nil, fmt.Errorf("timed out waiting for secret %s in namespace %s", name, namespace) // add to this error with a pointer to instructions for following a manual path to a Secret that will work on STS case <-ticker.C: // polling interval is reached, try to get the secret secret, err := client.CoreV1().Secrets(namespace).Get(context.Background(), name, metav1.GetOptions{}) if err != nil { if errors.IsNotFound(err) { // secret does not exist yet, continue waiting continue } else { // some other error occurred, return it return nil, err } } else { // secret is found, return it return secret, nil } } } } 1 The timeout value is based on an estimate of how fast the CCO might detect an added CredentialsRequest object and generate a Secret object. You might consider lowering the time or creating custom feedback for cluster administrators that could be wondering why the Operator is not yet accessing the cloud resources. Set up the AWS configuration by reading the secret created by the CCO from the credentials request and creating the AWS config file containing the data from that secret: Example 9.6. Example AWS configuration creation func SharedCredentialsFileFromSecret(secret *corev1.Secret) (string, error) { var data []byte switch { case len(secret.Data["credentials"]) > 0: data = secret.Data["credentials"] default: return "", errors.New("invalid secret for aws credentials") } f, err := ioutil.TempFile("", "aws-shared-credentials") if err != nil { return "", errors.Wrap(err, "failed to create file for shared credentials") } defer f.Close() if _, err := f.Write(data); err != nil { return "", errors.Wrapf(err, "failed to write credentials to %s", f.Name()) } return f.Name(), nil } Important The secret is assumed to exist, but your Operator code should wait and retry when using this secret to give time to the CCO to create the secret. Additionally, the wait period should eventually time out and warn users that the OpenShift Container Platform cluster version, and therefore the CCO, might be an earlier version that does not support the CredentialsRequest object workflow with STS detection. In such cases, instruct users that they must add a secret by using another method. Configure the AWS SDK session, for example: Example 9.7. Example AWS SDK session configuration sharedCredentialsFile, err := SharedCredentialsFileFromSecret(secret) if err != nil { // handle error } options := session.Options{ SharedConfigState: session.SharedConfigEnable, SharedConfigFiles: []string{sharedCredentialsFile}, } Additional resources Cluster Operators reference page for the Cloud Credential Operator
[ "oc edit <hosted_cluster_name> -n <hosted_cluster_namespace>", "apiVersion: hypershift.openshift.io/v1alpha1 kind: HostedCluster metadata: name: <hosted_cluster_name> 1 namespace: <hosted_cluster_namespace> 2 spec: configuration: oauth: identityProviders: - openID: 3 claims: email: 4 - <email_address> name: 5 - <display_name> preferredUsername: 6 - <preferred_username> clientID: <client_id> 7 clientSecret: name: <client_id_secret_name> 8 issuer: https://example.com/identity 9 mappingMethod: lookup 10 name: IAM type: OpenID", "spec: configuration: oauth: identityProviders: - openID: 1 claims: email: 2 - <email_address> name: 3 - <display_name> preferredUsername: 4 - <preferred_username> clientID: <client_id> 5 clientSecret: name: <client_id_secret_name> 6 issuer: https://example.com/identity 7 mappingMethod: lookup 8 name: IAM type: OpenID", "oc get cloudcredentials <hosted_cluster_name> -n <hosted_cluster_namespace> -o=jsonpath={.spec.credentialsMode}", "Manual", "oc get authentication cluster --kubeconfig <hosted_cluster_name>.kubeconfig -o jsonpath --template '{.spec.serviceAccountIssuer }'", "https://aos-hypershift-ci-oidc-29999.s3.us-east-2.amazonaws.com/hypershift-ci-29999", "install: spec: clusterPermissions: - rules: - apiGroups: - \"cloudcredential.openshift.io\" resources: - credentialsrequests verbs: - create - delete - get - list - patch - update - watch", "metadata: annotations: features.operators.openshift.io/token-auth-aws: \"true\"", "// Get ENV var roleARN := os.Getenv(\"ROLEARN\") setupLog.Info(\"getting role ARN\", \"role ARN = \", roleARN) webIdentityTokenPath := \"/var/run/secrets/openshift/serviceaccount/token\"", "import ( minterv1 \"github.com/openshift/cloud-credential-operator/pkg/apis/cloudcredential/v1\" corev1 \"k8s.io/api/core/v1\" metav1 \"k8s.io/apimachinery/pkg/apis/meta/v1\" ) var in = minterv1.AWSProviderSpec{ StatementEntries: []minterv1.StatementEntry{ { Action: []string{ \"s3:*\", }, Effect: \"Allow\", Resource: \"arn:aws:s3:*:*:*\", }, }, STSIAMRoleARN: \"<role_arn>\", } var codec = minterv1.Codec var ProviderSpec, _ = codec.EncodeProviderSpec(in.DeepCopyObject()) const ( name = \"<credential_request_name>\" namespace = \"<namespace_name>\" ) var CredentialsRequestTemplate = &minterv1.CredentialsRequest{ ObjectMeta: metav1.ObjectMeta{ Name: name, Namespace: \"openshift-cloud-credential-operator\", }, Spec: minterv1.CredentialsRequestSpec{ ProviderSpec: ProviderSpec, SecretRef: corev1.ObjectReference{ Name: \"<secret_name>\", Namespace: namespace, }, ServiceAccountNames: []string{ \"<service_account_name>\", }, CloudTokenPath: \"\", }, }", "// CredentialsRequest is a struct that represents a request for credentials type CredentialsRequest struct { APIVersion string `yaml:\"apiVersion\"` Kind string `yaml:\"kind\"` Metadata struct { Name string `yaml:\"name\"` Namespace string `yaml:\"namespace\"` } `yaml:\"metadata\"` Spec struct { SecretRef struct { Name string `yaml:\"name\"` Namespace string `yaml:\"namespace\"` } `yaml:\"secretRef\"` ProviderSpec struct { APIVersion string `yaml:\"apiVersion\"` Kind string `yaml:\"kind\"` StatementEntries []struct { Effect string `yaml:\"effect\"` Action []string `yaml:\"action\"` Resource string `yaml:\"resource\"` } `yaml:\"statementEntries\"` STSIAMRoleARN string `yaml:\"stsIAMRoleARN\"` } `yaml:\"providerSpec\"` // added new field CloudTokenPath string `yaml:\"cloudTokenPath\"` } `yaml:\"spec\"` } // ConsumeCredsRequestAddingTokenInfo is a function that takes a YAML filename and two strings as arguments // It 
unmarshals the YAML file to a CredentialsRequest object and adds the token information. func ConsumeCredsRequestAddingTokenInfo(fileName, tokenString, tokenPath string) (*CredentialsRequest, error) { // open a file containing YAML form of a CredentialsRequest file, err := os.Open(fileName) if err != nil { return nil, err } defer file.Close() // create a new CredentialsRequest object cr := &CredentialsRequest{} // decode the yaml file to the object decoder := yaml.NewDecoder(file) err = decoder.Decode(cr) if err != nil { return nil, err } // assign the string to the existing field in the object cr.Spec.CloudTokenPath = tokenPath // return the modified object return cr, nil }", "// apply CredentialsRequest on install credReq := credreq.CredentialsRequestTemplate credReq.Spec.CloudTokenPath = webIdentityTokenPath c := mgr.GetClient() if err := c.Create(context.TODO(), credReq); err != nil { if !errors.IsAlreadyExists(err) { setupLog.Error(err, \"unable to create CredRequest\") os.Exit(1) } }", "// WaitForSecret is a function that takes a Kubernetes client, a namespace, and a v1 \"k8s.io/api/core/v1\" name as arguments // It waits until the secret object with the given name exists in the given namespace // It returns the secret object or an error if the timeout is exceeded func WaitForSecret(client kubernetes.Interface, namespace, name string) (*v1.Secret, error) { // set a timeout of 10 minutes timeout := time.After(10 * time.Minute) 1 // set a polling interval of 10 seconds ticker := time.NewTicker(10 * time.Second) // loop until the timeout or the secret is found for { select { case <-timeout: // timeout is exceeded, return an error return nil, fmt.Errorf(\"timed out waiting for secret %s in namespace %s\", name, namespace) // add to this error with a pointer to instructions for following a manual path to a Secret that will work on STS case <-ticker.C: // polling interval is reached, try to get the secret secret, err := client.CoreV1().Secrets(namespace).Get(context.Background(), name, metav1.GetOptions{}) if err != nil { if errors.IsNotFound(err) { // secret does not exist yet, continue waiting continue } else { // some other error occurred, return it return nil, err } } else { // secret is found, return it return secret, nil } } } }", "func SharedCredentialsFileFromSecret(secret *corev1.Secret) (string, error) { var data []byte switch { case len(secret.Data[\"credentials\"]) > 0: data = secret.Data[\"credentials\"] default: return \"\", errors.New(\"invalid secret for aws credentials\") } f, err := ioutil.TempFile(\"\", \"aws-shared-credentials\") if err != nil { return \"\", errors.Wrap(err, \"failed to create file for shared credentials\") } defer f.Close() if _, err := f.Write(data); err != nil { return \"\", errors.Wrapf(err, \"failed to write credentials to %s\", f.Name()) } return f.Name(), nil }", "sharedCredentialsFile, err := SharedCredentialsFileFromSecret(secret) if err != nil { // handle error } options := session.Options{ SharedConfigState: session.SharedConfigEnable, SharedConfigFiles: []string{sharedCredentialsFile}, }" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/hosted_control_planes/authentication-and-authorization-for-hosted-control-planes
probe::nfs.fop.read
probe::nfs.fop.read Name probe::nfs.fop.read - NFS client read operation Synopsis nfs.fop.read Values devname block device name Description SystemTap uses the vfs.do_sync_read probe to implement this probe, and as a result it may also capture operations other than NFS client read operations.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-nfs-fop-read
14.3. Displaying Currently Assigned ID Ranges
14.3. Displaying Currently Assigned ID Ranges To display which ID ranges are configured for a server, use the following commands: ipa-replica-manage dnarange-show displays the current ID range that is set on all servers or, if you specify a server, only on the specified server, for example: ipa-replica-manage dnanextrange-show displays the next ID range currently set on all servers or, if you specify a server, only on the specified server, for example: For more information about these two commands, see the ipa-replica-manage (1) man page.
[ "ipa-replica-manage dnarange-show masterA.example.com: 1001-1500 masterB.example.com: 1501-2000 masterC.example.com: No range set ipa-replica-manage dnarange-show masterA.example.com masterA.example.com: 1001-1500", "ipa-replica-manage dnanextrange-show masterA.example.com: 1001-1500 masterB.example.com: No on-deck range set masterC.example.com: No on-deck range set ipa-replica-manage dnanextrange-show masterA.example.com masterA.example.com: 1001-1500" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/display-id-range
Chapter 3. Tutorials
Chapter 3. Tutorials 3.1. Prerequisite Before undertaking these tutorials, you must configure your environment correctly. See the OpenShift Primer for instructions. 3.2. Deploy your project using the Red Hat JBoss Data Virtualization for OpenShift image Use this workflow to create and deploy a project. As an example, the tutorial uses the dynamicvdb-datafederation quickstart which combines data from a relational source (H2) and a Microsoft Excel file. After deploying it, you will learn how to switch data sources. Create a new project: Create a service account to be used for the Red Hat JBoss Data Virtualization for OpenShift deployment: Add the view role to the service account: The Red Hat JBoss Data Virtualization for OpenShift template requires SSL and JGroups keystores. (These keystores are required even if the application will not use https.) The following commands will prompt you for passwords: Generate a secure key for the SSL keystore: Generate a secure key for the JGroups keystore: Use the SSL and JGroup keystore files to create the keystore secret for the project: Create a secret with the datasources.env file: Link the keystore and environment secrets to the service account: Log in to the OpenShift Web Console: https://127.0.0.1:8443 Click jdv-app-demo . Click Add to Project . Click Browse Catalog . Enter datavirt in the Filter by keyword search bar. Click basic-s2i . Enter these parameters: Git Repository URL : https://github.com/jboss-openshift/openshift-quickstarts Git Reference : master CONTEXT_DIR : datavirt64/dynamicvdb-datafederation/app Click Deploy . Switch to using a MySQL data source instead of H2: 3.3. How to use a cache as a materialization target Having deployed the dynamicvdb-datafederation quickstart, you can now deploy a Red Hat JBoss Data Grid instance. This allows you to quickly query the cache for data, without having to go back to the original sources. You can use any of the Red Hat JBoss Data Grid for OpenShift templates, but it is better to use non-persistent templates as these provide you with clustering and high availability functionality. To obtain them, click here: https://github.com/jboss-container-images/jboss-datavirt-6-openshift-image/tree/datavirt64/resources/openshift/templates Enter the jdv-app-demo project: Create a service account for JDG for OpenShift: Add view role permissions to the service account: Create the keystore secret for the project: Link the keystore secret to the service account: Log in to the OpenShift Web Console . Click the jdv-app-demo project space. Click Add to Project . Enter datagrid in the Filter by keyword search bar. Click the datagrid71-https template. In the DATAVIRT_CACHE_NAMES environment variable field, enter stockCache . In the CACHE_TYPE_DEFAULT environment variable field, enter replicated . Enter these parameters: USERNAME : jdg PASSWORD : JBoss.123 JDG User Roles/Groups (ADMIN_GROUP): admin,___schema_manager HOTROD_AUTHENTICATION : true CONTAINERSECURITYROLE_MAPPER : identity-role-mapper CONTAINER_SECURITY_ROLES :"admin=ALL,jdg=ALL" Click Deploy . Set JDG as the materialization target:
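With the quickstart VDB deployed, client applications can query the federated data over JDBC. The following is a minimal sketch using the Teiid JDBC driver; the VDB name, host, credentials, and view name are placeholders to adapt to your deployment, and port 31000 is assumed to be the JDBC port exposed by the image.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class VdbClient {
    public static void main(String[] args) throws Exception {
        // Substitute the VDB name from the quickstart, the datavirt-app host,
        // and the JDBC credentials configured for your deployment.
        String url = "jdbc:teiid:<vdb_name>@mm://<datavirt_host>:31000";

        Class.forName("org.teiid.jdbc.TeiidDriver");
        try (Connection conn = DriverManager.getConnection(url, "<user>", "<password>");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM <view_name>")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}

Once materialization is enabled, queries against materialized views are served from the JDG cache instead of going back to the original H2 and Excel sources.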
[ "oc new-project jdv-app-demo", "oc create serviceaccount datavirt-service-account", "oc policy add-role-to-user view system:serviceaccount:jdv-app-demo:datavirt-service-account", "keytool -genkeypair -alias https -storetype JKS -keystore keystore.jks", "keytool -genseckey -alias jgroups -storetype JCEKS -keystore jgroups.jceks", "oc secret new datavirt-app-secret keystore.jks jgroups.jceks", "git clone https://github.com/jboss-openshift/openshift-quickstarts/blob/master/datavirt/dynamicvdb-datafederation/datasources.env", "oc secrets new datavirt-app-config datasources.env", "oc secrets link datavirt-service-account datavirt-app-secret datavirt-app-config", "oc env dc/datavirt-app QS_DB_TYPE=mysql5", "oc project jdv-app-demo", "oc create serviceaccount datagrid-service-account", "oc policy add-role-to-user view system:serviceaccount:jdv-app-demo:datagrid-service-account", "oc secret new datagrid-app-secret jgroups.jceks", "oc secrets link datagrid-service-account datagrid-app-secret", "oc env bc/datavirt-app DATAGRID_MATERIALIZATION=true" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/red_hat_jboss_data_virtualization_for_openshift/tutorials
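The tutorial above ends at the Deploy step; a minimal verification sketch (not part of the original quickstart) follows. It assumes the template's default application name datavirt-app, the same name used in the oc env command above, and the jdv-app-demo project created earlier.
oc project jdv-app-demo
oc get pods -w                       # wait for the datavirt-app pod to reach Running
oc rollout status dc/datavirt-app    # confirm the deployment finished rolling out
oc get routes                        # note the exposed routes for JDBC/OData access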
Chapter 1. Disaster recovery tools in IdM
Chapter 1. Disaster recovery tools in IdM A good disaster recovery strategy combines the following tools to recover from a disaster as soon as possible with minimal data loss: Replication Replication copies database contents between IdM servers. If an IdM server fails, you can replace the lost server by creating a new replica based on one of the remaining servers. Virtual machine (VM) snapshots A snapshot is a view of a VM's operating system and applications on any or all available disks at a given point in time. After taking a VM snapshot, you can use it to return a VM and its IdM data to a previous state. IdM backups The ipa-backup utility allows you to take a backup of an IdM server's configuration files and its data. You can later use a backup to restore an IdM server to a previous state.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/preparing_for_disaster_recovery_with_identity_management/disaster-recovery-tools-in-idm_preparing-for-disaster-recovery
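A brief, hedged sketch of how the ipa-backup utility mentioned above is typically invoked; the timestamped backup directory name is illustrative, and the actual name on a given server will differ.
# Run as root on the IdM server: full backup of configuration files and data
ipa-backup
# Data-only backup (LDAP data, without configuration files)
ipa-backup --data
# Backups are stored under /var/lib/ipa/backup/; restore from a chosen backup
ipa-restore ipa-full-2024-01-29-12-11-46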
7.2. Support for Non-Pushdown User Defined Functions
7.2. Support for Non-Pushdown User Defined Functions To define a non-pushdown function, a Java function must be provided that matches the metadata supplied either in the Teiid Designer or Dynamic VDB defined metadata. User Defined Function (or UDF) and User Defined Aggregate Function (or UDAF) may be called at runtime like any other function or aggregate function respectively. 7.2.1. Non-Pushdown UDF Metadata in Teiid Designer You can create a user-defined function on any VDB in a view model. To do so, create a function as a base table. Make sure you provide the JAVA code implementation details in the properties dialog for the UDF. 7.2.2. Non-Pushdown UDF Metadata for Dynamic VDBs When defining the metadata using DDL in the Dynamic VDBs, user can define a UDF or UDAF (User Defined Aggregate Function) as shown below. You must create a Java method that contains the function's logic. This Java method should accept the necessary arguments, which Red Hat JBoss Data Virtualization will pass to it at runtime, and function should return the calculated or altered value. Refer to the Red Hat JBoss Data Virtualization Development Guide: Reference Material for more information about DDL Metadata and options related to functions defined via DDL. 7.2.3. Coding Non-Pushdown Functions 7.2.3.1. UDF Coding The following are requirements for coding User Defined Functions (UDFs): The Java class containing the function method must be defined public. Note You can declare multiple user defined functions for a given class. The function method must be public and static. Example 7.1. Sample UDF Code package org.something; public class TempConv { /** * Converts the given Celsius temperature to Fahrenheit, and returns the * value. * @param doubleCelsiusTemp * @return Fahrenheit */ public static Double celsiusToFahrenheit(Double doubleCelsiusTemp) { if (doubleCelsiusTemp == null) { return null; } return (doubleCelsiusTemp)*9/5 + 32; } } 7.2.3.2. UDAF Coding The following are requirements for coding User Defined Aggregate Functions (UDAFs): The Java class containing the function method must be defined public and extend org.teiid.UserDefinedAggregate . The function method must be public. Example 7.2. Sample UDAF Code package org.something; public class SumAll implements UserDefinedAggregate<Integer> { private boolean isNull = true; private int result; public void addInput(Integer... vals) { isNull = false; for (int i : vals) { result += i; } } @Override public Integer getResult(org.teiid.CommandContext commandContext) { if (isNull) { return null; } return result; } @Override public void reset() { isNull = true; result = 0; } } 7.2.3.3. Coding: Other Considerations The following are additional considerations when coding UDFs or UDAFs: Number of input arguments and types must match the function metadata defined in Section 7.1, "User Defined Functions" . Any exception can be thrown, but Red Hat JBoss Data Virtualization will throw the exception as a FunctionExecutionException . You may optionally add an additional org.teiid.CommandContext argument as the first parameter. The CommandContext interface provides access to information about the current command, such as the executing user, subject, the VDB, the session id, etc. This CommandContext parameter should not be declared in the function metadata. Example 7.3. 
Sample CommandContext Usage package org.something; public class SessionInfo { /** * @param context * @return the created Timestamp */ public static Timestamp sessionCreated(CommandContext context) { return new Timestamp(context.getSession().getCreatedTime()); } } The corresponding user-defined function would be declared as Timestamp sessionCreated() . 7.2.3.4. Post Coding Activities After coding the functions, compile the Java code into a Java Archive (JAR) file. Create a JBoss EAP module ( module.xml ) accompanying the JAR file in the EAP_HOME /modules/ directory. Add the module dependency to the DATABASE -vdb.xml file as shown in the example below. The lib property value may contain a space delimited list of module names if more than one dependency is needed. Note Alternatively, when using a VDB created with Teiid Designer ( DATABASE .vdb ), the JAR file may be placed in your VDB under the /lib directory. It will be added automatically to the VDB classloader.
[ "<vdb name=\"{vdb-name}\" version=\"1\"> <model name=\"{model-name}\" type=\"VIRTUAL\"> <metadata type=\"DDL\"><![CDATA[ CREATE VIRTUAL FUNCTION celsiusToFahrenheit(celsius decimal) RETURNS decimal OPTIONS (JAVA_CLASS 'org.something.TempConv', JAVA_METHOD 'celsiusToFahrenheit'); CREATE VIRTUAL FUNCTION sumAll(arg integer) RETURNS integer OPTIONS (JAVA_CLASS 'org.something.SumAll', JAVA_METHOD 'addInput', AGGREGATE 'true', VARARGS 'true', \"NULL-ON-NULL\" 'true');]]> </metadata> </model> </vdb>", "package org.something; public class TempConv { /** * Converts the given Celsius temperature to Fahrenheit, and returns the * value. * @param doubleCelsiusTemp * @return Fahrenheit */ public static Double celsiusToFahrenheit(Double doubleCelsiusTemp) { if (doubleCelsiusTemp == null) { return null; } return (doubleCelsiusTemp)*9/5 + 32; } }", "package org.something; public class SumAll implements UserDefinedAggregate<Integer> { private boolean isNull = true; private int result; public void addInput(Integer... vals) { isNull = false; for (int i : vals) { result += i; } } @Override public Integer getResult(org.teiid.CommandContext commandContext) { if (isNull) { return null; } return result; } @Override public void reset() { isNull = true; result = 0; } }", "package org.something; public class SessionInfo { /** * @param context * @return the created Timestamp */ public static Timestamp sessionCreated(CommandContext context) { return new Timestamp(context.getSession().getCreatedTime()); } }", "<vdb name=\"{vdb-name}\" version=\"1\"> <property name =\"lib\" value =\"{module-name}\"></property> </vdb>" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_4_server_development/sect-support_for_non-pushdown_user_defined_functions
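Section 7.2.3.4 above says to create a JBoss EAP module for the compiled UDF JAR but does not show one; the sketch below is one possible layout. The module name org.something.udf and the JAR name udfs.jar are assumptions for illustration, and additional <dependencies> entries may be needed depending on the classes the UDF uses.
# Create the module directory, copy in the JAR, and write a minimal module.xml
mkdir -p "$EAP_HOME/modules/org/something/udf/main"
cp udfs.jar "$EAP_HOME/modules/org/something/udf/main/"
cat > "$EAP_HOME/modules/org/something/udf/main/module.xml" <<'EOF'
<module xmlns="urn:jboss:module:1.1" name="org.something.udf">
  <resources>
    <resource-root path="udfs.jar"/>
  </resources>
</module>
EOF
The module name is then referenced in the lib property of the DATABASE-vdb.xml file, as shown in the commands above.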
Chapter 12. Nested Virtualization
Chapter 12. Nested Virtualization 12.1. Overview As of Red Hat Enterprise Linux 7.5, nested virtualization is available as a Technology Preview for KVM guest virtual machines. With this feature, a guest virtual machine (also referred to as level 1 or L1 ) that runs on a physical host ( level 0 or L0 ) can act as a hypervisor, and create its own guest virtual machines ( L2 ). Nested virtualization is useful in a variety of scenarios, such as debugging hypervisors in a constrained environment and testing larger virtual deployments on a limited amount of physical resources. However, note that nested virtualization is not supported or recommended in production user environments, and is primarily intended for development and testing. Nested virtualization relies on host virtualization extensions to function, and it should not be confused with running guests in a virtual environment using the QEMU Tiny Code Generator (TCG) emulation, which is not supported in Red Hat Enterprise Linux.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/nested_virt
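A short sketch of checking and enabling the feature on an Intel host as root (the equivalent module for AMD hosts is kvm_amd); the configuration file name is arbitrary.
# On the L0 host, check whether nested virtualization is currently enabled
cat /sys/module/kvm_intel/parameters/nested
# Enable it persistently, then reload the module (shut down all running VMs first)
echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
modprobe -r kvm_intel
modprobe kvm_intel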
9.8. Querying a Pacemaker Cluster with SNMP (Red Hat Enterprise Linux 7.5 and later)
9.8. Querying a Pacemaker Cluster with SNMP (Red Hat Enterprise Linux 7.5 and later) As of Red Hat Enterprise Linux 7.5, you can use the pcs_snmp_agent daemon to query a Pacemaker cluster for data by means of SNMP. The pcs_snmp_agent daemon is an SNMP agent that connects to the master agent ( snmpd ) by means of the agentx protocol. The pcs_snmp_agent agent does not work as a standalone agent as it only provides data to the master agent. The following procedure sets up a basic configuration for a system to use SNMP with a Pacemaker cluster. You run this procedure on each node of the cluster from which you will be using SNMP to fetch data for the cluster. Install the pcs-snmp package on each node of the cluster. This will also install the net-snmp package which provides the snmp daemon. Add the following line to the /etc/snmp/snmpd.conf configuration file to set up the snmpd daemon as master agentx . Add the following line to the /etc/snmp/snmpd.conf configuration file to enable pcs_snmp_agent in the same SNMP configuration. Start the pcs_snmp_agent service. To check the configuration, display the status of the cluster with the pcs status command and then try to fetch the data from SNMP to check whether it corresponds to the output. Note that when you use SNMP to fetch data, only primitive resources are provided. The following example shows the output of a pcs status command on a running cluster with one failed action.
[ "yum install pcs-snmp", "master agentx", "view systemview included .1.3.6.1.4.1.32723.100", "systemctl start pcs_snmp_agent.service systemctl enable pcs_snmp_agent.service", "pcs status Cluster name: rhel75-cluster Stack: corosync Current DC: rhel75-node2 (version 1.1.18-5.el7-1a4ef7d180) - partition with quorum Last updated: Wed Nov 15 16:07:44 2017 Last change: Wed Nov 15 16:06:40 2017 by hacluster via cibadmin on rhel75-node1 2 nodes configured 14 resources configured (1 DISABLED) Online: [ rhel75-node1 rhel75-node2 ] Full list of resources: fencing (stonith:fence_xvm): Started rhel75-node1 dummy5 (ocf::pacemaker:Dummy): Stopped (disabled) dummy6 (ocf::pacemaker:Dummy): Stopped dummy7 (ocf::pacemaker:Dummy): Started rhel75-node2 dummy8 (ocf::pacemaker:Dummy): Started rhel75-node1 dummy9 (ocf::pacemaker:Dummy): Started rhel75-node2 Resource Group: group1 dummy1 (ocf::pacemaker:Dummy): Started rhel75-node1 dummy10 (ocf::pacemaker:Dummy): Started rhel75-node1 Clone Set: group2-clone [group2] Started: [ rhel75-node1 rhel75-node2 ] Clone Set: dummy4-clone [dummy4] Started: [ rhel75-node1 rhel75-node2 ] Failed Actions: * dummy6_start_0 on rhel75-node1 'unknown error' (1): call=87, status=complete, exitreason='', last-rc-change='Wed Nov 15 16:05:55 2017', queued=0ms, exec=20ms", "snmpwalk -v 2c -c public localhost PACEMAKER-PCS-V1-MIB::pcmkPcsV1Cluster PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterName.0 = STRING: \"rhel75-cluster\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterQuorate.0 = INTEGER: 1 PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterNodesNum.0 = INTEGER: 2 PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterNodesNames.0 = STRING: \"rhel75-node1\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterNodesNames.1 = STRING: \"rhel75-node2\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterCorosyncNodesOnlineNum.0 = INTEGER: 2 PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterCorosyncNodesOnlineNames.0 = STRING: \"rhel75-node1\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterCorosyncNodesOnlineNames.1 = STRING: \"rhel75-node2\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterCorosyncNodesOfflineNum.0 = INTEGER: 0 PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterPcmkNodesOnlineNum.0 = INTEGER: 2 PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterPcmkNodesOnlineNames.0 = STRING: \"rhel75-node1\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterPcmkNodesOnlineNames.1 = STRING: \"rhel75-node2\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterPcmkNodesStandbyNum.0 = INTEGER: 0 PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterPcmkNodesOfflineNum.0 = INTEGER: 0 PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesNum.0 = INTEGER: 11 PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesIds.0 = STRING: \"fencing\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesIds.1 = STRING: \"dummy5\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesIds.2 = STRING: \"dummy6\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesIds.3 = STRING: \"dummy7\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesIds.4 = STRING: \"dummy8\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesIds.5 = STRING: \"dummy9\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesIds.6 = STRING: \"dummy1\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesIds.7 = STRING: \"dummy10\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesIds.8 = STRING: \"dummy2\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesIds.9 = STRING: \"dummy3\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesIds.10 = STRING: \"dummy4\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterRunningResourcesNum.0 = INTEGER: 9 PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterRunningResourcesIds.0 = STRING: \"fencing\" 
PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterRunningResourcesIds.1 = STRING: \"dummy7\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterRunningResourcesIds.2 = STRING: \"dummy8\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterRunningResourcesIds.3 = STRING: \"dummy9\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterRunningResourcesIds.4 = STRING: \"dummy1\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterRunningResourcesIds.5 = STRING: \"dummy10\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterRunningResourcesIds.6 = STRING: \"dummy2\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterRunningResourcesIds.7 = STRING: \"dummy3\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterRunningResourcesIds.8 = STRING: \"dummy4\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterStoppedResroucesNum.0 = INTEGER: 1 PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterStoppedResroucesIds.0 = STRING: \"dummy5\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterFailedResourcesNum.0 = INTEGER: 1 PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterFailedResourcesIds.0 = STRING: \"dummy6\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterFailedResourcesIds.0 = No more variables left in this MIB View (It is past the end of the MIB tree)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-snmpandpacemaker-haar
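Besides walking the whole subtree as shown above, individual objects can be fetched; a small sketch using object names taken verbatim from the snmpwalk output above:
snmpget -v 2c -c public localhost PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterName.0
snmpget -v 2c -c public localhost PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterQuorate.0
snmpget -v 2c -c public localhost PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterFailedResourcesNum.0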
function::task_time_string_tid
function::task_time_string_tid Name function::task_time_string_tid - Human readable string of task time usage Synopsis Arguments tid Thread id of the given task Description Returns a human readable string showing the user and system time the given task has used up to now. For example " usr: 0m12.908s, sys: 1m6.851s " .
[ "function task_time_string_tid:string(tid:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-task-time-string-tid
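A minimal usage sketch: print the time usage of a target process every 10 seconds. The process name myapp is illustrative; pidof, stap -x, target(), and timer.s() are standard tools/tapset functions, and for a process the main thread's tid equals its pid, so target() is a usable argument here.
# Report the target process's user/system time every 10 seconds
stap -x "$(pidof -s myapp)" -e 'probe timer.s(10) { printf("%s\n", task_time_string_tid(target())) }'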
5.179. lvm2
5.179. lvm2 5.179.1. RHEA-2012:1574 - lvm2 enhancement update Updated lvm2 packages that add an enhancement are now available for Red Hat Enterprise Linux 6. The lvm2 packages provide support for Logical Volume Management (LVM). Enhancement BZ# 883034 In cases of transient inaccessibility of a PV (Physical Volume), such as with iSCSI or other unreliable transport, LVM required manual action to restore the PV for use even if there was no room for conflict. With this update, the manual action is no longer required if the transiently inaccessible PV had no active metadata areas (MDA). The automatic restore action of a physical volume (PV) from the MISSING state after it becomes reachable again and if it has no active MDA has been added to the lvm2 packages. Users of lvm2 are advised to upgrade to these updated packages, which add this enhancement. 5.179.2. RHBA-2012:1399 - lvm2 bug fix update Updated lvm2 packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The lvm2 packages provide support for Logical Volume Management (LVM). Bug Fixes BZ# 843808 When using a physical volume (PV) that contained ignored metadata areas, an LVM command, such as pvs, could incorrectly display the PV as being an orphan even though it belonged to a volume group (VG). This incorrect behavior was also dependent on the order of processing each PV in the VG. With this update, the processing of PVs in a VG has been fixed to properly account for PVs with ignored metadata areas so that the order of processing is no longer important, and LVM commands now always give the same correct result, regardless of PVs with ignored metadata areas. BZ# 852438 Previously, if the "issue_discards=1" configuration option was used with an LVM command, moving physical volumes using the pvmove command resulted in data loss. This update fixes the bug in pvmove and the data loss no longer occurs in the described scenario. BZ# 852440 When the "--alloc anywhere" command-line option was specified for the lvcreate command, an attempt to create a logical volume failed if "raid4", "raid5", or "raid6" was specified for the "--type" command-line option as well. A patch has been provided to address this bug and lvcreate now succeeds in the described scenario. BZ# 852441 An error in the way RAID 4/5/6 space was calculated was preventing users from being able to increase the size of these logical volumes. This update provides a patch to fix this bug but it comes with two limitations. Firstly, a RAID 4/5/6 logical volume cannot be reduced in size yet. Secondly, users cannot extend a RAID 4/5/6 logical volume with a different stripe count than the original. BZ# 867009 If the "issue_discards=1" configuration option was set in the /etc/lvm/lvm.conf file, it was possible to issue a discard request to a PV that was missing in a VG. Consequently, the dmeventd, lvremove, or vgreduce utilities could terminate unexpectedly with a segmentation fault. This bug has been fixed and discard requests are no longer issued on missing devices. As the discard operation is irreversible, in addition to this fix, a confirmation prompt has been added to the lvremove utility to ask the user before discarding an LV, thus increasing robustness of the discard logic. Users of lvm2 are advised to upgrade to these updated packages, which fix these bugs. 5.179.3. RHBA-2012:0962 - lvm2 bug fix and enhancement update Updated lvm2 packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6.
The lvm2 packages contain support for Logical Volume Management ( LVM ). Bug Fixes BZ# 683270 When mirrors are up-converted it was impossible to add another image (copy) to a mirrored logical volume whose activation was regulated by tags present in the volume_list parameter of lvm.conf . The code has been improved to temporarily copy the mirror's tags to the in-coming image so that it can be properly activated. As a result, high-availability (HA) LVM service relocation now works as expected. BZ# 700128 Previously, the lvremove command could fail with the error message " Can't remove open logical volume " despite the volume itself not being in use anymore. In most cases, such a situation was caused by the udev daemon that still processed events that preceded the removal and it kept the device open while lvremove tried to remove it at the same time. This update adds a retry loop to help to avoid this problem. The removal is tried several times before the command fails completely. BZ# 733522 Previously, if a snapshot with a virtual origin was created in a clustered Volume Group ( VG ), it incorrectly tried to activate on other nodes as well and the command failed with " Error locking on node " error messages. This has been fixed and a snapshot with virtual origin, using --virtualsize , is now properly activated exclusively only (on local node). BZ# 738484 Previously, if clvmd received an invalid request through its socket (for example an incomplete header was sent), the clvmd process could terminate unexpectedly or stay in an infinite loop. Additional checks have been added so that such invalid packets now cause a proper error reply to the client and clvmd no longer crashes in the scenario described. BZ# 739190 The device-mapper daemon ( dmeventd ) is used, for example, for monitoring LVM based mirrors and snapshots. When attempting to create a snapshot using lvm2 , the lvcreate -s command resulted in a dlopen error if dmeventd was upgraded after the last system restart. With this update, dmeventd is now restarted during a package update to fetch new versions of installed libraries to avoid any code divergence that could end up with a symbol lookup failure. BZ# 740290 Restarting clvmd with option -S should preserve exclusive locks on a restarted cluster node. However the option -E , which should pass such exclusive locks, had errors in its implementation. Consequently, exclusive locks were not preserved in a cluster after restart. This update implements proper support for option -E . As a result, after restarting clvmd the locks will preserve a cluster's exclusive state. BZ# 742607 If a device-mapper device was left open by a process, it could not be removed with the dmsetup [--force] remove device_name command. The --force option failed, reporting that the device was busy. Consequently, the underlying block device could not be detached from the system. With this update, dmsetup has a new command wipe_table to wipe the table of the device. Any subsequent I/O sent to the device returns errors and any devices used by the table, that is to say devices to which the I/O is forwarded, are closed. As a result, if a long-running process keeps a device open after it has finished using it, the underlying devices can be released before that process exits. BZ# 760946 Using a prefix or command names on LVM command output (the log/prefix and log/command_names directive in lvm.conf ) caused the lvm2-monitor init script to fail to start monitoring for relevant VGs. 
The init script acquires the list of VGs first by calling the vgs command and then it uses its output for further processing. However, if the prefix or command name directive is used on output, the VG name was not correctly formatted. To solve this, the lvm2-monitor init script now overrides the log/prefix and log/command_names setting so the command's output is always suitable for use in the init script. BZ# 761267 Prior to this update, the lvconvert --merge command did not check if the snapshot in question was invalid before proceeding. Consequently, the operation failed part-way through, leaving an invalid snapshot. This update disallows an invalid snapshot to be merged. In addition, it allows the removal of an invalid snapshot that was to be merged on activation, or that was invalidated while merging (the user will be prompted for confirmation). BZ# 796602 If a Volume Group name is supplied together with the Logical Volume name, lvconvert --splitmirrors fails to strip it off. This leads to an attempt to use a Logical Volume name that is invalid. This release detects and validates any supplied Volume Group name correctly. BZ# 799071 Previously, if pvmove was used on a clustered VG, temporarily activated pvmove devices were improperly activated cluster-wide (that is to say, on all nodes). Consequently, in some situations, such as when using tags or HA-LVM configuration, pvmove failed. This update fixes the problem, pvmove now activates all such devices exclusively if the Logical Volumes to be moved are already exclusively activated. BZ# 807441 Previously, when the vgreduce command was executed with a non-existent VG, it unnecessarily tried to unlock the VG at command exit. However the VG was not locked at that time as it was unlocked as part of the error handling process. Consequently, the vgreduce command failed with the internal error " Attempt to unlock unlocked VG " when it was executed with a non-existent VG. This update improves the code to provide a proper check so that only locked VGs are unlocked at vgreduce command exit. BZ# 816711 Previously, requests for information regarding the health of the log device were processed locally. Consequently, in the event of a device failure that affected the log of a cluster mirror, it was possible for a failure to be ignored and this could cause I/O to the mirror LV to become unresponsive. With this update, the information is requested from the cluster so that log device failures are detected and processed as expected. Enhancements BZ# 464877 Most LVM commands require an accurate view of the LVM metadata stored on the disk devices on the system. With the current LVM design, if this information is not available, LVM must scan all the physical disk devices in the system. This requires a significant amount of I/O operations in systems that have a large number of disks. The purpose of the LV Metadata daemon ( lvmetad ) is to eliminate the need for this scanning by dynamically aggregating metadata information each time the status of a device changes. These events are signaled to lvmetad by udev rules. If lvmetad is not running, LVM performs a scan as it normally would. This feature is provided as a Technology Preview and is disabled by default in Red Hat Enterprise Linux 6.3. To enable it, refer to the use_lvmetad parameter in the /etc/lvm/lvm.conf file, and enable the lvmetad daemon by configuring the lvm2-lvmetad init script. 
BZ# 593119 The expanded RAID support in LVM is now fully supported in Red Hat Enterprise Linux 6.3, with the exception of RAID logical volumes in HA-LVM. LVM now has the capability to create RAID 4/5/6 Logical Volumes and supports a new implementation of mirroring. The MD (software RAID) modules provide the back-end support for these new features. BZ# 637693 When a new LV is defined, anyone who has access to the LV can read any data already present on the LUNs in the extents allocated to the new LV. Users can create thin volumes by using the "lvcreate -T" command to meet the requirement that zeros are returned when attempting to read a block not previously written to. The default behavior of thin volumes is the provisioning of zero data blocks. The size of provisioned blocks is in the range of 64KB to 1GB. The bigger the blocks the longer it takes for the initial provisioning. After first write, the performance should be close to a native linear volume. However, for a clustering environment there is a difference as thin volumes may be only exclusively activated. BZ# 658639 This update greatly reduces the time spent on creating identical data structures, which allows even a very large number of devices (in the thousands) to be activated and deactivated in a matter of seconds. In addition, the number of system calls from device scanning has been reduced, which also gives a 10%-30% speed improvement. BZ# 672314 Some LVM segment types, such as " mirror " , have single machine and cluster-aware variants. Others, such as snapshot and the RAID types, have only single machine variants. When switching the cluster attribute of a Volume Group (VG), the aforementioned segment types must be inactive. This allows for re-loading of the appropriate single machine or cluster variant, or for the necessity of the activation to be exclusive in nature. This update disallows changing cluster attributes of a VG while RAID LVs are active. BZ# 731785 The dmsetup command now supports displaying block device names for any devices listed in the " deps " , " ls " and " info " command output. For the dmsetup " deps " and " ls " command, it is possible to switch among " devno " (major and minor number, the default and the original behavior), " devname " (mapping name for a device-mapper device, block device name otherwise) and " blkdevname " (always display a block device name). For the dmsetup " info " command, it is possible to use the new " blkdevname " and " blkdevs_used " fields. BZ# 736486 Device-mapper allows any character except " / " to be used in a device-mapper name. However, this is in conflict with udev as its character whitelist is restricted to 0-9, A-Z, a-z and #+-.:=@_. Using any black-listed character in the device-mapper name ends up with incorrect /dev entries being created by udev. To solve this problem, the libdevmapper library together with the dmsetup command now supports encoding of udev-blacklisted characters by using the " \xNN " format where NN is the hex value of the character. This format is supported by udev. There are three " mangling " modes in which libdevmapper can operate: " none " (no mangling), " hex " (always mangle any blacklisted character) and " auto " (use detection and mangle only if not mangled yet). The default mode used is " auto " and any libdevmapper user is affected unless this setting is changed by the respective libdevmapper call. 
To support this feature, the dmsetup command has a new --manglename <mangling_mode> option to define the name mangling mode used while processing device-mapper names. The dmsetup info -c -o command has new fields to display: " mangled_name " and " unmangled_name " . There is also a new dmsetup mangle command that renames any existing device-mapper names to their correct form automatically. It is strongly advised to issue this command after an update to correct any existing device-mapper names. BZ# 743640 It is now possible to extend a mirrored logical volume without inducing a synchronization of the new portion. The " --nosync " option to lvextend will cause the initial synchronization to be skipped. This can save time and is acceptable if the user does not intend to read what they have not written. BZ# 746792 LVM mirroring has a variety of options for the bitmap write-intent log: " core " , " disk " , " mirrored " . The cluster log daemon (cmirrord) is not multi-threaded and can handle only one request at a time. When a log is stacked on top of a mirror (which itself contains a 'core' log), it creates a situation that cannot be solved without threading. When the top level mirror issues a "resume", the log daemon attempts to read from the log device to retrieve the log state. However, the log is a mirror which, before issuing the read, attempts to determine the "sync" status of the region of the mirror which is to be read. This sync status request cannot be completed by the daemon because it is blocked on a read I/O to the very mirror requesting the sync status. With this update, the " mirrored " option is not available in the cluster context to prevent this problem from occurring. BZ# 769293 A new LVM configuration file parameter, activation/read_only_volume_list , makes it possible to activate particular volumes always in read-only mode, regardless of the actual permissions on the volumes concerned. This parameter overrides the --permission rw option stored in the metadata. BZ# 771419 In previous versions, when monitoring of multiple snapshots was enabled, dmeventd would log redundant informative messages in the form " Another thread is handling an event. Waiting... " . This needlessly flooded system log files. This behavior has been fixed in this update. BZ# 773482 A new implementation of LVM copy-on-write (cow) snapshots is available in Red Hat Enterprise Linux 6.3 as a Technology Preview. The main advantage of this implementation, compared to the previous implementation of snapshots, is that it allows many virtual devices to be stored on the same data volume. This implementation also provides support for arbitrary depth of recursive snapshots. This feature is for use on a single system. It is not available for multi-system access in cluster environments. For more information, refer to the documentation of the -s or --snapshot option in the lvcreate man page. BZ# 773507 Logical Volumes (LVs) can now be thinly provisioned to manage a storage pool of free space to be allocated to an arbitrary number of devices when needed by applications. This allows creation of devices that can be bound to a thinly provisioned pool for late allocation when an application actually writes to the LV. The thinly-provisioned pool can be expanded dynamically if and when needed for cost-effective allocation of storage space. In Red Hat Enterprise Linux 6.3, this feature is introduced as a Technology Preview. For more information, refer to the lvcreate man page. Note that the device-mapper-persistent-data package is required.
BZ# 796408 LVM now recognizes EMC PowerPath devices (emcpower) and uses them in preference to the devices out of which they are constructed. BZ# 817130 LVM now has two implementations for creating mirrored logical volumes: the " mirror " segment type and the " raid1 " segment type. The " raid1 " segment type contains design improvements over the " mirror " segment type that are useful to its operation with snapshots. As a result, users who employ snapshots of mirrored volumes are encouraged to use the " raid1 " segment type rather than the " mirror " segment type. Users who continue to use the " mirror " segment type as the origin LV for snapshots should plan for the possibility of the following disruptions. When a snapshot is created or resized, it forces I/O through the underlying origin. The operation will not complete until this occurs. If a device failure occurs to a mirrored logical volume (of " mirror " segment type) that is the origin of the snapshot being created or resized, it will delay I/O until it is reconfigured. The mirror cannot be reconfigured until the snapshot operation completes, but the snapshot operation cannot complete unless the mirror releases the I/O. Again, the problem can manifest itself when the mirror suffers a failure simultaneously with a snapshot creation or resize. There is no current solution to this problem beyond converting the mirror from the " mirror " segment type to the " raid1 " segment type. In order to convert an existing mirror from the " mirror " segment type to the " raid1 " segment type, perform the following action: This operation can only be undone using the vgcfgrestore command. With the current version of LVM2 , if the " mirror " segment type is used to create a new mirror LV, a warning message is issued to the user about possible problems and it suggests using the " raid1 " segment type instead. Users of lvm2 should upgrade to these updated packages, which fix these bugs and add these enhancements. 5.179.4. RHBA-2013:1472 - lvm2 bug fix update Updated lvm2 packages that fix one bug are now available for Red Hat Enterprise Linux 6. The lvm2 packages include all of the support for handling read and write operations on physical volumes, creating volume groups from one or more physical volumes and creating one or more logical volumes in volume groups. Bug Fix BZ# 965810 Previously, on certain HP servers using Red Hat Enterprise Linux 6 with the xfs file system, a regression in the code caused the lvm2 utility to ignore the "optimal_io_size" parameter and use a 1MB offset start. Consequently, there was an increase in the disk write operations which caused data misalignment and considerably lowered the performance of the servers. With this update, lvm2 no longer ignores "optimal_io_size" and data misalignment no longer occurs in this scenario. Users of lvm2 are advised to upgrade to these updated packages, which fix this bug.
[ "~]USD lvconvert --type raid1 <VG>/<mirrored LV>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/lvm2
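The BZ# 637693 and BZ# 773507 notes above mention creating thin volumes with lvcreate -T; a brief sketch with illustrative volume group and volume names:
# Create a 10 GiB thin pool in volume group vg00, then a 50 GiB thin volume backed by it
lvcreate -L 10G -T vg00/pool0
lvcreate -V 50G -T vg00/pool0 -n thinvol
lvs -a vg00    # verify the pool and the thin volume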
Chapter 5. OS/JVM certifications
Chapter 5. OS/JVM certifications This release is supported for use with the following operating system and Java Development Kit (JDK) versions:
Operating System (Chipset Architecture): Java Virtual Machine
Red Hat Enterprise Linux 9 (x86_64): Red Hat build of OpenJDK 11, Red Hat build of OpenJDK 17, Oracle JDK 11, Oracle JDK 17
Red Hat Enterprise Linux 8 (x86_64): Red Hat build of OpenJDK 11, Red Hat build of OpenJDK 17, Oracle JDK 11, Oracle JDK 17
Microsoft Windows 2019 Server (x86_64): Red Hat build of OpenJDK 11, Red Hat build of OpenJDK 17, Oracle JDK 11, Oracle JDK 17
Note Red Hat Enterprise Linux 7 and Microsoft Windows 2016 Server are not supported.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_6.0_service_pack_3_release_notes/os_jvm
Chapter 17. Storage
Chapter 17. Storage New kernel subsystem: libnvdimm This update adds libnvdimm , a kernel subsystem responsible for the detection, configuration, and management of Non-Volatile Dual Inline Memory Modules (NVDIMMs). As a result, if NVDIMMs are present in the system, they are exposed through the /dev/pmem* device nodes and can be configured using the ndctl utility. (BZ#1269626) Hardware with NVDIMM support At the time of the Red Hat Enterprise Linux 7.3 release, a number of original equipment manufacturers (OEMs) are in the process of adding support for Non-Volatile Dual Inline Memory Module (NVDIMM) hardware. As these products are introduced in the market, Red Hat will work with these OEMs to test these configurations and, if possible, announce support for them on Red Hat Enterprise Linux 7.3. Since this is a new technology, a specific support statement will be issued for each product and supported configuration. This will be done after successful Red Hat testing, and corresponding documented support by the OEM. The currently supported NVDIMM products are: HPE NVDIMM on HPE ProLiant systems. For specific configurations, see Hewlett Packard Enterprise Company support statements. NVDIMM products and configurations that are not on this list are not supported. The Red Hat Enterprise Linux 7.3 Release Notes will be updated as NVDIMM products are added to the list of supported products. (BZ#1389121) New packages: nvml The nvml packages contain the Non-Volatile Memory Library (NVML), a collection of libraries for using memory-mapped persistence, optimized specifically for persistent memory. (BZ#1274541) SCSI now supports multiple hardware queues The nr_hw_queues field is now present in the Scsi_Host structure, which allows drivers to use the field. (BZ#1308703) The exclusive_pref_bit optional argument has been added to the multipath ALUA prioritizer If the exclusive_pref_bit argument is added to the multipath Asymmetric Logical Unit Access (ALUA) prioritizer, and a path has the Target Port Group Support (TPGS) pref bit set, multipath makes a path group using only that path and assigns the highest priority to the path. Users can now either allow the preferred path to be in a path group with other paths that are equally optimized, which is the default option, or in a path group by itself by adding the exclusive_pref_bit argument. (BZ# 1299652 ) multipathd now supports raw format mode in multipathd formatted output commands The multipathd formatted output commands now offer raw format mode, which removes the headers and additional padding between fields. Support for additional format wildcards has been added as well. Raw format mode makes it easier to collect and parse information about multipath devices, particularly for use in scripting. (BZ# 1299651 ) Improved LVM locking infrastructure lvmlockd is a next-generation locking infrastructure for LVM. It allows LVM to safely manage shared storage from multiple hosts, using either the dlm or sanlock lock managers. sanlock allows lvmlockd to coordinate hosts through storage-based locking, without the need for an entire cluster infrastructure. For more information, see the lvmlockd(8) man page. This feature was originally introduced in Red Hat Enterprise Linux 7.2 as a Technology Preview. In Red Hat Enterprise Linux 7.3, lvmlockd is fully supported. (BZ# 1299977 ) Support for caching thinly-provisioned logical volumes with limitations Red Hat Enterprise Linux 7.3 provides the ability to cache thinly provisioned logical volumes.
This brings caching benefits to all the thin logical volumes associated with a particular thin pool. However, when thin pools are set up in this way, it is not currently possible to grow the thin pool without removing the cache layer first. This also means that thin pool auto-grow features are unavailable. Users should take care to monitor the fullness and consumption rate of their thin pools to avoid running out of space. Refer to the lvmthin(7) man page for information on thinly-provisioned logical volumes and the lvmcache(7) man page for information on LVM cache volumes. (BZ# 1371597 ) device-mapper-persistent-data rebased to version 0.6.2 The device-mapper-persistent-data packages have been upgraded to upstream version 0.6.2, which provides a number of bug fixes and enhancements over the previous version. Notably, the thin_ls tool, which can provide information about thin volumes in a pool, is now available. (BZ# 1315452 ) Support for DIF/DIX (T10 PI) on specified hardware SCSI T10 DIF/DIX is fully supported in Red Hat Enterprise Linux 7.3, provided that the hardware vendor has qualified it and provides full support for the particular HBA and storage array configuration. DIF/DIX is not supported on other configurations, it is not supported for use on the boot device, and it is not supported on virtualized guests. At the current time, the following vendors are known to provide this support. FUJITSU supports DIF and DIX on: EMULEX 16G FC HBA: EMULEX LPe16000/LPe16002, 10.2.254.0 BIOS, 10.4.255.23 FW, with: FUJITSU ETERNUS DX100 S3, DX200 S3, DX500 S3, DX600 S3, DX8100 S3, DX8700 S3, DX8900 S3, DX200F, DX60 S3, AF250, AF650 QLOGIC 16G FC HBA: QLOGIC QLE2670/QLE2672, 3.28 BIOS, 8.00.00 FW, with: FUJITSU ETERNUS DX100 S3, DX200 S3, DX500 S3, DX600 S3, DX8100 S3, DX8700 S3, DX8900 S3, DX200F, DX60 S3 Note that T10 DIX requires a database or some other software that provides generation and verification of checksums on disk blocks. No currently supported Linux file systems have this capability. EMC supports DIF on: EMULEX 8G FC HBA: LPe12000-E and LPe12002-E with firmware 2.01a10 or later, with: EMC VMAX3 Series with Enginuity 5977; EMC Symmetrix VMAX Series with Enginuity 5876.82.57 and later EMULEX 16G FC HBA: LPe16000B-E and LPe16002B-E with firmware 10.0.803.25 or later, with: EMC VMAX3 Series with Enginuity 5977; EMC Symmetrix VMAX Series with Enginuity 5876.82.57 and later QLOGIC 16G FC HBA: QLE2670-E-SP and QLE2672-E-SP, with: EMC VMAX3 Series with Enginuity 5977; EMC Symmetrix VMAX Series with Enginuity 5876.82.57 and later Please refer to the hardware vendor's support information for the latest status. Support for DIF/DIX remains in Technology Preview for other HBAs and storage arrays. (BZ#1379689) iprutils rebased to version 2.4.13 The iprutils packages have been upgraded to upstream version 2.4.13, which provides a number of bug fixes and enhancements over the previous version. Notably, this update adds support for enabling an adapter write cache on 8247-22L and 8247-21L base Serial Attached SCSI (SAS) backplanes to provide significant performance improvements. (BZ#1274367) The multipathd command can now display the multipath data with JSON formatting With this release, multipathd now includes the show maps json command to display the multipath data with JSON formatting. This makes it easier for other programs to parse the multipathd show maps output. (BZ# 1353357 ) Default configuration added for Huawei XSG1 arrays With this release, multipath provides a default configuration for Huawei XSG1 arrays.
(BZ#1333331) Multipath now includes support for Ceph RADOS block devices. RBD devices need special uid handling and their own checker function with the ability to repair devices. With this release, it is now possible to run multipath on top of RADOS block devices. Note, however, that the multipath RBD support should be used only when an RBD image with the exclusive-lock feature enabled is being shared between multiple clients. (BZ# 1348372 ) Support added for PURE FlashArray With this release, multipath has added built-in configuration support for the PURE FlashArray. (BZ# 1300415 ) Default configuration added for the MSA 2040 array With this release, multipath provides a default configuration for the MSA 2040 array. (BZ#1341748) New skip_kpartx configuration option to allow skipping kpartx partition creation The skip_kpartx option has been added to the defaults, devices, and multipaths sections of the multipath.conf file. When this option is set to yes , multipath devices that are configured with skip_kpartx will not have any partition devices created for them. This allows users to create a multipath device without creating partitions, even if the device has a partition table. The default value of this option is no . (BZ# 1311659 ) Multipath's weightedpath prioritizer now supports a wwn keyword The multipath weightedpath prioritizer now supports a wwn keyword. If this is used, the regular expression for matching the device is of the form host_wwnn:host_wwpn:target_wwnn:target_wwpn . These identifiers can either be looked up through sysfs or using the following multipathd show paths format wildcards: %N:%R:%n:%r . The weightedpath prioritizer previously only allowed HBTL and device name regex matching. Neither of these are persistent across reboots, so the weightedpath prioritizer arguments needed to be changed after every boot. This feature provides a way to use the weightedpath prioritizer with persistent device identifiers. (BZ#1297456) New packages: nvme-cli The nvme-cli packages provide the Non-Volatile Memory Express (NVMe) command-line interface to manage and configure NVMe controllers. (BZ#1344730) LVM2 now displays a warning message when autoresize is not configured The thin pool default behavior is not to autoresize the thin pool when the space is going to be exhausted. Exhausting the space can have various negative consequences. When the user is not using autoresize and the thin pool becomes full, a new warning message notifies the user about possible problems so that they can take appropriate actions, such as resize the thin pool, or stop using the thin volume. (BZ# 1189221 ) dmstats now supports mapping of files to dmstats regions The --filemap option of the dmstats command now allows the user to easily configure dmstats regions to track I/O operations to a specified file in the file system. Previously, I/O statistics were only available for a whole device, or a region of a device, which limited administrator insight into I/O performance to a per-file basis. Now, the --filemap option enables the user to inspect file I/O performance using the same tools used for any device-mapper device. (BZ# 1286285 ) LVM no longer applies LV policies on external volumes Previously, LVM disruptively applied its own policy for LVM thin logical volumes (LVs) on external volumes as well, which could result in unexpected behavior. With this update, external users of thin pool can use their own management of external thin volumes, and LVM no longer applies LV policies on such volumes.
(BZ# 1329235 ) The thin pool is now always checked for sufficient space when creating a new thin volume Even when the user does not use autoresize with thin pool monitoring, the thin pool is now always checked for sufficient space when creating a new thin volume. A new thin volumes now cannot be created in the following situations: The thin-pool has reached 100% of the data volume capacity. There is less than 25% of thin pool metadata free space for metadata smaller than 16 MiB. There is less than 4 MiB of free space in metadata. (BZ# 1348336 ) LVM can now set the maximum number of cache pool chunks The new LVM allocation parameter in the allocation section of the lvm.conf file, cache_pool_max_chunks , limits the maximum number of cache pool chunks. When this parameter is undefined or set to 0, the built-in defaults are used. (BZ# 1364244 ) Support for ability to uncouple a cache pool from a logical volume LVM now has the ability to uncouple a cache pool from a logical volume if a device in the cache pool has failed. Previously, this type of failure would require manual intervention and complicated alterations to LVM metadata in order to separate the cache pool from the origin logical volume. To uncouple a logical volume from its cache-pool use the following command: Note the following limitations: The cache logical volume must be inactive (may require a reboot) A writeback cache requires the --force option due to the possibility of abandoning data lost to failure. (BZ# 1131777 ) LVM can now track and display thin snapshot logical volumes that have been removed You can now configure your system to track thin snapshot logical volumes that have been removed by enabling the record_lvs_history metadata option in the lvm.conf configuration file. This allows you to display a full thin snapshot dependency chain that includes logical volumes that have been removed from the original dependency chain and have become historical logical volumes. The full dependency chain, including historical LVs, can be displayed with new lv_full_ancestors and lv_full_descendants reporting fields. For information on configuring and displaying historical logical volumes, see Logical Volume Administration . (BZ# 1240549 )
[ "lvconvert --uncache *vg*/*lv*" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.3_release_notes/new_features_storage
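A short sketch of the new output modes and the dmstats file mapping described above; the wildcard string comes from the text, while the mapped file path is illustrative.
# JSON-formatted map data (BZ#1353357)
multipathd show maps json
# Raw (header-less) formatted output using the path wildcards quoted in the text
multipathd show paths raw format "%N:%R:%n:%r"
# Map a file's extents to dmstats regions and report per-file I/O statistics (BZ#1286285)
dmstats create --filemap /var/lib/libvirt/images/guest1.img
dmstats report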
3.4.2. Sharing a website
3.4.2. Sharing a website It may not be possible to label files with the samba_share_t type, for example, when wanting to share a website in /var/www/html/ . For these cases, use the samba_export_all_ro Boolean to share any file or directory (regardless of the current label), allowing read-only permissions, or the samba_export_all_rw Boolean to share any file or directory (regardless of the current label), allowing read and write permissions. The following example creates a file for a website in /var/www/html/ , and then shares that file through Samba, allowing read and write permissions. This example assumes the httpd , samba , samba-common , samba-client , and wget packages are installed: As the root user, create a /var/www/html/file1.html file. Copy and paste the following content into /var/www/html/file1.html : Run the ls -Z /var/www/html/file1.html command to view the SELinux context of file1.html : file1.html is labeled with the httpd_sys_content_t type. By default, the Apache HTTP Server can access this type, but Samba cannot. Run the service httpd start command as the root user to start the Apache HTTP Server: Change into a directory your user has write access to, and run the wget http://localhost/file1.html command. Unless there are changes to the default configuration, this command succeeds: Edit /etc/samba/smb.conf as the root user. Add the following to the bottom of this file to share the /var/www/html/ directory through Samba: The /var/www/html/ directory is labeled with the httpd_sys_content_t type. By default, Samba cannot access files and directories labeled with the httpd_sys_content_t type, even if Linux permissions allow it. To allow Samba access, run the following command as the root user to enable the samba_export_all_ro Boolean: Do not use the -P option if you do not want the change to persist across reboots. Note that enabling the samba_export_all_ro Boolean allows Samba to access any type. Run service smb start as the root user to start smbd :
[ "<html> <h2>File being shared through the Apache HTTP Server and Samba.</h2> </html>", "~]USD ls -Z /var/www/html/file1.html -rw-r--r--. root root unconfined_u:object_r:httpd_sys_content_t:s0 /var/www/html/file1.html", "~]# service httpd start Starting httpd: [ OK ]", "~]USD wget http://localhost/file1.html Resolving localhost... 127.0.0.1 Connecting to localhost|127.0.0.1|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 84 [text/html] Saving to: `file1.html.1' 100%[=======================>] 84 --.-K/s in 0s `file1.html.1' saved [84/84]", "[website] comment = Sharing a website path = /var/www/html/ public = no writable = no", "~]# setsebool -P samba_export_all_ro on", "~]# service smb start Starting SMB services: [ OK ]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_confined_services/sect-managing_confined_services-configuration_examples-sharing_a_website
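The example above ends after starting smbd; a hedged verification step using smbclient (provided by the samba-client package listed in the prerequisites) might look like this, with username standing in for a valid Samba user:
smbclient //localhost/website -U username -c 'ls'
# file1.html should appear in the listing; write attempts should fail because the share is read-only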
Chapter 13. Volume Snapshots
Chapter 13. Volume Snapshots A volume snapshot is the state of the storage volume in a cluster at a particular point in time. These snapshots help to use storage more efficiently by not having to make a full copy each time and can be used as building blocks for developing an application. You can create multiple snapshots of the same persistent volume claim (PVC). For CephFS, you can create up to 100 snapshots per PVC. For RADOS Block Device (RBD), you can create up to 512 snapshots per PVC. Note You cannot schedule periodic creation of snapshots. 13.1. Creating volume snapshots You can create a volume snapshot either from the Persistent Volume Claim (PVC) page or the Volume Snapshots page. Prerequisites For a consistent snapshot, the PVC should be in Bound state and not be in use. Ensure to stop all IO before taking the snapshot. Note OpenShift Data Foundation only provides crash consistency for a volume snapshot of a PVC if a pod is using it. For application consistency, be sure to first tear down a running pod to ensure consistent snapshots or use any quiesce mechanism provided by the application to ensure it. Procedure From the Persistent Volume Claims page Click Storage Persistent Volume Claims from the OpenShift Web Console. To create a volume snapshot, do one of the following: Beside the desired PVC, click Action menu (...) Create Snapshot . Click on the PVC for which you want to create the snapshot and click Actions Create Snapshot . Enter a Name for the volume snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. From the Volume Snapshots page Click Storage Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, click Create Volume Snapshot . Choose the required Project from the drop-down list. Choose the Persistent Volume Claim from the drop-down list. Enter a Name for the snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. Verification steps Go to the Details page of the PVC and click the Volume Snapshots tab to see the list of volume snapshots. Verify that the new volume snapshot is listed. Click Storage Volume Snapshots from the OpenShift Web Console. Verify that the new volume snapshot is listed. Wait for the volume snapshot to be in Ready state. 13.2. Restoring volume snapshots When you restore a volume snapshot, a new Persistent Volume Claim (PVC) gets created. The restored PVC is independent of the volume snapshot and the parent PVC. You can restore a volume snapshot from either the Persistent Volume Claim page or the Volume Snapshots page. Procedure From the Persistent Volume Claims page You can restore volume snapshot from the Persistent Volume Claims page only if the parent PVC is present. Click Storage Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name with the volume snapshot to restore a volume snapshot as a new PVC. In the Volume Snapshots tab, click the Action menu (...) to the volume snapshot you want to restore. Click Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Select the Access Mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. 
Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with the ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. From the Volume Snapshots page Click Storage Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots tab, click the Action menu (...) next to the volume snapshot you want to restore. Click Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Select the Access Mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with the ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. Verification steps Click Storage Persistent Volume Claims from the OpenShift Web Console and confirm that the new PVC is listed in the Persistent Volume Claims page. Wait for the new PVC to reach Bound state. 13.3. Deleting volume snapshots Prerequisites For deleting a volume snapshot, the volume snapshot class which is used in that particular volume snapshot should be present. Procedure From Persistent Volume Claims page Click Storage Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name which has the volume snapshot that needs to be deleted. In the Volume Snapshots tab, beside the desired volume snapshot, click Action menu (...) Delete Volume Snapshot . From Volume Snapshots page Click Storage Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, beside the desired volume snapshot, click Action menu (...) Delete Volume Snapshot . Verification steps Ensure that the deleted volume snapshot is not present in the Volume Snapshots tab of the PVC details page. Click Storage Volume Snapshots and ensure that the deleted volume snapshot is not listed.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/volume-snapshots_osp
Chapter 3. Red Hat build of OpenJDK features
Chapter 3. Red Hat build of OpenJDK features The latest Red Hat build of OpenJDK 11 release might include new features. Additionally, the latest release might enhance, deprecate, or remove features that originated from previous Red Hat build of OpenJDK 11 releases. Note For all the other changes and security fixes, see OpenJDK 11.0.17 Released . Red Hat build of OpenJDK new features and enhancements Review the following release notes to understand new features and feature enhancements that are included with the Red Hat build of OpenJDK 11.0.17 release: Disabled cpu.shares parameter Before the Red Hat build of OpenJDK 11.0.17 release, Red Hat build of OpenJDK used an incorrect interpretation of the cpu.shares parameter, which belongs to Linux control groups, also known as cgroups . The parameter might cause a Java Virtual Machine (JVM) to use fewer CPUs than available, which can impact the JVM's CPU resources and performance when it operates inside a container. The Red Hat build of OpenJDK 11.0.17 release configures a JVM to no longer use the cpu.shares parameter when determining the number of threads for a thread pool. If you want to revert this configuration, pass the -XX:+UseContainerCpuShares argument on JVM startup. Note The -XX:+UseContainerCpuShares argument is a deprecated feature and might be removed in a future Red Hat build of OpenJDK release. See JDK-8281181 (JDK Bug System). jdk.httpserver.maxConnections system property Red Hat build of OpenJDK 11.0.17 adds a new system property, jdk.httpserver.maxConnections , that fixes a security issue where no connection limits were specified for the HttpServer service, which can cause accepted connections and established connections to remain open indefinitely. You can use the jdk.httpserver.maxConnections system property to change the HttpServer service behavior in the following ways: Set a value of 0 or a negative value, such as -1 , to specify no connection limit for the service. Set a positive value, such as 1 , to cause the service to check any accepted connection against the current count of established connections. If the established connection limit for the service is reached, the service immediately closes the accepted connection. See JDK-8286918 (JDK Bug System). Monitor deserialization of objects with JFR You can now monitor deserialization of objects with the JDK Flight Recorder (JFR). By default, Red Hat build of OpenJDK 11.0.17 disables the jdk.Deserialization event setting for JFR. You can enable this feature by updating the event-name element in your JFR configuration. For example: <?xml version="1.0" encoding="UTF-8"?> <configuration version="2.0" description="test"> <event name="jdk.Deserialization"> <setting name="enabled">true</setting> <setting name="stackTrace">false</setting> </event> </configuration> After you enable JFR and you configure JFR to monitor deserialization events, JFR creates an event whenever a monitored application attempts to deserialize an object. The serialization filter mechanism of JFR can then determine whether to accept or reject a deserialized object from the monitored application. See JDK-8261160 (JDK Bug System). SHA-1 Signed JARs With the Red Hat build of OpenJDK 11.0.17 release, JARs signed with SHA-1 algorithms are restricted by default and treated as if they were unsigned. These restrictions apply to the following algorithms: Algorithms used to digest, sign, and optionally timestamp the JAR. 
Signature and digest algorithms of the certificates in the certificate chain of the code signer and the Timestamp Authority, and any Certificate Revocation Lists (CRLs) or Online Certificate Status Protocol (OCSP) responses that are used to verify if those certificates have been revoked. Additionally, the restrictions apply to signed Java Cryptography Extension (JCE) providers. To reduce the compatibility risk for JARs that have been previously timestamped, the restriction does not apply to any JAR signed with SHA-1 algorithms and timestamped prior to January 01, 2019 . This exception might be removed in a future Red Hat build of OpenJDK release. To determine if your JAR file is impacted by the restriction, you can issue the following command in your CLI: From the output of the command, search for instances of SHA1 , SHA-1 , or disabled . Additionally, search for any warning messages that indicate that the JAR will be treated as unsigned. For example: Consider replacing or re-signing any JARs affected by the new restrictions with stronger algorithms. If your JAR file is impacted by this restriction, you can remove the algorithm and re-sign the file with a stronger algorithm, such as SHA-256 . If you want to remove the restriction on SHA-1 signed JARs for Red Hat build of OpenJDK 11.0.17, and you accept the security risks, you can complete the following actions: Modify the java.security configuration file. Alternatively, you can preserve this file and instead create another file with the required configurations. Remove the SHA1 usage SignedJAR & denyAfter 2019-01-01 entry from the jdk.certpath.disabledAlgorithms security property. Remove the SHA1 denyAfter 2019-01-01 entry from the jdk.jar.disabledAlgorithms security property. Note The value of jdk.certpath.disabledAlgorithms in the java.security file might be overridden by the system security policy on RHEL 8 and 9. The values used by the system security policy can be seen in the file /etc/crypto-policies/back-ends/java.config and disabled by either setting security.useSystemPropertiesFile to false in the java.security file or passing -Djava.security.disableSystemPropertiesFile=true to the JVM. These values are not modified by this release, so the values remain the same for releases of Red Hat build of OpenJDK. For an example of configuring the java.security file, see Overriding java.security properties for JBoss EAP for OpenShift (Red Hat Customer Portal). See JDK-8269039 (JDK Bug System). System properties for controlling the keep-alive behavior of HTTPURLConnection The Red Hat build of OpenJDK 11.0.17 release includes the following new system properties that you can use to control the keep-alive behavior of HTTPURLConnection : http.keepAlive.time.server , which controls connections to servers. http.keepAlive.time.proxy , which controls connections to proxies. Before the Red Hat build of OpenJDK 11.0.17 release, a server or a proxy with an unspecified keep-alive time might cause an idle connection to remain open for a period defined by a hard-coded default value. With Red Hat build of OpenJDK 11.0.17, you can use system properties to change the default value for the keep-alive time. The keep-alive properties control this behavior by changing the HTTP keep-alive time of either a server or proxy, so that Red Hat build of OpenJDK's HTTP protocol handler closes idle connections after a specified number of seconds. 
Before the Red Hat build of OpenJDK 11.0.17 release, the following use cases would lead to specific keep-alive behaviors for HTTPURLConnection : If the server specifies the Connection:keep-alive header and the server's response contains Keep-alive:timeout=N , then the Red Hat build of OpenJDK keep-alive cache on the client uses a timeout of N seconds, where N is an integer value. If the server specifies the Connection:keep-alive header, but the server's response does not contain an entry for Keep-alive:timeout=N , then the Red Hat build of OpenJDK keep-alive cache on the client uses a timeout of 60 seconds for a proxy and 5 seconds for a server. If the server does not specify the Connection:keep-alive header, the Red Hat build of OpenJDK keep-alive cache on the client uses a timeout of 5 seconds for all connections. The Red Hat build of OpenJDK 11.0.17 release maintains the previously described behavior, but you can now specify the timeouts in the second and third listed use cases by using the http.keepAlive.time.server and http.keepAlive.time.proxy properties, rather than having to rely on the default settings. Note If you set the keep-alive property and the server specifies a keep-alive time for the Keep-Alive response header, the HTTP protocol handler uses the time specified by the server. This situation is identical for a proxy. See JDK-8278067 (JDK Bug System). Updated the default PKCS #12 MAC algorithm The Red Hat build of OpenJDK 11.0.17 release updates the default Message Authentication Code (MAC) algorithm for the PKCS #12 keystore to use the SHA-256 cryptographic hash function rather than the SHA-1 function. The SHA-256 function provides a stronger way to secure data. You can view this update in the keystore.pkcs12.macAlgorithm and the keystore.pkcs12.macIterationCount system properties. If you create a keystore with this updated MAC algorithm, and you attempt to use the keystore with a Red Hat build of OpenJDK version earlier than Red Hat build of OpenJDK 11.0.12, you would receive a java.security.NoSuchAlgorithmException message. To use the keystore with a Red Hat build of OpenJDK version that is earlier than Red Hat build of OpenJDK 11.0.12, set the keystore.pkcs12.legacy system property to true to revert the MAC algorithm. See JDK-8267880 (JDK Bug System). Deprecated and removed features Review the following release notes to understand pre-existing features that have been either deprecated or removed in the Red Hat build of OpenJDK 11.0.17 release: Deprecated Kerberos encryption types Red Hat build of OpenJDK 11.0.17 deprecates the des3-hmac-sha1 and rc4-hmac Kerberos encryption types. By default, Red Hat build of OpenJDK 11.0.17 disables these encryption types, but you can enable them by completing the following action: In the krb5.conf configuration file, set the allow_weak_crypto parameter to true . This configuration also enables other weak encryption types, such as des-cbc-crc and des-cbc-md5 . Warning Before you apply this configuration, consider the risks of enabling all of these weak Kerberos encryption types, such as introducing weak encryption algorithms to your Kerberos authentication mechanism. You can disable a subset of weak encryption types by explicitly listing an encryption type in one of the following krb5.conf configuration file settings: default_tkt_enctypes default_tgs_enctypes permitted_enctypes See JDK-8139348 (JDK Bug System).
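The new connection-related properties described above are all set on the java launcher command line. The following invocation is a sketch only: the application JAR name and the timeout and connection-count values are illustrative assumptions, not recommended defaults.

java -Dhttp.keepAlive.time.server=30 \
     -Dhttp.keepAlive.time.proxy=120 \
     -Djdk.httpserver.maxConnections=200 \
     -jar example-app.jar

Similarly, -XX:+UseContainerCpuShares can be appended to the same command line to restore the previous cpu.shares behavior while that option remains available.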
[ "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <configuration version=\"2.0\" description=\"test\"> <event name=\"jdk.Deserialization\"> <setting name=\"enabled\">true</setting> <setting name=\"stackTrace\">false</setting> </event> </configuration>", "jarsigner -verify -verbose -certs", "Signed by \"CN=\"Signer\"\" Digest algorithm: SHA-1 (disabled) Signature algorithm: SHA1withRSA (disabled), 2048-bit key WARNING: The jar will be treated as unsigned, because it is signed with a weak algorithm that is now disabled by the security property: jdk.jar.disabledAlgorithms=MD2, MD5, RSA keySize < 1024, DSA keySize < 1024, SHA1 denyAfter 2019-01-01" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.17/rn-openjdk11017-features_openjdk
Appendix A. Getting More Information
Appendix A. Getting More Information For more information on Software Collection packaging, Red Hat Developers, the Red Hat Software Collections and Red Hat Developer Toolset offerings, and Red Hat Enterprise Linux, see the resources listed below. A.1. Red Hat Developers Overview of Red Hat Software Collections on Red Hat Developers - The Red Hat Developers portal provides a number of tutorials to get you started with developing code using different development technologies. This includes the Node.js, Perl, PHP, Python, and Ruby Software Collections. Red Hat Developer Blog - The Red Hat Developer Blog contains up-to-date information, best practices, opinion, product and program announcements, as well as pointers to sample code and other resources for those who are designing and developing applications based on Red Hat technologies.
null
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/packaging_guide/chap-getting_more_information
13.2. Types
13.2. Types The main permission control method used in SELinux targeted policy to provide advanced process isolation is Type Enforcement. All files and processes are labeled with a type: types define a SELinux domain for processes and a SELinux type for files. SELinux policy rules define how types access each other, whether it be a domain accessing a type, or a domain accessing another domain. Access is only allowed if a specific SELinux policy rule exists that allows it. The following example creates a new file in the /var/www/html/ directory, and shows the file inheriting the httpd_sys_content_t type from its parent directory ( /var/www/html/ ): Enter the following command to view the SELinux context of /var/www/html/ : This shows /var/www/html/ is labeled with the httpd_sys_content_t type. Create a new file by using the touch utility as root: Enter the following command to view the SELinux context: The ls -Z command shows file1 labeled with the httpd_sys_content_t type. SELinux allows httpd to read files labeled with this type, but not write to them, even if Linux permissions allow write access. SELinux policy defines what types a process running in the httpd_t domain (where httpd runs) can read and write to. This helps prevent processes from accessing files intended for use by another process. For example, httpd can access files labeled with the httpd_sys_content_t type (intended for the Apache HTTP Server), but by default, cannot access files labeled with the samba_share_t type (intended for Samba). Also, files in user home directories are labeled with the user_home_t type: by default, this prevents httpd from reading or writing to files in user home directories. The following lists some of the types used with httpd . Different types allow you to configure flexible access: httpd_sys_content_t Use this type for static web content, such as .html files used by a static website. Files labeled with this type are accessible (read only) to httpd and scripts executed by httpd . By default, files and directories labeled with this type cannot be written to or modified by httpd or other processes. Note that by default, files created in or copied into the /var/www/html/ directory are labeled with the httpd_sys_content_t type. httpd_sys_script_exec_t Use this type for scripts you want httpd to execute. This type is commonly used for Common Gateway Interface (CGI) scripts in the /var/www/cgi-bin/ directory. By default, SELinux policy prevents httpd from executing CGI scripts. To allow this, label the scripts with the httpd_sys_script_exec_t type and enable the httpd_enable_cgi Boolean. Scripts labeled with httpd_sys_script_exec_t run in the httpd_sys_script_t domain when executed by httpd . The httpd_sys_script_t domain has access to other system domains, such as postgresql_t and mysqld_t . httpd_sys_rw_content_t Files labeled with this type can be written to by scripts labeled with the httpd_sys_script_exec_t type, but cannot be modified by scripts labeled with any other type. You must use the httpd_sys_rw_content_t type to label files that will be read from and written to by scripts labeled with the httpd_sys_script_exec_t type. httpd_sys_ra_content_t Files labeled with this type can be appended to by scripts labeled with the httpd_sys_script_exec_t type, but cannot be modified by scripts labeled with any other type. You must use the httpd_sys_ra_content_t type to label files that will be read from and appended to by scripts labeled with the httpd_sys_script_exec_t type. 
httpd_unconfined_script_exec_t Scripts labeled with this type run without SELinux protection. Only use this type for complex scripts, after exhausting all other options. It is better to use this type instead of disabling SELinux protection for httpd , or for the entire system. Note To see more of the available types for httpd, enter the following command: Procedure 13.1. Changing the SELinux Context The type for files and directories can be changed with the chcon command. Changes made with chcon do not survive a file system relabel or the restorecon command. SELinux policy controls whether users are able to modify the SELinux context for any given file. The following example demonstrates creating a new directory and an index.html file for use by httpd , and labeling that file and directory to allow httpd access to them: Use the mkdir utility as root to create a top-level directory structure to store files to be used by httpd : Files and directories that do not match a pattern in file-context configuration may be labeled with the default_t type. This type is inaccessible to confined services: Enter the following command as root to change the type of the my/ directory and subdirectories, to a type accessible to httpd . Now, files created under /my/website/ inherit the httpd_sys_content_t type, rather than the default_t type, and are therefore accessible to httpd: See Section 4.7.1, "Temporary Changes: chcon" for further information about chcon . Use the semanage fcontext command ( semanage is provided by the policycoreutils-python package) to make label changes that survive a relabel and the restorecon command. This command adds changes to file-context configuration. Then, run restorecon , which reads file-context configuration, to apply the label change. The following example demonstrates creating a new directory and an index.html file for use by httpd , and persistently changing the label of that directory and file to allow httpd access to them: Use the mkdir utility as root to create a top-level directory structure to store files to be used by httpd : Enter the following command as root to add the label change to file-context configuration: The "/my(/.*)?" expression means the label change applies to the my/ directory and all files and directories under it. Use the touch utility as root to create a new file: Enter the following command as root to apply the label changes ( restorecon reads file-context configuration, which was modified by the semanage command in step 2): See Section 4.7.2, "Persistent Changes: semanage fcontext" for further information on semanage.
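As noted above, scripts labeled with the httpd_sys_script_exec_t type only run once the httpd_enable_cgi Boolean is enabled. A typical way to enable it persistently and confirm the change is shown below; this is a general illustration rather than part of the procedures above:

~]# setsebool -P httpd_enable_cgi on
~]# getsebool httpd_enable_cgi
httpd_enable_cgi --> on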
[ "~]USD ls -dZ /var/www/html drwxr-xr-x root root system_u:object_r:httpd_sys_content_t:s0 /var/www/html", "~]# touch /var/www/html/file1", "~]USD ls -Z /var/www/html/file1 -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 /var/www/html/file1", "~]USD grep httpd /etc/selinux/targeted/contexts/files/file_contexts", "~]# mkdir -p /my/website", "~]USD ls -dZ /my drwxr-xr-x root root unconfined_u:object_r:default_t:s0 /my", "~]# chcon -R -t httpd_sys_content_t /my/ ~]# touch /my/website/index.html ~]# ls -Z /my/website/index.html -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 /my/website/index.html", "~]# mkdir -p /my/website", "~]# semanage fcontext -a -t httpd_sys_content_t \"/my(/.*)?\"", "~]# touch /my/website/index.html", "~]# restorecon -R -v /my/ restorecon reset /my context unconfined_u:object_r:default_t:s0->system_u:object_r:httpd_sys_content_t:s0 restorecon reset /my/website context unconfined_u:object_r:default_t:s0->system_u:object_r:httpd_sys_content_t:s0 restorecon reset /my/website/index.html context unconfined_u:object_r:default_t:s0->system_u:object_r:httpd_sys_content_t:s0" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-Managing_Confined_Services-The_Apache_HTTP_Server-Types
Chapter 19. Mail Servers
Chapter 19. Mail Servers Red Hat Enterprise Linux offers many advanced applications to serve and access email. This chapter describes modern email protocols in use today, and some of the programs designed to send and receive email. 19.1. Email Protocols Today, email is delivered using a client/server architecture. An email message is created using a mail client program. This program then sends the message to a server. The server then forwards the message to the recipient's email server, where the message is then supplied to the recipient's email client. To enable this process, a variety of standard network protocols allow different machines, often running different operating systems and using different email programs, to send and receive email. The following protocols discussed are the most commonly used in the transfer of email. 19.1.1. Mail Transport Protocols Mail delivery from a client application to the server, and from an originating server to the destination server, is handled by the Simple Mail Transfer Protocol ( SMTP ). 19.1.1.1. SMTP The primary purpose of SMTP is to transfer email between mail servers. However, it is critical for email clients as well. To send email, the client sends the message to an outgoing mail server, which in turn contacts the destination mail server for delivery. For this reason, it is necessary to specify an SMTP server when configuring an email client. Under Red Hat Enterprise Linux, a user can configure an SMTP server on the local machine to handle mail delivery. However, it is also possible to configure remote SMTP servers for outgoing mail. One important point to make about the SMTP protocol is that it does not require authentication. This allows anyone on the Internet to send email to anyone else or even to large groups of people. It is this characteristic of SMTP that makes junk email or spam possible. Imposing relay restrictions limits random users on the Internet from sending email through your SMTP server, to other servers on the internet. Servers that do not impose such restrictions are called open relay servers. Red Hat Enterprise Linux provides the Postfix and Sendmail SMTP programs. 19.1.2. Mail Access Protocols There are two primary protocols used by email client applications to retrieve email from mail servers: the Post Office Protocol ( POP ) and the Internet Message Access Protocol ( IMAP ). 19.1.2.1. POP The default POP server under Red Hat Enterprise Linux is Dovecot and is provided by the dovecot package. Note In order to use Dovecot , first ensure the dovecot package is installed on your system by running, as root : For more information on installing packages with Yum, see Section 8.2.4, "Installing Packages" . When using a POP server, email messages are downloaded by email client applications. By default, most POP email clients are automatically configured to delete the message on the email server after it has been successfully transferred, however this setting usually can be changed. POP is fully compatible with important Internet messaging standards, such as Multipurpose Internet Mail Extensions ( MIME ), which allow for email attachments. POP works best for users who have one system on which to read email. It also works well for users who do not have a persistent connection to the Internet or the network containing the mail server. Unfortunately for those with slow network connections, POP requires client programs upon authentication to download the entire content of each message. 
This can take a long time if any messages have large attachments. The most current version of the standard POP protocol is POP3 . There are, however, a variety of lesser-used POP protocol variants: APOP - POP3 with MD5 authentication. An encoded hash of the user's password is sent from the email client to the server rather than sending an unencrypted password. KPOP - POP3 with Kerberos authentication. RPOP - POP3 with RPOP authentication. This uses a per-user ID, similar to a password, to authenticate POP requests. However, this ID is not encrypted, so RPOP is no more secure than standard POP . For added security, it is possible to use Secure Socket Layer ( SSL ) encryption for client authentication and data transfer sessions. This can be enabled by using the pop3s service, or by using the stunnel application. For more information on securing email communication, see Section 19.5.1, "Securing Communication" . 19.1.2.2. IMAP The default IMAP server under Red Hat Enterprise Linux is Dovecot and is provided by the dovecot package. See Section 19.1.2.1, "POP" for information on how to install Dovecot . When using an IMAP mail server, email messages remain on the server where users can read or delete them. IMAP also allows client applications to create, rename, or delete mail directories on the server to organize and store email. IMAP is particularly useful for users who access their email using multiple machines. The protocol is also convenient for users connecting to the mail server via a slow connection, because only the email header information is downloaded for messages until opened, saving bandwidth. The user also has the ability to delete messages without viewing or downloading them. For convenience, IMAP client applications are capable of caching copies of messages locally, so the user can browse previously read messages when not directly connected to the IMAP server. IMAP , like POP , is fully compatible with important Internet messaging standards, such as MIME, which allow for email attachments. For added security, it is possible to use SSL encryption for client authentication and data transfer sessions. This can be enabled by using the imaps service, or by using the stunnel program. For more information on securing email communication, see Section 19.5.1, "Securing Communication" . Other free, as well as commercial, IMAP clients and servers are available, many of which extend the IMAP protocol and provide additional functionality. 19.1.2.3. Dovecot The imap-login and pop3-login processes which implement the IMAP and POP3 protocols are spawned by the master dovecot daemon included in the dovecot package. The use of IMAP and POP is configured through the /etc/dovecot/dovecot.conf configuration file; by default dovecot runs IMAP and POP3 together with their secure versions using SSL . To configure dovecot to use POP , complete the following steps: Edit the /etc/dovecot/dovecot.conf configuration file to make sure the protocols variable is uncommented (remove the hash sign ( # ) at the beginning of the line) and contains the pop3 argument. For example: When the protocols variable is left commented out, dovecot will use the default values as described above. Make the change operational for the current session by running the following command: Make the change operational after the reboot by running the command: Note Please note that dovecot only reports that it started the IMAP server, but also starts the POP3 server. 
Unlike SMTP , both IMAP and POP3 require connecting clients to authenticate using a user name and password. By default, passwords for both protocols are passed over the network unencrypted. To configure SSL on dovecot : Edit the /etc/dovecot/conf.d/10-ssl.conf configuration to make sure the ssl_cipher_list variable is uncommented, and append :!SSLv3 : These values ensure that dovecot avoids SSL versions 2 and also 3, which are both known to be insecure. This is due to the vulnerability described in POODLE: SSLv3 vulnerability (CVE-2014-3566) . See Resolution for POODLE SSL 3.0 vulnerability (CVE-2014-3566) in Postfix and Dovecot for details. Edit the /etc/pki/dovecot/dovecot-openssl.cnf configuration file as you prefer. However, in a typical installation, this file does not require modification. Rename, move or delete the files /etc/pki/dovecot/certs/dovecot.pem and /etc/pki/dovecot/private/dovecot.pem . Execute the /usr/libexec/dovecot/mkcert.sh script which creates the dovecot self signed certificates. These certificates are copied in the /etc/pki/dovecot/certs and /etc/pki/dovecot/private directories. To implement the changes, restart dovecot : More details on dovecot can be found online at http://www.dovecot.org .
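After restarting dovecot with SSL enabled, the secured POP3 and IMAP listeners can be checked from a client with the openssl utility. The host name below is a placeholder for your mail server; ports 995 and 993 are the standard pop3s and imaps ports:

openssl s_client -connect mail.example.com:995
openssl s_client -connect mail.example.com:993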
[ "~]# yum install dovecot", "protocols = imap pop3 lmtp", "~]# service dovecot restart", "~]# chkconfig dovecot on", "ssl_cipher_list = ALL:!LOW:!SSLv2:!EXP:!aNULL:!SSLv3", "~]# service dovecot restart" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/ch-Mail_Servers
Chapter 9. application
Chapter 9. application This chapter describes the commands under the application command. 9.1. application credential create Create new application credential Usage: Table 9.1. Positional arguments Value Summary <name> Name of the application credential Table 9.2. Command arguments Value Summary -h, --help Show this help message and exit --secret <secret> Secret to use for authentication (if not provided, one will be generated) --role <role> Roles to authorize (name or id) (repeat option to set multiple values) --expiration <expiration> Sets an expiration date for the application credential, format of YYYY-mm-ddTHH:MM:SS (if not provided, the application credential will not expire) --description <description> Application credential description --unrestricted Enable application credential to create and delete other application credentials and trusts (this is potentially dangerous behavior and is disabled by default) --restricted Prohibit application credential from creating and deleting other application credentials and trusts (this is the default behavior) --access-rules <access-rules> Either a string or file path containing a json- formatted list of access rules, each containing a request method, path, and service, for example [{"method": "GET", "path": "/v2.1/servers", "service": "compute"}] Table 9.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 9.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 9.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 9.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 9.2. application credential delete Delete application credentials(s) Usage: Table 9.7. Positional arguments Value Summary <application-credential> Application credentials(s) to delete (name or id) Table 9.8. Command arguments Value Summary -h, --help Show this help message and exit 9.3. application credential list List application credentials Usage: Table 9.9. Command arguments Value Summary -h, --help Show this help message and exit --user <user> User whose application credentials to list (name or ID) --user-domain <user-domain> Domain the user belongs to (name or id). this can be used in case collisions between user names exist. Table 9.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 9.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 9.12. 
JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 9.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 9.4. application credential show Display application credential details Usage: Table 9.14. Positional arguments Value Summary <application-credential> Application credential to display (name or id) Table 9.15. Command arguments Value Summary -h, --help Show this help message and exit Table 9.16. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 9.17. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 9.18. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 9.19. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
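The options listed in the tables above combine in the usual way on a single command line. The example below is a sketch only; the credential name, role, expiration, and description are placeholder values chosen for illustration:

openstack application credential create --role member --expiration 2025-12-31T23:59:59 --description "CI pipeline credential" ci-pipeline
openstack application credential show ci-pipeline
openstack application credential delete ci-pipeline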
[ "openstack application credential create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--secret <secret>] [--role <role>] [--expiration <expiration>] [--description <description>] [--unrestricted] [--restricted] [--access-rules <access-rules>] <name>", "openstack application credential delete [-h] <application-credential> [<application-credential> ...]", "openstack application credential list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--user <user>] [--user-domain <user-domain>]", "openstack application credential show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <application-credential>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/application
Chapter 2. Performance Monitoring Tools
Chapter 2. Performance Monitoring Tools This chapter briefly describes some of the performance monitoring and configuration tools available for Red Hat Enterprise Linux 7. Where possible, this chapter directs readers to further information about how to use the tool, and examples of real life situations that the tool can be used to resolve. The following knowledge base article provides a more comprehensive list of performance monitoring tools suitable for use with Red Hat Enterprise Linux: https://access.redhat.com/site/solutions/173863 . 2.1. /proc The /proc "file system" is a directory that contains a hierarchy of files that represent the current state of the Linux kernel. It allows users and applications to see the kernel's view of the system. The /proc directory also contains information about system hardware and any currently running processes. Most files in the /proc file system are read-only, but some files (primarily those in /proc/sys) can be manipulated by users and applications to communicate configuration changes to the kernel. For further information about viewing and editing files in the /proc directory, refer to the Red Hat Enterprise Linux 7 System Administrator's Guide .
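For example, hardware and memory information can be read directly from files under /proc, and tunables under /proc/sys can be changed either by writing to the corresponding file as root or with the sysctl utility. The vm.swappiness value below is purely illustrative:

cat /proc/cpuinfo
cat /proc/meminfo
sysctl -w vm.swappiness=10
echo 10 > /proc/sys/vm/swappiness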
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/chap-red_hat_enterprise_linux-performance_tuning_guide-performance_monitoring_tools
Appendix A. Fence Device Parameters
Appendix A. Fence Device Parameters This appendix provides tables with parameter descriptions of fence devices. You can configure the parameters with luci , by using the ccs command, or by editing the etc/cluster/cluster.conf file. For a comprehensive list and description of the fence device parameters for each fence agent, see the man page for that agent. Note The Name parameter for a fence device specifies an arbitrary name for the device that will be used by Red Hat High Availability Add-On. This is not the same as the DNS name for the device. Note Certain fence devices have an optional Password Script parameter. The Password Script parameter allows you to specify that a fence-device password is supplied from a script rather than from the Password parameter. Using the Password Script parameter supersedes the Password parameter, allowing passwords to not be visible in the cluster configuration file ( /etc/cluster/cluster.conf ). Table A.1, "Fence Device Summary" lists the fence devices, the fence device agents associated with the fence devices, and provides a reference to the table documenting the parameters for the fence devices. Table A.1. Fence Device Summary Fence Device Fence Agent Reference to Parameter Description APC Power Switch (telnet/SSH) fence_apc Table A.2, "APC Power Switch (telnet/SSH)" APC Power Switch over SNMP fence_apc_snmp Table A.3, "APC Power Switch over SNMP" Brocade Fabric Switch fence_brocade Table A.4, "Brocade Fabric Switch" Cisco MDS fence_cisco_mds Table A.5, "Cisco MDS" Cisco UCS fence_cisco_ucs Table A.6, "Cisco UCS" Dell DRAC 5 fence_drac5 Table A.7, "Dell DRAC 5" Dell iDRAC fence_idrac Table A.25, "IPMI (Intelligent Platform Management Interface) LAN, Dell iDrac, IBM Integrated Management Module, HPiLO3, HPiLO4" Eaton Network Power Switch (SNMP Interface) fence_eaton_snmp Table A.8, "Eaton Network Power Controller (SNMP Interface) (Red Hat Enterprise Linux 6.4 and later)" Egenera BladeFrame fence_egenera Table A.9, "Egenera BladeFrame" Emerson Network Power Switch (SNMP Interface) fence_emerson Table A.10, "Emerson Network Power Switch (SNMP interface) (Red Hat Enterprise LInux 6.7 and later) " ePowerSwitch fence_eps Table A.11, "ePowerSwitch" Fence virt (Serial/VMChannel Mode) fence_virt Table A.12, "Fence virt (Serial/VMChannel Mode)" Fence virt (fence_xvm/Multicast Mode) fence_xvm Table A.13, "Fence virt (fence_xvm/Multicast Mode) " Fujitsu Siemens Remoteview Service Board (RSB) fence_rsb Table A.14, "Fujitsu Siemens Remoteview Service Board (RSB)" HP BladeSystem fence_hpblade Table A.15, "HP BladeSystem (Red Hat Enterprise Linux 6.4 and later)" HP iLO Device fence_ilo Table A.16, "HP iLO and HP iLO2" HP iLO over SSH Device fence_ilo_ssh Table A.17, "HP iLO over SSH, HP iLO3 over SSH, HPiLO4 over SSH (Red Hat Enterprise Linux 6.7 and later)" HP iLO2 Device fence_ilo2 Table A.16, "HP iLO and HP iLO2" HP iLO3 Device fence_ilo3 Table A.25, "IPMI (Intelligent Platform Management Interface) LAN, Dell iDrac, IBM Integrated Management Module, HPiLO3, HPiLO4" HP iLO3 over SSH Device fence_ilo3_ssh Table A.17, "HP iLO over SSH, HP iLO3 over SSH, HPiLO4 over SSH (Red Hat Enterprise Linux 6.7 and later)" HP iLO4 Device fence_ilo4 Table A.25, "IPMI (Intelligent Platform Management Interface) LAN, Dell iDrac, IBM Integrated Management Module, HPiLO3, HPiLO4" HP iLO4 over SSH Device fence_ilo4_ssh Table A.17, "HP iLO over SSH, HP iLO3 over SSH, HPiLO4 over SSH (Red Hat Enterprise Linux 6.7 and later)" HP iLO MP fence_ilo_mp Table A.18, "HP iLO MP" HP Moonshot iLO 
fence_ilo_moonshot Table A.19, "HP Moonshot iLO (Red Hat Enterprise Linux 6.7 and later)" IBM BladeCenter fence_bladecenter Table A.20, "IBM BladeCenter" IBM BladeCenter SNMP fence_ibmblade Table A.21, "IBM BladeCenter SNMP" IBM Integrated Management Module fence_imm Table A.25, "IPMI (Intelligent Platform Management Interface) LAN, Dell iDrac, IBM Integrated Management Module, HPiLO3, HPiLO4" IBM iPDU fence_ipdu Table A.22, "IBM iPDU (Red Hat Enterprise Linux 6.4 and later)" IF MIB fence_ifmib Table A.23, "IF MIB" Intel Modular fence_intelmodular Table A.24, "Intel Modular" IPMI (Intelligent Platform Management Interface) Lan fence_ipmilan Table A.25, "IPMI (Intelligent Platform Management Interface) LAN, Dell iDrac, IBM Integrated Management Module, HPiLO3, HPiLO4" Fence kdump fence_kdump Table A.26, "Fence kdump" Multipath Persistent Reservation Fencing fence_mpath Table A.27, "Multipath Persistent Reservation Fencing (Red Hat Enterprise Linux 6.7 and later)" RHEV-M fencing fence_rhevm Table A.28, "RHEV-M fencing (RHEL 6.2 and later against RHEV 3.0 and later)" SCSI Fencing fence_scsi Table A.29, "SCSI Reservation Fencing" VMware Fencing (SOAP Interface) fence_vmware_soap Table A.30, "VMware Fencing (SOAP Interface) (Red Hat Enterprise Linux 6.2 and later)" WTI Power Switch fence_wti Table A.31, "WTI Power Switch" Table A.2, "APC Power Switch (telnet/SSH)" lists the fence device parameters used by fence_apc , the fence agent for APC over telnet/SSH. Table A.2. APC Power Switch (telnet/SSH) luci Field cluster.conf Attribute Description Name name A name for the APC device connected to the cluster into which the fence daemon logs by means of telnet/ssh. IP Address or Hostname ipaddr The IP address or host name assigned to the device. IP Port (optional) ipport The TCP port to use to connect to the device. The default port is 23, or 22 if Use SSH is selected. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Port port The port. Switch (optional) switch The switch number for the APC switch that connects to the node when you have multiple daisy-chained switches. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Use SSH secure Indicates that system will use SSH to access the device. When using SSH, you must specify either a password, a password script, or an identity file. SSH Options ssh_options SSH options to use. The default value is -1 -c blowfish . Path to SSH Identity File identity_file The identity file for SSH. 
Table A.3, "APC Power Switch over SNMP" lists the fence device parameters used by fence_apc_snmp , the fence agent for APC that logs into the SNP device by means of the SNMP protocol. Table A.3. APC Power Switch over SNMP luci Field cluster.conf Attribute Description Name name A name for the APC device connected to the cluster into which the fence daemon logs by means of the SNMP protocol. IP Address or Hostname ipaddr The IP address or host name assigned to the device. UDP/TCP port udpport The UDP/TCP port to use for connection with the device; the default value is 161. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. SNMP Version snmp_version The SNMP version to use (1, 2c, 3); the default value is 1. SNMP Community community The SNMP community string; the default value is private . SNMP Security Level snmp_sec_level The SNMP security level (noAuthNoPriv, authNoPriv, authPriv). SNMP Authentication Protocol snmp_auth_prot The SNMP authentication protocol (MD5, SHA). SNMP Privacy Protocol snmp_priv_prot The SNMP privacy protocol (DES, AES). SNMP Privacy Protocol Password snmp_priv_passwd The SNMP privacy protocol password. SNMP Privacy Protocol Script snmp_priv_passwd_script The script that supplies a password for SNMP privacy protocol. Using this supersedes the SNMP privacy protocol password parameter. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Port (Outlet) Number port The port. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Table A.4, "Brocade Fabric Switch" lists the fence device parameters used by fence_brocade , the fence agent for Brocade FC switches. Table A.4. Brocade Fabric Switch luci Field cluster.conf Attribute Description Name name A name for the Brocade device connected to the cluster. IP Address or Hostname ipaddr The IP address assigned to the device. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Force IP Family inet4_only, inet6_only Force the agent to use IPv4 or IPv6 addresses only Force Command Prompt cmd_prompt The command prompt to use. The default value is '\USD'. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. 
The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Port port The switch outlet number. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Use SSH secure Indicates that the system will use SSH to access the device. When using SSH, you must specify either a password, a password script, or an identity file. SSH Options ssh_options SSH options to use. The default value is -1 -c blowfish . Path to SSH Identity File identity_file The identity file for SSH. Unfencing unfence section of the cluster configuration file When enabled, this ensures that a fenced node is not re-enabled until the node has been rebooted. This is necessary for non-power fence methods (that is, SAN/storage fencing). When you configure a device that requires unfencing, the cluster must first be stopped and the full configuration including devices and unfencing must be added before the cluster is started. For more information about unfencing a node, see the fence_node (8) man page. For information about configuring unfencing in the cluster configuration file, see Section 8.3, "Configuring Fencing" . For information about configuring unfencing with the ccs command, see Section 6.7.2, "Configuring a Single Storage-Based Fence Device for a Node" . Table A.5, "Cisco MDS" lists the fence device parameters used by fence_cisco_mds , the fence agent for Cisco MDS. Table A.5. Cisco MDS luci Field cluster.conf Attribute Description Name name A name for the Cisco MDS 9000 series device with SNMP enabled. IP Address or Hostname ipaddr The IP address or host name assigned to the device. UDP/TCP port (optional) udpport The UDP/TCP port to use for connection with the device; the default value is 161. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. SNMP Version snmp_version The SNMP version to use (1, 2c, 3). SNMP Community community The SNMP community string. SNMP Security Level snmp_sec_level The SNMP security level (noAuthNoPriv, authNoPriv, authPriv). SNMP Authentication Protocol snmp_auth_prot The SNMP authentication protocol (MD5, SHA). SNMP Privacy Protocol snmp_priv_prot The SNMP privacy protocol (DES, AES). SNMP Privacy Protocol Password snmp_priv_passwd The SNMP privacy protocol password. SNMP Privacy Protocol Script snmp_priv_passwd_script The script that supplies a password for SNMP privacy protocol. Using this supersedes the SNMP privacy protocol password parameter. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. 
Port (Outlet) Number port The port. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Table A.6, "Cisco UCS" lists the fence device parameters used by fence_cisco_ucs , the fence agent for Cisco UCS. Table A.6. Cisco UCS luci Field cluster.conf Attribute Description Name name A name for the Cisco UCS device. IP Address or Hostname ipaddr The IP address or host name assigned to the device. IP port (optional) ipport The TCP port to use to connect to the device. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Use SSL ssl Use SSL connections to communicate with the device. Sub-Organization suborg Additional path needed to access suborganization. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Port (Outlet) Number port Name of virtual machine. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Table A.7, "Dell DRAC 5" lists the fence device parameters used by fence_drac5 , the fence agent for Dell DRAC 5. Table A.7. Dell DRAC 5 luci Field cluster.conf Attribute Description Name name The name assigned to the DRAC. IP Address or Hostname ipaddr The IP address or host name assigned to the DRAC. IP Port (optional) ipport The TCP port to use to connect to the device. Login login The login name used to access the DRAC. Password passwd The password used to authenticate the connection to the DRAC. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Use SSH secure Indicates that the system will use SSH to access the device. When using SSH, you must specify either a password, a password script, or an identity file. SSH Options ssh_options SSH options to use. The default value is -1 -c blowfish . Path to SSH Identity File identity_file The identity file for SSH. Module Name module_name (optional) The module name for the DRAC when you have multiple DRAC modules. Force Command Prompt cmd_prompt The command prompt to use. The default value is '\USD'. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Delay (seconds) delay The number of seconds to wait before fencing is started. The default value is 0. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. 
Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Table A.8, "Eaton Network Power Controller (SNMP Interface) (Red Hat Enterprise Linux 6.4 and later)" lists the fence device parameters used by fence_eaton_snmp , the fence agent for the Eaton over SNMP network power switch. Table A.8. Eaton Network Power Controller (SNMP Interface) (Red Hat Enterprise Linux 6.4 and later) luci Field cluster.conf Attribute Description Name name A name for the Eaton network power switch connected to the cluster. IP Address or Hostname ipaddr The IP address or host name assigned to the device. UDP/TCP Port (optional) udpport The UDP/TCP port to use for connection with the device; the default value is 161. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. SNMP Version snmp_version The SNMP version to use (1, 2c, 3); the default value is 1. SNMP Community community The SNMP community string; the default value is private . SNMP Security Level snmp_sec_level The SNMP security level (noAuthNoPriv, authNoPriv, authPriv). SNMP Authentication Protocol snmp_auth_prot The SNMP authentication protocol (MD5, SHA). SNMP Privacy Protocol snmp_priv_prot The SNMP privacy protocol (DES, AES). SNMP Privacy Protocol Password snmp_priv_passwd The SNMP privacy protocol password. SNMP Privacy Protocol Script snmp_priv_passwd_script The script that supplies a password for SNMP privacy protocol. Using this supersedes the SNMP privacy protocol password parameter. Power wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Port (Outlet) Number port Physical plug number or name of virtual machine. This parameter is always required. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Table A.9, "Egenera BladeFrame" lists the fence device parameters used by fence_egenera , the fence agent for the Egenera BladeFrame. Table A.9. Egenera BladeFrame luci Field cluster.conf Attribute Description Name name A name for the Egenera BladeFrame device connected to the cluster. CServer cserver The host name (and optionally the user name in the form of username@hostname ) assigned to the device. Refer to the fence_egenera (8) man page for more information. ESH Path (optional) esh The path to the esh command on the cserver (default is /opt/panmgr/bin/esh) Username user The login name. The default value is root . lpan lpan The logical process area network (LPAN) of the device. pserver pserver The processing blade (pserver) name of the device. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. 
Unfencing unfence section of the cluster configuration file When enabled, this ensures that a fenced node is not re-enabled until the node has been rebooted. This is necessary for non-power fence methods (that is, SAN/storage fencing). When you configure a device that requires unfencing, the cluster must first be stopped and the full configuration including devices and unfencing must be added before the cluster is started. For more information about unfencing a node, see the fence_node (8) man page. For information about configuring unfencing in the cluster configuration file, see Section 8.3, "Configuring Fencing" . For information about configuring unfencing with the ccs command, see Section 6.7.2, "Configuring a Single Storage-Based Fence Device for a Node" . Table A.10, "Emerson Network Power Switch (SNMP interface) (Red Hat Enterprise LInux 6.7 and later) " lists the fence device parameters used by fence_emerson , the fence agent for Emerson over SNMP. Table A.10. Emerson Network Power Switch (SNMP interface) (Red Hat Enterprise LInux 6.7 and later) luci Field cluster.conf Attribute Description Name name A name for the Emerson Network Power Switch device connected to the cluster. IP Address or Hostname ipaddr The IP address or host name assigned to the device. UDP/TCP Port (optional) udpport UDP/TCP port to use for connections with the device; the default value is 161. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. SNMP Version snmp_version The SNMP version to use (1, 2c, 3); the default value is 1. SNMP Community community The SNMP community string. SNMP Security Level snmp_sec_level The SNMP security level (noAuthNoPriv, authNoPriv, authPriv). SNMP Authentication Protocol snmp_auth_prot The SNMP authentication protocol (MD5, SHA). SNMP Privacy Protocol snmp_priv_prot The SNMP privacy protocol (DES, AES). SNMP privacy protocol password snmp_priv_passwd The SNMP Privacy Protocol Password. SNMP Privacy Protocol Script snmp_priv_passwd_script The script that supplies a password for SNMP privacy protocol. Using this supersedes the SNMP privacy protocol password parameter. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Port (Outlet) Number port Physical plug number or name of virtual machine. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Table A.11, "ePowerSwitch" lists the fence device parameters used by fence_eps , the fence agent for ePowerSwitch. Table A.11. ePowerSwitch luci Field cluster.conf Attribute Description Name name A name for the ePowerSwitch device connected to the cluster. IP Address or Hostname ipaddr The IP address or host name assigned to the device. 
Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Name of Hidden Page hidden_page The name of the hidden page for the device. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Port (Outlet) Number port Physical plug number or name of virtual machine. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Table A.12, "Fence virt (Serial/VMChannel Mode)" lists the fence device parameters used by fence_virt , the fence agent for virtual machines using VM channel or serial mode . Table A.12. Fence virt (Serial/VMChannel Mode) luci Field cluster.conf Attribute Description Name name A name for the Fence virt fence device. Serial Device serial_device On the host, the serial device must be mapped in each domain's configuration file. For more information, see the fence_virt man page. If this field is specified, it causes the fence_virt fencing agent to operate in serial mode. Not specifying a value causes the fence_virt fencing agent to operate in VM channel mode. Serial Parameters serial_params The serial parameters. The default is 115200, 8N1. VM Channel IP Address channel_address The channel IP. The default value is 10.0.2.179. Timeout (optional) timeout Fencing timeout, in seconds. The default value is 30. Domain port (formerly domain ) Virtual machine (domain UUID or name) to fence. ipport The channel port. The default value is 1229, which is the value used when configuring this fence device with luci . Delay (optional) delay Fencing delay, in seconds. The fence agent will wait the specified number of seconds before attempting a fencing operation. The default value is 0. Table A.13, "Fence virt (fence_xvm/Multicast Mode) " lists the fence device parameters used by fence_xvm , the fence agent for virtual machines using multicast. Table A.13. Fence virt (fence_xvm/Multicast Mode) luci Field cluster.conf Attribute Description Name name A name for the Fence virt fence device. Timeout (optional) timeout Fencing timeout, in seconds. The default value is 30. Domain port (formerly domain ) Virtual machine (domain UUID or name) to fence. Delay (optional) delay Fencing delay, in seconds. The fence agent will wait the specified number of seconds before attempting a fencing operation. The default value is 0. Table A.14, "Fujitsu Siemens Remoteview Service Board (RSB)" lists the fence device parameters used by fence_rsb , the fence agent for Fujitsu-Siemens RSB. Table A.14. Fujitsu Siemens Remoteview Service Board (RSB) luci Field cluster.conf Attribute Description Name name A name for the RSB to use as a fence device. IP Address or Hostname ipaddr The host name assigned to the device. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Use SSH secure Indicates that the system will use SSH to access the device. When using SSH, you must specify either a password, a password script, or an identity file. Path to SSH Identity File identity_file The Identity file for SSH. 
TCP Port ipport The port number on which the telnet service listens. The default value is 3172. Force Command Prompt cmd_prompt The command prompt to use. The default value is '\USD'. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Delay (seconds) delay The number of seconds to wait before fencing is started. The default value is 0. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Table A.15, "HP BladeSystem (Red Hat Enterprise Linux 6.4 and later)" lists the fence device parameters used by fence_hpblade , the fence agent for HP BladeSystem. Table A.15. HP BladeSystem (Red Hat Enterprise Linux 6.4 and later) luci Field cluster.conf Attribute Description Name name The name assigned to the HP Bladesystem device connected to the cluster. IP Address or Hostname ipaddr The IP address or host name assigned to the HP BladeSystem device. IP Port (optional) ipport The TCP port to use to connect to the device. Login login The login name used to access the HP BladeSystem device. This parameter is required. Password passwd The password used to authenticate the connection to the fence device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Force Command Prompt cmd_prompt The command prompt to use. The default value is '\USD'. Missing port returns OFF instead of failure missing_as_off Missing port returns OFF instead of failure. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Use SSH secure Indicates that the system will use SSH to access the device. When using SSH, you must specify either a password, a password script, or an identity file. SSH Options ssh_options SSH options to use. The default value is -1 -c blowfish . Path to SSH Identity File identity_file The identity file for SSH. The fence agents for HP iLO devices ( fence_ilo and) HP iLO2 devices ( fence_ilo2 ) share the same implementation. Table A.16, "HP iLO and HP iLO2" lists the fence device parameters used by these agents. Table A.16. HP iLO and HP iLO2 luci Field cluster.conf Attribute Description Name name A name for the server with HP iLO support. IP Address or Hostname ipaddr The IP address or host name assigned to the device. IP Port (optional) ipport TCP port to use for connection with the device. The default value is 443. Login login The login name used to access the device. 
Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Delay (seconds) delay The number of seconds to wait before fencing is started. The default value is 0. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. The fence agents for HP iLO devices over SSH ( fence_ilo_ssh ), HP iLO3 devices over SSH ( fence_ilo3_ssh ), and HP iLO4 devices over SSH ( fence_ilo4_ssh ) share the same implementation. Table A.17, "HP iLO over SSH, HP iLO3 over SSH, HPiLO4 over SSH (Red Hat Enterprise Linux 6.7 and later)" lists the fence device parameters used by these agents. Table A.17. HP iLO over SSH, HP iLO3 over SSH, HPiLO4 over SSH (Red Hat Enterprise Linux 6.7 and later) luci Field cluster.conf Attribute Description Name name A name for the server with HP iLO support. IP Address or Hostname ipaddr The IP address or host name assigned to the device. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Use SSH secure Indicates that the system will use SSH to access the device. When using SSH, you must specify either a password, a password script, or an identity file. Path to SSH Identity File identity_file The Identity file for SSH. TCP Port ipport UDP/TCP port to use for connections with the device; the default value is 23. Force Command Prompt cmd_prompt The command prompt to use. The default value is 'MP>', 'hpiLO->'. Method to Fence method The method to fence: on/off or cycle Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Delay (seconds) delay The number of seconds to wait before fencing is started. The default value is 0. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Table A.18, "HP iLO MP" lists the fence device parameters used by fence_ilo_mp , the fence agent for HP iLO MP devices. Table A.18. HP iLO MP luci Field cluster.conf Attribute Description Name name A name for the server with HP iLO support. IP Address or Hostname ipaddr The IP address or host name assigned to the device. IP Port (optional) ipport TCP port to use for connection with the device. 
Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Use SSH secure Indicates that the system will use SSH to access the device. When using SSH, you must specify either a password, a password script, or an identity file. SSH Options ssh_options SSH options to use. The default value is -1 -c blowfish . Path to SSH Identity File identity_file The Identity file for SSH. Force Command Prompt cmd_prompt The command prompt to use. The default value is 'MP>', 'hpiLO->'. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Delay (seconds) delay The number of seconds to wait before fencing is started. The default value is 0. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Table A.19, "HP Moonshot iLO (Red Hat Enterprise Linux 6.7 and later)" lists the fence device parameters used by fence_ilo_moonshot , the fence agent for HP Moonshot iLO devices. Table A.19. HP Moonshot iLO (Red Hat Enterprise Linux 6.7 and later) luci Field cluster.conf Attribute Description Name name A name for the server with HP iLO support. IP Address or Hostname ipaddr The IP address or host name assigned to the device. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Use SSH secure Indicates that the system will use SSH to access the device. When using SSH, you must specify either a password, a password script, or an identity file. Path to SSH Identity File identity_file The Identity file for SSH. TCP Port ipport UDP/TCP port to use for connections with the device; the default value is 22. Force Command Prompt cmd_prompt The command prompt to use. The default value is 'MP>', 'hpiLO->'. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Delay (seconds) delay The number of seconds to wait before fencing is started. The default value is 0. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Table A.20, "IBM BladeCenter" lists the fence device parameters used by fence_bladecenter , the fence agent for IBM BladeCenter. Table A.20. 
IBM BladeCenter luci Field cluster.conf Attribute Description Name name A name for the IBM BladeCenter device connected to the cluster. IP Address or Hostname ipaddr The IP address or host name assigned to the device. IP port (optional) ipport TCP port to use for connection with the device. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Use SSH secure Indicates that system will use SSH to access the device. When using SSH, you must specify either a password, a password script, or an identity file. SSH Options ssh_options SSH options to use. The default value is -1 -c blowfish . Path to SSH Identity File identity_file The identity file for SSH. Table A.21, "IBM BladeCenter SNMP" lists the fence device parameters used by fence_ibmblade , the fence agent for IBM BladeCenter over SNMP. Table A.21. IBM BladeCenter SNMP luci Field cluster.conf Attribute Description Name name A name for the IBM BladeCenter SNMP device connected to the cluster. IP Address or Hostname ipaddr The IP address or host name assigned to the device. UDP/TCP Port (optional) udpport UDP/TCP port to use for connections with the device; the default value is 161. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. SNMP Version snmp_version The SNMP version to use (1, 2c, 3); the default value is 1. SNMP Community community The SNMP community string. SNMP Security Level snmp_sec_level The SNMP security level (noAuthNoPriv, authNoPriv, authPriv). SNMP Authentication Protocol snmp_auth_prot The SNMP authentication protocol (MD5, SHA). SNMP Privacy Protocol snmp_priv_prot The SNMP privacy protocol (DES, AES). SNMP privacy protocol password snmp_priv_passwd The SNMP Privacy Protocol Password. SNMP Privacy Protocol Script snmp_priv_passwd_script The script that supplies a password for SNMP privacy protocol. Using this supersedes the SNMP privacy protocol password parameter. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. 
Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Port (Outlet) Number port Physical plug number or name of virtual machine. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Table A.22, "IBM iPDU (Red Hat Enterprise Linux 6.4 and later)" lists the fence device parameters used by fence_ipdu , the fence agent for iPDU over SNMP devices. Table A.22. IBM iPDU (Red Hat Enterprise Linux 6.4 and later) luci Field cluster.conf Attribute Description Name name A name for the IBM iPDU device connected to the cluster into which the fence daemon logs by means of the SNMP protocol. IP Address or Hostname ipaddr The IP address or host name assigned to the device. UDP/TCP Port udpport The UDP/TCP port to use for connection with the device; the default value is 161. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. SNMP Version snmp_version The SNMP version to use (1, 2c, 3); the default value is 1. SNMP Community community The SNMP community string; the default value is private . SNMP Security Level snmp_sec_level The SNMP security level (noAuthNoPriv, authNoPriv, authPriv). SNMP Authentication Protocol snmp_auth_prot The SNMP Authentication Protocol (MD5, SHA). SNMP Privacy Protocol snmp_priv_prot The SNMP privacy protocol (DES, AES). SNMP Privacy Protocol Password snmp_priv_passwd The SNMP privacy protocol password. SNMP Privacy Protocol Script snmp_priv_passwd_script The script that supplies a password for SNMP privacy protocol. Using this supersedes the SNMP privacy protocol password parameter. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Port (Outlet) Number port Physical plug number or name of virtual machine. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Table A.23, "IF MIB" lists the fence device parameters used by fence_ifmib , the fence agent for IF-MIB devices. Table A.23. IF MIB luci Field cluster.conf Attribute Description Name name A name for the IF MIB device connected to the cluster. IP Address or Hostname ipaddr The IP address or host name assigned to the device. UDP/TCP Port (optional) udpport The UDP/TCP port to use for connection with the device; the default value is 161. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. SNMP Version snmp_version The SNMP version to use (1, 2c, 3); the default value is 1. 
SNMP Community community The SNMP community string. SNMP Security Level snmp_sec_level The SNMP security level (noAuthNoPriv, authNoPriv, authPriv). SNMP Authentication Protocol snmp_auth_prot The SNMP authentication protocol (MD5, SHA). SNMP Privacy Protocol snmp_priv_prot The SNMP privacy protocol (DES, AES). SNMP Privacy Protocol Password snmp_priv_passwd The SNMP privacy protocol password. SNMP Privacy Protocol Script snmp_priv_passwd_script The script that supplies a password for SNMP privacy protocol. Using this supersedes the SNMP privacy protocol password parameter. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Port (Outlet) Number port Physical plug number or name of virtual machine. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Table A.24, "Intel Modular" lists the fence device parameters used by fence_intelmodular , the fence agent for Intel Modular. Table A.24. Intel Modular luci Field cluster.conf Attribute Description Name name A name for the Intel Modular device connected to the cluster. IP Address or Hostname ipaddr The IP address or host name assigned to the device. UDP/TCP Port (optional) udpport The UDP/TCP port to use for connection with the device; the default value is 161. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. SNMP Version snmp_version The SNMP version to use (1, 2c, 3); the default value is 1. SNMP Community community The SNMP community string; the default value is private . SNMP Security Level snmp_sec_level The SNMP security level (noAuthNoPriv, authNoPriv, authPriv). SNMP Authentication Protocol snmp_auth_prot The SNMP authentication protocol (MD5, SHA). SNMP Privacy Protocol snmp_priv_prot The SNMP privacy protocol (DES, AES). SNMP Privacy Protocol Password snmp_priv_passwd The SNMP privacy protocol password. SNMP Privacy Protocol Script snmp_priv_passwd_script The script that supplies a password for SNMP privacy protocol. Using this supersedes the SNMP privacy protocol password parameter. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. 
Port (Outlet) Number port Physical plug number or name of virtual machine. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. The fence agents for IPMI over LAN ( fence_ipmilan ,) Dell iDRAC ( fence_idrac ), IBM Integrated Management Module ( fence_imm ), HP iLO3 devices ( fence_ilo3 ), and HP iLO4 devices ( fence_ilo4 ) share the same implementation. Table A.25, "IPMI (Intelligent Platform Management Interface) LAN, Dell iDrac, IBM Integrated Management Module, HPiLO3, HPiLO4" lists the fence device parameters used by these agents. Table A.25. IPMI (Intelligent Platform Management Interface) LAN, Dell iDrac, IBM Integrated Management Module, HPiLO3, HPiLO4 luci Field cluster.conf Attribute Description Name name A name for the fence device connected to the cluster. IP Address or Hostname ipaddr The IP address or host name assigned to the device. Login login The login name of a user capable of issuing power on/off commands to the given port. Password passwd The password used to authenticate the connection to the port. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Authentication Type auth Authentication type: none , password , or MD5 . Use Lanplus lanplus True or 1 . If blank, then value is False . It is recommended that you enable Lanplus to improve the security of your connection if your hardware supports it. Ciphersuite to use cipher The remote server authentication, integrity, and encryption algorithms to use for IPMIv2 lanplus connections. Privilege level privlvl The privilege level on the device. IPMI Operation Timeout timeout Timeout in seconds for IPMI operation. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. The default value is 2 seconds for fence_ipmilan , fence_idrac , fence_imm , and fence_ilo4 . The default value is 4 seconds for fence_ilo3 . Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Method to Fence method The method to fence: on/off or cycle Table A.26, "Fence kdump" lists the fence device parameters used by fence_kdump , the fence agent for kdump crash recovery service. Note that fence_kdump is not a replacement for traditional fencing methods; The fence_kdump agent can detect only that a node has entered the kdump crash recovery service. This allows the kdump crash recovery service to complete without being preempted by traditional power fencing methods. Table A.26. Fence kdump luci Field cluster.conf Attribute Description Name name A name for the fence_kdump device. IP Family family IP network family. The default value is auto . IP Port (optional) ipport IP port number that the fence_kdump agent will use to listen for messages. The default value is 7410. Operation Timeout (seconds) (optional) timeout Number of seconds to wait for message from failed node. 
Node name nodename Name or IP address of the node to be fenced. Table A.27, "Multipath Persistent Reservation Fencing (Red Hat Enterprise Linux 6.7 and later)" lists the fence device parameters used by fence_mpath , the fence agent for multipath persistent reservation fencing. Table A.27. Multipath Persistent Reservation Fencing (Red Hat Enterprise Linux 6.7 and later) luci Field cluster.conf Attribute Description Name name A name for the fence_mpath device. Devices (Comma delimited list) devices Comma-separated list of devices to use for the current operation. Each device must support SCSI-3 persistent reservations. Use sudo when calling third-party software sudo Use sudo (without password) when calling 3rd party software. Path to sudo binary (optional) sudo_path Path to sudo binary (default value is /usr/bin/sudo . Path to mpathpersist binary (optional) mpathpersist_path Path to mpathpersist binary (default value is /sbin/mpathpersist . Path to a directory where the fence agent can store information (optional) store_path Path to directory where fence agent can store information (default value is /var/run/cluster . Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Unfencing unfence section of the cluster configuration file When enabled, this ensures that a fenced node is not re-enabled until the node has been rebooted. This is necessary for non-power fence methods. When you configure a device that requires unfencing, the cluster must first be stopped and the full configuration including devices and unfencing must be added before the cluster is started. For more information about unfencing a node, see the fence_node (8) man page. For information about configuring unfencing in the cluster configuration file, see Section 8.3, "Configuring Fencing" . For information about configuring unfencing with the ccs command, see Section 6.7.2, "Configuring a Single Storage-Based Fence Device for a Node" . Key for current action key Key to use for the current operation. This key should be unique to a node and written in /etc/multipath.conf . For the "on" action, the key specifies the key use to register the local node. For the "off" action, this key specifies the key to be removed from the device(s). This parameter is always required. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Table A.28, "RHEV-M fencing (RHEL 6.2 and later against RHEV 3.0 and later)" lists the fence device parameters used by fence_rhevm , the fence agent for RHEV-M fencing. Table A.28. RHEV-M fencing (RHEL 6.2 and later against RHEV 3.0 and later) luci Field cluster.conf Attribute Description Name name Name of the RHEV-M fencing device. IP Address or Hostname ipaddr The IP address or host name assigned to the device. IP Port (optional) ipport The TCP port to use for connection with the device. Login login The login name used to access the device. 
Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Use SSL ssl Use SSL connections to communicate with the device. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Port (Outlet) Number port Physical plug number or name of virtual machine. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Table A.29, "SCSI Reservation Fencing" lists the fence device parameters used by fence_scsi , the fence agent for SCSI persistent reservations. Note Use of SCSI persistent reservations as a fence method is supported with the following limitations: When using SCSI fencing, all nodes in the cluster must register with the same devices so that each node can remove another node's registration key from all the devices it is registered with. Devices used for the cluster volumes should be a complete LUN, not partitions. SCSI persistent reservations work on an entire LUN, meaning that access is controlled to each LUN, not individual partitions. It is recommended that devices used for the cluster volumes be specified in the format /dev/disk/by-id/ xxx where possible. Devices specified in this format are consistent among all nodes and will point to the same disk, unlike devices specified in a format such as /dev/sda which can point to different disks from machine to machine and across reboots. Table A.29. SCSI Reservation Fencing luci Field cluster.conf Attribute Description Name name A name for the SCSI fence device. Unfencing unfence section of the cluster configuration file When enabled, this ensures that a fenced node is not re-enabled until the node has been rebooted. This is necessary for non-power fence methods (that is, SAN/storage fencing). When you configure a device that requires unfencing, the cluster must first be stopped and the full configuration including devices and unfencing must be added before the cluster is started. For more information about unfencing a node, see the fence_node (8) man page. For information about configuring unfencing in the cluster configuration file, see Section 8.3, "Configuring Fencing" . For information about configuring unfencing with the ccs command, see Section 6.7.2, "Configuring a Single Storage-Based Fence Device for a Node" . Node name nodename The node name is used to generate the key value used for the current operation. Key for current action key (overrides node name) Key to use for the current operation. This key should be unique to a node. For the "on" action, the key specifies the key use to register the local node. For the "off" action,this key specifies the key to be removed from the device(s). Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. 
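To make the mapping between these luci fields and cluster.conf attributes more concrete, the following is a minimal sketch of configuring SCSI reservation fencing with the ccs command (see Section 6.7.2, "Configuring a Single Storage-Based Fence Device for a Node"). The device name scsifence and the node name node1.example.com are assumptions chosen for illustration, not values taken from the table above.

# Define the SCSI fence device (the device name is an assumed example)
ccs -h node1.example.com --addfencedev scsifence agent=fence_scsi
# Add a fence method for the node and attach the device instance to it
ccs -h node1.example.com --addmethod SCSI node1.example.com
ccs -h node1.example.com --addfenceinst scsifence node1.example.com SCSI
# SCSI reservation fencing is storage-based, so unfencing must also be configured
ccs -h node1.example.com --addunfence scsifence node1.example.com action=on

Remember that the cluster must be stopped before a device that requires unfencing is added, as noted in the Unfencing row above.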
Table A.30, "VMware Fencing (SOAP Interface) (Red Hat Enterprise Linux 6.2 and later)" lists the fence device parameters used by fence_vmware_soap , the fence agent for VMware over SOAP API. Table A.30. VMware Fencing (SOAP Interface) (Red Hat Enterprise Linux 6.2 and later) luci Field cluster.conf Attribute Description Name name Name of the virtual machine fencing device. IP Address or Hostname ipaddr The IP address or host name assigned to the device. IP Port (optional) ipport The TCP port to use for connection with the device. The default port is 80, or 443 if Use SSL is selected. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. VM name port Name of virtual machine in inventory path format (for example, /datacenter/vm/Discovered_virtual_machine/myMachine). VM UUID uuid The UUID of the virtual machine to fence. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Use SSL ssl Use SSL connections to communicate with the device. Table A.31, "WTI Power Switch" lists the fence device parameters used by fence_wti , the fence agent for the WTI network power switch. Table A.31. WTI Power Switch luci Field cluster.conf Attribute Description Name name A name for the WTI power switch connected to the cluster. IP Address or Hostname ipaddr The IP or host name address assigned to the device. IP Port (optional) ipport The TCP port to use to connect to the device. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Force command prompt cmd_prompt The command prompt to use. The default value is ['RSM>', '>MPC', 'IPS>', 'TPS>', 'NBB>', 'NPS>', 'VMR>'] Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Use SSH secure Indicates that system will use SSH to access the device. When using SSH, you must specify either a password, a password script, or an identity file. 
SSH Options ssh_options SSH options to use. The default value is -1 -c blowfish . Path to SSH Identity File identity_file The identity file for SSH. Port port Physical plug number or name of virtual machine.
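As a closing illustration for this appendix, the sketch below shows how a power-based fence device might be supplied to the ccs command, using the IPMI over LAN parameters from Table A.25. The device name ipmi-n1, the address 10.0.0.101, and the credentials are placeholders for illustration only; substitute values appropriate to your own management interface.

# Register an IPMI LAN fence device with Lanplus enabled (all values are placeholders)
ccs -h node1.example.com --addfencedev ipmi-n1 agent=fence_ipmilan ipaddr=10.0.0.101 login=admin passwd=secret lanplus=1 power_wait=4
# Create a fence method for the node and attach the device instance to it
ccs -h node1.example.com --addmethod IPMI node1.example.com
ccs -h node1.example.com --addfenceinst ipmi-n1 node1.example.com IPMI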
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/ap-fence-device-param-ca
2.4. Configuring Cascading Chaining
2.4. Configuring Cascading Chaining The database link can be configured to point to another database link, creating a cascading chaining operation. A cascading chain occurs any time more than one hop is required to access all of the data in a directory tree. 2.4.1. Overview of Cascading Chaining Cascading chaining occurs when more than one hop is required for the directory to process a client application's request. The client application sends a modify request to Server 1. Server 1 contains a database link that forwards the operation to Server 2, which contains another database link. The database link on Server 2 forwards the operation to Server 3, which contains the data the client wants to modify in a database. Two hops are required to access the piece of data the client wants to modify. During a normal operation request, a client binds to the server, and then any ACIs applying to that client are evaluated. With cascading chaining, the client bind request is evaluated on Server 1, but the ACIs applying to the client are evaluated only after the request has been chained to the destination server, in the above example Server 2. For example, on Server A, a directory tree is split: The root suffix dc=example,dc=com and the ou=people and ou=groups sub-suffixes are stored on Server A. The ou=europe,dc=example,dc=com and ou=groups suffixes are stored on Server B, and the ou=people branch of the ou=europe,dc=example,dc=com suffix is stored on Server C. With cascading configured on servers A, B, and C, a client request targeted at the ou=people,ou=europe,dc=example,dc=com entry would be routed by the directory as follows: First, the client binds to Server A and chains to Server B using Database Link 1. Then Server B chains to the target database on Server C using Database Link 2 to access the data in the ou=people,ou=europe,dc=example,dc=com branch. Because at least two hops are required for the directory to service the client request, this is considered a cascading chain. 2.4.2. Configuring Cascading Chaining Using the Command Line This section provides an example of how to configure cascading chaining with three servers: Server 1, Server 2, and Server 3. Configuration Steps on Server 1 Create the suffix c=africa,ou=people,dc=example,dc=com : Create the DBLink1 database link: Enable loop detection: Configuration Steps on Server 2 Create a proxy administrative user on Server 2 for Server 1 to use for proxy authorization: Important For security reasons, do not use the cn=Directory Manager account. Create the suffix ou=Zanzibar,c=africa,ou=people,dc=example,dc=com : Create the DBLink2 database link: Because the DBLink2 link is the intermediate database link in the cascading chaining configuration, enable the ACL check to allow the server to check whether it should allow the client and proxy administrative user access to the database link. Enable loop detection: Enable the proxy authorization control: Add the local proxy authorization ACI: Add an ACI that enables users in c=us,ou=people,dc=example,dc=com on Server 1 who have a uid attribute set, to perform any type of operation on the ou=Zanzibar,c=africa,ou=people,dc=example,dc=com suffix tree on Server 3: If there are users on Server 3 under a different suffix that will require additional rights on Server 3, it is necessary to add additional client ACIs on Server 2. 
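For instance, a hypothetical additional client ACI on Server 2 for users under an assumed ou=sales,dc=example,dc=com branch (this suffix is not part of the example tree and is shown only as a sketch) could be added as follows:

ldapmodify -D "cn=Directory Manager" -W -p 389 -h server2.example.com -x
dn: c=africa,ou=people,dc=example,dc=com
changetype: modify
add: aci
aci:(targetattr="*")(target="ou=Zanzibar,c=africa,ou=people,dc=example,dc=com") (version 3.0; acl "Additional client authorization for database links"; allow (all) userdn = "ldap:///uid=*,ou=sales,dc=example,dc=com";)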
Configuration Steps on Server 3 Create a proxy administrative user on server 3 for server 2 to use for proxy authorization: Important For security reasons, do not use the cn=Directory Manager account. Add the local proxy authorization ACI: Add an ACI that enables users in c=us,ou=people,dc=example,dc=com on server 1 who have a uid attribute set, to perform any type of operation on the ou=Zanzibar,c=africa,ou=people,dc=example,dc=com suffix tree on server 3: If there are users on server 3 under a different suffix that will require additional rights on server 3, it is necessary to add additional client ACIs on server 2. The cascading chaining configuration is now set up. This cascading configuration enables a user to bind to server 1 and modify information in the ou=Zanzibar,c=africa,ou=people,dc=example,dc=com branch on server 3. Depending on your security needs, it can be necessary to provide more detailed access control. 2.4.3. Detecting Loops An LDAP control included with Directory Server prevents loops. When first attempting to chain, the server sets this control to the maximum number of hops, or chaining connections, allowed. Each subsequent server decrements the count. If a server receives a count of 0 , it determines that a loop has been detected and notifies the client application. To use the control, add the 1.3.6.1.4.1.1466.29539.12 OID. For details about adding an LDAP control, see Section 2.3.2.2, "Chaining LDAP Controls" . If the control is not present in the configuration file of each database link, loop detection will not be implemented. The number of hops allowed is defined using the nsHopLimit parameter. By default, the parameter is set to 10 . For example, to set the hop limit of the example chain to 5 :
[ "dsconf -D \"cn=Directory Manager\" ldap://server1.example.com backend create --parent-suffix=\"ou=people,dc=example,dc=com\" --suffix=\"c=africa,ou=people,dc=example,dc=com\"", "dsconf -D \"cn=Directory Manager\" ldap://server1.example.com chaining link-create --suffix=\"c=africa,ou=people,dc=example,dc=com\" --server-url=\"ldap://africa.example.com:389/\" --bind-mech=\"\" --bind-dn=\"cn=server1 proxy admin,cn=config\" --bind-pw=\"password\" --check-aci=\"off\" \"DBLink1\"", "dsconf -D \"cn=Directory Manager\" ldap://server1.example.com chaining config-set --add-control=\"1.3.6.1.4.1.1466.29539.12\"", "ldapadd -D \"cn=Directory Manager\" -W -p 389 -h server2.example.com -x dn: cn=server1 proxy admin,cn=config objectclass: person objectclass: organizationalPerson objectclass: inetOrgPerson cn: server1 proxy admin sn: server1 proxy admin userPassword: password description: Entry for use by database links", "dsconf -D \"cn=Directory Manager\" ldap://server2.example.com backend create --parent-suffix=\"c=africaou=people,dc=example,dc=com\" --suffix=\"ou=Zanzibar,c=africa,ou=people,dc=example,dc=com\"", "dsconf -D \"cn=Directory Manager\" ldap://server2.example.com chaining link-create --suffix=\"ou=Zanzibar,c=africa,ou=people,dc=example,dc=com\" --server-url=\"ldap://zanz.africa.example.com:389/\" --bind-mech=\"\" --bind-dn=\"server2 proxy admin,cn=config\" --bind-pw=\"password\" --check-aci=\"on \"DBLink2\"", "dsconf -D \"cn=Directory Manager\" ldap://server2.example.com chaining config-set --add-control=\"1.3.6.1.4.1.1466.29539.12\"", "dsconf -D \"cn=Directory Manager\" ldap://server2.example.com chaining config-set --add-control=\"2.16.840.1.113730.3.4.12\"", "ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h server2.example.com -x dn: c=africa,ou=people,dc=example,dc=com changetype: modify add: aci aci:(targetattr=\"*\")(target=\"lou=Zanzibar,c=africa,ou=people,dc=example,dc=com\") (version 3.0; acl \"Proxied authorization for database links\"; allow (proxy) userdn = \"ldap:///cn=server1 proxy admin,cn=config\";)", "ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h server2.example.com -x dn: c=africa,ou=people,dc=example,dc=com changetype: modify add: aci aci:(targetattr=\"*\")(target=\"ou=Zanzibar,c=africa,ou=people,dc=example,dc=com\") (version 3.0; acl \"Client authorization for database links\"; allow (all) userdn = \"ldap:///uid=*,c=us,ou=people,dc=example,dc=com\";)", "ldapadd -D \"cn=Directory Manager\" -W -p 389 -h server3.example.com -x dn: cn=server2 proxy admin,cn=config objectclass: person objectclass: organizationalPerson objectclass: inetOrgPerson cn: server2 proxy admin sn: server2 proxy admin userPassword: password description: Entry for use by database links", "ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h server3.example.com -x dn: ou=Zanzibar,ou=people,dc=example,dc=com changetype: modify add: aci aci: (targetattr = \"*\")(version 3.0; acl \"Proxied authorization for database links\"; allow (proxy) userdn = \"ldap:///cn=server2 proxy admin,cn=config\";)", "ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h server3.example.com -x dn: ou=Zanzibar,ou=people,dc=example,dc=com changetype: modify add: aci aci: (targetattr =\"*\")(target=\"ou=Zanzibar,c=africa,ou=people,dc=example,dc=com\") (version 3.0; acl \"Client authentication for database link users\"; allow (all) userdn = \"ldap:///uid=*,c=us,ou=people,dc=example,dc=com\";)", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com chaining link-set --hop-limit 5 example" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/Creating_and_Maintaining_Database_Links-Advanced_Feature_Configuring_Cascading_Chaining
Chapter 5. Installing Knative Eventing
Chapter 5. Installing Knative Eventing To use event-driven architecture on your cluster, install Knative Eventing. You can create Knative components such as event sources, brokers, and channels and then use them to send events to applications or external systems. After you install the OpenShift Serverless Operator, you can install Knative Eventing by using the default settings, or configure more advanced settings in the KnativeEventing custom resource (CR). For more information about configuration options for the KnativeEventing CR, see Global configuration . Important If you want to use Red Hat OpenShift distributed tracing with OpenShift Serverless , you must install and configure Red Hat OpenShift distributed tracing before you install Knative Eventing. 5.1. Installing Knative Eventing by using the web console After you install the OpenShift Serverless Operator, install Knative Eventing by using the OpenShift Container Platform web console. You can install Knative Eventing by using the default settings or configure more advanced settings in the KnativeEventing custom resource (CR). Prerequisites You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated. You have logged in to the OpenShift Container Platform web console. You have installed the OpenShift Serverless Operator. Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators Installed Operators . Check that the Project dropdown at the top of the page is set to Project: knative-eventing . Click Knative Eventing in the list of Provided APIs for the OpenShift Serverless Operator to go to the Knative Eventing tab. Click Create Knative Eventing . In the Create Knative Eventing page, you can configure the KnativeEventing object by using either the form provided, or by editing the YAML file. Use the form for simpler configurations that do not require full control of KnativeEventing object creation. Click Create . Edit the YAML file for more complex configurations that require full control of KnativeEventing object creation. To access the YAML editor, click edit YAML on the Create Knative Eventing page. After you have installed Knative Eventing, the KnativeEventing object is created, and you are automatically directed to the Knative Eventing tab. You will see the knative-eventing custom resource in the list of resources. Verification Click on the knative-eventing custom resource in the Knative Eventing tab. You are automatically directed to the Knative Eventing Overview page. Scroll down to look at the list of Conditions . You should see a list of conditions with a status of True , as shown in the example image. Note It may take a few seconds for the Knative Eventing resources to be created. You can check their status in the Resources tab. If the conditions have a status of Unknown or False , wait a few moments and then check again after you have confirmed that the resources have been created. 5.2. Installing Knative Eventing by using YAML After you install the OpenShift Serverless Operator, you can install Knative Eventing by using the default settings, or configure more advanced settings in the KnativeEventing custom resource (CR). You can use the following procedure to install Knative Eventing by using YAML files and the oc CLI. 
Prerequisites You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated. You have installed the OpenShift Serverless Operator. Install the OpenShift CLI ( oc ). Procedure Create a file named eventing.yaml . Copy the following sample YAML into eventing.yaml : apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing Optional. Make any changes to the YAML that you want to implement for your Knative Eventing deployment. Apply the eventing.yaml file by entering: USD oc apply -f eventing.yaml Verification Verify the installation is complete by entering the following command and observing the output: USD oc get knativeeventing.operator.knative.dev/knative-eventing \ -n knative-eventing \ --template='{{range .status.conditions}}{{printf "%s=%s\n" .type .status}}{{end}}' Example output InstallSucceeded=True Ready=True Note It may take a few seconds for the Knative Eventing resources to be created. If the conditions have a status of Unknown or False , wait a few moments and then check again after you have confirmed that the resources have been created. Check that the Knative Eventing resources have been created by entering: USD oc get pods -n knative-eventing Example output NAME READY STATUS RESTARTS AGE broker-controller-58765d9d49-g9zp6 1/1 Running 0 7m21s eventing-controller-65fdd66b54-jw7bh 1/1 Running 0 7m31s eventing-webhook-57fd74b5bd-kvhlz 1/1 Running 0 7m31s imc-controller-5b75d458fc-ptvm2 1/1 Running 0 7m19s imc-dispatcher-64f6d5fccb-kkc4c 1/1 Running 0 7m18s 5.3. Installing Knative broker for Apache Kafka The Knative broker implementation for Apache Kafka provides integration options for you to use supported versions of the Apache Kafka message streaming platform with OpenShift Serverless. Knative broker for Apache Kafka functionality is available in an OpenShift Serverless installation if you have installed the KnativeKafka custom resource. Prerequisites You have installed the OpenShift Serverless Operator and Knative Eventing on your cluster. You have access to a Red Hat AMQ Streams cluster. Install the OpenShift CLI ( oc ) if you want to use the verification steps. You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated. You are logged in to the OpenShift Container Platform web console. Procedure In the Administrator perspective, navigate to Operators Installed Operators . Check that the Project dropdown at the top of the page is set to Project: knative-eventing . In the list of Provided APIs for the OpenShift Serverless Operator, find the Knative Kafka box and click Create Instance . Configure the KnativeKafka object in the Create Knative Kafka page. Important To use the Kafka channel, source, broker, or sink on your cluster, you must toggle the enabled switch for the options you want to use to true . These switches are set to false by default. Additionally, to use the Kafka channel, broker, or sink you must specify the bootstrap servers. Use the form for simpler configurations that do not require full control of KnativeKafka object creation. Edit the YAML for more complex configurations that require full control of KnativeKafka object creation. You can access the YAML by clicking the Edit YAML link on the Create Knative Kafka page. 
Example KnativeKafka custom resource apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: name: knative-kafka namespace: knative-eventing spec: channel: enabled: true 1 bootstrapServers: <bootstrap_servers> 2 source: enabled: true 3 broker: enabled: true 4 defaultConfig: bootstrapServers: <bootstrap_servers> 5 numPartitions: <num_partitions> 6 replicationFactor: <replication_factor> 7 sink: enabled: true 8 logging: level: INFO 9 1 Enables developers to use the KafkaChannel channel type in the cluster. 2 A comma-separated list of bootstrap servers from your AMQ Streams cluster. 3 Enables developers to use the KafkaSource event source type in the cluster. 4 Enables developers to use the Knative broker implementation for Apache Kafka in the cluster. 5 A comma-separated list of bootstrap servers from your Red Hat AMQ Streams cluster. 6 Defines the number of partitions of the Kafka topics, backed by the Broker objects. The default is 10 . 7 Defines the replication factor of the Kafka topics, backed by the Broker objects. The default is 3 . The replicationFactor value must be less than or equal to the number of nodes of your Red Hat AMQ Streams cluster. 8 Enables developers to use a Kafka sink in the cluster. 9 Defines the log level of the Kafka data plane. Allowed values are TRACE , DEBUG , INFO , WARN and ERROR . The default value is INFO . Warning Do not use DEBUG or TRACE as the logging level in production environments. The outputs from these logging levels are verbose and can degrade performance. Click Create after you have completed any of the optional configurations for Kafka. You are automatically directed to the Knative Kafka tab where knative-kafka is in the list of resources. Verification Click on the knative-kafka resource in the Knative Kafka tab. You are automatically directed to the Knative Kafka Overview page. View the list of Conditions for the resource and confirm that they have a status of True . If the conditions have a status of Unknown or False , wait a few moments to refresh the page. Check that the Knative broker for Apache Kafka resources have been created: USD oc get pods -n knative-eventing Example output NAME READY STATUS RESTARTS AGE kafka-broker-dispatcher-7769fbbcbb-xgffn 2/2 Running 0 44s kafka-broker-receiver-5fb56f7656-fhq8d 2/2 Running 0 44s kafka-channel-dispatcher-84fd6cb7f9-k2tjv 2/2 Running 0 44s kafka-channel-receiver-9b7f795d5-c76xr 2/2 Running 0 44s kafka-controller-6f95659bf6-trd6r 2/2 Running 0 44s kafka-source-dispatcher-6bf98bdfff-8bcsn 2/2 Running 0 44s kafka-webhook-eventing-68dc95d54b-825xs 2/2 Running 0 44s 5.4. steps If you want to use Knative services you can install Knative Serving .
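The verification steps in this chapter can also be scripted. The following is a minimal sketch that assumes the default knative-eventing namespace and the resource names used above; the second command only applies if you created a KnativeKafka resource, and both commands exit non-zero if the condition is not met within the timeout:
# Wait until the KnativeEventing custom resource reports Ready=True.
oc wait knativeeventing.operator.knative.dev/knative-eventing \
  --namespace knative-eventing \
  --for=condition=Ready \
  --timeout=300s
# Repeat the check for the KnativeKafka resource, if you created one.
oc wait knativekafka.operator.serverless.openshift.io/knative-kafka \
  --namespace knative-eventing \
  --for=condition=Ready \
  --timeout=300s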
[ "apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing", "oc apply -f eventing.yaml", "oc get knativeeventing.operator.knative.dev/knative-eventing -n knative-eventing --template='{{range .status.conditions}}{{printf \"%s=%s\\n\" .type .status}}{{end}}'", "InstallSucceeded=True Ready=True", "oc get pods -n knative-eventing", "NAME READY STATUS RESTARTS AGE broker-controller-58765d9d49-g9zp6 1/1 Running 0 7m21s eventing-controller-65fdd66b54-jw7bh 1/1 Running 0 7m31s eventing-webhook-57fd74b5bd-kvhlz 1/1 Running 0 7m31s imc-controller-5b75d458fc-ptvm2 1/1 Running 0 7m19s imc-dispatcher-64f6d5fccb-kkc4c 1/1 Running 0 7m18s", "apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: name: knative-kafka namespace: knative-eventing spec: channel: enabled: true 1 bootstrapServers: <bootstrap_servers> 2 source: enabled: true 3 broker: enabled: true 4 defaultConfig: bootstrapServers: <bootstrap_servers> 5 numPartitions: <num_partitions> 6 replicationFactor: <replication_factor> 7 sink: enabled: true 8 logging: level: INFO 9", "oc get pods -n knative-eventing", "NAME READY STATUS RESTARTS AGE kafka-broker-dispatcher-7769fbbcbb-xgffn 2/2 Running 0 44s kafka-broker-receiver-5fb56f7656-fhq8d 2/2 Running 0 44s kafka-channel-dispatcher-84fd6cb7f9-k2tjv 2/2 Running 0 44s kafka-channel-receiver-9b7f795d5-c76xr 2/2 Running 0 44s kafka-controller-6f95659bf6-trd6r 2/2 Running 0 44s kafka-source-dispatcher-6bf98bdfff-8bcsn 2/2 Running 0 44s kafka-webhook-eventing-68dc95d54b-825xs 2/2 Running 0 44s" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.33/html/installing_openshift_serverless/installing-knative-eventing
Chapter 1. Upgrading overview
Chapter 1. Upgrading overview Review prerequisites and available upgrade paths below before upgrading your current Red Hat Satellite installation to Red Hat Satellite 6.16. For interactive upgrade instructions, you can also use the Red Hat Satellite Upgrade Helper on the Red Hat Customer Portal. This application provides you with an exact guide to match your current version number. You can find instructions that are specific to your upgrade path, as well as steps to prevent known issues. For more information, see Satellite Upgrade Helper on the Red Hat Customer Portal. 1.1. Upgrade paths You can upgrade to Red Hat Satellite 6.16 from Red Hat Satellite 6.15. For complete instructions on how to upgrade, see Chapter 2, Upgrading Red Hat Satellite . The high-level steps in upgrading Satellite to 6.16 are as follows: Ensure that your Satellite Servers and Capsule Servers have been upgraded to Satellite 6.15. For more information, see Upgrading connected Red Hat Satellite to 6.15 or Upgrading disconnected Red Hat Satellite to 6.15 . Upgrade your Satellite Server: Upgrade your Satellite Server to 6.16. Optional: Upgrade the operating system on your Satellite Server to Red Hat Enterprise Linux 9. Note Although upgrading the operating system of your Satellite Server to Red Hat Enterprise Linux 9 is optional, you will need to do it before you can upgrade to the Satellite version after 6.16. Synchronize the new 6.16 repositories. Upgrade your Capsule Servers: Upgrade all Capsule Servers to 6.16. Optional: Upgrade the operating system on your Capsule Servers to Red Hat Enterprise Linux 9. Note Although upgrading the operating system of your Capsule Servers to Red Hat Enterprise Linux 9 is optional, you will need to do it before you can upgrade to the Satellite version after 6.16. Capsules at version 6.15 will keep working with your upgraded Satellite Server 6.16. After you upgrade Satellite Server to 6.16, you can upgrade your Capsules separately over multiple maintenance windows. For more information, see Section 1.3, "Upgrading Capsules separately from Satellite" . Satellite services are shut down during the upgrade. Ensure to plan for the required downtime. The upgrade process duration might vary depending on your hardware configuration, network speed, and the amount of data that is stored on the server. Upgrading Satellite Server takes approximately 1 - 2 hours. Upgrading Capsule Server takes approximately 10 - 30 minutes. Hammer and API considerations If you have any scripts that use the Hammer CLI tool, ensure that you modify these scripts according to the changes in Hammer. If you have any integrations that use the Satellite REST API, ensure that you modify these integrations according to the changes in the API. For more information about changes in Hammer and API, see Release notes . 1.2. Prerequisites Upgrading to Satellite 6.16 affects your entire Satellite infrastructure. Before proceeding, complete the following: Read the Red Hat Satellite 6.16 Release Notes . Ensure that you have sufficient storage space on your server. For more information, see Preparing your Environment for Installation in Installing Satellite Server in a connected network environment and Preparing your Environment for Installation in Installing Capsule Server . Ensure that you have at least the same amount of free space on /var/lib/pgsql as that consumed by /var/lib/pgsql/data . Upgrading to Satellite 6.16 involves a PostgreSQL 12 to PostgreSQL 13 upgrade. 
The contents of /var/lib/pgsql/data are backed up during the PostgreSQL upgrade. Back up your Satellite Server and all Capsule Servers. For more information, see Backing Up Satellite Server and Capsule Server in Administering Red Hat Satellite . Plan for updating any scripts you use that contain Satellite API commands because some API commands differ between versions of Satellite. Migrate all organizations to Simple Content Access (SCA). For more information, see the Red Hat Knowledgebase solution Simple Content Access . Ensure that all Satellite Servers are on the same version. Warning If you customize configuration files, manually or using a tool such as Hiera, these changes are overwritten when the maintenance script runs during upgrading or updating. You can use the --noop option with the satellite-installer to test for changes. For more information, see the Red Hat Knowledgebase solution How to use the noop option to check for changes in Satellite config files during an upgrade. 1.3. Upgrading Capsules separately from Satellite You can upgrade Satellite to version 6.16 and keep Capsules at version 6.15 until you have the capacity to upgrade them too. All the functionality that worked previously works on 6.15 Capsules. However, the functionality added in the 6.16 release will not work until you upgrade Capsules to 6.16. Upgrading Capsules after upgrading Satellite can be useful in the following example scenarios: If you want to have several smaller outage windows instead of one larger window. If Capsules in your organization are managed by several teams and are located in different locations. If you use a load-balanced configuration, you can upgrade one load-balanced Capsule and keep other load-balanced Capsules at one version lower. This allows you to upgrade all Capsules one after another without any outage. 1.4. Following the progress of the upgrade Because of the lengthy upgrade time, use a utility such as tmux to suspend and reattach a communication session. You can then check the upgrade progress without staying connected to the command shell continuously. For more information, see the tmux manual page. If you lose connection to the command shell where the upgrade command is running you can see the logs in /var/log/foreman-installer/satellite.log to check if the process completed successfully.
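As an illustration of following the upgrade from a detachable session, a possible pattern is shown below. The exact upgrade command and its options are documented in Chapter 2; the session name here is only a placeholder:
# Start a named tmux session so the upgrade survives a dropped SSH connection.
tmux new-session -s satellite-upgrade
# Inside the tmux session, run the documented upgrade command, for example:
satellite-maintain upgrade run
# From a second shell, follow the installer log while the upgrade runs:
tail -f /var/log/foreman-installer/satellite.log
# If the connection drops, reattach to the session later:
tmux attach -t satellite-upgrade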
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/upgrading_connected_red_hat_satellite_to_6.16/upgrading_overview_upgrading-connected
7.101. kdelibs
7.101. kdelibs 7.101.1. RHBA-2012:1251 - kdelibs bug fix update Updated kdelibs packages that fix various bugs are now available for Red Hat Enterprise Linux 6. The kdelibs packages provide libraries for the K Desktop Environment (KDE). Bug Fixes BZ# 587016 Prior to this update, the KDE Print dialog did not remember settings, nor did it allow the user to save the settings. Consequent to this, when printing several documents, users were forced to manually change settings for each printed document. With this update, the KDE Print dialog retains settings as expected. BZ# 682611 When the system was configured to use the Traditional Chinese language (the zh_TW locale), Konqueror incorrectly used a Chinese (zh_CN) version of its splash page. This update ensures that Konqueror uses the correct locale. BZ#734734 Previously, clicking the system tray to display hidden icons could cause the Plasma Workspaces to consume an excessive amount of CPU time. This update applies a patch that fixes this error. BZ#754161 When using Konqueror to recursively copy files and directories, if one of the subdirectories was not accessible, no warning or error message was reported to the user. This update ensures that Konqueror displays a proper warning message in this scenario. BZ#826114 Prior to this update, an attempt to add "Terminal Emulator" to the Main Toolbar caused Konqueror to terminate unexpectedly with a segmentation fault. With this update, the underlying source code has been corrected to prevent this error so that users can now use this functionality as expected. All users of kdelibs are advised to upgrade to these updated packages, which fix these bugs. 7.101.2. RHSA-2012:1418 - Critical: kdelibs security update Updated kdelibs packages that fix two security issues are now available for Red Hat Enterprise Linux 6 FasTrack. The Red Hat Security Response Team has rated this update as having critical security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The kdelibs packages provide libraries for the K Desktop Environment (KDE). Konqueror is a web browser. CVE-2012-4512 A heap-based buffer overflow flaw was found in the way the CSS (Cascading Style Sheets) parser in kdelibs parsed the location of the source for font faces. A web page containing malicious content could cause an application using kdelibs (such as Konqueror) to crash or, potentially, execute arbitrary code with the privileges of the user running the application. CVE-2012-4513 A heap-based buffer over-read flaw was found in the way kdelibs calculated canvas dimensions for large images. A web page containing malicious content could cause an application using kdelibs to crash or disclose portions of its memory. Users should upgrade to these updated packages, which contain backported patches to correct these issues. The desktop must be restarted (log out, then log back in) for this update to take effect.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/kdelibs
20.3. Booleans
20.3. Booleans SELinux is based on the least level of access required for a service to run. Services can be run in a variety of ways; therefore, you need to specify how you run your services. Use the following Booleans to set up SELinux: selinuxuser_mysql_connect_enabled When enabled, this Boolean allows users to connect to the local MariaDB server. exim_can_connect_db When enabled, this Boolean allows the exim mailer to initiate connections to a database server. ftpd_connect_db When enabled, this Boolean allows ftp daemons to initiate connections to a database server. httpd_can_network_connect_db Enabling this Boolean is required for a web server to communicate with a database server. Note Due to the continuous development of the SELinux policy, the list above might not contain all Booleans related to the service at all times. To list them, enter the following command: Enter the following command to view the description of a particular Boolean: Note that the additional policycoreutils-devel package providing the sepolicy utility is required for this command to work.
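For example, to allow a web server to communicate with a database server, you can enable the corresponding Boolean with the setsebool utility. The httpd_can_network_connect_db Boolean is one of those listed above:
# Check the current state of the Boolean:
getsebool httpd_can_network_connect_db
# Enable it persistently; the -P option writes the change to the policy so it survives reboots:
setsebool -P httpd_can_network_connect_db on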
[ "~]USD getsebool -a | grep service_name", "~]USD sepolicy booleans -b boolean_name" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-managing_confined_services-mariadb-booleans
7.242. sudo
7.242. sudo 7.242.1. RHBA-2013:0363 - sudo bug fix and enhancement update Updated sudo packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The sudo (super user do) utility allows system administrators to give certain users the ability to run commands as root. Note The sudo package has been upgraded to upstream version 1.8.6p3, which provides a number of bug fixes and enhancements over the previous version. The following list includes highlights, important fixes, or notable enhancements: Plug-in API has been added, provided by the new sudo-devel subpackage. New /etc/sudo.conf configuration file for the sudo utility front-end configuration (plug-in path, coredumps, debugging and so on) has been added. It is possible to specify the sudoer's path, UID, GID, and file mode as options to the plug-in in the /etc/sudo.conf file. Support for using the System Security Services Daemon (SSSD) as a source of sudoers data has been provided. The -D flag in the sudo utility has been replaced with a more general debugging framework that is configured in the /etc/sudo.conf file. The deprecated noexec_file sudoers option is no longer supported. The noexec functionality has been moved out of the sudoers policy plug-in and into the sudo utility front end, which matches the behavior documented in the plug-in writer's guide. As a result, the path to the /usr/libexec/sudo_noexec.so file is now specified in the /etc/sudo.conf file instead of the /etc/sudoers file. If the user fails to authenticate, and the user's executed command is rejected by the rules defined in the sudoers file, the command not allowed error message is now logged instead of the previously used <N> incorrect password attempts . Likewise, the mail_no_perms sudoers option now takes precedence over the mail_badpass option. If the user is a member of the exempt group in the sudoers file, he will no longer be prompted for a password even if the -k option is specified with the executed command. This makes the sudo -k command consistent with the behavior one would get if running the sudo -k command immediately before executing another command. If the user specifies a group via the sudo utility's -g option that matches the target user's group in the password database, it is now allowed even if no groups are present in the Runas_Spec . A group ID ( %#gid ) can now be specified in the User_List or Runas_List files. Likewise, for non-Unix groups the syntax is %:#gid . The visudo utility now fixes the mode on the sudoers file even if no changes are made, unless the -f option is specified. (BZ# 759480 ) Bug fixes BZ#823993 The controlling tty of a suspended process was not saved by the sudo utility. Thus, the code handling the resume operation could not restore it correctly. Consequently, resuming a suspended process run through the sudo utility did not work. This bug has been fixed by rebasing to a new upstream version. As a result, suspending and resuming work correctly again. BZ#840980 A change in the internal method that the sudo utility uses to execute commands caused it to create a new process and execute the command from there. To address this, a new defaults option was added to restore the old behavior. Since the execution method has been implemented to correctly handle PAM session handling, I/O logging, SELinux support, and the plug-in policy close functionality, these features do not work correctly if the newly-implemented option is used.
To apply this option, add the following line to the /etc/sudoers file: As a result, if the newly-implemented option is used, commands will be executed directly by the sudo utility. BZ#836242 The sudo utility set the core dump size limit to 0 to prevent the possibility of exposing the user password in the core dump file in case of an unexpected termination. However, this limit was not restored to its previous value before executing a command, and the core dump size hard limit of a child process was consequently set to 0. Consequently, processes run through the sudo utility could not set the core dump size limit. This bug was fixed by rebasing to a new upstream version; thus, processes run through the sudo utility can now set the core dump size limit as expected. BZ# 804123 When initializing the global variable holding the PAM (Pluggable Authentication Modules) handle from a child process, which had a separate address space, a different PAM handle was passed to PAM API functions where the same handle was supposed to be used. Thus, the initialization had no effect on the parent's PAM handle when the pam_end_sessions() function was called. As a consequence, dependent modules could fail to initiate at session close in order to release resources or make important administrative changes. This bug has been fixed by rebasing to a newer upstream version, which uses the PAM API correctly (for example, initializes one PAM handle and uses it in all related PAM API function calls). As a result, PAM sessions are now closed correctly. BZ# 860397 Incorrect file permissions on the /etc/sudo-ldap.conf file and missing examples in the same file led to an inconsistency with documentation provided by Red Hat. With this update, file permissions have been corrected and example configuration lines have been added. As a result, /etc/sudo-ldap.conf is now consistent with the documentation. BZ#844691 When the sudo utility set up the environment in which it ran a command, it reset the value of the RLIMIT_NPROC resource limit to the parent's value of this limit if both the soft (current) and hard (maximum) values of RLIMIT_NPROC were not limited. An upstream patch has been provided to address this bug and RLIMIT_NPROC can now be set to "unlimited". BZ#879675 Due to different parsing rules for comments in the /etc/ldap.conf file, the hash ('#') character could not be used as part of a configuration value, for example in a password. It was understood as the beginning of a comment and everything following the # character was ignored. Now, the parser has been fixed to interpret the # character as the beginning of a comment only if it is at the beginning of a line. As a result, the '#' character can be used as part of a password, or any other value if needed. BZ# 872740 White space characters included in command arguments were not escaped before being passed to the specified command. As a consequence, incorrect arguments were passed to the specified command. This bug was fixed by rebasing to a new upstream version where escaping of command arguments is performed correctly. As a result, command arguments specified on the command line are passed to the command as expected. Enhancements BZ# 789937 The sudo utility is able to consult the /etc/nsswitch.conf file for sudoers entries and look them up in files or via LDAP (Lightweight Directory Access Protocol). Previously, when a match was found in the first database of sudoers entries, the look-up operation still continued in other databases.
In Red Hat Enterprise Linux 6.4, an option has been added to the /etc/nsswitch.conf file that allows users to specify a database after which a match of the sudoer's entry is sufficient. This eliminates the need to query any other databases; thus improving the performance of sudoer's entry look up in large environments. This behavior is not enabled by default and must be configured by adding the [SUCCESS=return] string after a selected database. When a match is found in a database that directly precedes this string, no other databases are queried. BZ#846117 This update improves sudo documentation in the section describing wildcard usage, describing what unintended consequences a wildcard character used in the command argument can have. Users of sudo should upgrade to these updated packages, which fix these bugs and add these enhancements.
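As an illustration of the BZ#789937 enhancement, the following shows how you might inspect the sudoers entry in /etc/nsswitch.conf and what a line using the new option can look like. The exact lookup order depends on your environment:
# Display the current sudoers lookup order:
grep '^sudoers:' /etc/nsswitch.conf
# Example line that stops the lookup after the first database with a match,
# so LDAP is only queried when the local sudoers files have no entry:
#   sudoers:    files [SUCCESS=return] ldap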
[ "Defaults cmnd_no_wait" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/sudo
4.2. Configuring Static Routes Using nmcli
4.2. Configuring Static Routes Using nmcli To configure static routes using the nmcli tool, use one of the following: the nmcli command line the nmcli interactive editor Example 4.1. Configuring Static Routes Using nmcli To configure a static route for an existing Ethernet connection using the command line: This directs traffic for the 192.168.122.0/24 subnet to the gateway at 10.10.10.1. Example 4.2. Configuring Static Routes with nmcli Editor To configure a static route for an Ethernet connection using the interactive editor:
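With either method, you can confirm afterwards that the route was stored and is active. A possible check, reusing the connection name and addresses from Example 4.1, is:
# Re-activate the connection so the stored route takes effect:
nmcli connection up enp1s0
# Confirm the route is part of the connection profile:
nmcli -f ipv4.routes connection show enp1s0
# Confirm the route is present in the kernel routing table:
ip route show | grep 192.168.122.0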
[ "~]# nmcli connection modify enp1s0 +ipv4.routes \"192.168.122.0/24 10.10.10.1\"", "~]USD nmcli con edit ens3 ===| nmcli interactive connection editor |=== Editing existing '802-3-ethernet' connection: 'ens3' Type 'help' or '?' for available commands. Type 'describe [<setting>.<prop>]' for detailed property description. You may edit the following settings: connection, 802-3-ethernet (ethernet), 802-1x, dcb, ipv4, ipv6, tc, proxy nmcli> set ipv4.routes 192.168.122.0/24 10.10.10.1 nmcli> save persistent Connection 'ens3' (23f8b65a-8f3d-41a0-a525-e3bc93be83b8) successfully updated. nmcli> quit" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-configuring_static_routes_using_nmcli
Chapter 4. Introduction to devfile in Dev Spaces
Chapter 4. Introduction to devfile in Dev Spaces Devfiles are yaml text files used for development environment customization. Use them to configure a devfile to suit your specific needs and share the customized devfile across multiple workspaces to ensure identical user experience and build, run, and deploy behaviours across your team. Red Hat OpenShift Dev Spaces-specific devfile features Red Hat OpenShift Dev Spaces is expected to work with most of the popular images defined in the components section of devfile. For production purposes, it is recommended to use one of the Universal Base Images as a base image for defining the Cloud Development Environment. Warning Some images can not be used as-is for defining Cloud Development Environment since Visual Studio Code - Open Source ("Code - OSS") can not be started in the containers with missing openssl and libbrotli . Missing libraries should be explicitly installed on the Dockerfile level e.g. RUN yum install compat-openssl11 libbrotli Devfile and Universal Developer Image You do not need a devfile to start a workspace. If you do not include a devfile in your project repository, Red Hat OpenShift Dev Spaces automatically loads a default devfile with a Universal Developer Image (UDI). Devfile Registry Devfile Registry contains ready-to-use community-supported devfiles for different languages and technologies. Devfiles included in the registry should be treated as samples rather than templates. Additional resources What is a devfile Benefits of devfile Devfile customization overview Devfile.io Customizing Cloud Development Environments
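To illustrate the warning above about images that lack openssl and libbrotli, the following is a hypothetical sketch of building a custom image on top of a Universal Base Image. The image tag is a placeholder, and whether the exact package names resolve depends on the base image and the repositories enabled in it:
# Write a Containerfile that adds the libraries required by
# Visual Studio Code - Open Source ("Code - OSS"):
cat > Containerfile <<'EOF'
FROM registry.access.redhat.com/ubi9/ubi
RUN yum install -y compat-openssl11 libbrotli && yum clean all
EOF
# Build the image and push it to a registry that your cluster can pull from:
podman build -t quay.io/example/custom-cde:latest .
podman push quay.io/example/custom-cde:latest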
null
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.16/html/user_guide/devfile-introduction
11.5.3. FireWire and USB Disks
11.5.3. FireWire and USB Disks Some FireWire and USB hard disks may not be recognized by the Red Hat Enterprise Linux installation system. If configuration of these disks at installation time is not vital, disconnect them to avoid any confusion. Note You can connect and configure external FireWire and USB hard disks after installation. Most such devices are automatically recognized and available for use once connected.
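After installation, one way to confirm that a reconnected external drive was recognized is to check the kernel's view of block devices, for example:
# List block devices; the external drive should appear with its partitions:
lsblk
# Review the most recent kernel messages for the newly attached device:
dmesg | tail -n 20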
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sn-partitioning-fw-usb-ppc
Chapter 1. Overview of Kaoto
Chapter 1. Overview of Kaoto Important The VS Code extensions for Apache Camel are listed as development support. For more information about the scope of development support, see Development Support Scope of Coverage for Red Hat Build of Apache Camel . Kaoto is an acronym for K amel O rchestration T ool. It is a low code and no code integration designer to create and edit integrations based on Apache Camel . Kaoto is extendable, flexible, and adaptable to different use cases. For more information about the history of Kaoto, see Statistics and History of Kaoto . Kaoto offers a rich catalog of building blocks for use in graphical design. By default, Kaoto loads the official upstream Camel Catalog and Kamelet Catalog. Benefits of using Kaoto can be listed as follows: Enhanced Visual Development Experience By leveraging Kaoto's visual designing capabilities, users can intuitively create, view, and edit Camel integrations through the user interface. This low-code/no-code approach significantly reduces the learning curve for new users and accelerates the development process for seasoned developers. Comprehensive Component Catalog Accessibility Kaoto provides immediate access to a rich catalog of Camel components, enterprise integration patterns (EIPs), and Kamelets. This extensive Catalog enables developers to easily find and implement the necessary components for their integration solutions. By having these resources readily available, developers can focus more on solving business problems rather than spending time searching for and learning about different components. Streamlined Integration Development Process The platform is designed with an efficient user experience in mind, optimizing the steps required to create comprehensive integrations. This efficiency is achieved through features like auto-completion, configuration forms, and interactive feedback mechanisms. As a result, developers can quickly assemble and configure integrations, reducing the overall development time. This streamlined process encourages experimentation and innovation by making it easier to prototype and test different approaches. 1.1. Why Kaoto? Camel at Heart Using the power of Apache Camel: Kaoto utilizes the Apache Camel models and schemas to always offer you all available Camel features. Local Development VS Code Extension: We provide Kaoto as an extension you can install from the Microsoft Marketplace and also from the Open VSX Marketplace . LCNC: Low Code and No Code Care about developers: You can seamlessly switch between any IDE and Kaoto, allowing mixed teams and converting low-code integrators to developers. FLOSS heart Free Libre and Open Source Forever: Truly open with no vendor lock-in. Use, reuse, share, modify, and resell to the users' needs.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/kaoto/overview-of-kaoto
Appendix A. Initialization Script for Provisioning Examples
Appendix A. Initialization Script for Provisioning Examples If you have not followed the examples in Managing Content , you can use the following initialization script to create an environment for provisioning examples. Procedure Create a script file ( content-init.sh ) and include the following: Set executable permissions on the script: Download a copy of your Red Hat Subscription Manifest from the Red Hat Customer Portal and run the script on the manifest: This imports the necessary Red Hat content for the provisioning examples in this guide.
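Optionally, you can confirm that the script created the expected objects before moving on to the provisioning examples, for example:
# List the life cycle environments created for the example organization:
hammer lifecycle-environment list --organization "ACME"
# List the content views and confirm that "Base" has a published version:
hammer content-view list --organization "ACME"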
[ "#!/bin/bash MANIFEST=USD1 Import the content from Red Hat CDN hammer organization create --name \"ACME\" --label \"ACME\" --description \"Our example organization for managing content.\" hammer subscription upload --file ~/USDMANIFEST --organization \"ACME\" hammer repository-set enable --name \"Red Hat Enterprise Linux 7 Server (RPMs)\" --releasever \"7Server\" --basearch \"x86_64\" --product \"Red Hat Enterprise Linux Server\" --organization \"ACME\" hammer repository-set enable --name \"Red Hat Enterprise Linux 7 Server (Kickstart)\" --releasever \"7Server\" --basearch \"x86_64\" --product \"Red Hat Enterprise Linux Server\" --organization \"ACME\" hammer repository-set enable --name \"Red Hat Satellite Client 6 (for RHEL 7 Server) (RPMs)\" --basearch \"x86_64\" --product \"Red Hat Enterprise Linux Server\" --organization \"ACME\" hammer product synchronize --name \"Red Hat Enterprise Linux Server\" --organization \"ACME\" Create our application life cycle hammer lifecycle-environment create --name \"Development\" --description \"Environment for ACME's Development Team\" --prior \"Library\" --organization \"ACME\" hammer lifecycle-environment create --name \"Testing\" --description \"Environment for ACME's Quality Engineering Team\" --prior \"Development\" --organization \"ACME\" hammer lifecycle-environment create --name \"Production\" --description \"Environment for ACME's Product Releases\" --prior \"Testing\" --organization \"ACME\" Create and publish our Content View hammer content-view create --name \"Base\" --description \"Base operating system\" --repositories \"Red Hat Enterprise Linux 7 Server RPMs x86_64 7Server,Red Hat Satellite Client 6 for RHEL 7 Server RPMs x86_64\" --organization \"ACME\" hammer content-view publish --name \"Base\" --description \"Initial Content View for our operating system\" --organization \"ACME\" hammer content-view version promote --content-view \"Base\" --version 1 --to-lifecycle-environment \"Development\" --organization \"ACME\" hammer content-view version promote --content-view \"Base\" --version 1 --to-lifecycle-environment \"Testing\" --organization \"ACME\" hammer content-view version promote --content-view \"Base\" --version 1 --to-lifecycle-environment \"Production\" --organization \"ACME\"", "chmod +x content-init.sh", "./content-init.sh manifest_98f4290e-6c0b-4f37-ba79-3a3ec6e405ba.zip" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/provisioning_hosts/initialization_script_for_provisioning_examples_provisioning
Chapter 3. Avro Serialize Action
Chapter 3. Avro Serialize Action Serialize payload to Avro 3.1. Configuration Options The following table summarizes the configuration options available for the avro-serialize-action Kamelet: Property Name Description Type Default Example schema * Schema The Avro schema to use during serialization (as single-line, using JSON format) string "{\"type\": \"record\", \"namespace\": \"com.example\", \"name\": \"FullName\", \"fields\": [{\"name\": \"first\", \"type\": \"string\"},{\"name\": \"last\", \"type\": \"string\"}]}" validate Validate Indicates if the content must be validated against the schema boolean true Note Fields marked with an asterisk (*) are mandatory. 3.2. Dependencies At runtime, the avro-serialize-action Kamelet relies upon the presence of the following dependencies: github:openshift-integration.kamelet-catalog:camel-kamelets-utils:kamelet-catalog-1.6-SNAPSHOT camel:kamelet camel:core camel:jackson-avro 3.3. Usage This section describes how you can use the avro-serialize-action . 3.3.1. Knative Action You can use the avro-serialize-action Kamelet as an intermediate step in a Knative binding. avro-serialize-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: avro-serialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: '{"first":"Ada","last":"Lovelace"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-deserialize-action - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: avro-serialize-action properties: schema: "{\"type\": \"record\", \"namespace\": \"com.example\", \"name\": \"FullName\", \"fields\": [{\"name\": \"first\", \"type\": \"string\"},{\"name\": \"last\", \"type\": \"string\"}]}" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 3.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 3.3.1.2. Procedure for using the cluster CLI Save the avro-serialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f avro-serialize-action-binding.yaml 3.3.1.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind --name avro-serialize-action-binding timer-source?message='{"first":"Ada","last":"Lovelace"}' --step json-deserialize-action --step avro-serialize-action -p step-1.schema='{"type": "record", "namespace": "com.example", "name": "FullName", "fields": [{"name": "first", "type": "string"},{"name": "last", "type": "string"}]}' channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 3.3.2. Kafka Action You can use the avro-serialize-action Kamelet as an intermediate step in a Kafka binding. 
avro-serialize-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: avro-serialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: '{"first":"Ada","last":"Lovelace"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-deserialize-action - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: avro-serialize-action properties: schema: "{\"type\": \"record\", \"namespace\": \"com.example\", \"name\": \"FullName\", \"fields\": [{\"name\": \"first\", \"type\": \"string\"},{\"name\": \"last\", \"type\": \"string\"}]}" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 3.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 3.3.2.2. Procedure for using the cluster CLI Save the avro-serialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f avro-serialize-action-binding.yaml 3.3.2.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind --name avro-serialize-action-binding timer-source?message='{"first":"Ada","last":"Lovelace"}' --step json-deserialize-action --step avro-serialize-action -p step-1.schema='{"type": "record", "namespace": "com.example", "name": "FullName", "fields": [{"name": "first", "type": "string"},{"name": "last", "type": "string"}]}' kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 3.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/avro-serialize-action.kamelet.yaml
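Whichever procedure you use, you can optionally confirm that the binding was created and that the underlying integration is running. The binding name below matches the metadata.name used in the examples; kamel logs follows the integration that the binding creates:
# Check that the KameletBinding resource exists and report its phase:
oc get kameletbinding avro-serialize-action-binding
# Follow the logs of the integration created for the binding:
kamel logs avro-serialize-action-binding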
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: avro-serialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: '{\"first\":\"Ada\",\"last\":\"Lovelace\"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-deserialize-action - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: avro-serialize-action properties: schema: \"{\\\"type\\\": \\\"record\\\", \\\"namespace\\\": \\\"com.example\\\", \\\"name\\\": \\\"FullName\\\", \\\"fields\\\": [{\\\"name\\\": \\\"first\\\", \\\"type\\\": \\\"string\\\"},{\\\"name\\\": \\\"last\\\", \\\"type\\\": \\\"string\\\"}]}\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel", "apply -f avro-serialize-action-binding.yaml", "kamel bind --name avro-serialize-action-binding timer-source?message='{\"first\":\"Ada\",\"last\":\"Lovelace\"}' --step json-deserialize-action --step avro-serialize-action -p step-1.schema='{\"type\": \"record\", \"namespace\": \"com.example\", \"name\": \"FullName\", \"fields\": [{\"name\": \"first\", \"type\": \"string\"},{\"name\": \"last\", \"type\": \"string\"}]}' channel:mychannel", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: avro-serialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: '{\"first\":\"Ada\",\"last\":\"Lovelace\"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-deserialize-action - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: avro-serialize-action properties: schema: \"{\\\"type\\\": \\\"record\\\", \\\"namespace\\\": \\\"com.example\\\", \\\"name\\\": \\\"FullName\\\", \\\"fields\\\": [{\\\"name\\\": \\\"first\\\", \\\"type\\\": \\\"string\\\"},{\\\"name\\\": \\\"last\\\", \\\"type\\\": \\\"string\\\"}]}\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic", "apply -f avro-serialize-action-binding.yaml", "kamel bind --name avro-serialize-action-binding timer-source?message='{\"first\":\"Ada\",\"last\":\"Lovelace\"}' --step json-deserialize-action --step avro-serialize-action -p step-1.schema='{\"type\": \"record\", \"namespace\": \"com.example\", \"name\": \"FullName\", \"fields\": [{\"name\": \"first\", \"type\": \"string\"},{\"name\": \"last\", \"type\": \"string\"}]}' kafka.strimzi.io/v1beta1:KafkaTopic:my-topic" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/kamelets_reference/avro-serialize-action
5.27. cluster-glue
5.27. cluster-glue 5.27.1. RHBA-2012:0942 - cluster-glue bug fix update Updated cluster-glue packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The cluster-glue packages contain a collection of common tools that are useful for writing cluster managers such as Pacemaker. Bug Fixes BZ# 758127 Previously, the environment variable "LRMD_MAX_CHILDREN" from the program /etc/sysconfig/pacemaker was not properly evaluated. As a result, the "max_child_count" variable in the Local Resource Management Daemon (lrmd) was not modified. With this update, the bug has been fixed so that the environment variable "LRMD_MAX_CHILDREN" is evaluated as expected. BZ# 786746 Previously, if Pacemaker attempted to cancel a recurring operation while the operation was executed, the Local Resource Management Daemon (lrmd) did not cancel the operation correctly. As a result the operation was not removed from the repeat list. With this update, a canceled operation is now marked to be removed from the repeat operation list if it is canceled during the execution so that recurring canceled operations are never executed again. All cluster-glue users are advised to upgrade to these updated packages, which fix these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/cluster-glue
Chapter 16. Troubleshooting volume management in GNOME
Chapter 16. Troubleshooting volume management in GNOME Following are some common errors of volume management in GNOME and ways to resolve them. 16.1. Troubleshooting access to GVFS locations from non-GIO clients If you have problems accessing GVFS locations from your application, it might mean that it is not a native GIO client. Native GIO clients are typically all GNOME applications using GNOME libraries ( glib , gio ). The gvfs-fuse service is provided as a fallback for non-GIO clients. Prerequisite The gvfs-fuse package is installed. Procedure Ensure that gvfs-fuse is running. If gvfs-fuse is not running, log out and log back in. Red Hat does not recommend starting gvfs-fuse manually. Find the system user ID (UID) for the /run/user/ UID /gvfs/ path. The gvfsd-fuse daemon requires a path where it can expose its services. When the /run/user/ UID /gvfs/ path is unavailable, gvfsd-fuse uses the ~/.gvfs path. If gvfsd-fuse is still not running, start the gvfsd-fuse daemon: Now, the FUSE mount is available, and you can manually browse for the path in your application. Find the GVFS mounts under the /run/user/ UID /gvfs/ or ~/.gvfs locations. 16.2. Troubleshooting an invisible connected USB disk When you connect a flash drive, the GNOME Desktop might not display it. If your flash drive is not visible in Files , but you can see it in the Disks application, you can attempt to set the Show in user interface option in Disks . Procedure Open the Disks application. Select the disk in the side bar. Below Volumes , click Additional partition options > Edit Mount Options . Click Show in user interface . Confirm by clicking OK . If the flash drive is still not visible, you can try to physically remove the drive and try connecting it again. 16.3. Troubleshooting unknown or unwanted partitions listed in Files You might see unknown or unwanted partitions when you plug a disk in. For example, when you plug in a flash disk, it is automatically mounted and its volumes are shown in the Files side bar. Some devices have a special partition with backups or help files, which you might not want to see each time you plug in the device. Procedure Open the Disks application. Select the disk in the side bar. Below Volumes , click Additional partition options > Edit Mount Options . Deselect Show in user interface . Confirm by clicking OK . 16.4. Troubleshooting if a connection to the remote GVFS file system is unavailable There are a number of situations in which the client is unexpectedly and unwillingly disconnected from a virtual file system or a remote disk mount and is not reconnected automatically. You might see error messages in such situations. Several causes trigger such situations: The connection is interrupted. For example, your laptop is disconnected from the Wi-Fi. The user is inactive for some time and is disconnected by the server (idle timeout). The computer is resumed from sleep mode. Procedure Unmount the file system. Mount it again. If the connection drops frequently, check the settings in the Network panel in the GNOME Settings . 16.5. Troubleshooting a busy disk in GNOME If you receive a notification about your disk being busy, determine the programs that are accessing the disk. Then, you can end the programs that are running. You can also use the System Monitor application to kill the programs forcefully. Prerequisites The iotop utility is installed: Procedure Examine the list of open files. Run the lsof command to get the list of open files. If lsof is not available, run the ps ax command.
You can use System Monitor to display the running processes in a GUI. When you have determined the programs, terminate them using any of the following methods: On the command line, execute the kill command. In System Monitor , right-click the line with the program process name, and click End or Kill from the context menu. Additional resources kill man page on your system
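On the command line, the check and cleanup could look like the following sketch, where the mount point is a placeholder:
# List processes that hold files open under the busy mount point:
lsof /run/media/"$USER"/mydisk
# If lsof is not available, list running processes and look for the program:
ps ax
# End the offending process; replace 12345 with the PID reported above:
kill 12345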
[ "yum install gvfs-fuse", "ps ax | grep gvfsd-fuse", "id -u", "/usr/libexec/gvfsd-fuse -f /run/user/_UID_/gvfs", "yum install iotop" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_the_desktop_environment_in_rhel_8/troubleshooting-volume-management-in-gnome_using-the-desktop-environment-in-rhel-8
probe::staprun.remove_module
probe::staprun.remove_module Name probe::staprun.remove_module - Removing SystemTap instrumentation module Synopsis staprun.remove_module Values name the stap module name to be removed (without the .ko extension) Description Fires just before the call to remove the module.
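For example, a one-line SystemTap script that uses this probe to print the name of each module being removed might look like the following; it requires the systemtap packages and sufficient privileges:
stap -e 'probe staprun.remove_module { printf("removing module: %s\n", name) }'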
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-staprun-remove-module
3.7. Configure Pass-Through Authentication
3.7. Configure Pass-Through Authentication Procedure 3.1. Configure Pass-Through Authentication Set the security domain Change the JBoss Data Virtualization security domain to the same name as your application's security domain name in the transport section of the server configuration file. Note For this to work, the security domain must be a JAAS based login module and your client application must obtain its JBoss Data Virtualization connection using a local connection with the PassthroughAuthentication=true connection flag set.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/security_guide/configure_pass-through_authentication1
Chapter 3. Running an Ansible Playbook from Satellite
Chapter 3. Running an Ansible Playbook from Satellite You can run an Ansible Playbook on a host or host group by executing a remote job in Satellite. Limitation of host parameters in Ansible Playbook job templates When you execute an Ansible Playbook on multiple hosts, Satellite renders the playbook for all hosts in the batch, but only uses the rendered playbook of the first host to execute it on all hosts in the batch. Therefore, you cannot modify the behavior of the playbook per host by using a host parameter in the template control flow constructs. Host parameters are translated into Ansible variables, so you can use them to control the behavior in native Ansible constructs. For more information, see BZ#2282275 . Prerequisites Ansible plugin in Satellite is enabled. Remote job execution is configured. For more information, see Chapter 4, Configuring and setting up remote jobs . You have an Ansible Playbook ready to use. Procedure In the Satellite web UI, navigate to Monitor > Jobs . Click Run Job . In Job category , select Ansible Playbook . In Job template , select Ansible - Run playbook . Click Next . Select the hosts on which you want to run the playbook. In the playbook field, paste the content of your Ansible Playbook. Follow the wizard to complete setting up the remote job. For more information, see Section 4.21, "Executing a remote job" . Click Submit to run the Ansible Playbook on your hosts. Additional resources Alternatively, you can import Ansible Playbooks from Capsule Servers. For more information, see the following resources: Section 4.7, "Importing an Ansible Playbook by name" Section 4.8, "Importing all available Ansible Playbooks"
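After you submit the job, you can also follow it from the command line with hammer. The job ID and host name below are placeholders that you take from the output of the first command:
# List recent remote execution jobs and note the ID of the playbook run:
hammer job-invocation list
# Show the output of the job for one of the targeted hosts:
hammer job-invocation output --id 42 --host host01.example.com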
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_configurations_by_using_ansible_integration/running-an-ansible-playbook-from-satellite_ansible
Providing feedback on Red Hat build of OpenJDK documentation
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Create creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/configuring_red_hat_build_of_openjdk_21_on_rhel_with_fips/providing-direct-documentation-feedback_openjdk
Chapter 14. OpenSSH
Chapter 14. OpenSSH SSH (Secure Shell) is a protocol which facilitates secure communications between two systems using a client-server architecture and allows users to log into server host systems remotely. Unlike other remote communication protocols, such as FTP , Telnet , or rlogin , SSH encrypts the login session, making it difficult for intruders to collect unencrypted passwords. The ssh program is designed to replace older, less secure terminal applications used to log into remote hosts, such as telnet or rsh . A related program called scp replaces older programs designed to copy files between hosts, such as rcp . Because these older applications do not encrypt passwords transmitted between the client and the server, avoid them whenever possible. Using secure methods to log into remote systems decreases the risks for both the client system and the remote host. Red Hat Enterprise Linux includes the general OpenSSH package, openssh , as well as the OpenSSH server, openssh-server , and client, openssh-clients , packages. 14.1. The SSH Protocol 14.1.1. Why Use SSH? Potential intruders have a variety of tools at their disposal enabling them to disrupt, intercept, and re-route network traffic in an effort to gain access to a system. In general terms, these threats can be categorized as follows: Interception of communication between two systems The attacker can be somewhere on the network between the communicating parties, copying any information passed between them. He may intercept and keep the information, or alter the information and send it on to the intended recipient. This attack is usually performed using a packet sniffer , a rather common network utility that captures each packet flowing through the network, and analyzes its content. Impersonation of a particular host The attacker's system is configured to pose as the intended recipient of a transmission. If this strategy works, the user's system remains unaware that it is communicating with the wrong host. This attack can be performed using a technique known as DNS poisoning , or via so-called IP spoofing . In the first case, the intruder uses a cracked DNS server to point client systems to a maliciously duplicated host. In the second case, the intruder sends falsified network packets that appear to be from a trusted host. Both techniques intercept potentially sensitive information and, if the interception is made for hostile reasons, the results can be disastrous. If SSH is used for remote shell login and file copying, these security threats can be greatly diminished. This is because the SSH client and server use digital signatures to verify their identity. Additionally, all communication between the client and server systems is encrypted. Attempts to spoof the identity of either side of a communication do not work, since each packet is encrypted using a key known only by the local and remote systems. 14.1.2. Main Features The SSH protocol provides the following safeguards: No one can pose as the intended server After an initial connection, the client can verify that it is connecting to the same server it had connected to previously. No one can capture the authentication information The client transmits its authentication information to the server using strong, 128-bit encryption. No one can intercept the communication All data sent and received during a session is transferred using 128-bit encryption, making intercepted transmissions extremely difficult to decrypt and read.
Additionally, it also offers the following options: It provides secure means to use graphical applications over a network Using a technique called X11 forwarding , the client can forward X11 ( X Window System ) applications from the server. Note that if you set the ForwardX11Trusted option to yes or you use SSH with the -Y option, you bypass the X11 SECURITY extension controls, which can result in a security threat. It provides a way to secure otherwise insecure protocols The SSH protocol encrypts everything it sends and receives. Using a technique called port forwarding , an SSH server can become a conduit to securing otherwise insecure protocols, like POP , and increasing overall system and data security. It can be used to create a secure channel The OpenSSH server and client can be configured to create a tunnel similar to a virtual private network for traffic between server and client machines. It supports the Kerberos authentication OpenSSH servers and clients can be configured to authenticate using the GSSAPI (Generic Security Services Application Program Interface) implementation of the Kerberos network authentication protocol. 14.1.3. Protocol Versions Two varieties of SSH currently exist: version 1 and version 2. The OpenSSH suite under Red Hat Enterprise Linux uses SSH version 2, which has an enhanced key exchange algorithm not vulnerable to the known exploit in version 1. However, for compatibility reasons, the OpenSSH suite does support version 1 connections as well, although version 1 is disabled by default and needs to be enabled in the configuration files. Important For maximum security, avoid using SSH version 1 and use SSH version 2-compatible servers and clients whenever possible. 14.1.4. Event Sequence of an SSH Connection The following series of events help protect the integrity of SSH communication between two hosts. A cryptographic handshake is made so that the client can verify that it is communicating with the correct server. The transport layer of the connection between the client and remote host is encrypted using a symmetric cipher. The client authenticates itself to the server. The client interacts with the remote host over the encrypted connection. 14.1.4.1. Transport Layer The primary role of the transport layer is to facilitate safe and secure communication between the two hosts at the time of authentication and during subsequent communication. The transport layer accomplishes this by handling the encryption and decryption of data, and by providing integrity protection of data packets as they are sent and received. The transport layer also provides compression, speeding the transfer of information. Once an SSH client contacts a server, key information is exchanged so that the two systems can correctly construct the transport layer. The following steps occur during this exchange: Keys are exchanged The public key encryption algorithm is determined The symmetric encryption algorithm is determined The message authentication algorithm is determined The hash algorithm is determined During the key exchange, the server identifies itself to the client with a unique host key . If the client has never communicated with this particular server before, the server's host key is unknown to the client and it does not connect. OpenSSH notifies the user that the authenticity of the host cannot be established and prompts the user to accept or reject it. The user is expected to independently verify the new host key before accepting it. 
In subsequent connections, the server's host key is checked against the saved version on the client, providing confidence that the client is indeed communicating with the intended server. If, in the future, the host key no longer matches, the user must remove the client's saved version before a connection can occur. Warning Always verify the integrity of a new SSH server. During the initial contact, an attacker can pretend to be the intended SSH server to the local system without being recognized. To verify the integrity of a new SSH server, contact the server administrator before the first connection or if a host key mismatch occurs. SSH is designed to work with almost any kind of public key algorithm or encoding format. After an initial key exchange creates a hash value used for exchanges and a shared secret value, the two systems immediately begin calculating new keys and algorithms to protect authentication and future data sent over the connection. After a certain amount of data has been transmitted using a given key and algorithm (the exact amount depends on the SSH implementation), another key exchange occurs, generating another set of hash values and a new shared secret value. Even if an attacker is able to determine the hash and shared secret value, this information is only useful for a limited period of time. 14.1.4.2. Authentication Once the transport layer has constructed a secure tunnel to pass information between the two systems, the server tells the client the different authentication methods supported, such as using a private key-encoded signature or typing a password. The client then tries to authenticate itself to the server using one of these supported methods. SSH servers and clients can be configured to allow different types of authentication, which gives each side the optimal amount of control. The server can decide which encryption methods it supports based on its security model, and the client can choose the order of authentication methods to attempt from the available options. 14.1.4.3. Channels After a successful authentication over the SSH transport layer, multiple channels are opened via a technique called multiplexing [4] . Each of these channels handles communication for different terminal sessions and for forwarded X11 sessions. Both clients and servers can create a new channel. Each channel is then assigned a different number on each end of the connection. When the client attempts to open a new channel, the client sends the channel number along with the request. This information is stored by the server and is used to direct communication to that channel. This is done so that different types of sessions do not affect one another and so that when a given session ends, its channel can be closed without disrupting the primary SSH connection. Channels also support flow-control , which allows them to send and receive data in an orderly fashion. In this way, data is not sent over the channel until the client receives a message that the channel is open. The client and server negotiate the characteristics of each channel automatically, depending on the type of service the client requests and the way the user is connected to the network. This allows great flexibility in handling different types of remote connections without having to change the basic infrastructure of the protocol. [4] A multiplexed connection consists of several signals being sent over a shared, common medium. With SSH, different channels are sent over a common secure connection.
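As an illustration of the port forwarding and X11 forwarding techniques described above, both are requested from the ssh client command line; the host names and port numbers in this sketch are placeholders rather than values taken from this guide. A local port can be tunneled to a remote POP server through the SSH server with ssh -L 1100:mail.example.com:110 user@ssh-server.example.com , and X11 forwarding for a single session can be requested with ssh -X user@ssh-server.example.com (or -Y for trusted forwarding, subject to the security caveat noted in the Main Features section).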
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/ch-OpenSSH
9.5. Enabling and Disabling User Accounts
9.5. Enabling and Disabling User Accounts User accounts can be deactivated or disabled . A disabled user cannot log into IdM or its related services (like Kerberos) and he cannot perform any tasks. However, the user account still exists within Identity Management and all of the associated information remains unchanged. Note Any existing connections remain valid until the Kerberos TGT and other tickets expire. Once the ticket expires, the user cannot renew the ticket. 9.5.1. From the Web UI Multiple users can be disabled from the full users list by selecting the checkboxes by the desired users and then clicking the Disable link at the top of the list. Figure 9.2. Disable/Enable Options at the Top of the Users List A user account can also be disabled from the user's individual entry page. Open the Identity tab, and select the Users subtab. Click the name of the user to deactivate or activate. In the actions drop-down menu, select the Disable item. Click the Accept button. When a user account is disabled, it is signified by a minus (-) icon for the user status in the user list and by the username on the entry page. Additionally, the text for the user is gray (to show it is inactive) instead of black. Figure 9.3. Disable Icon for User Status 9.5.2. From the Command Line Users are enabled and disabled using user-enable and user-disable commands. All that is required is the user login. For example:
[ "[bjensen@server ~]USD ipa user-disable jsmith" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/activating_and_deactivating_user_accounts
13.4. Setting a Partition Type
13.4. Setting a Partition Type The partition type, not to be confused with the file system type, is used by a running system only rarely. However, the partition type matters to on-the-fly generators, such as systemd-gpt-auto-generator , which use the partition type to, for example, automatically identify and mount devices. You can start the fdisk utility and use the t command to set the partition type. The following example shows how to change the partition type of the first partition to 0x83, the default on Linux: The parted utility provides some control of partition types by trying to map the partition type to 'flags', which is not convenient for end users. The parted utility can handle only certain partition types, for example LVM or RAID. To remove, for example, the lvm flag from the first partition with parted , use: For a list of commonly used partition types and hexadecimal numbers used to represent them, see the Partition Types table in the Partitions: Turning One Drive Into Many appendix of the Red Hat Enterprise Linux 7 Installation Guide .
[ "fdisk /dev/sdc Command (m for help): t Selected partition 1 Partition type (type L to list all types): 83 Changed type of partition 'Linux LVM' to 'Linux'.", "parted /dev/sdc 'set 1 lvm off'" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/sec-setting-partition-type
Chapter 17. EphemeralStorage schema reference
Chapter 17. EphemeralStorage schema reference Used in: JbodStorage , KafkaClusterSpec , KafkaNodePoolSpec , ZookeeperClusterSpec The type property is a discriminator that distinguishes use of the EphemeralStorage type from PersistentClaimStorage . It must have the value ephemeral for the type EphemeralStorage . Property Property type Description id integer Storage identification number. It is mandatory only for storage volumes defined in a storage of type 'jbod'. sizeLimit string When type=ephemeral, defines the total amount of local storage required for this EmptyDir volume (for example 1Gi). type string Must be ephemeral . kraftMetadata string (one of [shared]) Specifies whether this volume should be used for storing KRaft metadata. This property is optional. When set, the only currently supported value is shared . At most one volume can have this property set.
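For orientation, a minimal sketch of how this type typically appears in a Kafka custom resource is storage: type: ephemeral sizeLimit: 2Gi under spec.kafka (the 2Gi value is an illustrative assumption, not a recommendation); when the volume is instead declared inside a 'jbod' storage array, the same block additionally carries the mandatory id property, for example id: 0 .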
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-EphemeralStorage-reference
Chapter 38. Failover, load balancing and high availability in Identity Management
Chapter 38. Failover, load balancing and high availability in Identity Management Identity Management (IdM) comes with its own failover, load-balancing and high-availability features, for example LDAP identity domain and certificate replication, and service discovery and failover support provided by the System Security Services Daemon (SSSD). IdM is thus equipped with: Client-side failover capability Server-side service availability Client-side failover capability SSSD obtains service (SRV) resource records from DNS servers that the client discovers automatically. Based on the SRV records, SSSD maintains a list of available IdM servers, including the information about the connectivity of these servers. If one IdM server goes offline or is overloaded, SSSD already knows which other server to communicate with. If DNS autodiscovery is not available, IdM clients should be configured at least with a fixed list of IdM servers to retrieve SRV records from in case of a failure. During the installation of an IdM client, the installer searches for _ldap._tcp. DOMAIN DNS SRV records for all domains that are parent to the client's hostname. In this way, the installer retrieves the hostname of the IdM server that is most conveniently located for communicating with the client, and uses its domain to configure the client components. Server-side service availability IdM allows replicating servers in geographically dispersed data centers to shorten the path between IdM clients and the nearest accessible server. Replicating servers allows spreading the load and scaling for more clients. The IdM replication mechanism provides active/active service availability. Services at all IdM replicas are readily available at the same time. Note Trying to combine IdM with other load balancing, HA software is not recommended. Many third-party high availability (HA) solutions assume active/passive scenarios and cause unneeded service interruption to IdM availability. Other solutions use virtual IPs or a single hostname per clustered service. All these methods do not typically work well with the type of service availability provided by the IdM solution. They also integrate very poorly with Kerberos, decreasing the overall security and stability of the deployment. It is also discouraged to deploy other, unrelated services on IdM masters, especially if these services are supposed to be highly available and use solutions that modify networking configuration to provide HA features. For more details about using load balancers when Kerberos is used for authentication, see this blog post .
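Where DNS autodiscovery is not available, the fixed list of IdM servers mentioned above is usually recorded on the client in the IdM domain section of /etc/sssd/sssd.conf , for example ipa_server = _srv_, server1.example.com, server2.example.com (the host names are placeholders); the leading _srv_ entry keeps SRV-record discovery as the preferred mechanism, with the listed servers acting as fallbacks if discovery fails.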
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/load-balancing
Chapter 1. Bare Metal Provisioning service (ironic) functionality
Chapter 1. Bare Metal Provisioning service (ironic) functionality You use the Bare Metal Provisioning service (ironic) components to provision and manage physical machines as bare-metal instances for your cloud users. To provision and manage bare-metal instances, the Bare Metal Provisioning service interacts with the following Red Hat OpenStack Services on OpenShift (RHOSO) services: The Compute service (nova) provides scheduling, tenant quotas, and a user-facing API for virtual machine instance management. The Identity service (keystone) provides request authentication and assists the Bare Metal Provisioning service to locate other RHOSO services. The Image service (glance) manages disk and instance images and image metadata. The Networking service (neutron) provides DHCP and network configuration, and provisions the virtual or physical networks that instances connect to on boot. The Object Storage service (swift) exposes temporary image URLs for some drivers. Bare Metal Provisioning service components The Bare Metal Provisioning service consists of services, named ironic-* . The following services are the core Bare Metal Provisioning services: Bare Metal Provisioning API ( ironic-api ) This service provides the external REST API to users. The API sends application requests to the Bare Metal Provisioning conductor over remote procedure call (RPC). Bare Metal Provisioning conductor ( ironic-conductor ) This service uses drivers to perform the following bare-metal node management tasks: Adds, edits, and deletes bare-metal nodes. Powers bare-metal nodes on and off with IPMI, Redfish, or other vendor-specific protocol. Provisions, deploys, and cleans bare metal nodes. Bare Metal Provisioning inspector ( ironic-inspector ) This service discovers the hardware properties of a bare-metal node that are required for scheduling bare-metal instances, and creates the Bare Metal Provisioning service ports for the discovered ethernet MACs. Bare Metal Provisioning database This database tracks hardware information and state. Bare Metal Provisioning agent ( ironic-python-agent ) This service runs in a temporary ramdisk to provide ironic-conductor and ironic-inspector services with remote access, in-band hardware control, and hardware introspection. Provisioning a bare-metal instance You can configure the Bare Metal Provisioning service to use PXE, iPXE, or virtual media to provision physical machines as bare-metal instances: PXE or iPXE: The Bare Metal Provisioning service provisions the bare-metal instances by using network boot. Virtual media: The Bare Metal Provisioning service provisions the bare-metal instances by creating a temporary ISO image and requesting the Baseboard Management Controller (BMC) to attach and boot to that image.
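Once the service is deployed, its view of the hardware can be inspected from the bare-metal CLI; for example, openstack baremetal node list summarizes each registered node with its power and provisioning state, and openstack baremetal node show <node> (the node name or UUID is a placeholder) displays the driver and the properties recorded during inspection.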
null
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_the_bare_metal_provisioning_service/assembly_bare-metal-provisioning-service-functionality
Chapter 5. Configuring the Red Hat Ansible Automation Platform Operator on Red Hat OpenShift Container Platform
Chapter 5. Configuring the Red Hat Ansible Automation Platform Operator on Red Hat OpenShift Container Platform The platform gateway for Ansible Automation Platform enables you to manage the following Ansible Automation Platform components to form a single user interface: Automation controller Automation hub Event-Driven Ansible Red Hat Ansible Lightspeed (This feature is disabled by default, you must opt in to use it.) Before you can deploy the platform gateway you must have Ansible Automation Platform Operator installed in a namespace. If you have not installed Ansible Automation Platform Operator see Installing the Red Hat Ansible Automation Platform Operator on Red Hat OpenShift Container Platform . Note Platform gateway is only available under Ansible Automation Platform Operator version 2.5. Every component deployed under Ansible Automation Platform Operator 2.5 defaults to version 2.5. If you have the Ansible Automation Platform Operator and some or all of the Ansible Automation Platform components installed see Deploying the platform gateway with existing Ansible Automation Platform components for how to proceed. 5.1. Linking your components to the platform gateway After installing the Ansible Automation Platform Operator in your namespace you can set up your Ansible Automation Platform instance. Then link all the platform components to a single user interface. Procedure Log in to Red Hat OpenShift Container Platform. Navigate to Operators Installed Operators . Select your Ansible Automation Platform Operator deployment. Select the Details tab. On the Ansible Automation Platform tile click Create instance . From the Create Ansible Automation Platform page enter a name for your instance in the Name field. Click YAML view and paste the following: spec: database: resource_requirements: requests: cpu: 200m memory: 512Mi storage_requirements: requests: storage: 100Gi controller: disabled: false eda: disabled: false hub: disabled: false storage_type: file file_storage_storage_class: <read-write-many-storage-class> file_storage_size: 10Gi Click Create . Verification Go to your Ansible Automation Platform Operator deployment and click All instances to verify if all instances deployed correctly. You should see the Ansible Automation Platform instance and the deployed AutomationController , EDA , and AutomationHub instances here. Alternatively you can check by the command line, run: oc get route 5.2. Accessing the platform gateway You should use the Ansible Automation Platform instance as your default. This instance links the automation controller, automation hub, and Event-Driven Ansible deployments to a single interface. Procedure To access your Ansible Automation Platform instance: Log in to Red Hat OpenShift Container Platform. Navigate to Networking Routes Click the link under Location for Ansible Automation Platform . This redirects you to the Ansible Automation Platform login page. Enter "admin" as your username in the Username field. For the password you need to: Go to to Workloads Secrets . Click <your instance name>-admin-password and copy the password. Paste the password into the Password field. Click Login . Apply your subscription: Click Subscription manifest or Username/password . Upload your manifest or enter your username and password. Select your subscription from the Subscription list. Click . This redirects you to the Analytics page. Click . Select the I agree to the terms of the license agreement checkbox. Click . 
You now have access to the platform gateway user interface. If you cannot access the Ansible Automation Platform see Frequently asked questions on platform gateway for help with troubleshooting and debugging. 5.3. Deploying the platform gateway with existing Ansible Automation Platform components You can link any components of the Ansible Automation Platform, that you have already installed to a new Ansible Automation Platform instance. The following procedure simulates a scenario where you have automation controller as an existing component and want to add automation hub and Event-Driven Ansible. Procedure Log in to Red Hat OpenShift Container Platform. Navigate to Operators Installed Operators . Select your Ansible Automation Platform Operator deployment. Click Subscriptions and edit your Update channel to stable-2.5 . Click Details and on the Ansible Automation Platform tile click Create instance . From the Create Ansible Automation Platform page enter a name for your instance in the Name field. When deploying an Ansible Automation Platform instance, ensure that auto_update is set to the default value of false on your existing automation controller instance in order for the integration to work. Click YAML view and copy in the following: apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: example-aap namespace: aap spec: database: resource_requirements: requests: cpu: 200m memory: 512Mi storage_requirements: requests: storage: 100Gi # Platform image_pull_policy: IfNotPresent # Components controller: disabled: false name: existing-controller-name eda: disabled: false hub: disabled: false ## uncomment if using file storage for Content pod storage_type: file file_storage_storage_class: <your-read-write-many-storage-class> file_storage_size: 10Gi ## uncomment if using S3 storage for Content pod # storage_type: S3 # object_storage_s3_secret: example-galaxy-object-storage ## uncomment if using Azure storage For new components, if you do not specify a name, a default name is generated. Click Create . To access your new instance, see Accessing the platform gateway . Note If you have an existing controller with a managed Postgres pod, after creating the Ansible Automation Platform resource your automation controller instance will continue to use that original Postgres pod. If you were to do a fresh install you would have a single Postgres managed pod for all instances. 5.4. Configuring an external database for platform gateway on Red Hat Ansible Automation Platform Operator There are two scenarios for deploying Ansible Automation Platform with an external database: Scenario Action required Fresh install You must specify a single external database instance for the platform to use for the following: Platform gateway Automation controller Automation hub Event-Driven Ansible Red Hat Ansible Lightspeed (If enabled) See the aap-configuring-external-db-all-default-components.yml example in the 14.1. Custom resources section for help with this. If using Red Hat Ansible Lightspeed, use the aap-configuring-external-db-with-lightspeed-enabled.yml example. Existing external database in 2.4 Your existing external database remains the same after upgrading but you must specify the external-postgres-configuration-gateway (spec.database.database_secret) on the Ansible Automation Platform custom resource. To deploy Ansible Automation Platform with an external database, you must first create a Kubernetes secret with credentials for connecting to the database. 
By default, the Ansible Automation Platform Operator automatically creates and configures a managed PostgreSQL pod in the same namespace as your Ansible Automation Platform deployment. You can deploy Ansible Automation Platform with an external database instead of the managed PostgreSQL pod that the Ansible Automation Platform Operator automatically creates. Using an external database lets you share and reuse resources and manually manage backups, upgrades, and performance optimizations. Note The same external database (PostgreSQL instance) can be used for both automation hub, automation controller, and platform gateway as long as the database names are different. In other words, you can have multiple databases with different names inside a single PostgreSQL instance. The following section outlines the steps to configure an external database for your platform gateway on a Ansible Automation Platform Operator. Prerequisite The external database must be a PostgreSQL database that is the version supported by the current release of Ansible Automation Platform. Note Ansible Automation Platform 2.5 supports PostgreSQL 15. Procedure The external postgres instance credentials and connection information must be stored in a secret, which is then set on the platform gateway spec. Create a postgres_configuration_secret YAML file, following the template below: apiVersion: v1 kind: Secret metadata: name: external-postgres-configuration namespace: <target_namespace> 1 stringData: host: "<external_ip_or_url_resolvable_by_the_cluster>" 2 port: "<external_port>" 3 database: "<desired_database_name>" username: "<username_to_connect_as>" password: "<password_to_connect_with>" 4 type: "unmanaged" type: Opaque 1 Namespace to create the secret in. This should be the same namespace you want to deploy to. 2 The resolvable hostname for your database node. 3 External port defaults to 5432 . 4 Value for variable password should not contain single or double quotes (', ") or backslashes (\) to avoid any issues during deployment, backup or restoration. Apply external-postgres-configuration-secret.yml to your cluster using the oc create command. USD oc create -f external-postgres-configuration-secret.yml Note The following example is for a platform gateway deployment. To configure an external database for all components, use the aap-configuring-external-db-all-default-components.yml example in the 14.1. Custom resources section. When creating your AnsibleAutomationPlatform custom resource object, specify the secret on your spec, following the example below: apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: example-aap Namespace: aap spec: database: database_secret: automation-platform-postgres-configuration 5.5. Enabling HTTPS redirect for single sign-on (SSO) for platform gateway on OpenShift Container Platform HTTPS redirect for SAML, allows you to log in once and access all of the platform gateway without needing to reauthenticate. Prerequisites You have successfully configured SAML in the gateway from the Ansible Automation Platform Operator. Refer to Configuring SAML authentication for help with this. Procedure Log in to Red Hat OpenShift Container Platform. Go to Operators Installed Operators . Select your Ansible Automation Platform Operator deployment. Select All Instances and go to your AnsibleAutomationPlatform instance. Click the ... icon and then select Edit AnsibleAutomationPlatform . 
In the YAML view paste the following YAML code under the spec: section: spec: extra_settings: - setting: REDIRECT_IS_HTTPS value: '"True"' Click Save . Verification After you have added the REDIRECT_IS_HTTPS setting, wait for the pod to redeploy automatically. You can verify this setting makes it into the pod by running: oc exec -it <gateway-pod-name> -- grep REDIRECT /etc/ansible-automation-platform/gateway/settings.py 5.6. Frequently asked questions on platform gateway If I delete my Ansible Automation Platform deployment will I still have access to automation controller? No, automation controller, automation hub, and Event-Driven Ansible are nested within the deployment and are also deleted. Something went wrong with my deployment but I'm not sure what. How can I find out? You can follow along in the command line while the operator is reconciling; this can be helpful for debugging. Alternatively you can click into the deployment instance to see the status conditions being updated as the deployment goes on. Is it still possible to view individual component logs? When troubleshooting you should examine the Ansible Automation Platform instance for the main logs and then each individual component ( EDA , AutomationHub , AutomationController ) for more specific information. Where can I view the condition of an instance? To display status conditions click into the instance, and look under the Details or Events tab. Alternatively, to display the status conditions you can run the get command: oc get automationcontroller <instance-name> -o json | jq Can I track my migration in real time? To help track the status of the migration or to understand why migration might have failed you can look at the migration logs as they are running. Use the logs command: oc logs fresh-install-controller-migration-4.6.0-jwfm6 -f I have configured my SAML but authentication fails with this error: "Unable to complete social auth login" What can I do? You must update your Ansible Automation Platform instance to include the REDIRECT_IS_HTTPS extra setting. See Enabling single sign-on (SSO) for platform gateway on OpenShift Container Platform for help with this.
[ "spec: database: resource_requirements: requests: cpu: 200m memory: 512Mi storage_requirements: requests: storage: 100Gi controller: disabled: false eda: disabled: false hub: disabled: false storage_type: file file_storage_storage_class: <read-write-many-storage-class> file_storage_size: 10Gi", "apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: example-aap namespace: aap spec: database: resource_requirements: requests: cpu: 200m memory: 512Mi storage_requirements: requests: storage: 100Gi # Platform image_pull_policy: IfNotPresent # Components controller: disabled: false name: existing-controller-name eda: disabled: false hub: disabled: false ## uncomment if using file storage for Content pod storage_type: file file_storage_storage_class: <your-read-write-many-storage-class> file_storage_size: 10Gi ## uncomment if using S3 storage for Content pod # storage_type: S3 # object_storage_s3_secret: example-galaxy-object-storage ## uncomment if using Azure storage", "apiVersion: v1 kind: Secret metadata: name: external-postgres-configuration namespace: <target_namespace> 1 stringData: host: \"<external_ip_or_url_resolvable_by_the_cluster>\" 2 port: \"<external_port>\" 3 database: \"<desired_database_name>\" username: \"<username_to_connect_as>\" password: \"<password_to_connect_with>\" 4 type: \"unmanaged\" type: Opaque", "oc create -f external-postgres-configuration-secret.yml", "apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: example-aap Namespace: aap spec: database: database_secret: automation-platform-postgres-configuration", "spec: extra_settings: - setting: REDIRECT_IS_HTTPS value: '\"True\"'", "exec -it <gateway-pod-name> -- grep REDIRECT /etc/ansible-automation-platform/gateway/settings.py" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/installing_on_openshift_container_platform/configure-aap-operator_operator-platform-doc
Chapter 9. GFS2 tracepoints and the glock debugfs interface
Chapter 9. GFS2 tracepoints and the glock debugfs interface This documentation on both the GFS2 tracepoints and the glock debugfs interface is intended for advanced users who are familiar with file system internals and who would like to learn more about the design of GFS2 and how to debug GFS2-specific issues. The following sections describe GFS2 tracepoints and the GFS2 glocks file. 9.1. GFS2 tracepoint types There are currently three types of GFS2 tracepoints: glock (pronounced "gee-lock") tracepoints, bmap tracepoints and log tracepoints. These can be used to monitor a running GFS2 file system. Tracepoints are particularly useful when a problem, such as a hang or performance issue, is reproducible and thus the tracepoint output can be obtained during the problematic operation. In GFS2, glocks are the primary cache control mechanism and they are the key to understanding the performance of the core of GFS2. The bmap (block map) tracepoints can be used to monitor block allocations and block mapping (lookup of already allocated blocks in the on-disk metadata tree) as they happen and check for any issues relating to locality of access. The log tracepoints keep track of the data being written to and released from the journal and can provide useful information about that part of GFS2. The tracepoints are designed to be as generic as possible. This should mean that it will not be necessary to change the API during the course of Red Hat Enterprise Linux 9. On the other hand, users of this interface should be aware that this is a debugging interface and not part of the normal Red Hat Enterprise Linux 9 API set, and as such Red Hat makes no guarantees that changes in the GFS2 tracepoints interface will not occur. Tracepoints are a generic feature of Red Hat Enterprise Linux and their scope goes well beyond GFS2. In particular they are used to implement the blktrace infrastructure and the blktrace tracepoints can be used in combination with those of GFS2 to gain a fuller picture of the system performance. Due to the level at which the tracepoints operate, they can produce large volumes of data in a very short period of time. They are designed to put a minimum load on the system when they are enabled, but it is inevitable that they will have some effect. Filtering events by a variety of means can help reduce the volume of data and help focus on obtaining just the information which is useful for understanding any particular situation. 9.2. Tracepoints The tracepoints can be found under the /sys/kernel/debug/tracing/ directory assuming that debugfs is mounted in the standard place at the /sys/kernel/debug directory. The events subdirectory contains all the tracing events that may be specified and, provided the gfs2 module is loaded, there will be a gfs2 subdirectory containing further subdirectories, one for each GFS2 event. The contents of the /sys/kernel/debug/tracing/events/gfs2 directory should look roughly like the following: To enable all the GFS2 tracepoints, enter the following command: To enable a specific tracepoint, there is an enable file in each of the individual event subdirectories. The same is true of the filter file which can be used to set an event filter for each event or set of events. The meaning of the individual events is explained in more detail below. The output from the tracepoints is available in ASCII or binary format. This appendix does not currently cover the binary interface. The ASCII interface is available in two ways. 
To list the current content of the ring buffer, you can enter the following command: This interface is useful in cases where you are using a long-running process for a certain period of time and, after some event, want to look back at the latest captured information in the buffer. An alternative interface, /sys/kernel/debug/tracing/trace_pipe , can be used when all the output is required. Events are read from this file as they occur; there is no historical information available through this interface. The format of the output is the same from both interfaces and is described for each of the GFS2 events in the later sections of this appendix. A utility called trace-cmd is available for reading tracepoint data. For more information about this utility, see http://lwn.net/Articles/341902/ . The trace-cmd utility can be used in a similar way to the strace utility, for example to run a command while gathering trace data from various sources. 9.3. Glocks To understand GFS2, the most important concept to understand, and the one which sets it aside from other file systems, is the concept of glocks. In terms of the source code, a glock is a data structure that brings together the DLM and caching into a single state machine. Each glock has a 1:1 relationship with a single DLM lock, and provides caching for that lock state so that repetitive operations carried out from a single node of the file system do not have to repeatedly call the DLM, and thus they help avoid unnecessary network traffic. There are two broad categories of glocks, those which cache metadata and those which do not. The inode glocks and the resource group glocks both cache metadata, other types of glocks do not cache metadata. The inode glock is also involved in the caching of data in addition to metadata and has the most complex logic of all glocks. Table 9.1. Glock Modes and DLM Lock Modes Glock mode DLM lock mode Notes UN IV/NL Unlocked (no DLM lock associated with glock or NL lock depending on I flag) SH PR Shared (protected read) lock EX EX Exclusive lock DF CW Deferred (concurrent write) used for Direct I/O and file system freeze Glocks remain in memory until either they are unlocked (at the request of another node or at the request of the VM) and there are no local users. At that point they are removed from the glock hash table and freed. When a glock is created, the DLM lock is not associated with the glock immediately. The DLM lock becomes associated with the glock upon the first request to the DLM, and if this request is successful then the 'I' (initial) flag will be set on the glock. The "Glock Flags" table in The glock debugfs interface shows the meanings of the different glock flags. Once the DLM has been associated with the glock, the DLM lock will always remain at least at NL (Null) lock mode until the glock is to be freed. A demotion of the DLM lock from NL to unlocked is always the last operation in the life of a glock. Each glock can have a number of "holders" associated with it, each of which represents one lock request from the higher layers. System calls relating to GFS2 queue and dequeue holders from the glock to protect the critical section of code. The glock state machine is based on a work queue. For performance reasons, tasklets would be preferable; however, in the current implementation we need to submit I/O from that context which prohibits their use. Note Workqueues have their own tracepoints which can be used in combination with the GFS2 tracepoints. 
The following table shows what state may be cached under each of the glock modes and whether that cached state may be dirty. This applies to both inode and resource group locks, although there is no data component for the resource group locks, only metadata. Table 9.2. Glock Modes and Data Types Glock mode Cache Data Cache Metadata Dirty Data Dirty Metadata UN No No No No SH Yes Yes No No DF No Yes No No EX Yes Yes Yes Yes 9.4. The glock debugfs interface The glock debugfs interface allows the visualization of the internal state of the glocks and the holders and it also includes some summary details of the objects being locked in some cases. Each line of the file either begins G: with no indentation (which refers to the glock itself) or it begins with a different letter, indented with a single space, and refers to the structures associated with the glock immediately above it in the file (H: is a holder, I: an inode, and R: a resource group). Here is an example of what the content of this file might look like: The above example is a series of excerpts (from an approximately 18MB file) generated by the command cat /sys/kernel/debug/gfs2/unity:myfs/glocks >my.lock during a run of the postmark benchmark on a single node GFS2 file system. The glocks in the figure have been selected in order to show some of the more interesting features of the glock dumps. The glock states are either EX (exclusive), DF (deferred), SH (shared) or UN (unlocked). These states correspond directly with DLM lock modes except for UN which may represent either the DLM null lock state, or that GFS2 does not hold a DLM lock (depending on the I flag as explained above). The s: field of the glock indicates the current state of the lock and the same field in the holder indicates the requested mode. If the lock is granted, the holder will have the H bit set in its flags (f: field). Otherwise, it will have the W wait bit set. The n: field (number) indicates the number associated with each item. For glocks, that is the type number followed by the glock number so that in the above example, the first glock is n:5/75320; which indicates an iopen glock which relates to inode 75320. In the case of inode and iopen glocks, the glock number is always identical to the inode's disk block number. Note The glock numbers (n: field) in the debugfs glocks file are in hexadecimal, whereas the tracepoints output lists them in decimal. This is for historical reasons; glock numbers were always written in hex, but decimal was chosen for the tracepoints so that the numbers could easily be compared with the other tracepoint output (from blktrace for example) and with output from stat (1). The full listing of all the flags for both the holder and the glock are set out in the "Glock Flags" table, below, and the "Glock Holder Flags" table in Glock holders . The content of lock value blocks is not currently available through the glock debugfs interface. The following table shows the meanings of the different glock types. Table 9.3. Glock Types Type number Lock type Use 1 trans Transaction lock 2 inode Inode metadata and data 3 rgrp Resource group metadata 4 meta The superblock 5 iopen Inode last closer detection 6 flock flock (2) syscall 8 quota Quota operations 9 journal Journal mutex One of the more important glock flags is the l (locked) flag. This is the bit lock that is used to arbitrate access to the glock state when a state change is to be performed. 
It is set when the state machine is about to send a remote lock request through the DLM, and only cleared when the complete operation has been performed. Sometimes this can mean that more than one lock request will have been sent, with various invalidations occurring between times. The following table shows the meanings of the different glock flags. Table 9.4. Glock Flags Flag Name Meaning d Pending demote A deferred (remote) demote request D Demote A demote request (local or remote) f Log flush The log needs to be committed before releasing this glock F Frozen Replies from remote nodes ignored - recovery is in progress. i Invalidate in progress In the process of invalidating pages under this glock I Initial Set when DLM lock is associated with this glock l Locked The glock is in the process of changing state L LRU Set when the glock is on the LRU list o Object Set when the glock is associated with an object (that is, an inode for type 2 glocks, and a resource group for type 3 glocks) p Demote in progress The glock is in the process of responding to a demote request q Queued Set when a holder is queued to a glock, and cleared when the glock is held, but there are no remaining holders. Used as part of the algorithm that calculates the minimum hold time for a glock. r Reply pending Reply received from remote node is awaiting processing y Dirty Data needs flushing to disk before releasing this glock When a remote callback is received from a node that wants to get a lock in a mode that conflicts with that being held on the local node, then one or other of the two flags D (demote) or d (demote pending) is set. In order to prevent starvation conditions when there is contention on a particular lock, each lock is assigned a minimum hold time. A node which has not yet had the lock for the minimum hold time is allowed to retain that lock until the time interval has expired. If the time interval has expired, then the D (demote) flag will be set and the state required will be recorded. In that case, the next time there are no granted locks on the holders queue, the lock will be demoted. If the time interval has not expired, then the d (demote pending) flag is set instead. This also schedules the state machine to clear d (demote pending) and set D (demote) when the minimum hold time has expired. The I (initial) flag is set when the glock has been assigned a DLM lock. This happens when the glock is first used and the I flag will then remain set until the glock is finally freed (at which point the DLM lock is unlocked). 9.5. Glock holders The following table shows the meanings of the different glock holder flags. Table 9.5. Glock Holder Flags Flag Name Meaning a Async Do not wait for glock result (will poll for result later) A Any Any compatible lock mode is acceptable c No cache When unlocked, demote DLM lock immediately e No expire Ignore subsequent lock cancel requests E Exact Must have exact lock mode F First Set when holder is the first to be granted for this lock H Holder Indicates that requested lock is granted p Priority Enqueue holder at the head of the queue t Try A "try" lock T Try 1CB A "try" lock that sends a callback W Wait Set while waiting for request to complete The most important holder flags are H (holder) and W (wait) as mentioned earlier, since they are set on granted lock requests and queued lock requests respectively. The ordering of the holders in the list is important. If there are any granted holders, they will always be at the head of the queue, followed by any queued holders. 
If there are no granted holders, then the first holder in the list will be the one that triggers the state change. Since demote requests are always considered higher priority than requests from the file system, that might not always directly result in a change to the state requested. The glock subsystem supports two kinds of "try" lock. These are useful both because they allow the taking of locks out of the normal order (with suitable back-off and retry) and because they can be used to help avoid resources in use by other nodes. The normal t (try) lock is just what its name indicates; it is a "try" lock that does not do anything special. The T ( try 1CB ) lock, on the other hand, is identical to the t lock except that the DLM will send a single callback to current incompatible lock holders. One use of the T ( try 1CB ) lock is with the iopen locks, which are used to arbitrate among the nodes when an inode's i_nlink count is zero, and determine which of the nodes will be responsible for deallocating the inode. The iopen glock is normally held in the shared state, but when the i_nlink count becomes zero and ->evict_inode () is called, it will request an exclusive lock with T ( try 1CB ) set. It will continue to deallocate the inode if the lock is granted. If the lock is not granted it will result in the node(s) which were preventing the grant of the lock marking their glock(s) with the D (demote) flag, which is checked at ->drop_inode () time in order to ensure that the deallocation is not forgotten. This means that inodes that have zero link count but are still open will be deallocated by the node on which the final close () occurs. Also, at the same time as the inode's link count is decremented to zero the inode is marked as being in the special state of having zero link count but still in use in the resource group bitmap. This functions like the ext3 file system's orphan list in that it allows any subsequent reader of the bitmap to know that there is potentially space that might be reclaimed, and to attempt to reclaim it. 9.6. Glock tracepoints The tracepoints are also designed to be able to confirm the correctness of the cache control by combining them with the blktrace output and with knowledge of the on-disk layout. It is then possible to check that any given I/O has been issued and completed under the correct lock, and that no races are present. The gfs2_glock_state_change tracepoint is the most important one to understand. It tracks every state change of the glock from initial creation right through to the final demotion which ends with gfs2_glock_put and the final NL to unlocked transition. The l (locked) glock flag is always set before a state change occurs and will not be cleared until after it has finished. There are never any granted holders (the H glock holder flag) during a state change. If there are any queued holders, they will always be in the W (waiting) state. When the state change is complete then the holders may be granted, which is the final operation before the l glock flag is cleared. The gfs2_demote_rq tracepoint keeps track of demote requests, both local and remote. Assuming that there is enough memory on the node, the local demote requests will rarely be seen, and most often they will be created by umount or by occasional memory reclaim. The number of remote demote requests is a measure of the contention between nodes for a particular inode or resource group. The gfs2_glock_lock_time tracepoint provides information about the time taken by requests to the DLM. 
The blocking ( b ) flag was introduced into the glock specifically to be used in combination with this tracepoint. When a holder is granted a lock, gfs2_promote is called; this occurs during the final stages of a state change or when a lock is requested which can be granted immediately due to the glock state already caching a lock of a suitable mode. If the holder is the first one to be granted for this glock, then the f (first) flag is set on that holder. This is currently used only by resource groups. 9.7. Bmap tracepoints Block mapping is a task central to any file system. GFS2 uses a traditional bitmap-based system with two bits per block. The main purpose of the tracepoints in this subsystem is to allow monitoring of the time taken to allocate and map blocks. The gfs2_bmap tracepoint is called twice for each bmap operation: once at the start to display the bmap request, and once at the end to display the result. This makes it easy to match the requests and results together and measure the time taken to map blocks in different parts of the file system, different file offsets, or even of different files. It is also possible to see what the average extent sizes being returned are in comparison to those being requested. The gfs2_rs tracepoint traces block reservations as they are created, used, and destroyed in the block allocator. To keep track of allocated blocks, gfs2_block_alloc is called not only on allocations, but also on freeing of blocks. Since the allocations are all referenced according to the inode for which the block is intended, this can be used to track which physical blocks belong to which files in a live file system. This is particularly useful when combined with blktrace , which will show problematic I/O patterns that may then be referred back to the relevant inodes using the mapping gained by means of this tracepoint. Direct I/O ( iomap ) is an alternative cache policy which allows file data transfers to happen directly between disk and the user's buffer. This has benefits in situations where cache hit rate is expected to be low. Both gfs2_iomap_start and gfs2_iomap_end tracepoints trace these operations and can be used to keep track of mapping using Direct I/O, the positions on the file system of the Direct I/O along with the operation type. 9.8. Log tracepoints The tracepoints in this subsystem track blocks being added to and removed from the journal ( gfs2_pin ), as well as the time taken to commit the transactions to the log ( gfs2_log_flush ). This can be very useful when trying to debug journaling performance issues. The gfs2_log_blocks tracepoint keeps track of the reserved blocks in the log, which can help show if the log is too small for the workload, for example. The gfs2_ail_flush tracepoint is similar to the gfs2_log_flush tracepoint in that it keeps track of the start and end of flushes of the AIL list. The AIL list contains buffers which have been through the log, but have not yet been written back in place and this is periodically flushed in order to release more log space for use by the file system, or when a process requests a sync or fsync . 9.9. Glock statistics GFS2 maintains statistics that can help track what is going on within the file system. This allows you to spot performance issues. GFS2 maintains two counters: dcount , which counts the number of DLM operations requested. This shows how much data has gone into the mean/variance calculations. qcount , which counts the number of syscall level operations requested. 
Generally qcount will be equal to or greater than dcount . In addition, GFS2 maintains three mean/variance pairs. The mean/variance pairs are smoothed exponential estimates and the algorithm used is the one used to calculate round trip times in network code. The mean and variance pairs maintained in GFS2 are not scaled, but are in units of integer nanoseconds. srtt/srttvar: Smoothed round trip time for non-blocking operations srttb/srttvarb: Smoothed round trip time for blocking operations irtt/irttvar: Inter-request time (for example, time between DLM requests) A non-blocking request is one which will complete right away, whatever the state of the DLM lock in question. That currently means any requests when (a) the current state of the lock is exclusive (b) the requested state is either null or unlocked or (c) the "try lock" flag is set. A blocking request covers all the other lock requests. Larger times are better for IRTTs, whereas smaller times are better for the RTTs. Statistics are kept in two sysfs files: The glstats file. This file is similar to the glocks file, except that it contains statistics, with one glock per line. The data is initialized from "per cpu" data for that glock type for which the glock is created (aside from counters, which are zeroed). This file may be very large. The lkstats file. This contains "per cpu" stats for each glock type. It contains one statistic per line, in which each column is a cpu core. There are eight lines per glock type, with types following on from each other. 9.10. References For more information about tracepoints and the GFS2 glocks file, see the following resources: For information about glock internal locking rules, see https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/filesystems/gfs2-glocks.rst . For information about event tracing, see https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/trace/events.rst . For information about the trace-cmd utility, see http://lwn.net/Articles/341902/ .
[ "ls enable gfs2_bmap gfs2_glock_queue gfs2_log_flush filter gfs2_demote_rq gfs2_glock_state_change gfs2_pin gfs2_block_alloc gfs2_glock_put gfs2_log_blocks gfs2_promote", "echo -n 1 >/sys/kernel/debug/tracing/events/gfs2/enable", "cat /sys/kernel/debug/tracing/trace", "G: s:SH n:5/75320 f:I t:SH d:EX/0 a:0 r:3 H: s:SH f:EH e:0 p:4466 [postmark] gfs2_inode_lookup+0x14e/0x260 [gfs2] G: s:EX n:3/258028 f:yI t:EX d:EX/0 a:3 r:4 H: s:EX f:tH e:0 p:4466 [postmark] gfs2_inplace_reserve_i+0x177/0x780 [gfs2] R: n:258028 f:05 b:22256/22256 i:16800 G: s:EX n:2/219916 f:yfI t:EX d:EX/0 a:0 r:3 I: n:75661/219916 t:8 f:0x10 d:0x00000000 s:7522/7522 G: s:SH n:5/127205 f:I t:SH d:EX/0 a:0 r:3 H: s:SH f:EH e:0 p:4466 [postmark] gfs2_inode_lookup+0x14e/0x260 [gfs2] G: s:EX n:2/50382 f:yfI t:EX d:EX/0 a:0 r:2 G: s:SH n:5/302519 f:I t:SH d:EX/0 a:0 r:3 H: s:SH f:EH e:0 p:4466 [postmark] gfs2_inode_lookup+0x14e/0x260 [gfs2] G: s:SH n:5/313874 f:I t:SH d:EX/0 a:0 r:3 H: s:SH f:EH e:0 p:4466 [postmark] gfs2_inode_lookup+0x14e/0x260 [gfs2] G: s:SH n:5/271916 f:I t:SH d:EX/0 a:0 r:3 H: s:SH f:EH e:0 p:4466 [postmark] gfs2_inode_lookup+0x14e/0x260 [gfs2] G: s:SH n:5/312732 f:I t:SH d:EX/0 a:0 r:3 H: s:SH f:EH e:0 p:4466 [postmark] gfs2_inode_lookup+0x14e/0x260 [gfs2]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_gfs2_file_systems/con_gfs2-tracepoints-configuring-gfs2-file-systems
Chapter 3. Usage
Chapter 3. Usage This chapter describes the necessary steps for rebuilding and using Red Hat Software Collections 3.4, and deploying applications that use Red Hat Software Collections. 3.1. Using Red Hat Software Collections 3.1.1. Running an Executable from a Software Collection To run an executable from a particular Software Collection, type the following command at a shell prompt: scl enable software_collection ... ' command ...' Or, alternatively, use the following command: scl enable software_collection ... -- command ... Replace software_collection with a space-separated list of Software Collections you want to use and command with the command you want to run. For example, to execute a Perl program stored in a file named hello.pl with the Perl interpreter from the perl526 Software Collection, type: You can execute any command using the scl utility, causing it to be run with the executables from a selected Software Collection in preference to their possible Red Hat Enterprise Linux system equivalents. For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections 3.4 Components" . 3.1.2. Running a Shell Session with a Software Collection as Default To start a new shell session with executables from a selected Software Collection in preference to their Red Hat Enterprise Linux equivalents, type the following at a shell prompt: scl enable software_collection ... bash Replace software_collection with a space-separated list of Software Collections you want to use. For example, to start a new shell session with the python27 and rh-postgresql10 Software Collections as default, type: The list of Software Collections that are enabled in the current session is stored in the USDX_SCLS environment variable, for instance: For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections 3.4 Components" . 3.1.3. Running a System Service from a Software Collection Running a System Service from a Software Collection in Red Hat Enterprise Linux 6 Software Collections that include system services install corresponding init scripts in the /etc/rc.d/init.d/ directory. To start such a service in the current session, type the following at a shell prompt as root : service software_collection - service_name start Replace software_collection with the name of the Software Collection and service_name with the name of the service you want to start. To configure this service to start automatically at boot time, type the following command as root : chkconfig software_collection - service_name on For example, to start the postgresql service from the rh-postgresql96 Software Collection and enable it in runlevels 2, 3, 4, and 5, type as root : For more information on how to manage system services in Red Hat Enterprise Linux 6, refer to the Red Hat Enterprise Linux 6 Deployment Guide . For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections 3.4 Components" . Running a System Service from a Software Collection in Red Hat Enterprise Linux 7 In Red Hat Enterprise Linux 7, init scripts have been replaced by systemd service unit files, which end with the .service file extension and serve a similar purpose as init scripts. 
To start a service in the current session, execute the following command as root : systemctl start software_collection - service_name .service Replace software_collection with the name of the Software Collection and service_name with the name of the service you want to start. To configure this service to start automatically at boot time, type the following command as root : systemctl enable software_collection - service_name .service For example, to start the postgresql service from the rh-postgresql10 Software Collection and enable it at boot time, type as root : For more information on how to manage system services in Red Hat Enterprise Linux 7, refer to the Red Hat Enterprise Linux 7 System Administrator's Guide . For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections 3.4 Components" . 3.2. Accessing a Manual Page from a Software Collection Every Software Collection contains a general manual page that describes the content of this component. Each manual page has the same name as the component and it is located in the /opt/rh directory. To read a manual page for a Software Collection, type the following command: scl enable software_collection 'man software_collection ' Replace software_collection with the particular Red Hat Software Collections component. For example, to display the manual page for rh-mariadb102 , type: 3.3. Deploying Applications That Use Red Hat Software Collections In general, you can use one of the following two approaches to deploy an application that depends on a component from Red Hat Software Collections in production: Install all required Software Collections and packages manually and then deploy your application, or Create a new Software Collection for your application and specify all required Software Collections and other packages as dependencies. For more information on how to manually install individual Red Hat Software Collections components, see Section 2.2, "Installing Red Hat Software Collections" . For further details on how to use Red Hat Software Collections, see Section 3.1, "Using Red Hat Software Collections" . For a detailed explanation of how to create a custom Software Collection or extend an existing one, read the Red Hat Software Collections Packaging Guide . 3.4. Red Hat Software Collections Container Images Container images based on Red Hat Software Collections include applications, daemons, and databases. The images can be run on Red Hat Enterprise Linux 7 Server and Red Hat Enterprise Linux Atomic Host. For information about their usage, see Using Red Hat Software Collections 3 Container Images . For details regarding container images based on Red Hat Software Collections versions 2.4 and earlier, see Using Red Hat Software Collections 2 Container Images . 
The following container images are available with Red Hat Software Collections 3.4: rhscl/devtoolset-9-toolchain-rhel7 rhscl/devtoolset-9-perftools-rhel7 rhscl/nodejs-12-rhel7 rhscl/php-73-rhel7 rhscl/nginx-116-rhel7 rhscl/postgresql-12-rhel7 rhscl/httpd-24-rhel7 The following container images are based on Red Hat Software Collections 3.3: rhscl/mariadb-103-rhel7 rhscl/redis-5-rhel7 rhscl/ruby-26-rhel7 rhscl/devtoolset-8-toolchain-rhel7 rhscl/devtoolset-8-perftools-rhel7 rhscl/varnish-6-rhel7 The following container images are based on Red Hat Software Collections 3.2: rhscl/mysql-80-rhel7 rhscl/nginx-114-rhel7 rhscl/php-72-rhel7 rhscl/nodejs-10-rhel7 The following container images are based on Red Hat Software Collections 3.1: rhscl/devtoolset-7-toolchain-rhel7 (EOL) rhscl/devtoolset-7-perftools-rhel7 (EOL) rhscl/mongodb-36-rhel7 rhscl/perl-526-rhel7 rhscl/php-70-rhel7 (EOL) rhscl/postgresql-10-rhel7 rhscl/ruby-25-rhel7 rhscl/varnish-5-rhel7 The following container images are based on Red Hat Software Collections 3.0: rhscl/mariadb-102-rhel7 rhscl/mongodb-34-rhel7 rhscl/nginx-112-rhel7 (EOL) rhscl/nodejs-8-rhel7 (EOL) rhscl/php-71-rhel7 (EOL) rhscl/postgresql-96-rhel7 rhscl/python-36-rhel7 The following container images are based on Red Hat Software Collections 2.4: rhscl/devtoolset-6-toolchain-rhel7 (EOL) rhscl/devtoolset-6-perftools-rhel7 (EOL) rhscl/nginx-110-rhel7 rhscl/nodejs-6-rhel7 (EOL) rhscl/python-27-rhel7 rhscl/ruby-24-rhel7 rhscl/ror-50-rhel7 rhscl/thermostat-16-agent-rhel7 (EOL) rhscl/thermostat-16-storage-rhel7 (EOL) The following container images are based on Red Hat Software Collections 2.3: rhscl/mysql-57-rhel7 (EOL) rhscl/perl-524-rhel7 (EOL) rhscl/redis-32-rhel7 (EOL) rhscl/mongodb-32-rhel7 (EOL) rhscl/php-56-rhel7 (EOL) rhscl/python-35-rhel7 (EOL) rhscl/ruby-23-rhel7 (EOL) The following container images are based on Red Hat Software Collections 2.2: rhscl/devtoolset-4-toolchain-rhel7 (EOL) rhscl/devtoolset-4-perftools-rhel7 (EOL) rhscl/mariadb-101-rhel7 (EOL) rhscl/nginx-18-rhel7 (EOL) rhscl/nodejs-4-rhel7 (EOL) rhscl/postgresql-95-rhel7 (EOL) rhscl/ror-42-rhel7 (EOL) rhscl/thermostat-1-agent-rhel7 (EOL) rhscl/varnish-4-rhel7 (EOL) The following container images are based on Red Hat Software Collections 2.0: rhscl/mariadb-100-rhel7 (EOL) rhscl/mongodb-26-rhel7 (EOL) rhscl/mysql-56-rhel7 (EOL) rhscl/nginx-16-rhel7 (EOL) rhscl/passenger-40-rhel7 (EOL) rhscl/perl-520-rhel7 (EOL) rhscl/postgresql-94-rhel7 (EOL) rhscl/python-34-rhel7 (EOL) rhscl/ror-41-rhel7 (EOL) rhscl/ruby-22-rhel7 (EOL) rhscl/s2i-base-rhel7 Images marked as End of Life (EOL) are no longer supported.
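The scl examples above run one command or one shell session at a time. The following is a minimal sketch of two related tasks; it assumes the rh-perl526 collection from the example in Section 3.1.1 is installed, so adjust the collection name to match your system:
# Verify that the collection's interpreter is picked up ahead of the system one:
scl enable rh-perl526 -- perl -v
# Enable the collection for every new shell of the current user:
echo 'source scl_source enable rh-perl526' >> ~/.bashrc
The scl_source helper, shipped with the scl-utils package, avoids wrapping every command in scl enable once the collection should always be available to a given user.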
[ "~]USD scl enable rh-perl526 'perl hello.pl' Hello, World!", "~]USD scl enable python27 rh-postgresql10 bash", "~]USD echo USDX_SCLS python27 rh-postgresql10", "~]# service rh-postgresql96-postgresql start Starting rh-postgresql96-postgresql service: [ OK ] ~]# chkconfig rh-postgresql96-postgresql on", "~]# systemctl start rh-postgresql10-postgresql.service ~]# systemctl enable rh-postgresql10-postgresql.service", "~]USD scl enable rh-mariadb102 \"man rh-mariadb102\"" ]
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.4_release_notes/chap-Usage
3.13. Attaching an ISO Image to a Virtual Machine
3.13. Attaching an ISO Image to a Virtual Machine This Ruby example attaches a CD-ROM to a virtual machine and changes it to an ISO image in order to install the guest operating system. # Get the reference to the "vms" service: vms_service = connection.system_service.vms_service # Find the virtual machine: vm = vms_service.list(search: 'name=myvm')[0] # Locate the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) # Locate the service that manages the CDROM devices of the VM: cdroms_service = vm_service.cdroms_service # List the first CDROM device: cdrom = cdroms_service.list[0] # Locate the service that manages the CDROM device you just found: cdrom_service = cdroms_service.cdrom_service(cdrom.id) # Change the CD of the VM to 'my_iso_file.iso'. By default this # operation permanently changes the disk that is visible to the # virtual machine after the next boot, but it does not have any effect # on the currently running virtual machine. If you want to change the # disk that is visible to the current running virtual machine, change # the `current` parameter's value to `true`. cdrom_service.update( OvirtSDK4::Cdrom.new( file: { id: 'CentOS-7-x86_64-DVD-1511.iso' } ), current: false ) For more information, see VmService:cdroms_service .
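The snippet above assumes an open connection object. A minimal sketch of creating and closing that connection is shown below; the URL, credentials, and CA file are placeholders for your own environment:
require 'ovirtsdk4'

# Open the connection to the Manager (replace the placeholder values):
connection = OvirtSDK4::Connection.new(
  url: 'https://engine.example.com/ovirt-engine/api',
  username: 'admin@internal',
  password: 'password',
  ca_file: 'ca.pem'
)

# ... run the CD-ROM example above against this connection ...

# Close the connection when finished:
connection.close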
[ "Get the reference to the \"vms\" service: vms_service = connection.system_service.vms_service Find the virtual machine: vm = vms_service.list(search: 'name=myvm')[0] Locate the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) Locate the service that manages the CDROM devices of the VM: cdroms_service = vm_service.cdroms_service List the first CDROM device: cdrom = cdroms_service.list[0] Locate the service that manages the CDROM device you just found: cdrom_service = cdroms_service.cdrom_service(cdrom.id) Change the CD of the VM to 'my_iso_file.iso'. By default this operation permanently changes the disk that is visible to the virtual machine after the next boot, but it does not have any effect on the currently running virtual machine. If you want to change the disk that is visible to the current running virtual machine, change the `current` parameter's value to `true`. cdrom_service.update( OvirtSDK4::Cdrom.new( file: { id: 'CentOS-7-x86_64-DVD-1511.iso' } ), current: false )" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/ruby_sdk_guide/attaching_iso_image_to_a_virtual_machine
Appendix A. Spring Boot Maven plugin
Appendix A. Spring Boot Maven plugin The Spring Boot Maven plugin provides Spring Boot support in Maven and allows you to package executable jar or war archives and run an application in-place . A.1. Spring Boot Maven plugin goals The Spring Boot Maven plugin includes the following goals: spring-boot:run runs your Spring Boot application. spring-boot:repackage repackages your .jar and .war files to be executable. spring-boot:start and spring-boot:stop are both used to manage the lifecycle of your Spring Boot application. spring-boot:build-info generates build information that can be used by the Actuator. A.2. Using Spring Boot Maven plugin You can find general instructions on how to use the Spring Boot Plugin at: https://docs.spring.io/spring-boot/docs/current/maven-plugin/reference/htmlsingle/#using . The following example illustrates the usage of the spring-boot-maven-plugin for Spring Boot. Spring Boot 2 Example For more information on the Spring Boot Maven Plugin, see https://docs.spring.io/spring-boot/docs/current/maven-plugin/reference/htmlsingle/ . A.2.1. Using Spring Boot Maven plugin for Spring Boot 2 The following example illustrates the usage of the spring-boot-maven-plugin for Spring Boot 2. Example <project> <modelVersion>4.0.0</modelVersion> <groupId>com.redhat.fuse</groupId> <artifactId>spring-boot-camel</artifactId> <version>1.0-SNAPSHOT</version> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding> <!-- configure the Fuse version you want to use here --> <fuse.bom.version>7.13.0.fuse-7_13_0-00012-redhat-00001</fuse.bom.version> <!-- maven plugin versions --> <maven-compiler-plugin.version>3.7.0</maven-compiler-plugin.version> <maven-surefire-plugin.version>2.19.1</maven-surefire-plugin.version> </properties> <build> <defaultGoal>spring-boot:run</defaultGoal> <plugins> <plugin> <groupId>org.jboss.redhat-fuse</groupId> <artifactId>spring-boot-maven-plugin</artifactId> <version>USD{fuse.bom.version}</version> <executions> <execution> <goals> <goal>repackage</goal> </goals> </execution> </executions> </plugin> </plugins> </build> <repositories> <repository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>redhat-ea-repository</id> <url>https://maven.repository.redhat.com/earlyaccess/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> <pluginRepository> <id>redhat-ea-repository</id> <url>https://maven.repository.redhat.com/earlyaccess/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </project>
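As a rough sketch of how the goals above are typically invoked with this POM (the JAR name is derived from the example coordinates and may differ in your project):
mvn spring-boot:run                                    # run the application in place
mvn package                                            # the plugin execution repackages the JAR during the package phase
java -jar target/spring-boot-camel-1.0-SNAPSHOT.jar    # start the repackaged, executable JAR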
[ "<project> <modelVersion>4.0.0</modelVersion> <groupId>com.redhat.fuse</groupId> <artifactId>spring-boot-camel</artifactId> <version>1.0-SNAPSHOT</version> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding> <!-- configure the Fuse version you want to use here --> <fuse.bom.version>7.13.0.fuse-7_13_0-00012-redhat-00001</fuse.bom.version> <!-- maven plugin versions --> <maven-compiler-plugin.version>3.7.0</maven-compiler-plugin.version> <maven-surefire-plugin.version>2.19.1</maven-surefire-plugin.version> </properties> <build> <defaultGoal>spring-boot:run</defaultGoal> <plugins> <plugin> <groupId>org.jboss.redhat-fuse</groupId> <artifactId>spring-boot-maven-plugin</artifactId> <version>USD{fuse.bom.version}</version> <executions> <execution> <goals> <goal>repackage</goal> </goals> </execution> </executions> </plugin> </plugins> </build> <repositories> <repository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>redhat-ea-repository</id> <url>https://maven.repository.redhat.com/earlyaccess/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> <pluginRepository> <id>redhat-ea-repository</id> <url>https://maven.repository.redhat.com/earlyaccess/all</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </project>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/fuse_on_openshift_guide/spring-boot-maven-plugin
11.2. Run Red Hat JBoss Data Grid in a Cluster
11.2. Run Red Hat JBoss Data Grid in a Cluster The clustered quickstarts for Red Hat JBoss Data Grid are based on the quickstarts found in https://github.com/infinispan/infinispan-quickstart/tree/master/clustered-cache . 11.2.1. Compile the Project Use Maven to compile your project with the following command: 11.2.2. Run the Clustered Cache with Replication Mode To run Red Hat JBoss Data Grid's replication mode example of a clustered cache, launch two nodes from different consoles. Procedure 11.1. Run the Clustered Cache with Replication Mode Use the following command to launch the first node: Use the following command to launch the second node: Result JGroups and JBoss Data Grid initialized on both nodes. After approximately fifteen seconds, the cache entry log message appears on the console of the first node. 11.2.3. Run the Clustered Cache with Distribution Mode To run Red Hat JBoss Data Grid's distribution mode example of a clustered cache, launch three nodes from different consoles. Procedure 11.2. Run the Clustered Cache with Distribution Mode Use the following command to launch the first node: Use the following command to launch the second node: Use the following command to launch the third node: Result JGroups and JBoss Data Grid initialized on the three nodes. After approximately fifteen seconds, the ten entries added by the third node are visible as they are distributed to the first and second nodes. 11.2.4. Configure the Cluster Use the following steps to add and configure your cluster: Procedure 11.3. Configure the Cluster Add the default configuration for a new cluster. Customize the default cluster configuration according to the requirements of your network. This is done declaratively (using XML) or programmatically. Configure the replicated or distributed data grid. 11.2.4.1. Add the Default Cluster Configuration Add a cluster configuration to ensure that Red Hat JBoss Data Grid is aware that a cluster exists and is defined. The following is a default configuration that serves this purpose: Example 11.2. Default Configuration Note Use the new GlobalConfigurationBuilder().clusteredDefault() to quickly create a preconfigured and cluster-aware GlobalConfiguration for clusters. This configuration can also be customized. 11.2.4.2. Customize the Default Cluster Configuration Depending on the network requirements, you may need to customize your JGroups configuration. Programmatic Configuration: Use the following GlobalConfiguration code to specify the name of the file to use for JGroups configuration: Replace jgroups.xml with the desired file name. The jgroups.xml file is located at Infinispan-Quickstart/clustered-cache/src/main/resources/ . Note To bind JGroups solely to your loopback interface (to avoid any configured firewalls), use the system property -Djgroups.bind_addr="127.0.0.1" . This is particularly useful to test a cluster where all nodes are on a single machine. Declarative Configuration: Use the following XML snippet in the infinispan.xml file to configure the JGroups properties to use Red Hat JBoss Data Grid's XML configuration: 11.2.4.3. Configure the Replicated Data Grid Red Hat JBoss Data Grid's replicated mode ensures that every entry is replicated on every node in the data grid. This mode offers security against data loss due to node failures and excellent data availability. 
These benefits are at the cost of limiting the storage capacity to the amount of storage available on the node with the least memory. Programmatic Configuration: Use the following code snippet to programmatically configure the cache for replication mode (either synchronous or asynchronous): Declarative Configuration: Edit the infinispan.xml file to include the following XML code to declaratively configure the cache for replication mode (either synchronous or asynchronous): Use the following code to initialize and return a DefaultCacheManager with the XML configuration file: Note JBoss EAP includes its own underlying JMX. This can cause a collision when using the sample code with JBoss EAP and display an error such as org.infinispan.jmx.JmxDomainConflictException: Domain already registered org.infinispan . To avoid this, configure global configuration as follows: 11.2.4.4. Configure the Distributed Data Grid Red Hat JBoss Data Grid's distributed mode ensures that each entry is stored on a subset of the total nodes in the data grid. The number of nodes in the subset is controlled by the numOwners parameter, which sets how many owners each entry has. Distributed mode offers increased storage capacity but also results in increased access times and less durability (protection against node failures). Adjust the numOwners value to set the desired trade off between space, durability and availability. Durability is further improved by JBoss Data Grid's topology aware consistent hash, which locates entry owners across a variety of data centers, racks and nodes. Programmatic Configuration: Programmatically configure the cache for distributed mode (either synchronous or asynchronous) as follows: Declarative Configuration: Edit the infinispan.xml file to include the following XML code to declaratively configure the cache for distributed mode (either synchronous or asynchronous):
[ "mvn clean compile dependency:copy-dependencies -DstripVersion", "java -cp target/classes/:target/dependency/* org.infinispan.quickstart.clusteredcache.replication.Node0", "java -cp target/classes/:target/dependency/* org.infinispan.quickstart.clusteredcache.replication.Node1", "java -cp target/classes/:target/dependency/* org.infinispan.quickstart.clusteredcache.distribution.Node0", "java -cp target/classes/:target/dependency/* org.infinispan.quickstart.clusteredcache.distribution.Node1", "java -cp target/classes/:target/dependency/* org.infinispan.quickstart.clusteredcache.distribution.Node2", "new ConfigurationBuilder() .clustering().cacheMode(CacheMode.REPL_SYNC) .build()", "new GlobalConfigurationBuilder().transport().addProperty(\"configurationFile\", \"jgroups.xml\") .build()", "<global> <transport> <properties> <property name=\"configurationFile\" value=\"jgroups.xml\"/> </properties> </transport> </global>", "private static EmbeddedCacheManager createCacheManagerProgramatically() { return new DefaultCacheManager( new GlobalConfigurationBuilder() .transport().addProperty(\"configurationFile\", \"jgroups.xml\") .build(), new ConfigurationBuilder() .clustering().cacheMode(CacheMode.REPL_SYNC) .build() ); }", "<infinispan xsi:schemaLocation=\"urn:infinispan:config:6.0 http://www.infinispan.org/schemas/infinispan-config-6.0.xsd\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns=\"urn:infinispan:config:6.0\"> <global> <transport> <properties> <property name=\"configurationFile\" value=\"jgroups.xml\"/> </properties> </transport> </global> <default> <clustering mode=\"replication\"> <sync/> </clustering> </default> </infinispan>", "private static EmbeddedCacheManager createCacheManagerFromXml() throws IOException { return new DefaultCacheManager(\"infinispan.xml\");}", "GlobalConfiguration glob = new GlobalConfigurationBuilder() .clusteredDefault() .globalJmxStatistics() .allowDuplicateDomains(true) .enable() .build();", "new ConfigurationBuilder() .clustering() .cacheMode(CacheMode.DIST_SYNC) .hash().numOwners(2) .build()", "<default> <clustering mode=\"distribution\"> <sync/> <hash numOwners=\"2\"/> </clustering> </default>" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/sect-run_red_hat_jboss_data_grid_in_a_cluster
Chapter 4. Configuring traffic ingress
Chapter 4. Configuring traffic ingress 4.1. Configuring SSL/TLS and Routes Support for OpenShift Container Platform edge termination routes has been added by way of a new managed component, tls . This separates the route component from SSL/TLS and allows users to configure both separately. EXTERNAL_TLS_TERMINATION: true is the opinionated setting. Note Managed tls means that the default cluster wildcard certificate is used. Unmanaged tls means that the user-provided key and certificate pair is injected into the route. The ssl.cert and ssl.key are now moved to a separate, persistent secret, which ensures that the key and certificate pair are not regenerated upon every reconcile. The key and certificate pair are now formatted as edge routes and mounted to the same directory in the Quay container. Multiple permutations are possible when configuring SSL/TLS and routes, but the following rules apply: If SSL/TLS is managed , then your route must also be managed . If SSL/TLS is unmanaged , then you must supply certificates directly in the config bundle. The following table describes the valid options: Table 4.1. Valid configuration options for TLS and routes
Option | Route | TLS | Certs provided | Result
My own load balancer handles TLS | Managed | Managed | No | Edge route with default wildcard cert
Red Hat Quay handles TLS | Managed | Unmanaged | Yes | Passthrough route with certs mounted inside the pod
Red Hat Quay handles TLS | Unmanaged | Unmanaged | Yes | Certificates are set inside of the quay pod, but the route must be created manually
4.1.1. Creating the config bundle secret with the SSL/TLS cert and key pair Use the following procedure to create a config bundle secret that includes your own SSL/TLS certificate and key pair. Procedure Enter the following command to create a config bundle secret that includes your own SSL/TLS certificate and key pair: USD oc create secret generic --from-file config.yaml=./config.yaml --from-file ssl.cert=./ssl.cert --from-file ssl.key=./ssl.key config-bundle-secret
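For illustration, a QuayRegistry resource that follows the second row of the table (Red Hat Quay handles TLS, route managed, tls unmanaged) might look like the following sketch; the registry name and namespace are placeholders:
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: quay-enterprise
spec:
  configBundleSecret: config-bundle-secret
  components:
    - kind: route
      managed: true
    - kind: tls
      managed: false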
[ "oc create secret generic --from-file config.yaml=./config.yaml --from-file ssl.cert=./ssl.cert --from-file ssl.key=./ssl.key config-bundle-secret" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/deploying_the_red_hat_quay_operator_on_openshift_container_platform/configuring-traffic-ingress
4.2. Cloning a Virtual Machine
4.2. Cloning a Virtual Machine Before proceeding with cloning, shut down the virtual machine. You can clone the virtual machine using virt-clone or virt-manager . 4.2.1. Cloning Guests with virt-clone You can use virt-clone to clone virtual machines from the command line. Note that you need root privileges for virt-clone to complete successfully. The virt-clone command provides a number of options that can be passed on the command line. These include general options, storage configuration options, networking configuration options, and miscellaneous options. Only the --original is required. To see a complete list of options, enter the following command: The virt-clone man page also documents each command option, important variables, and examples. The following example shows how to clone a guest virtual machine called "demo" on the default connection, automatically generating a new name and disk clone path. Example 4.1. Using virt-clone to clone a guest # virt-clone --original demo --auto-clone The following example shows how to clone a QEMU guest virtual machine called "demo" with multiple disks. Example 4.2. Using virt-clone to clone a guest # virt-clone --connect qemu:///system --original demo --name newdemo --file /var/lib/libvirt/images/newdemo.img --file /var/lib/libvirt/images/newdata.img 4.2.2. Cloning Guests with virt-manager This procedure describes cloning a guest virtual machine using the virt-manager utility. Procedure 4.2. Cloning a Virtual Machine with virt-manager Open virt-manager Start virt-manager . Launch the Virtual Machine Manager application from the Applications menu and System Tools submenu. Alternatively, run the virt-manager command as root. Select the guest virtual machine you want to clone from the list of guest virtual machines in Virtual Machine Manager . Right-click the guest virtual machine you want to clone and select Clone . The Clone Virtual Machine window opens. Figure 4.1. Clone Virtual Machine window Configure the clone To change the name of the clone, enter a new name for the clone. To change the networking configuration, click Details . Enter a new MAC address for the clone. Click OK . Figure 4.2. Change MAC Address window For each disk in the cloned guest virtual machine, select one of the following options: Clone this disk - The disk will be cloned for the cloned guest virtual machine Share disk with guest virtual machine name - The disk will be shared by the guest virtual machine that will be cloned and its clone Details - Opens the Change storage path window, which enables selecting a new path for the disk Figure 4.3. Change storage path window Clone the guest virtual machine Click Clone .
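A minimal sketch of the full cycle on the command line, assuming an existing guest named demo that can be shut down:
# virsh shutdown demo
# virsh domstate demo          # repeat until the guest reports "shut off"
# virt-clone --original demo --name demo-clone --auto-clone
# virsh list --all             # the new demo-clone guest appears as shut off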
[ "virt-clone --help" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/cloning-a-vm
Chapter 2. Preparing overcloud nodes
Chapter 2. Preparing overcloud nodes The overcloud deployment that is used to demonstrate how to integrate with a Red Hat Ceph Storage cluster consists of Controller nodes with high availability and Compute nodes to host workloads. The Red Hat Ceph Storage cluster has its own nodes that you manage independently from the overcloud by using the Ceph management tools, not through director. For more information about Red Hat Ceph Storage, see the product documentation for Red Hat Ceph Storage . 2.1. Configuring the existing Red Hat Ceph Storage cluster To configure your Red Hat Ceph Storage cluster, you create object storage daemon (OSD) pools, define capabilities, and create keys and IDs directly on the Ceph Storage cluster. You can execute commands from any machine that can reach the Ceph Storage cluster and has the Ceph command line client installed. Procedure Log in to the external Ceph admin node. Open an interactive shell to access Ceph commands: Create the following RADOS Block Device (RBD) pools in your Ceph Storage cluster, relevant to your environment: Storage for OpenStack Block Storage (cinder): Storage for OpenStack Image Storage (glance): Storage for instances: Storage for OpenStack Block Storage Backup (cinder-backup): If your overcloud deploys the Shared File Systems service (manila) with Red Hat Ceph 5 (Ceph package 16) or later, you do not need to create data and metadata pools for CephFS. You can create a filesystem volume. For more information, see Management of MDS service using the Ceph Orchestrator in the Red Hat Ceph Storage Operations Guide . Create a client.openstack user in your Ceph Storage cluster with the following capabilities: cap_mgr: allow * cap_mon: profile rbd cap_osd: profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups Note the Ceph client key created for the client.openstack user: The key value in the example, AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw== , is your Ceph client key. If your overcloud deploys the Shared File Systems service with CephFS, create the client.manila user in your Ceph Storage cluster. The capabilities required for the client.manila user depend on whether your deployment exposes CephFS shares through the native CephFS protocol or the NFS protocol. If you expose CephFS shares through the native CephFS protocol, the following capabilities are required: cap_mgr: allow rw cap_mon: allow r If you expose CephFS shares through the NFS protocol, the following capabilities are required: cap_mgr: allow rw cap_mon: allow r cap_osd: allow rw pool=manila_data The specified pool name must be the value set for the ManilaCephFSDataPoolName parameter, which defaults to manila_data . Note the manila client name and the key value to use in overcloud deployment templates: Note the file system ID of your Ceph Storage cluster. This value is specified in the fsid field, under the [global] section of the configuration file for your cluster: Note Use the Ceph client key and file system ID, and the Shared File Systems service client IDs and key when you create the custom environment file. Additional resources Creating a custom environment file Red Hat Ceph Storage releases and corresponding Ceph package versions Red Hat Ceph Storage Configuration Guide
[ "[user@ceph ~]USD sudo cephadm shell", "ceph osd pool create volumes <pgnum>", "ceph osd pool create images <pgnum>", "ceph osd pool create vms <pgnum>", "ceph osd pool create backups <pgnum>", "ceph auth add client.openstack mgr 'allow *' mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups'", "ceph auth list [client.openstack] key = <AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw==> caps mgr = \"allow *\" caps mon = \"profile rbd\" caps osd = \"profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=backups\"", "ceph auth add client.manila mgr 'allow rw' mon 'allow r'", "ceph auth add client.manila mgr 'allow rw' mon 'allow r' osd 'allow rw pool=manila_data'", "ceph auth get-key client.manila <AQDQ991cAAAAABAA0aXFrTnjH9aO39P0iVvYyg==>", "[global] fsid = 4b5c8c0a-ff60-454b-a1b4-9747aa737d19" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/integrating_the_overcloud_with_an_existing_red_hat_ceph_storage_cluster/assembly-preparing-overcloud-nodes_existing-ceph
Chapter 1. JBoss EAP XP for the latest MicroProfile capabilities
Chapter 1. JBoss EAP XP for the latest MicroProfile capabilities 1.1. Installing JBoss EAP XP 5.0 without a pre-existing JBoss EAP 8.0 server If you want to install JBoss EAP XP 5.0 without first installing a JBoss EAP 8.0 server, follow the procedure below. Prerequisites You have access to the internet. You have created an account on the Red Hat customer portal and are logged in. You have downloaded the jboss-eap-installation-manager . Procedure Open the terminal emulator and navigate to the jboss-eap-installation-manager directory. Install JBoss EAP XP by running the following command from the jboss-eap-installation-manager directory : 1.2. Adding JBoss EAP XP 5.0 feature packs to an existing JBoss EAP 8.0 installation You can add an additional JBoss EAP XP 5.0 feature pack to an existing JBoss EAP installation using the jboss-eap-installation-manager . Prerequisites You have an account on the Red Hat Customer Portal and are logged in. You have reviewed the supported configurations for JBoss EAP XP 5.0. You have installed a supported JDK. You have downloaded the jboss-eap-installation-manager . For more information about downloading the jboss-eap-installation-manager , see the Installation Guide . You have downloaded or installed JBoss EAP 8.0 using one of the supported methods. For more information about downloading JBoss EAP 8.0, see the Installation Guide . Note Installing the JBoss EAP XP extension will automatically perform a server update to receive the latest component updates. Procedure Open the terminal emulator and navigate to the jboss-eap-installation-manager directory. Run this script from the jboss-eap-installation-manager directory to subscribe the server to the JBoss EAP XP channel by executing: Install the JBoss EAP XP extension by executing: 1.3. Adding JBoss EAP XP 5.0 feature packs to an existing JBoss EAP 8.0 installation offline You can add an additional JBoss EAP XP 5.0 feature pack to an existing JBoss EAP installation offline using the jboss-eap-installation-manager . Prerequisites You have reviewed the supported configurations for JBoss EAP XP 5.0. You have installed a supported JDK. You have downloaded the jboss-eap-installation-manager . For more information about downloading the jboss-eap-installation-manager , see the Installation Guide . You have downloaded or installed JBoss EAP 8.0 using one of the supported methods. For more information about downloading JBoss EAP 8.0, see the Installation Guide . You have downloaded and extracted the latest offline repositories for JBoss EAP 8.0 and JBoss EAP XP 5.0. Procedure Open the terminal emulator and navigate to the jboss-eap-installation-manager directory. Run this script from the jboss-eap-installation-manager directory to subscribe the server to the JBoss EAP XP channel by executing: Install JBoss EAP XP and use the --repositories parameter to specify the offline repositories: Note The feature pack will be added to the JBoss EAP installation passed in the --dir option . 1.4. Updating JBoss EAP XP installation using the jboss-eap-installation-manager You can update JBoss EAP XP periodically if new updates are available after you have downloaded and installed it. Prerequisites You have access to the internet. You have installed a supported JDK. You have downloaded the jboss-eap-installation-manager . For more information about downloading the jboss-eap-installation-manager , see the Installation Guide . You have downloaded or installed JBoss EAP XP 5.0 using one of the supported methods. 
For more information, see the Installation Guide . Procedure Extract the jboss-eap-installation-manager you have downloaded. Open the terminal emulator and navigate to the jboss-eap-installation-manager directory you have extracted. Run this script from the jboss-eap-installation-manager directory to check for available updates: Update JBoss EAP by running the following command: Syntax Example 1.5. Updating JBoss EAP XP installation offline using the jboss-eap-installation-manager You can use the jboss-eap-installation-manager to update the JBoss EAP XP 5.0 installation offline. Prerequisites You have installed a supported JDK. You have downloaded the jboss-eap-installation-manager . For more information about downloading jboss-eap-installation-manager , see the Installation Guide . You have downloaded or installed JBoss EAP XP 5.0 using one of the supported methods. For more information, see the Installation Guide . You have downloaded and extracted the latest offline repositories for JBoss EAP 8.0 and JBoss EAP XP 5.0. Procedure Stop the JBoss EAP server. Open the terminal emulator and navigate to the jboss-eap-installation-manager directory. Run this script from the jboss-eap-installation-manager directory to update the server components: Additional resources For more information about how you can perform a two-phase update operation offline, see Updating feature packs on an offline JBoss EAP server . 1.6. Reverting your JBoss EAP XP server to JBoss EAP You can use the jboss-eap-installation-manager to revert your JBoss EAP XP installation. Prerequisites You have access to the internet. You have installed a supported JDK. You have downloaded the jboss-eap-installation-manager . For more information about downloading jboss-eap-installation-manager , see the Installation Guide . You have downloaded or installed JBoss EAP XP 5.0 using one of the supported methods. For more information, see the Installation Guide . Procedure Open the terminal emulator and navigate to the jboss-eap-installation-manager directory. Run this script from the jboss-eap-installation-manager directory to investigate the history of all feature packs added to your JBoss EAP XP server: Stop the JBoss EAP XP server. Revert your server to a version before the JBoss EAP XP extension was added: Additional resources For more information about how you can perform a two-phase revert operation, see Reverting installed feature packs
[ "./bin/jboss-eap-installation-manager.sh install --profile eap-xp-5.0 --dir eap-xp-5", "./bin/jboss-eap-installation-manager.sh channel add --channel-name eap-xp-5.0 --repositories=mrrc-ga::https://maven.repository.redhat.com/ga --manifest org.jboss.eap.channels:eap-xp-5.0 --dir eap-xp-5.0", "./bin/jboss-eap-installation-manager.sh feature-pack add --fpl org.jboss.eap.xp:wildfly-galleon-pack --dir eap-xp-5.0", "./bin/jboss-eap-installation-manager.sh channel add --channel-name eap-xp-5.0 --repositories=mrrc-ga::https://maven.repository.redhat.com/ga --manifest org.jboss.eap.channels:eap-xp-5.0 --dir eap-xp-5.0", "./bin/jboss-eap-installation-manager.sh feature-pack add --fpl org.jboss.eap.xp:wildfly-galleon-pack --dir eap-xp-5.0 --repositories <JBOSS_EAP_XP_OFFLINE_REPO_PATH>,<JBOSS_EAP_8.0_OFFLINE_REPO_PATH>", "./bin/jboss-eap-installation-manager.sh update list --dir eap-xp-5.0", "./bin/jboss-eap-installation-manager.sh update perform --dir eap-xp-5.0", "./bin/jboss-eap-installation-manager.sh update perform --dir eap-xp-5.0 Updates found: org.wildfly.galleon-plugins:wildfly-galleon-plugins 6.3.1.Final-redhat-00001 ==> 6.3.2.Final-redhat-00001 org.wildfly.wildfly-http-client:wildfly-http-transaction-client 2.0.1.Final-redhat-00001 ==> 2.0.2.Final-redhat-00001", "./bin/jboss-eap-installation-manager.sh update perform --dir eap-xp-5.0 --repositories <JBOSS_EAP_XP_OFFLINE_REPO_PATH>,<FEATURE_PACK_OFFLINE_REPO>,<JBOSS_EAP_8.0_OFFLINE_REPO_PATH>", "./bin/jboss-eap-installation-manager.sh history --dir eap-xp-5.0", "./bin/jboss-eap-installation-manager.sh revert perform --revision <REVISION_HASH> --dir eap-xp-5.0" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/using_jboss_eap_xp_5.0/jboss_eap_xp_for_the_latest_microprofile_capabilities
7.3.2. Useful Websites
7.3.2. Useful Websites http://sourceware.org/lvm2 - LVM2 webpage, which contains an overview, link to the mailing lists, and more. http://tldp.org/HOWTO/LVM-HOWTO/ - LVM HOWTO from the Linux Documentation Project.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/logical_volume_manager_lvm-additional_resources-useful_websites
Chapter 3. Configuring realms
Chapter 3. Configuring realms Once you have an administrative account for the Admin Console, you can configure realms. A realm is a space where you manage objects, including users, applications, roles, and groups. A user belongs to and logs into a realm. One Red Hat build of Keycloak deployment can define, store, and manage as many realms as there is space for in the database. 3.1. Using the Admin Console You configure realms and perform most administrative tasks in the Red Hat build of Keycloak Admin Console. Prerequisites You need an administrator account. See Creating the first administrator . Procedure Go to the URL for the Admin Console. For example, for localhost, use this URL: http://localhost:8080/admin/ Login page Enter the username and password you created on the Welcome Page or the add-user-keycloak script in the bin directory. This action displays the Admin Console. Admin Console Note the menus and other options that you can use: Click the menu labeled Master to pick a realm you want to manage or to create a new one. Click the top right list to view your account or log out. Hover over or click a question mark ? icon to show a tooltip text that describes that field. The image above shows the tooltip in action. Note Export files from the Admin Console are not suitable for backups or data transfer between servers. Only boot-time exports are suitable for backups or data transfer between servers. 3.2. The master realm In the Admin Console, two types of realms exist: Master realm - This realm was created for you when you first started Red Hat build of Keycloak. It contains the administrator account you created at the first login. Use the master realm only to create and manage the realms in your system. Other realms - These realms are created by the administrator in the master realm. In these realms, administrators manage the users in your organization and the applications they need. The applications are owned by the users. Realms and applications Realms are isolated from one another and can only manage and authenticate the users that they control. Following this security model helps prevent accidental changes and follows the tradition of permitting user accounts access to only those privileges and powers necessary for the successful completion of their current task. Additional resources See Dedicated Realm Admin Consoles if you want to disable the master realm and define administrator accounts within any new realm you create. Each realm has its own dedicated Admin Console that you can log into with local accounts. 3.3. Creating a realm You create a realm to provide a management space where you can create users and give them permissions to use applications. At first login, you are typically in the master realm, the top-level realm from which you create other realms. When deciding what realms you need, consider the kind of isolation you want to have for your users and applications. For example, you might create a realm for the employees of your company and a separate realm for your customers. Your employees would log into the employee realm and only be able to visit internal company applications. Customers would log into the customer realm and only be able to interact with customer-facing apps. Procedure Point to the top of the left pane. Click Create Realm . Add realm menu Enter a name for the realm. Click Create . 
Create realm The current realm is now set to the realm you just created. You can switch between realms by clicking the realm name in the menu. 3.4. Configuring SSL for a realm Each realm has an associated SSL Mode, which defines the SSL/HTTPS requirements for interacting with the realm. Browsers and applications that interact with the realm honor the SSL/HTTPS requirements defined by the SSL Mode or they cannot interact with the server. Procedure Click Realm settings in the menu. Click the General tab. General tab Set Require SSL to one of the following SSL modes: External requests Users can interact with Red Hat build of Keycloak without SSL so long as they stick to private IP addresses such as localhost , 127.0.0.1 , 10.x.x.x , 192.168.x.x , and 172.16.x.x . If you try to access Red Hat build of Keycloak without SSL from a non-private IP address, you will get an error. None Red Hat build of Keycloak does not require SSL. This choice applies only in development when you are experimenting and do not plan to support this deployment. All requests Red Hat build of Keycloak requires SSL for all IP addresses. 3.5. Configuring email for a realm Red Hat build of Keycloak sends emails to users to verify their email addresses, when they forget their passwords, or when an administrator needs to receive notifications about a server event. To enable Red Hat build of Keycloak to send emails, you provide Red Hat build of Keycloak with your SMTP server settings. Procedure Click Realm settings in the menu. Click the Email tab. Email tab Fill in the fields and toggle the switches as needed. Template From From denotes the address used for the From SMTP-Header for the emails sent. From display name From display name allows to configure a user-friendly email address aliases (optional). If not set the plain From email address will be displayed in email clients. Reply to Reply to denotes the address used for the Reply-To SMTP-Header for the mails sent (optional). If not set the plain From email address will be used. Reply to display name Reply to display name allows to configure a user-friendly email address aliases (optional). If not set the plain Reply To email address will be displayed. Envelope from Envelope from denotes the Bounce Address used for the Return-Path SMTP-Header for the mails sent (optional). Connection & Authentication Host Host denotes the SMTP server hostname used for sending emails. Port Port denotes the SMTP server port. Encryption Tick one of these checkboxes to support sending emails for recovering usernames and passwords, especially if the SMTP server is on an external network. You will most likely need to change the Port to 465, the default port for SSL/TLS. Authentication Set this switch to ON if your SMTP server requires authentication. When prompted, supply the Username and Password . The value of the Password field can refer a value from an external vault . 3.6. Configuring themes For a given realm, you can change the appearance of any UI in Red Hat build of Keycloak by using themes. Procedure Click Realm setting in the menu. Click the Themes tab. Themes tab Pick the theme you want for each UI category and click Save . Login theme Username password entry, OTP entry, new user registration, and other similar screens related to login. Account theme Each user has a User Account Management UI. Admin console theme The skin of the Red Hat build of Keycloak Admin Console. 
Email theme Whenever Red Hat build of Keycloak has to send out an email, it uses templates defined in this theme to craft the email. Additional resources The Server Developer Guide describes how to create a new theme or modify existing ones. 3.7. Enabling internationalization Every UI screen is internationalized in Red Hat build of Keycloak. The default language is English, but you can choose which locales you want to support and what the default locale will be. Procedure Click Realm Settings in the menu. Click the Localization tab. Enable Internationalization . Select the languages you will support. Localization tab The first time a user logs in, that user can choose a language on the login page to use for the login screens, Account Console, and Admin Console. Additional resources The Server Developer Guide explains how you can offer additional languages. All internationalized texts which are provided by the theme can be overwritten by realm-specific texts on the Localization tab. 3.7.1. User locale selection A locale selector provider suggests the best locale based on the information available. However, it is often unknown who the user is. For this reason, the previously authenticated user's locale is remembered in a persisted cookie. The logic for selecting the locale uses the first of the following that is available: User selected - when the user has selected a locale using the drop-down locale selector User profile - when there is an authenticated user and the user has a preferred locale set Client selected - passed by the client using for example ui_locales parameter Cookie - last locale selected on the browser Accepted language - locale from Accept-Language header Realm default If none of the above, fall back to English When a user is authenticated, an action is triggered to update the locale in the persisted cookie mentioned earlier. If the user has actively switched the locale through the locale selector on the login pages, the user's locale is also updated at this point. If you want to change the logic for selecting the locale, you have an option to create a custom LocaleSelectorProvider . For details, please refer to the Server Developer Guide . 3.8. Controlling login options Red Hat build of Keycloak includes several built-in login page features. 3.8.1. Enabling forgot password If you enable Forgot password , users can reset their login credentials if they forget their passwords or lose their OTP generator. Procedure Click Realm settings in the menu. Click the Login tab. Login tab Toggle Forgot password to ON . A Forgot Password? link displays in your login pages. Forgot password link Specify Host and From in the Email tab in order for Keycloak to be able to send the reset email. Click this link to bring users to a page where they can enter their username or email address and receive an email with a link to reset their credentials. Forgot password page The text sent in the email is configurable. See Server Developer Guide for more information. When users click the email link, Red Hat build of Keycloak asks them to update their password, and if they have set up an OTP generator, Red Hat build of Keycloak asks them to reconfigure the OTP generator. Depending on security requirements of your organization, you may not want users to reset their OTP generator through email. To change this behavior, perform these steps: Procedure Click Authentication in the menu. Click the Flows tab. Select the Reset Credentials flow. 
Reset credentials flow If you do not want to reset the OTP, set the Reset OTP requirement to Disabled . Click Authentication in the menu. Click the Required actions tab. Ensure Update Password is enabled. Required Actions 3.8.2. Enabling Remember Me A logged-in user closing their browser destroys their session, and that user must log in again. You can set Red Hat build of Keycloak to keep the user's login session open if that user clicks the Remember Me checkbox upon login. This action turns the login cookie from a session-only cookie to a persistence cookie. Procedure Click Realm settings in the menu. Click the Login tab. Toggle the Remember Me switch to On . Login tab When you save this setting, a remember me checkbox displays on the realm's login page. Remember Me 3.8.3. ACR to Level of Authentication (LoA) Mapping In the login settings of a realm, you can define which Authentication Context Class Reference (ACR) value is mapped to which Level of Authentication (LoA) . The ACR can be any value, whereas the LoA must be numeric. The acr claim can be requested in the claims or acr_values parameter sent in the OIDC request and it is also included in the access token and ID token. The mapped number is used in the authentication flow conditions. Mapping can be also specified at the client level in case that particular client needs to use different values than realm. However, a best practice is to stick to realm mappings. For further details see Step-up Authentication and the official OIDC specification . 3.8.4. Update Email Workflow (UpdateEmail) With this workflow, users will have to use an UPDATE_EMAIL action to change their own email address. The action is associated with a single email input form. If the realm has email verification disabled, this action will allow to update the email without verification. If the realm has email verification enabled, the action will send an email update action token to the new email address without changing the account email. Only the action token triggering will complete the email update. Applications are able to send their users to the email update form by leveraging UPDATE_EMAIL as an AIA (Application Initiated Action). Note UpdateEmail is Technology Preview and is not fully supported. This feature is disabled by default. To enable start the server with --features=preview or --features=update-email Note If you enable this feature and you are migrating from a version, enable the Update Email required action in your realms. Otherwise, users cannot update their email addresses. 3.9. Configuring realm keys The authentication protocols that are used by Red Hat build of Keycloak require cryptographic signatures and sometimes encryption. Red Hat build of Keycloak uses asymmetric key pairs, a private and public key, to accomplish this. Red Hat build of Keycloak has a single active key pair at a time, but can have several passive keys as well. The active key pair is used to create new signatures, while the passive key pair can be used to verify signatures. This makes it possible to regularly rotate the keys without any downtime or interruption to users. When a realm is created, a key pair and a self-signed certificate is automatically generated. Procedure Click Realm settings in the menu. Click Keys . Select Passive keys from the filter dropdown to view passive keys. Select Disabled keys from the filter dropdown to view disabled keys. A key pair can have the status Active , but still not be selected as the currently active key pair for the realm. 
The selected active pair which is used for signatures is selected based on the first key provider sorted by priority that is able to provide an active key pair. 3.9.1. Rotating keys We recommend that you regularly rotate keys. Start by creating new keys with a higher priority than the existing active keys. You can instead create new keys with the same priority and making the keys passive. Once new keys are available, all new tokens and cookies will be signed with the new keys. When a user authenticates to an application, the SSO cookie is updated with the new signature. When OpenID Connect tokens are refreshed new tokens are signed with the new keys. Eventually, all cookies and tokens use the new keys and after a while the old keys can be removed. The frequency of deleting old keys is a tradeoff between security and making sure all cookies and tokens are updated. Consider creating new keys every three to six months and deleting old keys one to two months after you create the new keys. If a user was inactive in the period between the new keys being added and the old keys being removed, that user will have to re-authenticate. Rotating keys also applies to offline tokens. To make sure they are updated, the applications need to refresh the tokens before the old keys are removed. 3.9.2. Adding a generated key pair Use this procedure to generate a key pair including a self-signed certificate. Procedure Select the realm in the Admin Console. Click Realm settings in the menu. Click the Keys tab. Click the Providers tab. Click Add provider and select rsa-generated . Enter a number in the Priority field. This number determines if the new key pair becomes the active key pair. The highest number makes the key pair active. Select a value for AES Key size . Click Save . Changing the priority for a provider will not cause the keys to be re-generated, but if you want to change the keysize you can edit the provider and new keys will be generated. 3.9.3. Rotating keys by extracting a certificate You can rotate keys by extracting a certificate from an RSA generated key pair and using that certificate in a new keystore. Prerequisites A generated key pair Procedure Select the realm in the Admin Console. Click Realm Settings . Click the Keys tab. A list of Active keys appears. On a row with an RSA key, click Certificate under Public Keys . The certificate appears in text form. Save the certificate to a file and enclose it in these lines. ----Begin Certificate---- <Output> ----End Certificate---- Use the keytool command to convert the key file to PEM Format. Remove the current RSA public key certificate from the keystore. keytool -delete -keystore <keystore>.jks -storepass <password> -alias <key> Import the new certificate into the keystore keytool -importcert -file domain.crt -keystore <keystore>.jks -storepass <password> -alias <key> Rebuild the application. mvn clean install wildfly:deploy 3.9.4. Adding an existing key pair and certificate To add a key pair and certificate obtained elsewhere select Providers and choose rsa from the dropdown. You can change the priority to make sure the new key pair becomes the active key pair. Prerequisites A private key file. The file must be PEM formatted. Procedure Select the realm in the Admin Console. Click Realm settings . Click the Keys tab. Click the Providers tab. Click Add provider and select rsa . Enter a number in the Priority field. This number determines if the new key pair becomes the active key pair. Click Browse... 
beside Private RSA Key to upload the private key file. If you have a signed certificate for your private key, click Browse... beside X509 Certificate to upload the certificate file. Red Hat build of Keycloak automatically generates a self-signed certificate if you do not upload a certificate. Click Save . 3.9.5. Loading keys from a Java Keystore To add a key pair and certificate stored in a Java Keystore file on the host select Providers and choose java-keystore from the dropdown. You can change the priority to make sure the new key pair becomes the active key pair. For the associated certificate chain to be loaded it must be imported to the Java Keystore file with the same Key Alias used to load the key pair. Procedure Select the realm in the Admin Console. Click Realm settings in the menu. Click the Keys tab. Click the Providers tab. Click Add provider and select java-keystore . Enter a number in the Priority field. This number determines if the new key pair becomes the active key pair. Enter a value for Keystore . Enter a value for Keystore Password . Enter a value for Key Alias . Enter a value for Key Password . Click Save . 3.9.6. Making keys passive Procedure Select the realm in the Admin Console. Click Realm settings in the menu. Click the Keys tab. Click the Providers tab. Click the provider of the key you want to make passive. Toggle Active to Off . Click Save . 3.9.7. Disabling keys Procedure Select the realm in the Admin Console. Click Realm settings in the menu. Click the Keys tab. Click the Providers tab. Click the provider of the key you want to make passive. Toggle Enabled to Off . Click Save . 3.9.8. Compromised keys Red Hat build of Keycloak has the signing keys stored just locally and they are never shared with the client applications, users or other entities. However, if you think that your realm signing key was compromised, you should first generate new key pair as described above and then immediately remove the compromised key pair. Alternatively, you can delete the provider from the Providers table. Procedure Click Clients in the menu. Click security-admin-console . Scroll down to the Access settings section. Fill in the Admin URL field. Click the Advanced tab. Click Set to now in the Revocation section. Click Push . Pushing the not-before policy ensures that client applications do not accept the existing tokens signed by the compromised key. The client application is forced to download new key pairs from Red Hat build of Keycloak also so the tokens signed by the compromised key will be invalid. Note REST and confidential clients must set Admin URL so Red Hat build of Keycloak can send clients the pushed not-before policy request.
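If you do not already have a keystore for the java-keystore provider described above, one way to create a self-signed RSA key pair is sketched below; the file name, alias, distinguished name, and passwords are placeholders:
keytool -genkeypair -alias myrealm-key -keyalg RSA -keysize 2048 -validity 365 \
  -dname "CN=myrealm" -storetype JKS -keystore realm-keys.jks \
  -storepass keystorepassword -keypass keypassword
Use the same keystore path, alias, and passwords when filling in the provider fields in the procedure above.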
[ "----Begin Certificate---- <Output> ----End Certificate----", "keytool -delete -keystore <keystore>.jks -storepass <password> -alias <key>", "keytool -importcert -file domain.crt -keystore <keystore>.jks -storepass <password> -alias <key>", "mvn clean install wildfly:deploy" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/server_administration_guide/configuring_realms
16.2. Configuring a DHCPv4 Server
16.2. Configuring a DHCPv4 Server The dhcp package contains an Internet Systems Consortium (ISC) DHCP server. First, install the package as the superuser: Installing the dhcp package creates a file, /etc/dhcp/dhcpd.conf , which is merely an empty configuration file: The sample configuration file can be found at /usr/share/doc/dhcp-< version >/dhcpd.conf.sample . You should use this file to help you configure /etc/dhcp/dhcpd.conf , which is explained in detail below. DHCP also uses the file /var/lib/dhcpd/dhcpd.leases to store the client lease database. See Section 16.2.2, "Lease Database" for more information. 16.2.1. Configuration File The first step in configuring a DHCP server is to create the configuration file that stores the network information for the clients. Use this file to declare options and global options for client systems. The configuration file can contain extra tabs or blank lines for easier formatting. Keywords are case-insensitive and lines beginning with a hash sign (#) are considered comments. There are two types of statements in the configuration file: Parameters - State how to perform a task, whether to perform a task, or what network configuration options to send to the client. Declarations - Describe the topology of the network, describe the clients, provide addresses for the clients, or apply a group of parameters to a group of declarations. The parameters that start with the keyword option are referred to as options . These options control DHCP options; whereas, parameters configure values that are not optional or control how the DHCP server behaves. Parameters (including options) declared before a section enclosed in curly brackets ({ }) are considered global parameters. Global parameters apply to all the sections below it. Important If the configuration file is changed, the changes do not take effect until the DHCP daemon is restarted with the command service dhcpd restart . Note Instead of changing a DHCP configuration file and restarting the service each time, using the omshell command provides an interactive way to connect to, query, and change the configuration of a DHCP server. By using omshell , all changes can be made while the server is running. For more information on omshell , see the omshell man page. In Example 16.1, "Subnet Declaration" , the routers , subnet-mask , domain-search , domain-name-servers , and time-offset options are used for any host statements declared below it. For every subnet which will be served, and for every subnet to which the DHCP server is connected, there must be one subnet declaration, which tells the DHCP daemon how to recognize that an address is on that subnet . A subnet declaration is required for each subnet even if no addresses will be dynamically allocated to that subnet . In this example, there are global options for every DHCP client in the subnet and a range declared. Clients are assigned an IP address within the range . Example 16.1. Subnet Declaration To configure a DHCP server that leases a dynamic IP address to a system within a subnet, modify Example 16.2, "Range Parameter" with your values. It declares a default lease time, maximum lease time, and network configuration values for the clients. This example assigns IP addresses in the range 192.168.1.10 and 192.168.1.100 to client systems. Example 16.2. Range Parameter To assign an IP address to a client based on the MAC address of the network interface card, use the hardware ethernet parameter within a host declaration. 
As demonstrated in Example 16.3, "Static IP Address Using DHCP" , the host apex declaration specifies that the network interface card with the MAC address 00:A0:78:8E:9E:AA always receives the IP address 192.168.1.4. Note that you can also use the optional parameter host-name to assign a host name to the client. Example 16.3. Static IP Address Using DHCP All subnets that share the same physical network should be declared within a shared-network declaration as shown in Example 16.4, "Shared-network Declaration" . Parameters within the shared-network , but outside the enclosed subnet declarations, are considered to be global parameters. The name of the shared-network must be a descriptive title for the network, such as using the title 'test-lab' to describe all the subnets in a test lab environment. Example 16.4. Shared-network Declaration As demonstrated in Example 16.5, "Group Declaration" , the group declaration is used to apply global parameters to a group of declarations. For example, shared networks, subnets, and hosts can be grouped. Example 16.5. Group Declaration Note You can use the provided sample configuration file as a starting point and add custom configuration options to it. To copy this file to the proper location, use the following command as root : ... where <version_number> is the DHCP version number. For a complete list of option statements and what they do, see the dhcp-options man page.
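Putting these pieces together, the following is a minimal sketch of a complete /etc/dhcp/dhcpd.conf that combines global options, a dynamic range, and a static reservation. The addresses, domain names, and MAC address are illustrative values carried over from the examples above, not recommendations:

option domain-name-servers 192.168.1.1;
option domain-search "example.com";
default-lease-time 600;
max-lease-time 7200;
subnet 192.168.1.0 netmask 255.255.255.0 {
   option routers 192.168.1.254;
   # Addresses handed out dynamically to ordinary clients
   range 192.168.1.10 192.168.1.100;
   # Static reservation keyed on the client's MAC address
   host apex {
      hardware ethernet 00:A0:78:8E:9E:AA;
      fixed-address 192.168.1.4;
   }
}

Before restarting the daemon, you can check the file for syntax errors as root; the -t option makes dhcpd parse the configuration and exit without serving leases:

~]# dhcpd -t -cf /etc/dhcp/dhcpd.conf
~]# service dhcpd restart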
[ "~]# yum install dhcp", "~]# cat /etc/dhcp/dhcpd.conf # DHCP Server Configuration file. see /usr/share/doc/dhcp*/dhcpd.conf.sample", "subnet 192.168.1.0 netmask 255.255.255.0 { option routers 192.168.1.254; option subnet-mask 255.255.255.0; option domain-search \"example.com\"; option domain-name-servers 192.168.1.1; option time-offset -18000; # Eastern Standard Time range 192.168.1.10 192.168.1.100; }", "default-lease-time 600; max-lease-time 7200; option subnet-mask 255.255.255.0; option broadcast-address 192.168.1.255; option routers 192.168.1.254; option domain-name-servers 192.168.1.1, 192.168.1.2; option domain-search \"example.com\"; subnet 192.168.1.0 netmask 255.255.255.0 { range 192.168.1.10 192.168.1.100; }", "host apex { option host-name \"apex.example.com\"; hardware ethernet 00:A0:78:8E:9E:AA; fixed-address 192.168.1.4; }", "shared-network name { option domain-search \"test.redhat.com\"; option domain-name-servers ns1.redhat.com, ns2.redhat.com; option routers 192.168.0.254; #more parameters for EXAMPLE shared-network subnet 192.168.1.0 netmask 255.255.252.0 { #parameters for subnet range 192.168.1.1 192.168.1.254; } subnet 192.168.2.0 netmask 255.255.252.0 { #parameters for subnet range 192.168.2.1 192.168.2.254; } }", "group { option routers 192.168.1.254; option subnet-mask 255.255.255.0; option domain-search \"example.com\"; option domain-name-servers 192.168.1.1; option time-offset -18000; # Eastern Standard Time host apex { option host-name \"apex.example.com\"; hardware ethernet 00:A0:78:8E:9E:AA; fixed-address 192.168.1.4; } host raleigh { option host-name \"raleigh.example.com\"; hardware ethernet 00:A1:DD:74:C3:F2; fixed-address 192.168.1.6; } }", "~]# cp /usr/share/doc/dhcp- <version_number> /dhcpd.conf.sample /etc/dhcp/dhcpd.conf" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-dhcp-configuring-server
14.3. Booleans
14.3. Booleans SELinux is based on the least level of access required for a service to run. Services can be run in a variety of ways; therefore, you need to specify how you run your services. Use the following Booleans to set up SELinux: smbd_anon_write Having this Boolean enabled allows smbd to write to a public directory, such as an area reserved for common files that otherwise has no special access restrictions. samba_create_home_dirs Having this Boolean enabled allows Samba to create new home directories independently. This is often done by mechanisms such as PAM. samba_domain_controller When enabled, this Boolean allows Samba to act as a domain controller, as well as giving it permission to execute related commands such as useradd , groupadd , and passwd . samba_enable_home_dirs Enabling this Boolean allows Samba to share users' home directories. samba_export_all_ro Export any file or directory, allowing read-only permissions. This allows files and directories that are not labeled with the samba_share_t type to be shared through Samba. When the samba_export_all_ro Boolean is enabled, but the samba_export_all_rw Boolean is disabled, write access to Samba shares is denied, even if write access is configured in /etc/samba/smb.conf , as well as Linux permissions allowing write access. samba_export_all_rw Export any file or directory, allowing read and write permissions. This allows files and directories that are not labeled with the samba_share_t type to be exported through Samba. Permissions in /etc/samba/smb.conf and Linux permissions must be configured to allow write access. samba_run_unconfined Having this Boolean enabled allows Samba to run unconfined scripts in the /var/lib/samba/scripts/ directory. samba_share_fusefs This Boolean must be enabled for Samba to share fusefs file systems. samba_share_nfs Disabling this Boolean prevents smbd from having full access to NFS shares through Samba. Enabling this Boolean will allow Samba to share NFS volumes. use_samba_home_dirs Enable this Boolean to use a remote server for Samba home directories. virt_use_samba Allow virtual machine access to CIFS files. Note Due to the continuous development of the SELinux policy, the list above might not contain all Booleans related to the service at all times. To list them, enter the following command: Enter the following command to view description of a particular Boolean: Note that the additional policycoreutils-devel package providing the sepolicy utility is required for this command to work.
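As a brief illustration of how these Booleans are applied, the following commands check the current state of a Boolean and enable it persistently; samba_enable_home_dirs is chosen only as an example, and the -P option writes the change to the policy so it survives reboots. Enter the commands as root:

~]# getsebool samba_enable_home_dirs
samba_enable_home_dirs --> off
~]# setsebool -P samba_enable_home_dirs on
~]# getsebool samba_enable_home_dirs
samba_enable_home_dirs --> on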
[ "~]USD getsebool -a | grep service_name", "~]USD sepolicy booleans -b boolean_name" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sect-managing_confined_services-samba-booleans
Chapter 77. Kubernetes Service Account
Chapter 77. Kubernetes Service Account Since Camel 2.17 Only producer is supported The Kubernetes Service Account component is one of the Kubernetes Components which provides a producer to execute Kubernetes Service Account operations. 77.1. Dependencies When using kubernetes-service-accounts with Red Hat build of Apache Camel for Spring Boot, use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency> 77.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 77.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 77.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 77.3. Component Options The Kubernetes Service Account component supports 3 options, which are listed below. Name Description Default Type kubernetesClient (producer) Autowired To use an existing kubernetes client. KubernetesClient lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 77.4. 
Endpoint Options The Kubernetes Service Account endpoint is configured using URI syntax: with the following path and query parameters: 77.4.1. Path Parameters (1 parameters) Name Description Default Type masterUrl (producer) Required Kubernetes Master url. String 77.4.2. Query Parameters (21 parameters) Name Description Default Type apiVersion (producer) The Kubernetes API Version to use. String dnsDomain (producer) The dns domain, used for ServiceCall EIP. String kubernetesClient (producer) Default KubernetesClient to use if provided. KubernetesClient namespace (producer) The namespace. String operation (producer) Producer operation to do on Kubernetes. String portName (producer) The port name, used for ServiceCall EIP. String portProtocol (producer) The port protocol, used for ServiceCall EIP. tcp String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer caCertData (security) The CA Cert Data. String caCertFile (security) The CA Cert File. String clientCertData (security) The Client Cert Data. String clientCertFile (security) The Client Cert File. String clientKeyAlgo (security) The Key Algorithm used by the client. String clientKeyData (security) The Client Key data. String clientKeyFile (security) The Client Key file. String clientKeyPassphrase (security) The Client Key Passphrase. String oauthToken (security) The Auth Token. String password (security) Password to connect to Kubernetes. String trustCerts (security) Define if the certs we used are trusted anyway or not. Boolean username (security) Username to connect to Kubernetes. String 77.5. Message Headers The Kubernetes Service Account component supports 5 message header(s), which is/are listed below: Name Description Default Type CamelKubernetesOperation (producer) Constant: KUBERNETES_OPERATION The Producer operation. String CamelKubernetesNamespaceName (producer) Constant: KUBERNETES_NAMESPACE_NAME The namespace name. String CamelKubernetesServiceAccountsLabels (producer) Constant: KUBERNETES_SERVICE_ACCOUNTS_LABELS The service account labels. Map CamelKubernetesServiceAccountName (producer) Constant: KUBERNETES_SERVICE_ACCOUNT_NAME The service account name. String CamelKubernetesServiceAccount (producer) Constant: KUBERNETES_SERVICE_ACCOUNT A service account object. ServiceAccount 77.6. Supported producer operation listServiceAccounts listServiceAccountsByLabels getServiceAccount createServiceAccount updateServiceAccount deleteServiceAccount 77.7. Kubernetes ServiceAccounts Produce Examples listServiceAccounts: this operation lists the service account on a kubernetes cluster. from("direct:list"). toF("kubernetes-service-accounts:///?kubernetesClient=#kubernetesClient&operation=listServiceAccounts"). to("mock:result"); This operation returns a List of services from your cluster. 
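getServiceAccount: a single service account can also be retrieved by name. The following is a minimal sketch, not an excerpt from the upstream examples; it belongs inside a RouteBuilder configure() method like the examples in this section, and the namespace default and the account name camel-sa are illustrative assumptions.

from("direct:get").process(new Processor() {
    @Override
    public void process(Exchange exchange) throws Exception {
        // Namespace and service account name to look up (illustrative values)
        exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, "default");
        exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_SERVICE_ACCOUNT_NAME, "camel-sa");
    }
}).
toF("kubernetes-service-accounts:///?kubernetesClient=#kubernetesClient&operation=getServiceAccount").
to("mock:result");

This operation is expected to return the matching ServiceAccount, if any, as the message body.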
listServiceAccountsByLabels: this operation lists the service account by labels on a kubernetes cluster. from("direct:listByLabels").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put("key1", "value1"); labels.put("key2", "value2"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_SERVICE_ACCOUNTS_LABELS, labels); } }); toF("kubernetes-service-accounts:///?kubernetesClient=#kubernetesClient&operation=listServiceAccountsByLabels"). to("mock:result"); This operation returns a List of Services from your cluster, using a label selector (with key1 and key2, with value value1 and value2). 77.8. Spring Boot Auto-Configuration The component supports 102 options, which are listed below. Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes. Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. 
false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-namespaces.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. Boolean camel.component.kubernetes-namespaces.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-namespaces.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-nodes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-nodes.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Boolean camel.component.kubernetes-nodes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-nodes.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes-claims.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes-claims.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-pods.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-pods.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-pods.enabled Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Boolean camel.component.kubernetes-pods.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-pods.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-replication-controllers.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-replication-controllers.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-replication-controllers.enabled Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Boolean camel.component.kubernetes-replication-controllers.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-replication-controllers.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-resources-quota.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-resources-quota.enabled Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Boolean camel.component.kubernetes-resources-quota.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-resources-quota.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-secrets.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-secrets.enabled Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Boolean camel.component.kubernetes-secrets.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-secrets.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-service-accounts.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-service-accounts.enabled Whether to enable auto configuration of the kubernetes-service-accounts component. 
This is enabled by default. Boolean camel.component.kubernetes-service-accounts.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-service-accounts.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-services.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-services.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-services.enabled Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Boolean camel.component.kubernetes-services.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-services.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-build-configs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-build-configs.enabled Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Boolean camel.component.openshift-build-configs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. 
KubernetesClient camel.component.openshift-build-configs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-builds.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-builds.enabled Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Boolean camel.component.openshift-builds.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-builds.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-deploymentconfigs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-deploymentconfigs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.openshift-deploymentconfigs.enabled Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Boolean camel.component.openshift-deploymentconfigs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-deploymentconfigs.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
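To round out the producer operations listed in Section 77.6, the following is a minimal sketch of a route that creates a service account by setting the message headers from Section 77.5. It is not an excerpt from the upstream examples: the route name, the namespace default, and the account name camel-sa are illustrative assumptions, the kubernetesClient bean is assumed to be bound in the registry as in the earlier examples, and ServiceAccount/ServiceAccountBuilder come from the fabric8 io.fabric8.kubernetes.api.model package.

from("direct:create").process(new Processor() {
    @Override
    public void process(Exchange exchange) throws Exception {
        // Target namespace for the new service account (illustrative value)
        exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, "default");
        // Build the ServiceAccount object to create using the fabric8 builder API
        ServiceAccount sa = new ServiceAccountBuilder()
                .withNewMetadata()
                    .withName("camel-sa")
                .endMetadata()
                .build();
        exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_SERVICE_ACCOUNT, sa);
    }
}).
toF("kubernetes-service-accounts:///?kubernetesClient=#kubernetesClient&operation=createServiceAccount").
to("mock:result");

The ServiceAccount returned by the cluster after creation is expected to be set as the message body.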
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>", "kubernetes-service-accounts:masterUrl", "from(\"direct:list\"). toF(\"kubernetes-service-accounts:///?kubernetesClient=#kubernetesClient&operation=listServiceAccounts\"). to(\"mock:result\");", "from(\"direct:listByLabels\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put(\"key1\", \"value1\"); labels.put(\"key2\", \"value2\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_SERVICE_ACCOUNTS_LABELS, labels); } }); toF(\"kubernetes-service-accounts:///?kubernetesClient=#kubernetesClient&operation=listServiceAccountsByLabels\"). to(\"mock:result\");" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-service-account-component-starter